Recent Fiction Books

Here are a few of the recent science fiction books I’ve been reading and enjoying:

I liked the series above, although my only complaint is that with a bit of editing it could have fit into three books rather than four without losing any important material.

Fiction (action-thriller-drama):

Point of Impact was turned into a movie, but it did poorly. If you read the book, it becomes clear why making a good movie of it would be tough (and why the movie did poorly).

What if there were a monetary charge for inefficient code or software?

I was briefly looking at yet another JavaScript framework this morning. It was backed by a Ruby engine for optimal “MVC goodness.” Having never measured the performance of Ruby, I’m not going to claim Ruby is fast or slow, but I will say it’s unlikely to be as fast as a compiled language.
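To be fair, claims like that are easy to test rather than assume. Here’s a minimal sketch (in Python, which is an arbitrary choice for illustration — the same idea applies to Ruby or any other interpreter) of timing a small workload before making performance claims:

```python
import timeit

# Naive recursive Fibonacci: a deliberately CPU-heavy toy workload.
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Time 100 runs in the interpreter you actually care about,
# instead of assuming how fast or slow it is.
elapsed = timeit.timeit(lambda: fib(20), number=100)
print(f"100 runs of fib(20): {elapsed:.4f}s")
```

Run the same workload under two interpreters and you have an actual number to argue about instead of a hunch.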

It got me to thinking — as more and more people (from big IT to homeowners) try to save energy as the costs of energy continue to rise, …

What if the act of running an (installed or web) application incurred a precise monetary fee for the amount of energy it consumed (possibly measured as CPU time)? Certainly, there’s already a charge for electricity today, which is not precisely measured …

Poorly written applications, CPU burners, and I/O thrashers bug me the most when I’m using a laptop on battery alone. But what if web hosts charged that way, or applications you “rented” as software as a service charged for “use”? The more unreasonable the charge, the more likely users are to complain about performance.
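To make the idea concrete, here’s a rough sketch in Python of billing a function by the CPU time it consumes. The wattage and electricity rate below are made-up illustrative assumptions, not real metering:

```python
import time

# Illustrative assumptions only: a 65 W CPU package and a
# $0.12/kWh electricity rate. Real metering would need hardware counters.
CPU_WATTS = 65.0
PRICE_PER_KWH = 0.12

def metered(fn, *args, **kwargs):
    """Run fn and return (result, estimated energy cost in dollars)."""
    start = time.process_time()                 # CPU seconds, not wall clock
    result = fn(*args, **kwargs)
    cpu_seconds = time.process_time() - start
    kwh = CPU_WATTS * cpu_seconds / 3_600_000   # watt-seconds -> kWh
    return result, kwh * PRICE_PER_KWH

def busy_loop(n):
    return sum(i * i for i in range(n))

result, cost = metered(busy_loop, 1_000_000)
print(f"result={result}, estimated cost=${cost:.8f}")
```

Even a crude meter like this would make an inefficient implementation visibly more expensive than an efficient one, which is exactly the feedback loop the idea is after.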

(Thinking to myself about how FAST the PC sitting under my desk is, yet how fast Windows 3.1 would boot on it vs. Windows or Mac OS X.)

Are applications just becoming more and more bloated, or truly better?

Do we really need another JavaScript framework for UI?

From the web site RoughlyDrafted Magazine: Cocoa for Windows + Flash Killer = SproutCore.

Apple doesn’t sell ads, it sells hardware. But if the web requires Flash or Silverlight to run, Adobe or Microsoft can either intentionally kill alternative platforms like the Mac (or Linux), or simply make them work so poorly due to their own incompetence that those platforms risk becoming non-viable. Adobe has already proven its incompetence in delivering Flash for the Mac (and really any platform outside of Windows), and I shouldn’t need to recap Microsoft’s historical readiness to destroy anything that isn’t Windows.

Right, sure. Overall the article/post was reasonably well thought through, but the paragraph above should have been a little more grounded in reality. Microsoft has written a great Office application for the Mac that has gotten very good reviews over the years. Adobe’s tools power the desktops of Mac design types (and without Adobe and Microsoft, Apple would have had a slim chance of surviving years ago). So, Adobe may have created a few shoddy builds of Flash … nobody is perfect. Show me some proof that it was intentional on Adobe’s part.

Regarding Cocoa …

That has not only allowed Apple to advance its own rich web apps using open web standards, but also to share SproutCore, its Cocoa-inspired, cross platform JavaScript frameworks, under an open source MIT license. That sharing will help provide an open alternative to Flash in the RIA space. SproutCore doesn’t compete against the use of Flash to make animated ads or navigation applets, but rather in deploying full, highly interactive applications, the target of Adobe’s Flash-based AIR platform plans.

Seriously? No wonder we need faster computers and more memory … because “we’re” trying to push a platform designed for basic interactivity into being a full application development platform.

But, I digress. I don’t want applications to be hampered by HTML limitations. The industry can slowly continue to extend the browser by adding capabilities and functionality — and it still won’t be a “future platform.” In some senses it’s like we all took a GIANT step backward in computing and interactivity (look at the sheer platform capability in Vista and OS X — yet it ALL GOES WASTED on the browser). Some nice stuff can be made of course, and that’s all great — but it’s still HTML, regardless of whether it’s hosted live or offline, inside an Internet-connected browser or saved locally.

Users — they don’t care. Before you complain: I’m certain a lot of my readers are passionate about their favorite browser, but the vast majority of users don’t care. The more web applications that come out and work on any modern web browser, the better (for all of us). As the number of web applications increases and the quality becomes “good enough,” the platform becomes less significant — even stagnant. What will drive future operating system purchases? Even Apple will stagnate if the operating system doesn’t matter to end users.

That’s where I’d like to think these browser plug-ins, be they Silverlight, Flash, or The Next Big Thing, come into play. They can harness more of the host operating system and create some truly exciting applications. Unfortunately, even as browser plug-ins they still leave much of the operating system untapped. If Windows 7 ships with multi-touch support built in, not a single application or plug-in will be able to take advantage of it without additional development. And that development is less likely to happen if all platforms (Mac and Windows) don’t support the functionality. It’s still uncertain that a browser plug-in would drive future operating system purchases.

Can they make revenue from selling/renting applications? Honestly, that’s a tough business, as too many free options will likely continue to exist. Apple, for one, could create a branded experience in iTunes for purchasing a web application license, but people are cheap (I’m no different; if there’s a decent free option, I’ll go with that). Apple could provide developers with a web development platform (think something like Google’s web application development platform), but again, there’s that little challenge of revenue. All of this may only amount to pocket change compared to today’s revenues for big companies.

I’ll take a longer look at SproutCore, but all of the material I saw was about how it interacts with Ruby on Rails — so I don’t know how intertwined they all are (and I don’t have the patience for Ruby right now).

What’s your take? Is it important that your JavaScript framework mirror a client framework (like Objective-C and Cocoa)? Or is it just a gimmick (and should developers concentrate on innovation rather than imitation)?

Do you create your own User Interfaces?

If you’re creating an application with a user interface, what tools do you use — and when do you use them?

There are different stages in design, from conception to implementation.

There’s been some discussion around a 37signals post from a few days ago. The discussion was about how they don’t use Photoshop as part of their design process. I can see that working well for them.


Their user interfaces are plain and simple. Not hard to mock up quickly using any basic editors.

Robby suggested that his experience is generally the opposite. By using Photoshop (or similar)…

I also end up pushing things further (visually or even at the interaction level) because the tool gets me there quicker.

I totally agree — if 37signals had richer designs and user interface requirements, I bet they’d find themselves using other tools to create their user interfaces. (And they recently had a job posting advertising the need for a killer designer with virtually free rein … I bet they won’t stick to a syntax-highlighting text editor. :) )

Personally, I use a wide mix of tools (whatever seems right) and wouldn’t want to set artificial boundaries.

I try to start out with some sketches on paper or a whiteboard.

The next step depends a lot on the scale and challenges of the specific project. If I really want to concentrate on the user interface and the overall experience, I’ll use a tool like Photoshop or Illustrator. It lets me make quick changes to the user interface without a lot of fuss (and create lots of different versions). If I have a bit more time and a prototype implementation would be reasonable, I’ll often break out a development tool, mock it up, and experiment. For less important projects, I’ll proceed directly to the end game and work on the final user interface in the actual project (if I’m able to do that).

There’s definitely no right answer for everyone. It makes sense to use the right tool for the job. However, I’m sure some of my readers work with companies or on jobs where there are rules in place regarding the tools, steps, etc. If you’d care to summarize and share, I’d love to hear them!