Is RSS Feed Synching a New Need?

Google Reader will die on July 1

Humans are lazy. It’s not their fault, but the default action in any situation seems, for most people, to be to wait and see. Whether it’s something as trivial as what to have for dinner (“Oh, we’ll decide when that time rolls around.”) or as enormous and threatening as Climate Change (“Let’s do more research to see if it really, really, really is the result of human activity and is something that might be dangerous…”), the standard plan for most of us is to wait until we have to do something. Now it looks like an issue somewhere in between dinner and the fate of the planet is requiring some action on our part, and soon.

Up to now, some of us had gotten used to relying on that huge, ‘non-evil’ corporation, Google, to act as caretaker of our choices for news feeds (as well as a bunch of other pieces of info, but today I’m talking about the news stuff). Because we wanted to keep all of the ways we consume those news feeds, in different places and on different devices or software packages, in synch, we relied on Google to be the keeper of our choices. In addition to offering a pretty good online feed reader, Google also (and less obviously) gave us a canonical place to save all of our feeds, how they were organized, and even which articles we had already read in each RSS feed (so as not to reread the same articles when we switched devices, locations, or software).

Google has announced that as of July 1, it will be killing Google Reader, the aforementioned web-based reader and keeper of our RSS feed lists. That not only means that web access to our RSS feeds is going away, but also the infrastructure that many other feed readers came to depend on (since they figured that everyone had a Google Reader account), because it provided a quick way to synch all of those preferences: which feeds you subscribe to, which items in each feed you’ve read, and so on. Google Reader’s feed collection became, partly by default, the single place you could count on to make sure all of your readers were set up the same way (and some of them actually depended on Google Reader being there to hold those feed preferences, ‘in the cloud’ as it were). Yes, you can export your feeds as a collection (called an OPML file, a format so nerdy it doesn’t even have a clever acronym), but that’s a pain, and it doesn’t help reconcile two of those files should you add or delete feeds in two different contexts. I suppose someone will come up with a method of putting your OPML/reader settings in Dropbox so that a program can use it there to show your feeds, but that already feels like a hack.
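Just to make that reconciliation problem concrete, here’s a minimal sketch, in Python using only the standard library, of the naive union merge a Dropbox-style hack would need. The file names are hypothetical, and note what it punts on: it flattens any folder structure and ignores read/unread state entirely, which is exactly the part that made Google Reader’s synching valuable.

```python
import xml.etree.ElementTree as ET

def feeds_in(opml_path):
    """Map each feed's xmlUrl to its <outline> element."""
    tree = ET.parse(opml_path)
    return {node.get("xmlUrl"): node
            for node in tree.iter("outline")
            if node.get("xmlUrl")}

def merge_opml(path_a, path_b, out_path):
    """Naive union merge: keep every feed found in either file."""
    merged = feeds_in(path_a)
    for url, node in feeds_in(path_b).items():
        merged.setdefault(url, node)  # keep A's copy when both have it

    opml = ET.Element("opml", version="1.0")
    ET.SubElement(opml, "head")
    body = ET.SubElement(opml, "body")
    for node in merged.values():
        body.append(node)
    ET.ElementTree(opml).write(out_path, encoding="utf-8",
                               xml_declaration=True)

# Hypothetical file names, just for illustration:
merge_opml("desktop.opml", "laptop.opml", "merged.opml")
```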

So, it looks as if Feedly will probably be one of my main ways to read RSS feeds, along with the more standard Reeder on iOS and Mac OS X. I also use Netvibes to read news, along with my mail, calendar, stocks, and a few other odds and ends. It’s a bit cluttered, but a fine start page/dashboard most of the time.

Still, I like the idea of a service that keeps all of those other readers in synch. Might there be one waiting in the wings, the way iCloud is supposed to keep my calendar, contacts, and mail in synch? There has been some noise about a project called ‘Normandy’ that Feedly is working on, mostly that it is a clone of the Google Reader API (though it will never have the horsepower that Google had, especially if you ever wanted to search millions or trillions of feeds in order to find something important). In some ways, my subscriptions are almost as important as my files and other information. I also expect that as computing power gets cheaper, cloud-based services that keep all of those computing points-of-contact working well together will become more and more important, and will make the experience all the more powerful. No note-taking software on the iPhone or iPad can even approach the power of Evernote, because with that service, no matter where I take a note on my phone, when I get home it’s already on my computer. Conversely, before I leave to go somewhere, I enter all of the information I’m going to need into Evernote on the desktop, and when I get there, I access it on the phone. My feed choices should work the same way, whether I listen to them read to me in the car, see them flashed on the TV screen, or snuggle up with them on the sofa with my iPad.


The Greasepaint Approach

I’ve recently been involved with an iPhone project where we are doing a few custom UI controls, and it’s definitely proved to be a learning experience about the difference between designing for a computer screen and designing for the iPhone screen (either the current one or the upcoming iPhone 4 Retina Display).

One thing I’ve learned has to do with the characteristics of the iPhone screen, and how they influence user interface design choices. Over the years, I’ve become used to what it takes to show a change on a computer monitor, which is to say, the degree to which you need to change the colour, shape, or scale so that it’s obvious, even if the user looks away for a second before the change occurs and then looks back. This might apply to an object in its selected and unselected states, the addition of something new on the screen, or perhaps the enabling or disabling of a button or other element. At first, I thought this was due to the dots (or in this case, pixels) per inch of the iPhone versus computer monitors. Monitors are usually somewhere between 72 PPI (Pixels Per Inch) and perhaps 200 PPI on the best equipment. The IBM T220/T221 LCD monitors marketed from 2001–2005 were 204 PPI, and they probably set the standard for a while. These days, a 20-inch (50.8 cm) screen with a 1680×1050 resolution has 99.06 PPI, and a garden-variety MacBook (not the higher-end MacBook Pros) has 113 PPI (Wikipedia has an article on how this is calculated).

However, the iPhone is listed at 163 PPI, which, although on the high side, is certainly not dramatically higher than a typical computer these days. The difference, then, must be the size of the screen. In the case of any iPhone (2G, 3G, 3GS, or 4), it’s a 3.5-inch screen (compare that to the aforementioned 20 inches, and now we’re talking different).
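For the curious, the calculation Wikipedia describes is just the Pythagorean theorem: PPI is the diagonal resolution in pixels divided by the diagonal screen size in inches. A quick sketch in Python, checking the numbers above (the published 163 PPI figure implies the iPhone’s diagonal is actually a hair over the nominal 3.5 inches):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1680, 1050, 20), 2))  # 99.06, the 20-inch monitor above
print(round(ppi(480, 320, 3.5), 1))   # 164.8, near the iPhone's quoted 163
```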

It might be obvious, but what I’ve noticed is that the amount of change you have to make in order to be noticeable is far greater on the iPhone’s screen. The contrast must be higher, scaling or moving an object between one state and another has to be larger (or farther), and, as a corollary to this rule of thumb, it’s easy to miss subtle changes. Several times during development of the app, I had to report to the graphic designer I was working with that a selection style wasn’t distinct enough, or that a small detail of a button, such as a downward-pointing arrow, had to be rendered with higher contrast (the UI had a lot of grey objects, and some of them had white or darker grey overlays).
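One way to put a number on ‘distinct enough’ is the relative-luminance contrast ratio from the WCAG accessibility guidelines; that isn’t something we applied formally on this project, just a convenient yardstick. A small Python sketch, with grey pairings like the ones that gave us trouble:

```python
def relative_luminance(r, g, b):
    """WCAG relative luminance of an sRGB colour with 0-255 channels."""
    def linear(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b)

def contrast_ratio(rgb1, rgb2):
    """Contrast ratio between two colours, ranging from 1:1 to 21:1."""
    l1, l2 = relative_luminance(*rgb1), relative_luminance(*rgb2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Two nearby greys that read as different on a big desktop monitor...
print(round(contrast_ratio((128, 128, 128), (150, 150, 150)), 2))  # ~1.34
# ...versus a pairing with enough contrast to survive a small screen.
print(round(contrast_ratio((128, 128, 128), (230, 230, 230)), 2))  # ~3.16
```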

I think the easy way to think about this is the analogy of greasepaint. What’s greasepaint? It’s the traditional makeup that actors wore (it has since been superseded by more modern stage makeup) to compensate both for facial features being washed out by the bright theatre lights and for the audience’s distance: the actors were far away, and hence smaller in the eyes of theatregoers (perhaps the equivalent of being 4 or 5 centimetres tall, depending on how far from the stage they were sitting). I remember going backstage to a dressing room after the play or opera was over, and I was always struck by how odd the performers looked before removing all of that extreme makeup, which brought out their cheekbones or encircled their eyes (like a raccoon, I thought!).

So, User Interface Designers working on iPhone apps, remember: the computer screen is the dressing room, and the iPhone screen is the stage. Don’t forget the greasepaint!

Information Design Gone Wild

Full fathom five thy father lies;
Of his bones are coral made;
Those are pearls that were his eyes:
Nothing of him that doth fade,
But doth suffer a sea-change
Into something rich and strange.
— from Ariel’s Song, The Tempest by William Shakespeare

I loved the almost anal-retentive heads-up display of data about the scenery and other details in the opening scenes of the movie ‘Stranger than Fiction’:

Now, imagine that kind of data display about everything: the chemicals in the soil around you, the wavelengths of light as they strike your skin, the building materials of the structures you walk by. All are a sea of data that is not so much invisible as inaccessible. Now imagine you had a heads-up display on your glasses (or on contact lenses, as is suggested in Vernor Vinge’s novel Rainbows End). If you are ‘wearing’, as Vinge calls it, you now have the possibility of superimposing all sorts of data on top of the reality you see around you. In fact, if you prefer, you can replace that reality with one as rich and strange as you like.

Rather than a real place, what if this were done with, say, a fairy tale? Tomas Nilsson, a design student at Sweden’s Linköping University, decided to do just this with the Little Red Riding Hood story, in what started out as a class project:

As computing and access to data become more ubiquitous, I think this will start to change our view of reality. It’s a subtle thing, but the fact that many people now carry some sort of device (either a smartphone or a portable GPS unit) means they are never truly lost. That’s a big change to their reality, right from the start.

The other evening, my iPhone had some problems, so I headed home to try and fix it (I did; the software needed to be reinstalled). The ride on the bus felt very strange without being able to listen to podcasts or music. I couldn’t check the time. I couldn’t call anyone or check my email. It wasn’t until then that I realized how much I rely on this little brick of metal and glass.

Required iPhone Posting

I would be remiss if I didn’t take some note of the blockbuster iPhone introduction this past week. Many people have already grown tired of this subject (and I love Darren Barefoot’s hilarious take on iphatigue.com), but now that this much-hyped device is out in the market (at least in the US), there might be some interesting things to take note of as they relate to ‘the big picture’ of Apple’s approach to user interfaces on mobile devices.

Before the iPod, Apple’s first take on a hand-held device was the Newton. The Newton was far more innovative in some ways, at least in terms of its user interface, its approach to data (with a unique ‘data soup’), and how a user might interact with it. Here’s a Getting Started video for the Newton that someone posted on YouTube:

The Newton was about written communications, but the user interface was also far more oriented toward a give-and-take interaction with the user. For instance, you’d write ‘Lunch with Matt at 1PM on Friday’ in the calendar, and the Newton would do its best to figure out what you meant, putting a calendar entry ‘Lunch with Matt’ in the 1 PM time slot on Friday. If you highlighted someone’s name in a bit of recognized text and then chose ‘FAX’ from the menu, the device would go to your FAX address book, do its best to locate the most likely person you were faxing (by taking the first match from a find, in this case), and fill in the FAX number in the send box. These best guesses were not always successful and, in some ways, reflected in microcosm some of the worst failures of the Newton. By raising expectations about how much pseudo-intelligence there was in such a device, it made people all the more angry or amused when it fell on its flat glass face. I had a Newton, and although I was no fanatic about it, I always felt that it was falling just short of some truly amazing feats of computer-human interaction.
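To get a feel for that kind of guessing, here’s a toy sketch in Python. It is nothing like the Newton’s actual assistant, which worked over handwriting recognition and a much richer data model; it only illustrates the general idea of coaxing a structured calendar entry out of loose text:

```python
import re

# One hard-coded pattern and a lot of guessing; the Newton's assistant
# handled far more phrasings (and could ask for clarification).
PATTERN = re.compile(
    r"(?P<what>.+?) with (?P<who>\w+) at (?P<hour>\d{1,2})\s*(?P<ampm>AM|PM)"
    r"(?: on (?P<day>\w+))?",
    re.IGNORECASE,
)

def parse_command(text):
    m = PATTERN.match(text)
    if m is None:
        return None  # a real assistant would fall back on asking the user
    hour = int(m.group("hour")) % 12
    if m.group("ampm").upper() == "PM":
        hour += 12
    return {
        "title": f"{m.group('what')} with {m.group('who')}",
        "hour": hour,  # 24-hour clock
        "day": m.group("day") or "today",
    }

print(parse_command("Lunch with Matt at 1PM on Friday"))
# {'title': 'Lunch with Matt', 'hour': 13, 'day': 'Friday'}
```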

Fast-forward to last week: contrast the Newton video with this more recent iPhone demo:

Where the Newton video is more of a marketing piece that tries to convince you of the device’s worth, the iPhone video is just a voyeuristic view of someone using their iPhone: listening to music, watching a video, creating an ad hoc conference call, sending a photo in an email, text-messaging someone, listening to voice mail, using the Internet, and so on.

The iPhone does not try to fill in the gaps, except where it knows such synergies will usually work. For instance, the Google Maps-based application lets you dial whatever business you locate on a map (if there is a phone number). Where the Newton provided a somewhat spooky interaction with a ‘magic pad’, on which the device would try to perform complex tasks based on cryptic messages from you, the iPhone puts its processing cycles into simpler, more physical tasks: how to move pages around to simulate the physics of the real world, how to flip the screen automatically when the device is turned on its side, and how to display lots of colourful icons and other pictures on a gorgeous screen.

The Newton was ascetic and hermetic; the iPhone is gorgeous, and perhaps even a little garish. Is the iPhone a step forward in UI design? The Newton tried to do far more with less, but clearly the market did not want that. The iPhone is far more about ‘theatre’, which is why the voyeuristic demo works so well. It is also about taking what has been learned in the desktop and iPod worlds (setting wallpaper, creating an email, choosing and playing a piece of music) and applying it to a new form factor.

Although it’s arguable that the iPhone is less about truly revolutionary thinking about UIs (as the Newton perhaps was), I think we may be ready for some of that. For instance, something as simple as voice recognition of certain commands should be doable on the next version of the iPhone, and synthesized speech wouldn’t be bad, either. Both have already been done on the desktop, and since the iPhone supposedly runs the same OS, Apple (or key third-party developers) should be able to port some of these technologies to the new hardware fairly easily. I want to be able to say to my iPhone, “Make a conference call between Pam and Matt,” and have it call each of them, notify them of the conference call, and then connect the two calls.

Essentially, I want the pretty face of the iPhone with the brains (or better) of the Newton.