The Greasepaint Approach

I’ve recently been involved with an iPhone project where we are building a few custom UI controls, and it has definitely proved a learning experience about the difference between designing for a computer screen and designing for the iPhone screen (either the current one or the upcoming iPhone 4 Retina Display).

One thing I’ve learned has to do with the characteristics of the iPhone screen, and how that influences user interface design choices. Over the years, I’ve become used to what it takes to show a change on a computer monitor, which is to say, the degree to which you need to change the colour, shape, or scale so that it’s obvious, even if the user looks away for a second before the change occurs and then looks back. This might apply to an object in its selected and unselected states, the addition of something new on the screen, or perhaps the enabling or disabling of a button or other element. At first, I thought this was due to the dots (or in this case, pixels) per inch of the iPhone versus computer monitors. Monitors are usually somewhere between 72 PPI (pixels per inch) and perhaps 200 PPI on the best equipment. The IBM T220/T221 LCD monitors marketed from 2001–2005 were 204 PPI, and they probably set the standard for a while. These days, a 20-inch (50.8 cm) screen with a 1680×1050 resolution has 99.06 PPI, and a garden-variety MacBook (not the higher-end MacBook Pros) has 113 PPI (Wikipedia has an article on how this is calculated).
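As an aside, the PPI figures above follow directly from the Pythagorean theorem: the resolution gives the screen diagonal in pixels, and dividing by the diagonal in inches gives pixels per inch. Here's a minimal sketch in Python (the `ppi` function name is mine, and the 13.3-inch / 1280×800 MacBook figures are my assumed inputs for the 113 PPI example):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal resolution in pixels / diagonal size in inches."""
    diagonal_px = math.hypot(width_px, height_px)  # sqrt(w^2 + h^2)
    return diagonal_px / diagonal_inches

# The 20-inch 1680x1050 screen mentioned above:
print(round(ppi(1680, 1050, 20), 2))   # 99.06

# A 13.3-inch MacBook panel at 1280x800:
print(round(ppi(1280, 800, 13.3)))     # 113
```

The same arithmetic applied to the iPhone's 3.5-inch, 480×320 panel lands in the neighbourhood of Apple's quoted 163 PPI.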

However, the iPhone’s PPI is listed at 163, which, although on the high side, is certainly not significantly higher than a typical computer’s these days. The difference, then, must be the size of the screen. In the case of any iPhone (2G, 3G, 3GS, or 4), it’s a 3.5-inch screen (compare that to the aforementioned 20-inch, and now we’re talking about a real difference).

It might be obvious, but what I’ve noticed is that the amount of change you have to make in order to be noticeable is far greater on the iPhone’s screen. The contrast must be stronger, scaling or moving an object between one state and another has to be larger (or farther), and as a corollary to this rule of thumb, it’s easy to miss subtle changes. Several times during development of the app, I had to report to the graphic designer I was working with that a selection style wasn’t distinct enough, or that a small detail of a button, such as a downward-pointing arrow, had to be rendered with higher contrast (the UI had a lot of grey objects, and some of them had white or darker grey overlays).

I think the easy way to think about this is the analogy of greasepaint. What’s greasepaint? It’s the traditional makeup that actors wore (since superseded by more modern stage makeup) to compensate for the washing out of facial features by the bright theatre lights, and to help audience members make out their faces even though the actors were farther away (and hence smaller in the eyes of theatregoers – perhaps the equivalent of being 4 or 5 centimeters tall, depending on how far from the stage they were sitting). I remember going backstage to a dressing room after a play or opera was over, and being struck by how odd the performers looked before removing all of that extreme makeup, which brought out cheekbones or encircled their eyes (like a raccoon, I thought!).

So User Interface Designers working on iPhone apps, remember, the computer screen is the dressing room, and the iPhone screen is the stage. Don’t forget the greasepaint!

iPad, You Pad, We All Pad…

I just got back from one of our local Apple Stores, and the iPads on display had quite a throng around them. I didn’t check, but I suspect they are probably sold out for today. My visit got me thinking about how to explain why I think the iPad is so successful (and this is not just a belief, it’s a fact: Apple has already sold a million of them, and this past Friday they first went on sale in the rest of the world, including here in Canada), and why Apple has once again filled a need that people didn’t know they had in the first place.

First, How to Define It

In describing what the iPad is, it’s easy to get caught up in what it doesn’t have, since that may be what strikes one at first: there’s no keyboard, no mouse or trackpad, no monitor stand, and none of the rest of the stuff that goes along with the experience of using a computer or connecting to the Internet. That also includes a desk or table, a chair, a mouse pad (or, with the advent of optical mice, at least a surface for moving the mouse on), and the various power, video, and network cabling, external hard drives, or optical (DVD) drives. A lot of upkeep and maintenance has also been taken away from the iPad: there’s no anti-virus package that you might be reminded to get shortly after starting it up (at least, not yet), and no place to get software except the built-in iTunes store. You don’t have to worry about defragmenting a hard disk (there is none – it’s solid-state memory) or even emptying a trash can on the screen to free up disk space. While all of this does get one closer to the uniqueness of the iPad, it circles around the issue somewhat, which I’ll get into in a bit.

It’s also common to define the iPad as just a large iPod Touch or iPhone, since those are devices we are already familiar with. The fact that Apple chose to use a very similar operating system and launching screen to the one on those devices only serves to bolster the opinion that the iPad is merely a larger version of these other gadgets, something I’ve heard especially from people already familiar with those existing products. I think this is an incorrect assessment, simply because there are activities and media that are obviously far more suited to the larger form factor (like watching movies) than the smaller ones. A wall clock is not merely a large wristwatch. It’s a completely different, but related timekeeping object. But again, I think this is looking at the wrong thing.

To paraphrase the philosopher Ludwig Wittgenstein, don’t look for the word, look for the use. Rather than try and define the iPad by what it is lacking or what it appears to be based on, define it by how it’s used. It’s here, I think, that you get to the really interesting and exciting thing about the iPad, which is the user model, or the totality of the experience under which it’s used.

Many of the most revolutionary technological advances are ones that embrace a new user model. Wi-Fi and laptops freed people from being tethered to a single office or desk. The new 3G networks, and the hardware to connect to them on a netbook, allow one to be connected to the Internet not just in a café with a local Wi-Fi access point but perhaps sitting outside, by a babbling brook.

The iPhone’s size and weight meant that you didn’t have to be sitting down to use it. You could be waiting in line, walking, or sitting in a seat on the bus or in a car. In fact, the iPad is the first computer that is almost intended to be used while slouching. It’s not a desktop or laptop; it’s a loungetop! The idea that a computer is not necessarily for work (desktop and laptop computers are ostensibly for that purpose) or for communication (all of the above, plus the smartphone or PDA – Personal Digital Assistant, a term coined by another Apple CEO – plus the phone) leaves the iPad as a computer for casual use, mainly media consumption with some email and web surfing. One could certainly do work on an iPad, and no doubt some people will dedicate themselves to using it for their work tasks, but the iPad is first and foremost the first computer designed to be used while sitting back comfortably. That’s probably the biggest deal (or at least one of the biggest), in my opinion.

The lack of all of those other items (keyboard, mouse, external drive, cabling) meant that there is less to distract the user from the touchscreen and the content displayed on it. People often describe the experience of using an iPad as qualitatively different; that there is no longer ‘something’ in the way, between them and the Internet. While the day has not yet arrived where we ‘jack in’ directly to the Internet, the iPad comes a step closer to that consensual hallucination.

The iPad as Harbinger of a New Age of Human Control Interfaces

It’s even more interesting to note that Steve Jobs conceived of the iPad first, and then realized that Apple could use a smaller version, with some of the same scrolling behavior, as a way of building a telephone and internet device/iPod. The pure idea – a simple, flat sheet of glass that displays content and interacts with the user – came first. You could put that foundation under any other gadget. People will now expect the iPad/iPhone touchscreen interface, with its combination of mimicry of physical scrolling and an easily changed collection of buttons or controls depending on the context, as the default user interface for any number of other technologies. Your car will have a small iPad-like screen built into the dash (someone has already installed one, according to one of the tech blogs). You’ll set your thermostat or fade your lights with one of these glass interfaces, and you’ll program your microwave, dishwasher, or even toaster with one, once the technology becomes cheap enough to use everywhere.
By jettisoning the clutter and encumbrances of computing, the iPad pulls the rest of the world into an intelligent and software-driven set of controls. Physical knobs, along with raised physical buttons, will only be used where absolutely necessary. As for the rest, we all Pad.

Eek!

For decades there was a religious war regarding what computer users should be doing with their hands when they weren’t typing. No, not that religious war (you cheeky monkey!), the one about the pointing device, which would allow a user to make gestures on the screen and address parts of a graphical user interface. Before I even started using a computer, I imagined that I’d be using some sort of ‘light pen’ to do music notation on the screen, since I’d once seen someone using that kind of device in a documentary (and wasn’t it used in the movie The Andromeda Strain?). Then, when I was just returning to the US from school in England, a fellow student (who was Canadian) said I should look into using ‘a moose’. No, I misheard his Toronto accent. He wasn’t talking about the Canadian animal, but the ‘Wee, sleekit, cowrin, tim’rous beastie’ of Robert Burns fame: a mouse. The original, first computer mouse, invented by Douglas Engelbart in 1963, had this drawing in the patent:

The Original Mouse Patent Engineering Drawing

Though the drawing doesn’t show it, Engelbart’s mouse, which was one small part of a larger project aimed at ‘augmenting human intellect’, had one button. The drawing mainly shows how the block uses multiple rollers, which sense which way the mouse is being moved in terms of X and Y coordinates.

When Apple shipped the first Lisa computer (and of course, the first Mac), the commandment that ‘Thy mouse shall have but one button’ was spoken to the masses. On the other side, the X Window System and the IBM PC mouse had multiple buttons (two or three). The camps dug in for years, each claiming the ergonomic, moral, or practical high ground over the others. The antipathy between the one-button and many-button groups continues to this day, even though the division itself no longer holds. Many people believe that Apple has stayed true to its gospel and only makes or supports a one-button mouse, but the unfortunately named ‘Mighty Mouse’, which shipped in 2005, supports multiple buttons virtually rather than physically (you click on one side or the other to simulate one or the other button), and also has a roller ball and two physical side buttons, providing no fewer than five buttons. The proliferation of mouse buttons – sometimes two, sometimes three, sometimes five or more – depends on the system and software one encounters. Some trackball devices have had five buttons that effectively provide even more control messages by allowing a different kind of click from different combinations of those buttons. Apple’s latest mouse (the even more unfortunately named ‘Magic Mouse’ – what group is coming up with these names?) goes even further, making the entire mouse surface a control surface in and of itself, like the trackpad on a laptop. This, to me, is akin to attaching a steering wheel to the top of a gearshift, or some other bizarre composite, but I’ll have to withhold judgement until I try one, even though it sounds like the industrial design equivalent of a Turducken.

The point is, complex gestural movements, involving more than a simple click (or double-click) on a pointing device, have pretty much been adopted by all computer makers, at an accepted level of complexity. For the most part, a user can work up to that complexity, moving from simple gestures to more complex ones over time – hence the idea of a shortcut to a function, instead of making that function executable only through a complex gesture.

As a friend of my parents puts it, ‘Anything worth doing is worth overdoing’. I shouldn’t be surprised by what I thought was surely a post on The Onion, but no, it was serious, and it was the Open Office consortium that was proposing this mouse:


The Open Office Mouse. Really. No, really.

Holy Roller, Batman! This thing is certainly at the other end of the spectrum from the mice we’ve seen up until this point, at least for the general public. (More complicated mice like this one have shown up on engineering workstations, imaging systems, and countless other vertical-application machinery.)

If you look carefully (click on the photo to see it a bit larger), you’ll see that it has no fewer than 16 visible buttons and a roller. The description actually boasts that it has “18 programmable mouse buttons with double-click functionality” and “Three different button modes: Key, Keypress, and Macro”. They even show a chart comparing it to other mice on the market.

While I won’t comment on the oddness of an open software consortium designing hardware (or rather, having a designer design some for them), I have to admit that this initial paragraph, on the page ‘About the OpenOfficeMouse’, caught my attention:

The OpenOfficeMouse was designed with the goal of being the best and most useful mouse the digital world has seen to date. Initially inspired by the keyboards on the Treo smartphones, it was designed by a game designer who was annoyed with the paltry number of buttons available on high-end gaming mice. Because gaming mice have historically been designed primarily for FPS¹ games, not MMO² and RTS³ games, they do not possess sufficient buttons for the dozens of commands, actions and spells that are required in games that make heavy use of icon bars and pull-down menus. After discovering that the available World of Warcraft mice were nothing more than regular two-button mice decorated with orcs, dwarves, and Night elves, the idea of the WarMouse was born. After much experimentation, it was determined that 16 buttons divided into two 8-button halves were the maximum number of buttons that could be efficiently used by feel alone. However, in the process of design and development, it quickly became apparent that many non-gaming applications would also benefit from having dozens of commands accessible directly from the mouse, especially applications with nested pull-down menus and hotkey combinations. OpenOffice.org was selected as the ideal application suite around which to design this application mouse because the usage tracking feature of OpenOffice.org 3.1 permitted the assignment of application commands to mouse buttons based on the data gathered from more than 600 million actual mouse and keystroke commands enacted by users. The OpenOfficeMouse team are advocates of Free and Open Source Software, which is why we are members of the OpenOffice.org community and have created custom profiles for other OSS applications such as Mozilla Firefox, Mozilla Thunderbird, The Battle for Wesnoth, D-Fend Reloaded, and The Gnu Image Manipulation Program.

So what we have here is a design for a gaming mouse, now re-purposed for general purpose applications (like browsing the web, email, and the Open/MS Office suite of word processing, spreadsheets and presentations).

Maybe it’s because I don’t do much gaming (and by ‘don’t do much’, I mean hardly at all), or maybe it’s because I come from the ‘make it for a klutz’ school of UI design (I’m not very coordinated), but I think that this approach to user interface or industrial design will never have much of a following. It wasn’t lost on me that I had to look up some of those acronyms to provide the footnotes here. Sure, there will always be some small group of people who want more and more direct power over their work from their hardware, and they often buy the most baroque control devices. For me, however, the whole idea of taking a piece of gaming hardware and repurposing it for everyday tasks is about as appealing as using a flight simulator to do your banking. Sure, you might get more fine maneuverability during a funds transfer (if you could master the controls), but it hardly seems worth the effort. Maybe that’s the key here: having a competitive advantage from your hardware and your skill with it during a game is far more important, and far more likely to make you put in that effort, than being a whiz at moving from cell to cell in your spreadsheet or triggering one of the 100 or so macros you’ve created for your word-processing tasks.

So to the OpenOfficeMouse folks, I say: good luck, but forget about selling one of those mice to me. When we start seeing the ‘direct to brain’ controllers, where I don’t have to involve my arms and fingers at all with typing and gesturing on the screen but can just think where I want the cursor to go, I’ll be more interested. That would be the zero-button mouse, which I think I’m going to have to address in some future post.


¹first-person shooter
²massively multiplayer online
³real-time strategy

Solving England’s Plug Size Problem

When I lived in England, believe it or not, everybody had to be an amateur electrician. I’m really showing my age, but back in the mid-80s there wasn’t a common universal plug throughout England, so you had to buy your plug separately from the ‘flex’, which is what they called the electrical cord. I’m serious. You bought your appliance, lamp, or other electrical device (in my case, I remember, it was a radio/cassette tape recorder), and then you bought a plug ‘kit’, which let you splice the plug onto the flex. You had to attach the plug yourself to any consumer electronics. It’s almost laughable, but that was the state of electrical standards adoption in late-20th-century England.

Eventually, the UK did standardize on a plug, but it ended up being the largest and bulkiest plug you’ve ever seen, with a fuse inside the plug itself. It was almost as if the Brits had only begrudgingly accepted this newfangled invention of electricity, and decided they would only let you use it if you had the proper muscle power to hold and manage these huge electrical plugs. The notion that you’d carry around an electrical device that needed to be plugged in hadn’t even entered into the equation.

When people started carrying around laptops, the large size of UK plugs became even more troublesome. In the case of a MacBook Air, the UK plug was several times thicker than the laptop itself. Enter a clever designer with an ingenious design to the rescue. This video shows how a folding approach not only allows one to carry around a slim plug and unfold it when needed, but actually creates a new, secondary standard: all of the prongs remain accessible in the folded state, so a whole bunch of these folded plugs can be plugged into an adapter, which is plugged into the wall in its unfolded state (or perhaps into a new sort of power strip, built for the folded-prong arrangement). To see what I mean, have a look at the video. It shows that sometimes good industrial design can almost work miracles. Let’s hope this idea catches on: