
In Praise of Single Function Technology


The Palm Pilot. The Kindle. The iPod. To me, these machines were near-perfect executions of technology products, and they heavily influence my thinking about product design to this day. Each one did the single thing it was designed to do, and did it well. Nothing more, nothing less. They embody the Jobsian idea of saying no to features rather than shipping jumbles of unfocused crap. What makes a single-function device so much better, and how do you incorporate these ideas into your own products? I think there are three core ideas we can adopt.


First, consider the software-hardware divide. Microsoft popularized the idea that software could be sold separately from hardware; before that, software simply came with the machine. That split is incredibly valuable for those of us in the software business, but it makes it easy to forget that one must use hardware to access software, and that the two cannot be so easily abstracted away from each other.

This issue manifests itself in a product’s affordances: the clues a product provides about how it should be used (door handles are for pulling, buttons are for pushing, and so on). One of the tricky parts of modern tablets, for example, is that they have only a screen and almost no buttons, so the hardware itself offers few clues.

The Kindle, on the other hand, has physical page-turn buttons which, shockingly, are used to turn the digital pages of the book the user is reading. Single-function technologies have the luxury of exposing key elements of how they are supposed to work in the physical world. In fact, Amazon’s Kindles are still fighting with themselves over this: the Touch and Paperwhite devices have no page-turn buttons, but the premier device, the Voyage, does.


Next, consider the different modes of a given technology. Let me make the point with a counterexample. I was once in San Francisco with a handheld GPS receiver, trying to drive from the airport into the city. The device told me not to get on the highway, which I thought was odd. It then routed me through what seemed to be very sketchy neighbourhoods, and at one point it told me to turn the wrong way down a one-way street. By then I knew something was up, so I stopped using it. Once I got to the hotel, I discovered the device had been in walking mode, though nothing indicated that fact. Its decisions were actually quite rational for a pedestrian; my mental model of how it worked simply differed from how it was actually working, because it was in a different mode.
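That GPS failure is essentially a hidden-state problem: the routing logic branched on a mode the interface never surfaced. A minimal sketch (all names hypothetical, not any vendor's actual code) of how per-mode rules that are each rational on their own produce irrational-seeming output when the active mode is invisible:

```python
# Illustrative sketch of mode-dependent behavior with hidden state.
# Each mode's rules are sensible; the bug is that the user can't see the mode.

class Router:
    def __init__(self):
        self._mode = "walking"  # hidden state: never shown to the user

    def set_mode(self, mode):
        self._mode = mode

    def allowed(self, road):
        if self._mode == "walking":
            # A pedestrian router rationally skips highways and doesn't
            # care about one-way restrictions.
            return road != "highway"
        # A driving router avoids footpaths and wrong-way turns.
        return road not in ("footpath", "wrong-way-one-way")

router = Router()
# The user assumes driving mode; the device is silently in walking mode.
print(router.allowed("highway"))            # False: "don't take the highway"
print(router.allowed("wrong-way-one-way"))  # True: "turn the wrong way"
```

Surfacing the mode prominently in the interface, or eliminating modes altogether as single-function devices do, keeps the user's mental model aligned with the machine's.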

Contrast that with the iPod. Its next- and previous-track buttons manifest their functionality as-is within the software and cannot be overridden: next moves to the next song, previous to the previous one. Interestingly, one of the most popular early iPhone apps, Camera+, overloaded the iPhone's volume buttons to take a photo, and Apple banned it for doing so, presumably because it interfered with the user's mental model of what those buttons are supposed to do. (Ironically, Apple added the same feature itself in iOS 5, presumably because users got it by then and the iPhone was the most popular camera in the world.) On the iPod, however, next track meant next track. No overloading of buttons. That made it simply easy to use.

Technology & Thought

Finally, I believe the best-designed technology “thinks” the same way we do. I have a Sony NEX-6, a really nice compact, high-quality camera. When I first got it, taking a photo always resulted in a blurry picture, despite the autofocus and the fact that I’m actually not half bad at photography. Some digging revealed that the device has multiple autofocus modes that are not only hard to find but also opaquely named (phase-detection versus contrast-detection autofocus). Once I read about them and tried them out, I was able to carefully set up the camera so it operated the way I expected it to. Oh, and it does video and has apps too.

Contrast that with the Palm Pilot. When it was being designed, Jeff Hawkins, the co-founder of Palm, walked around with a piece of wood in his pocket. He would pull it out and (virtually) write an appointment onto the wood to get the feel of the use case right. On the finished device, a user simply tapped the calendar button and started writing an appointment into a digital slot, just as you would in a paper appointment book. No menus, no special functions; you just started writing. The device created a clear mental model for this focused use case, and the user never had to bend their mind to figure out what the technology was actually doing, because it was obvious.

So What Now?

I get that we’re currently obsessed with touchscreen-based, multi-purpose devices in the name of versatility, but the trade-offs demand a more carefully crafted product. Mobile apps on each platform even started that way: a single app for a single task (watch apps are at that stage right now). Phone apps have become so complex that the concept of “unbundling” was introduced, and I have yet to see it done well.

So why does a machine such as the Kindle still exist today in this era of Swiss Army knife devices? The answer is focus. It does what it does really well. You pick it up and can only do one thing with it. No distraction, no forgetting the reason you picked it up in the first place.

Before you take away from or dilute the primary purpose of a piece of technology, remember that every feature or function you add to your product has to account for the affordances, the modality, and the humanity of the people using it. When crafting your product, keep that focus front and center, and ask what a standalone version of your product would look like.