What if instead of designing explicit interfaces we aimed instead at eliminating them altogether? If instead of adding a screen we found ways to remove it? Wouldn’t the best user interface be the one that requires nothing of the user?
No UI, proposed here on the Journal by Cooper’s Golden Krishna, is interesting, provocative, and deeply flawed. Golden argues that no interface is best, and then explores ways to strip it out. But this begins with a designer’s goal rather than the user’s. First identify where users are helped or hindered by explicit interfaces: when hindered, eliminate the UI. But there are many times when a UI really helps. When it does, make it great.
But where to start? Three questions can help you evaluate the user’s relationship with a task, product or service.
For any particular interface in the system:
- Does the user want or need control?
- Does the user get value from doing the work themselves?
- Does the user outperform technology?
If you can answer “no” to every one of these questions, then put in the effort to eliminate the interface. If you answer “yes” to any one of these you should focus on improving the interface so that it supports the user better. If it’s not unanimously “yes” or “no” carefully consider how design can meet the conflicting needs. Get to know your users well. Design a solution that’s as sophisticated and nuanced as their situation calls for.
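The decision logic above can be encoded literally. This is a hypothetical helper, just a sketch of the three questions as code; the function name and return strings are invented for illustration, treating “sometimes” as a third answer:

```python
def interface_strategy(wants_control, values_the_work, outperforms_tech):
    """Map the article's three questions onto a design direction.

    Each answer is True ("yes"), False ("no"), or None ("sometimes").
    Hypothetical helper -- the questions encoded literally, not a real method.
    """
    answers = (wants_control, values_the_work, outperforms_tech)
    # Any "sometimes" means the needs conflict and deserve nuanced design.
    if any(answer is None for answer in answers):
        return "mixed: weigh the conflicting needs; consider offering both modes"
    # Unanimous "no": the interface is pure overhead.
    if not any(answers):
        return "eliminate the interface: automate the task"
    # At least one "yes": the interface earns its place, so make it great.
    return "invest in a great interface"
```

For example, the supermarket door is (False, False, False) and gets automated away, while the cockpit is (True, ...) and gets a great interface.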
Each of these questions helps you examine the relationship of the user with the technology. These are massively important considerations when advocating for the elimination of the interface; a product without some form of interface effectively doesn’t exist for the user. The UI is the embodiment of the user’s relationship with the product. No interface, no relationship. Sometimes this is exactly what you want. But people also value products because they bring something into their lives, or because they remove some obstacle from them. Every tool, game, or service gives people power, information, peace, pleasure, or possibility. Interactions with these should be awesome, helpful, supportive, effortless; and for this we often need a really great UI.
1. Does the user want or need control?
When users value control, need to take explicit actions, or need to make decisions, a well designed interface is the best way to give it. If they don’t, automation can really set users free.
No? Automate it.
When a user has no need, or no interest in making decisions or taking actions, automation can really simplify their lives.
The doors on your home don’t open automatically, partly because it would be expensive. But even if you could afford it, you’d prefer the security and control that a manual door offers.
But change the location and context and you optimize for different needs. Shoppers pushing full carts would really struggle with a manual door. Automatic doors remove all control, but shoppers don’t need or want it; they prefer the convenience of unimpeded entry and exit. If designed well, the shopper never interacts with the door directly. They are unconscious of their passive interfacing with the door, and this most closely supports the relationship they want to have with it.
Manually unlocking a car with a key isn’t a particularly difficult task. But it’s utilitarian; it’s not something anyone wishes they spent more time performing.
Walking up to your car with your keys in your pocket, pulling the handle, and getting in makes the task even easier. The hard stuff is handled by the locks and keys which automatically perform the actions in sync with the driver’s intentions.
Automation renders interactions implicit, allowing for direct and simple experiences for users, unburdened by the need for conscious decisions or extra actions.
Sometimes? Do it both ways.
Many automated systems provide some type of manual mode, which allows users to intervene if they need to. This is because it’s rare that a user’s needs are so consistent, and their interest in control so low, that we can fully automate the products they use.
For people who really love to drive, there’s no substitute for a manual transmission, which gives them nuanced feedback and control. Automatics do the shifting for you. They remove control, but for most people this was control they didn’t want or need. It’s possible to manually override the automatic, but the controls are clumsy and you only do it if you really have to.
Tiptronic and other similar transmission systems allow users to have both a fully automatic experience and, at the tap of the shifter, a mode that’s a lot more like manual shifting. In commuter traffic drivers can keep it in automatic because there’s no way to enjoy the drive, but if they take their car for a spin on curvy roads on the weekend, they love the added control of the sport mode.
Delivering both an automatic and manual mode takes a lot more development time. If you want to make both experiences great, it’s going to take really dedicated focus and the investment of time and effort to not only make each mode excellent, but to make shifting between them effortless. At the very least, that shifter will need an explicit UI.
Yes? Give them a great interface.
Control allows users to make decisions and take action. They are active participants, not passive recipients. There are a lot of reasons a user may need and want control. The system may not be able to anticipate when an action should be performed. The user may desire to express themselves, creatively or in performance. Legal or cultural forces could require explicit participation.
A well-designed user interface represents and clarifies the system, improving the user’s ability to successfully use the technology to accomplish their goals. It gives them control over the things they want to act on, or information that can inform their decisions.
Cockpit instrument panels
During WWI the U.S. military found that pilots, though highly trained, were unable to cope with the increasing complexity of airplane cockpits, and this led to fatal mistakes. The solution was to abstract the controls, eliminating the direct one-to-one correspondence between the controls people operated and the plane’s mechanical actuators. This (at the time) new concept of an interface facilitated better communication between the pilot and the machine. The newly designed cockpits addressed the complexity, putting it into systems that gave pilots better control and oversight over operations.
The cockpit still looks complex to us non-pilots because planes are sophisticated machines. For a trained pilot, these banks of knobs, gauges, and levers make sense. They have meaning and differentiated affordances, and provide the right level of input and output to perform the inherently complex task of flying.
Your desktop computer
Consumer computers are general purpose devices. You could argue that their very success is based on their ability to become almost anything for almost anyone. A screen that can display anything in two dimensions is an incredible canvas that allows the device to transform into almost anything, instantly. Personal computers provide massive power to users, who can only really leverage it through an interface that transforms the underlying complexity into something that’s easy to work with.
The hardware is too complex for people to interact with directly, so, like the airplane, computers benefit greatly from an explicit and abstracted user interface. But this flexibility requires a screen that communicates the possibilities to the user: a system of signals that guides and helps communicate available actions and information.
Well-designed, explicit interfaces are the best way to empower and support users who require agency and control.
2. Does the user get value from doing the work themselves?
When the user gets no value from the work we should strive to eliminate it. If they get value from explicit interaction, don’t take it away.
No? Take over for them.
When work has no intrinsic value it’s just a chore. Users often perform chores as they use digital products either because no one considered their perspective, or because it takes huge amounts of effort to develop products that don’t outsource chores to the user. But if you want to make a huge leap forward in your user experience, commit to making the back-end do the chores; your users will love you.
Listening to Music
Pandora is a music service that introduces listeners to more music they’ll enjoy. It can’t just guess the type of music a listener will like at first, but after they do a little work it can take over from there.
Pandora learns from the listener. By liking or skipping particular songs, listeners help the system learn what type of music they want to hear. It’s work that they don’t get a lot of value from, but Pandora learns from their actions. After a while Pandora requires very little participation from them and yet delivers music that strongly fits their taste.
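The like/skip feedback loop can be sketched as a toy model. This is not Pandora’s actual Music Genome algorithm; the class, attribute tags, and weights below are invented to illustrate how a few explicit nudges can train an otherwise implicit system:

```python
from collections import defaultdict


class StationModel:
    """Toy sketch of learning taste from likes and skips.

    Invented for illustration -- not Pandora's real algorithm. Songs are
    represented as sets of musical attribute tags (e.g. "acoustic").
    """

    def __init__(self):
        # Preference weight per musical attribute, starting neutral at 0.
        self.weights = defaultdict(float)

    def feedback(self, song_attributes, liked):
        # A thumbs-up nudges every attribute of the song upward; a skip, downward.
        delta = 1.0 if liked else -1.0
        for attr in song_attributes:
            self.weights[attr] += delta

    def score(self, song_attributes):
        # Rank candidate songs by the accumulated preference for their attributes.
        return sum(self.weights[attr] for attr in song_attributes)
```

After a handful of likes and skips, the model prefers songs sharing attributes with liked ones, and the listener’s required participation shrinks to the occasional nudge.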
Setting the temperature
Nest is another system that learns from the actions of occupants. The thermostat has been designed to learn and adapt to the specific habits of the people who use it.
The adjustments that the occupant makes to the thermostat essentially help refine the device’s understanding. Making adjustments to timing and temperature rewrites the initial defaults to reflect the preferences and temperature patterns of the household.
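One simple way this kind of refinement could work is to blend each manual adjustment into a per-hour schedule. This is a hypothetical sketch, not Nest’s actual algorithm; the class, default temperature, and learning rate are invented:

```python
class LearningThermostat:
    """Toy sketch of schedule learning from manual adjustments.

    Invented for illustration -- not Nest's real algorithm. Each manual
    adjustment at a given hour pulls that hour's scheduled setpoint
    toward the temperature the occupant actually chose.
    """

    def __init__(self, default_temp=20.0, learning_rate=0.5):
        # Start every hour of the day at the factory default setpoint.
        self.schedule = {hour: default_temp for hour in range(24)}
        self.rate = learning_rate

    def manual_adjust(self, hour, chosen_temp):
        # Blend the occupant's choice into the existing schedule entry,
        # so repeated adjustments converge on their preference.
        old = self.schedule[hour]
        self.schedule[hour] = old + self.rate * (chosen_temp - old)

    def setpoint(self, hour):
        return self.schedule[hour]
```

Each adjustment moves the schedule halfway toward the chosen temperature, so the occupant’s corrections taper off as the device converges on their habits.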
These approaches work because they become more personalized to the user. As they become individualized, they take less and less interaction. They also take lots of data crunching on the back end in order to learn and get their predictions right. You’re going to need to invest heavily in a learning system and data analysis to support a great predictive experience, as well as design a UI for correcting the system when it’s wrong: a thumbs up/thumbs down, or a simple dial.
Setting your itinerary
Before TripIt took the work out of it, travel sites made their users do lots of heavy data entry. If travelers wanted their itinerary in a useful digital format they needed to fill it in using a form.
TripIt does lots of great stuff for travelers, but one of the most amazing things is that they figured out a way to handle all the data input for travelers. Travelers get a confirmation from an airline, hotel, or rental car company, and simply forward it to TripIt. When they open TripIt on their smartphone, they don’t see a collection of forwarded documents; they see one neat master itinerary, formatted perfectly, that gives a step-by-step overview of the trip. By investing in a back-end that handles the chore of data entry and organization, TripIt can remove any interface that requires user input for this process.
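The back-end chore here is extracting structured itinerary fields from unstructured confirmation emails. The sketch below is hypothetical, TripIt’s real parsers are far more robust, and the field names and regex patterns are invented, but it shows the shape of the work that gets moved off the user:

```python
import re

# Hypothetical sketch of confirmation-email parsing. The patterns and
# field names are invented for illustration; a real system would handle
# many formats from many vendors.
CONFIRMATION_PATTERNS = {
    "flight": re.compile(r"Flight:\s*(?P<flight>[A-Z]{2}\d+)"),
    "depart": re.compile(r"Departs:\s*(?P<depart>[\w ,:]+)"),
    "confirmation": re.compile(r"Confirmation #:\s*(?P<confirmation>\w+)"),
}


def parse_confirmation(email_body):
    """Extract whatever itinerary fields we recognize; ignore the rest."""
    record = {}
    for field, pattern in CONFIRMATION_PATTERNS.items():
        match = pattern.search(email_body)
        if match:
            record[field] = match.group(field).strip()
    return record
```

A forwarded email containing “Flight: UA100” and “Confirmation #: ABC123” yields a small structured record ready to merge into the master itinerary, with no form for the traveler to fill in.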
If you’re making your users do work that gives little or no value to them, they’ll never love your product. If you take on the hard work yourself, you can give people products they fall in love with.
Yes? Give them a great interface.
Games are a strange form of activity. Players engage in something that has been designed to take effort and work, and if the design is great, they really enjoy themselves. At over 25 billion dollars in revenue, the video game industry proves that for players, games with great interfaces clearly give more than they take.
The Wii and Kinect show that gamers don’t need to sit at a keyboard to play a computer game. They bring new ways of interacting that free up how players engage with the system. But the goal here is finding a better interface that more fully engages the gamer, rather than striving to eliminate the interface. Making the game richer and more engaging helps players find new levels of enjoyment in their interactions.
When doctors work with patients they are performing complex cognitive tasks, looking for patterns that will give them insight into the disease as well as the right course of action to reverse it.
Tools that help doctors externalize their thinking, that help reveal meaningful connections and bring the entire history of the patient into the process, are valuable. Doctors need interfaces to simplify the process of capturing their thinking. It is often through explicitly stating a hypothesis that they can fine-tune and test it. Today few computerized tools really give physicians the kind of sophisticated and dynamic digital assistance that could help them practice smarter, better medicine. The solution isn’t to eliminate the interface, but to supercharge it, to make systems that can deal with the complexity of medicine and empower the physician rather than add work to their practice. The types of systems that can do this haven’t yet fully been realized, but once the technology arrives, the best way we can leverage it will be with an interface that’s just as powerful.
3. Does the user outperform technology?
In many situations humans have an advantage over technology. We might be slower, make more mistakes, tire quickly, and have limits to how much we can process, but we deal pretty well with vague, irrational, surprising events (at least in comparison to a computer). If the user carries the key to success, help them leverage these uniquely human capabilities. If technology can perform the task more reliably, quickly, accurately, and safely (and the answers to the two previous questions don’t contradict it), let the system handle it.
No? Do it for them.
When the system can do it better than the user, there’s no reason to give them an interface to muck things up.
Paying road tolls
Humans just aren’t that fast. Toll booths manned by people slow traffic down. It takes time to collect money and make change. Motorists fuss around looking for cash and are slow to reach cruising speed again.
Transponders communicating with RFID readers offer an alternative, which eliminates the slowdown. The system takes over tracking toll crossings and deducting payments, allowing drivers to pass through toll collection booths without stopping or even slowing down. The interface has been moved into computers communicating with computers, something they do really, really well.
Sawing wood safely
The SawStop design is based entirely on getting the saw to know the difference between cutting wood and amputating a finger.
You need the saw to do this, because humans just can’t react quickly enough to avoid the mistake. The saw is designed so that while cutting, at the first hint of “flesh-ness,” it acts instantly and snaps the blade away from your finger, leaving it unscathed. For split-second situations like this no explicit interface will ever be good enough or fast enough to prevent disaster. By equipping the saw itself with the smarts, trips to the emergency room can be avoided.
Driving a car takes practice, skill, and attention; it’s a complex task that over time becomes second nature to drivers. We get good at it, but we still cause an awful lot of accidents. We’re prone to distraction, emotional reactions, fatigue, and errors in judgment, often with fatal consequences. For a long time technology hadn’t reached the sophistication to take over for people. But Google is showing that cars are ready to take over a task that humans just aren’t that great at.
By using sophisticated sensors and lots of computation, the car can pay attention to the road and literally drive itself. Google is paving the way for drivers to turn their attention to something other than the road.
Once technology can reliably outperform the human, there’s little value in keeping them in the loop. If the human still maintains the edge, better to give them an interface.
Yes? Give them a great interface.
Technology has limits. No matter how sophisticated we make it, there are situations in which technology can’t perform well. A real danger of eliminating the user interface in favor of really smart, sophisticated computers is that these systems sometimes fail. At its root, technology is human-driven engineering. We make mistakes, but when we are there, we can correct them. Once an autonomous system begins operation we aren’t there to intervene. It is important that the people who are there, who need to deal with the situation, aren’t prevented from making decisions or taking action.
Making better maps
Google’s maps are some of the most accurate in the world. Computers can do a great first pass on turning a satellite image into named streets and businesses, but the hard stuff is in the details. Humans still have an advantage in reading the high-def images because we can leverage our knowledge of how the world works. We can imagine how the streets connect and use our own experience to jump to conclusions that aren’t strictly logical but best match the reality on the ground.
Google maps get better because humans are making the fine adjustments. Computers can learn from the changes the humans make, but for now, it’s still humans that perform the highest quality work.
The people performing this work need lots of fine controls to change the maps. Because they are working fully in the digital world, making changes in it, they need great tools that make this work as simple and smooth as possible. They need tools that let them quickly and easily make changes to the maps which can in turn teach the computer new ways of reading.
For typical flight conditions, the autopilot can largely match the skills of trained pilots. With well-placed sensors and smart programming, autopilot can keep the plane level and aimed in the right direction, and even make smooth landings. When something massively unexpected happens, the plane can no longer fly itself. Humans may make mistakes, but they are still better equipped to improvise a solution to the crisis.
In these cases humans need a really great interface that allows them to make split second decisions and take whatever actions they deem necessary. When both engines go out, well-trained pilots are better able to take in the new development and come up with a plan to deal with it.
Even when technology reaches a level of sophistication and agency that allows it to operate autonomously, we may need an interface. With lives at stake, and conditions beyond the programming of the software, it’s important to ensure that humans can take over and safely land the plane. This situation is clearly covered by the first question: pilots need control. But it also demonstrates that technology fails, and when it does, humans need a way to intervene.
tl;dr: The best UI depends.
No UI sounds great, but it oversimplifies the types of problems we’re faced with solving. The most useful principle isn’t No UI, but best UI.
The best UI comes from designers confronting the complexity, identifying needs, negotiating the trade-offs, and delivering a focused, coherent experience for users. The best UI isn’t defined in the negative. It isn’t absent for the sake of a philosophical stance. The best UI is one that fits the user and fits the task. The best UI extends people’s senses and capabilities. The best UI fits people’s mental models. The best UI enables people to engage competently and easily with their technology, to fully realize the potential of their tools.
The world is a complex place. There’s a lot we can achieve if we get past the appeal of simplistic reductions, get real about the user, and get down to the work of making their lives better.
Special thanks to Chris Noessel for helping me articulate this perspective, to Doug LeMoine for asking the right questions and to Golden Krishna for his willingness to engage in a really long philosophical disagreement with me.