
There’s a video that we often use in lectures on service design. It’s a film about a door that people keep trying to pull instead of pushing, and it’s a perfect illustration of a communication problem. The door handle says “pull me”, and that’s what people do. Then the door screams “push” by simply not opening when you pull.

Now, it is fair to ask what this has to do with communication. After all, there isn’t a sign saying push or anything like that. But as Elon Musk has said, “if a product needs a user manual, then it is already broken.” Signs saying push or pull are essentially user manuals: instructions in words, which are by definition inefficient because they require people to think. And people only start thinking once their first reaction turns out to be wrong.

Consider your last visit to the grocery store. You know where things are, so you walk in confidently, sure that you’ll find what you need. Only when you are hopelessly lost do you glance up at the signs to work out where you are, and then scan the aisle signs until you find what you’re looking for. What you regularly won’t find inside the store is a layout map of the whole store, showing where things actually are. That map is conveniently placed by the front entrance, where no one looks at it, because everyone already knows where they’re going. The communication about where things are in the store is not based on how and when people need information; it presumes that people are rational and think first, do later.

Do first. Think later.

RTFM* is a much-used acronym in the service-desk industry, and the behaviour it names is the main reason people have problems with products and services. Simply put: people don’t read the manual first and then try to connect their TV to the internet. They try to connect the TV first, and only look at the manual when that doesn’t work. And often they can’t understand the manual, because it was written by engineers for engineers and doesn’t help the other 99% at all. So they call the help desk, where a trainee engineer wonders how the customer was ever able to operate a phone well enough to call the help desk.

It is human nature to presume that your existing experience is enough to get you through new experiences. Only when that fails do you look for help. Which means that the signs telling you your existing experience isn’t enough, and that you should read the manual first, weren’t clear enough. Again, a communication problem.

To learn to drive, for example, you need to be taught. The first time you get behind the wheel and get the car moving, all of your previous experience with moving machines (bicycles, mopeds, skateboards) is completely irrelevant. Without very clear guidance, you are likely to wrap your parents’ car around the nearest telephone pole. Which is why the process of getting a driver’s licence is well established and clear. The communication that “you can’t just wing it” is adequate. And the ones who didn’t get the message, fortunately, compete for the Darwin Award and remove themselves from the gene pool.

Design services for humans who think they know

When designing services and products, you have to presume that users presume they know what they are doing. Consequently, if using the service or product actually requires new knowledge, this should be made incredibly explicit. And if it isn’t, the fault lies with the manufacturer or service provider, not the user. This applies to everything from a new DSLR camera to a tourist’s ability to use the automatic ticket vending machine for the public transport system.

Communicating clearly to new users what they should know before they start would radically improve the customer experience in many areas. It isn’t the complexity of some products or services that irritates users, but the fact that their expectations weren’t managed properly. Misunderstandings are frustrating. They make people feel inadequate, because they weren’t warned in any way, shape or form that “you won’t be able to use this ______.” Depending on how important the service or product is, the user will then either upgrade their knowledge and understanding or simply choose something easier to use.

Manage expectations = have happy customers

The customer experience is 100% dependent on whether the product or service manages the customer’s expectations effectively. As a customer of McDonald’s, you won’t be discouraged from walking into Burger King. You think the service is broadly the same, and you’re likely to be right: the expectations were managed by the customer’s previous experience. That would lead one to presume that most burger joints are similar. So if you were to launch a completely different burger joint, you’d have to manage the customer’s expectations explicitly, because otherwise they’d just be disappointed and not buy anything. At the end of the day, it is again “just” a communication problem.

However, there’s a twist. Don’t manage expectations with instructions and signs. Manage them by building the experience so that it guides users seamlessly from their presumptions to a new reality, one step at a time. Look at it as an onboarding process, where every step leads logically to the next, without overwhelming users with too much information, too fast, about something they don’t even know whether they like yet.

(And if a sign is absolutely necessary, it should tell people where to start, not micromanage the process. Like a sign for the restroom.) For more insight, check out this article.

* Read The (impolite expression referring to coitus) Manual

Arriving at London Southend Airport on a beautiful Saturday morning, I proceeded quickly out of the terminal to the train station, where a train was going to take me to central London.

And I was in a hurry, because I’d managed to get out of the terminal as the first passenger, so there were no lines waiting for me.

But before I could get on the train, I needed a ticket. Being naturally asocial, I prefer to interact with screens and terminals rather than people. But since I had never even heard of Southend-on-Sea, I didn’t actually know what ticket I needed, let alone how many zones I had to travel. And once in the city, I knew I would need another ticket for the public transport there, but which version of which combination of tickets did I need?

After staring at the screen for about ten seconds, trying to decipher which combination of tickets would actually suit me, I quickly realised that this would require real effort on my part. And since other passengers from my flight were likely about to arrive, I had to act fast. Knowing from Daniel Kahneman’s book Thinking, Fast and Slow that thinking is hard, I instead turned around, went to the human teller, and simply asked for what I needed by describing what I wanted to do. The teller was immediately able to interpret my context and needs and suggest a ticket combination that would take me to Liverpool St Station and from there to Paddington, and then let me wander around London for the rest of the day.

Unlike the screen interface, which could only present me with a series of pre-defined choices requiring an understanding of local ticket terminology, the human teller could translate what I said into a travel requirement and a ticket combination that met my needs. In short, the human interface could process contextual information and suggest a suitable service. But I wonder: even if the computer terminal had been able to do that, would I have felt comfortable speaking to a screen the way I spoke to a human? After all, I feel embarrassed talking on the phone in public.

Can artificial intelligence become the user interface of choice? In the above example, the human was clearly a better interface than the machine, but can we build a machine that works as efficiently? So much in our everyday lives depends on our context, on what happened before and what will happen next. Being able to ask contextual questions that are correctly interpreted by a “user interface” makes us comfortable with the recommendation that follows. We are sure that we were understood.

So much in our lives depends on the context of where we are and how we feel. Alexa mistaking mumbling during sleep for a purchase order for Tide detergent would not happen with a person listening to someone talking in their sleep. AI that can consider the user’s context as well as the spoken information comes much closer to the human teller in the example above. In most interactions, this would make the exchange between user and machine faster, better and more accurate.

In some ways this may end in dystopia, in the sense that there won’t be a human teller at all any more. And one day, as will probably happen, a bug will enter the system, everyone will get the wrong ticket, and the AI will firmly assure them that everything is correct. The traveller will end up in Liverpool instead of Liverpool St Station. This couldn’t happen with a human teller, who is autonomous and not connected to the grid. On the other hand, the AI could serve ten people at the same time, while humans still have a hard time walking and chewing gum at the same time.

It is thought-provoking to think of AI as UI. It makes the problems AI solves much more tangible and limited, rather than Terminator-frightening. From a user-experience point of view, however, approaching every user interface design not from the combination of options available, but from the contexts the user may be in while interacting with it, would certainly open up alternatives to UI design that just haven’t been considered before. Even before AI shows up for real.