But before I could get on the train, I needed a ticket. Being naturally asocial, I prefer to interact with screens and terminals rather than with people. But since I had never even heard of Southend-on-Sea, I didn’t actually know what ticket I needed, let alone how many zones I had to travel through. And once in the city, I knew I would need another ticket for the public transport there, but which version of which combination of tickets did I need?
After staring at the screen for about 10 seconds, trying to decipher which combination of tickets would actually suit me, I quickly realised that this would require real effort on my part. And since other passengers from my flight were likely about to arrive, I had to act fast. Because we know from Daniel Kahneman’s book Thinking, Fast and Slow that thinking is hard, I instead turned around, went to the human teller and simply asked for what I needed by describing what I wanted to do. The human teller was immediately able to interpret my context and needs and suggest a ticket combination that would take me to Liverpool St Station, from there to Paddington, and then let me wander around London for the rest of the day.
Unlike the screen interface, which could only present me with a series of pre-defined choices that required an understanding of local ticket terminology, the human teller could translate what I said into a travel requirement and a ticket combination that met my needs. In short, the human interface could process contextual information and suggest a suitable service. But I wonder: even if the computer terminal had been able to do that, would I have felt comfortable speaking to a screen the way I spoke to a human? After all, I feel embarrassed talking on the phone in public.
Can artificial intelligence become the user interface of choice? In the example above, the human was clearly a better interface than the machine, but can we build a machine that works as efficiently? So much in our everyday lives depends on our context, on what happened before and what will happen next. Being able to ask contextual questions that are correctly interpreted by a “user interface” makes us comfortable with the recommendation that follows. We are sure that we have been understood.
So much in our lives depends on the context of where we are and how we feel. Alexa mishearing someone’s sleep-talk as a purchase order for Tide detergent is a mistake no person listening to a sleeper would make. AI that can consider the user’s context as well as the spoken information comes much closer to the human teller in the example above. In most cases, this would make the interaction between user and machine faster, smoother and more accurate.
In some ways this may end in dystopia, in the sense that there won’t be a human teller at all anymore. And then, as will probably happen one day, a bug will enter the system, and everyone will get the wrong ticket along with firm assurances from the AI that everything is correct. The traveller will end up in Liverpool instead of at Liverpool St Station. This couldn’t happen with a human teller, who is autonomous and not connected to the grid. On the other hand, the AI could serve ten people at the same time, while humans still have a hard time walking and chewing gum at the same time.
It is thought-provoking to think of AI as UI. It makes the problems AI solves much more tangible and limited, rather than Terminator-frightening. From a user-experience point of view, however, approaching every user interface design not in terms of which combinations of options are available, but in terms of which contexts the user might be in while interacting with it, would certainly open up alternatives in UI design that just haven’t been considered before. Even before AI shows up for real.