We haven’t yet reached the final state in the design of user interfaces. Even the best user interfaces are not usable by a lot of people: people with severe intellectual impairments, people who have never used a digital interface, or people with low reading skills. For most of them, even a simple smartphone is a difficult challenge. On YouTube you find a lot of videos of autistic children playing on an iPad. The truth is that some autistic people are unable to use even a simplified interface. In fact, you will find many people who can’t operate state-of-the-art interfaces.
Input and Output
The crucial point of every interaction system is the kind of input and output: how do we enter commands, and how do we get results or feedback? So far, all systems are based on the assumption that all people are equal in their ability to send and receive information. Today, users have to adapt to the system; the better way would be for the system to adapt to the user.
Let’s take the intelligent house as an example. I don’t think it will ever become a mass product as long as developers do not radically simplify the interaction with the system. A tablet is often used as the controller, but it is not a good device for this: when you need it, it is in another room, its battery is empty, or it is broken because somebody knocked it off the table. This is the daily life of blind people like me. Murphy’s Law for the blind says: what can be broken will be broken.
But even if the tablet works, the interface can be too complex; the nature of such interfaces is that they require the ability to think abstractly. As a result, the intelligent house is a toy for bored and lazy nerds instead of a help for older people who cannot move or see very well.
The solution is to use the screen interface only for maintenance and service, and instead rely on organic interaction like speech or motion control. With Kinect and similar systems we already have the necessary technology; we only have to integrate it into the house. We need not only organic input but also organic output in the form of non-disruptive feedback. The output has to convey simple messages like “I understood the order”, “I did not understand the order”, “the order was fulfilled”, and so on.
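The small feedback vocabulary above can be sketched as a state mapping. This is a minimal sketch with hypothetical names, assuming each message is delivered both as a short spoken cue and as a haptic pulse count, so the same feedback reaches blind, deaf, and cognitively impaired users alike:

```python
from enum import Enum, auto

class Feedback(Enum):
    """Minimal, non-disruptive feedback states for an organic home interface."""
    UNDERSTOOD = auto()       # "I understood the order"
    NOT_UNDERSTOOD = auto()   # "I did not understand the order"
    FULFILLED = auto()        # "the order was fulfilled"

# Hypothetical multimodal cues: a short utterance plus a haptic pulse count,
# so the feedback never requires reading a screen or abstract thinking.
CUES = {
    Feedback.UNDERSTOOD:     {"speech": "Okay.",                  "pulses": 1},
    Feedback.NOT_UNDERSTOOD: {"speech": "Please say that again.", "pulses": 2},
    Feedback.FULFILLED:      {"speech": "Done.",                  "pulses": 3},
}

def give_feedback(state: Feedback) -> dict:
    """Return the multimodal cue for a state (the output device stays abstract)."""
    return CUES[state]
```

The point of the sketch is the tiny, fixed vocabulary: three states are enough for most household commands, and every state has a redundant, non-visual rendering.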
Systems already exist that raise an alarm if a heart attack is imminent or if a person has fallen. These systems will only be successful if they are as easy to control as a simple toaster. The system has to be so easy that your grandmother can control it, because she is the person who benefits most from it. Many people say that even grandmothers can use an iPad. That’s right, of course, but they are thinking of a healthy and fit person, not of someone who is, for example, visually impaired and who cannot learn to work with a tablet because of dementia or similar conditions.
The next step in the evolution of human-computer interfaces is, in my opinion, tangible interfaces. These interfaces are also called object-based interfaces: you interact with a physical object, and through this object you control the computer. The interaction object is called a token. Tokens come in many forms, for example Lego bricks.
Imagine, for example, a haptic model of your house or flat. The model is a representation of the state of the real building. If you close the window in the model, the real window closes. If you lock the door in the model, the real door locks. When somebody forgot to switch the oven off, you can see it in the model, and when you switch it off in the model, the real oven is switched off.
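The core of such a model is a two-way binding: each token mirrors one real device, and acting on the token drives the device. A minimal sketch, with hypothetical class names rather than a real home-automation API:

```python
class Device:
    """Stands in for a real window, door lock, or oven in the building."""
    def __init__(self, name: str, state: str):
        self.name, self.state = name, state

class Token:
    """A physical token in the haptic model, bound to exactly one real device."""
    def __init__(self, device: Device):
        self.device = device

    def set_state(self, state: str) -> None:
        # Manipulating the token changes the real device ...
        self.device.state = state

    def read_state(self) -> str:
        # ... and the token always reflects the device's current state,
        # so a forgotten oven is visible (or feelable) in the model.
        return self.device.state

oven = Device("oven", "on")
oven_token = Token(oven)
oven_token.read_state()     # the model reveals the oven is still on
oven_token.set_state("off") # switching it off in the model switches the real oven off
```

In a real installation the binding would run over a home-automation bus with sensors and actuators; the sketch only shows the interaction principle, which needs no abstraction from the user at all.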
That may not seem realistic at the moment. But compare it with trying to pack all this information and interaction into a tablet interface, and with designing that interface in a way that even people with low computer skills can use it.
The potential of tangible user interfaces is large, especially for people with speech, cognitive, or reading impairments. They can find or develop new ways of communication and interaction, some of which can already be found in augmentative and alternative communication.
The problem is that even many accessibility experts look at computers from an expert’s perspective. We overlook a group that is not small at all: there are a lot of people out there with low computer skills. They often do little more than write a few e-mails and avoid complex interactions. For such people, tangible interaction can be part of a solution that frees them from their fear of computers.
A disadvantage of today’s TUIs is that they are tied tightly to a specific use case or task. We need development kits that make it easy to adapt such systems to special needs. But I don’t think this is a big problem. Many people do great things with Raspberry Pi, Arduino, and smartphones; it should be easy to transfer this knowledge to tangible user interfaces.
We also need more research in this area. Cognitive and complex disabilities in particular have been neglected in recent years, so we do not know much about the needs of this group. I think we will all benefit from such research. It is true that we can use today’s interfaces, but most people are sometimes, and some people most of the time, unsatisfied with the current state of human-computer interaction.