In today’s world, with ever more systems and services constantly being released, designing human-machine interaction is a must. The problem with interaction design is not that we are bad at interaction. Social skills vary across individuals, but watching two strangers, possibly without even a shared language, build a mutual understanding step by step clearly demonstrates the human ability to interact, and thus to shape interaction.
However, designing human-computer interaction is not the same as using various means of interaction and continually adapting them according to their success or limitations in a specific interaction. Rather, programming amounts to determining the interaction capabilities before the actual interaction takes place. This is far from the sensitive, in-the-moment development of the means of interaction seen in human communication. Certainly, people can be taught how to respond to system behaviour. But we expect more ease of use from self-services than that: users should not have to take courses in order to use a new service.
Finding the user interfaces that make untrained people ready to interact with a system is not easy. Usability testing is one means of ensuring that working systems actually work with humans. But tests are not in themselves the means to reach good interaction designs.
What I presented as part of the SMART Seminar Series was a system for facilitating so-called Wizard-of-Oz experiments. In such experiments, users believe they are interacting with a functioning system, while in reality a human test leader controls the system’s output. This can be used to test interaction schemes before the system’s interpretative parts are programmed.
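The core of such an experiment is a relay: user utterances go to a hidden human wizard instead of an interpretation module, and the exchange is logged for later analysis. The following is a minimal sketch of that idea; all names (`run_woz_session`, the callback parameters) are hypothetical, not part of the system presented at the seminar.

```python
# Minimal Wizard-of-Oz relay sketch (hypothetical names): the user believes
# they are talking to an automated system, but every reply is supplied by a
# hidden human "wizard" rather than by interpretative code.

from typing import Callable, List, Tuple


def run_woz_session(
    get_user_input: Callable[[], str],
    get_wizard_reply: Callable[[str], str],
    max_turns: int = 10,
) -> List[Tuple[str, str]]:
    """Relay each user utterance to the wizard and log the exchange.

    The transcript can later be analysed to refine the interaction
    scheme before any interpretative parts are programmed.
    """
    transcript: List[Tuple[str, str]] = []
    for _ in range(max_turns):
        utterance = get_user_input()
        if utterance.lower() in ("quit", "exit"):
            break
        reply = get_wizard_reply(utterance)  # the human stands in for the system
        transcript.append((utterance, reply))
    return transcript


if __name__ == "__main__":
    # Scripted demo: canned user turns and a wizard following a simple scheme.
    turns = iter(["book a ticket", "to Wollongong", "quit"])
    scheme = {
        "book a ticket": "Where would you like to go?",
        "to Wollongong": "One ticket to Wollongong. Confirm?",
    }
    log = run_woz_session(
        get_user_input=lambda: next(turns),
        get_wizard_reply=lambda u: scheme.get(u, "Sorry, please rephrase."),
    )
    print(log)
```

In a real experiment the two callbacks would be wired to the user’s terminal and the wizard’s hidden console; injecting them as functions here simply keeps the sketch testable.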
However, we also stress the possibility of softer experiments, in which the human wizard is allowed to depart from the pre-designed interaction schemes. This is because it is often in the very act of interaction that we get ideas for specific output that may be easier for users to understand. This entails articulation in the GUI domain rather than only outputting text (and computer speech).
Not every articulation can be generated by a manually controlled system. Even so, it is easier to test and develop an interaction scheme with such wizardry than by reprogramming the system before every test.
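Articulating in the GUI domain, as described above, means the wizard’s reply need not be plain text: it can be a structured instruction to render a widget. A small sketch of that idea, with entirely hypothetical type and widget names, might look like this:

```python
# Hypothetical sketch: the wizard answers either with free text or with a
# structured GUI action (e.g. a list or a confirmation dialog), so output is
# not limited to pre-designed textual responses.

from dataclasses import dataclass
from typing import List, Union


@dataclass
class TextReply:
    text: str


@dataclass
class GuiReply:
    widget: str         # e.g. "map", "list", "confirm_dialog" (illustrative)
    payload: List[str]  # items to render in the widget


WizardReply = Union[TextReply, GuiReply]


def render(reply: WizardReply) -> str:
    """Turn a wizard reply into what the user's screen would show."""
    if isinstance(reply, TextReply):
        return reply.text
    return f"[{reply.widget}] " + ", ".join(reply.payload)


print(render(TextReply("Where to?")))                      # Where to?
print(render(GuiReply("list", ["Sydney", "Wollongong"])))  # [list] Sydney, Wollongong
```

The point of the structured form is that GUI articulations invented mid-session can be captured in the transcript and, if they work, later promoted into the designed interaction scheme.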
Using such an ultra-soft methodology, where the designer has to take the role of the system, gives insights into the human-computer dialogue that are hard to gain outside the conversation. That said, it should be admitted that many systems developers do not even know how to use ordinary usability testing. This reflects a certain kind of systems thinking that Graham Harris has written some blog posts about: seeing people interact, if only with a machine, evokes unruly complexities that system designers do not regard as belonging to the system development process.
We argue to the contrary: the various ways of interacting with prospective users give insights that should shape the development process.
Professor John Sören Petersson visited the SMART Infrastructure Facility on May 28th, 2015, presenting the research discussed in this post to an audience from across the university as part of the SMART Seminar Series. To read more about his presentation, including further links, visit the official web report.