Activity and Process
After completing secondary research on gesture input technologies, we decided to examine several gesture-based systems firsthand to test our assumptions about how gesture interfaces are implemented today. We chose to do an ethnographic study of the Kinect to assess the accuracy of its gesture recognition, the intuitiveness of its interaction model, and how it compares with remote-based interaction models like the Wii's, which rely on a handheld controller rather than hand gestures alone. We were also curious how first-time users fare with gestures. Our expectation was that these observations would help us understand gesture-based interaction styles and show us what works and what needs improvement, which we could then incorporate into the prototypes we will develop for this project.
We visited the Microsoft Store in University Village to interact with the Xbox One's system menus using Kinect gestures, in the same environment where the average customer would receive their first introduction to the interface. Four team members who were new to the Kinect tried it out as first-time users while the others observed; we also observed two store employees attempt to use the Kinect.
Key Observations and Takeaway
Across everyone who tried the system during this study, the most prominent issue was the lack of acknowledgement and feedback. There was no cursor to show where the system thought the user's hand was pointing; the only visual feedback was a very thin box around the current selection. This confused users, who could not tell when their gestures were being recognized. When more than one person was in the vicinity, it was also unclear whose movements the system was responding to. In one instance, a user could not get out of a particular screen and resorted to a workaround they claimed to use often with the Kinect: switching the console off and back on. That example, along with similar observations such as unintentional selections, made it clear that error prevention and recovery matter greatly to us and to the other people using the system. There was also a lack of clarity in how hand movements map to screen space. One positive we noticed was that the system visually distinguished voice commands by showing them in green; while not related to gestures, this is a good example of discoverability.
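To make the missing feedback concrete, the short sketch below shows one way a gesture interface could map a normalized hand position to an on-screen cursor and use a visible dwell state before committing a selection. This is plain Python and is purely illustrative: it does not use the Kinect SDK, and all function names, parameters, and thresholds are our own assumptions rather than anything observed in the store system.

```python
# Illustrative sketch only (not the Kinect SDK): continuous cursor feedback
# from a normalized hand position, plus a visible dwell state before selection.
from dataclasses import dataclass
from typing import Optional, Tuple
import time


@dataclass
class Cursor:
    x: int       # on-screen pixel position of the hand cursor
    y: int
    state: str   # "idle", "hovering", or "selected"


def hand_to_screen(hand_x: float, hand_y: float,
                   screen_w: int = 1920, screen_h: int = 1080) -> Tuple[int, int]:
    """Map a normalized hand position (0.0-1.0) to screen pixels.

    Drawing a cursor at this position every frame is the kind of feedback the
    store system lacked: users could not see where their hand was pointing.
    """
    x = int(max(0.0, min(1.0, hand_x)) * (screen_w - 1))
    y = int(max(0.0, min(1.0, hand_y)) * (screen_h - 1))
    return x, y


def update_cursor(cursor: Cursor, over_target: bool,
                  hover_started: Optional[float], now: float,
                  dwell_seconds: float = 1.5) -> Tuple[Cursor, Optional[float]]:
    """Advance the cursor's feedback state.

    A visible "hovering" state before dwell-to-select lets users see that a
    selection is about to happen and move away to cancel it, which is one way
    to reduce the unintentional selections we observed.
    """
    if not over_target:
        return Cursor(cursor.x, cursor.y, "idle"), None
    if hover_started is None:
        return Cursor(cursor.x, cursor.y, "hovering"), now
    if now - hover_started >= dwell_seconds:
        return Cursor(cursor.x, cursor.y, "selected"), hover_started
    return Cursor(cursor.x, cursor.y, "hovering"), hover_started


if __name__ == "__main__":
    # Simulate a hand resting over a menu tile for about two seconds.
    x, y = hand_to_screen(0.72, 0.40)
    cursor, started = Cursor(x, y, "idle"), None
    for _ in range(4):
        cursor, started = update_cursor(cursor, over_target=True,
                                        hover_started=started, now=time.time())
        print(cursor.state)   # hovering, hovering, hovering, selected
        time.sleep(0.6)
```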
Implications for Design and Moving Forward
These observations gave us a lot to think about when designing systems that use gesture input. Above all, feedback is critical: the system must respond promptly and give immediate feedback on both correct and incorrect actions. This ethnographic observation gave us a good understanding of what works and what does not in gesture input systems, and these findings will become our guiding principles for the interaction design of whichever concept we pursue as the project progresses.