Reference Information
Title: Imaginary Interfaces: Spatial Interaction with Empty Hands and without Visual Feedback
Names of authors: Sean Gustafson, Daniel Bierwirth, Patrick Baudisch
Presentation venue: UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology
Summary:
Current screen-less devices support only buttons and gestures. Screen-less wearable devices allow the smallest form factor and thus the maximum mobility, but pointing is not supported because users have nothing to point at. The authors, however, challenge the notion that spatial interaction requires a screen and propose a way to bring spatial interaction to screen-less devices.
In this paper, the authors present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all of the spatial information is contained in the user's mind and all interaction is done relative to the user's frame of reference. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space.
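The L-shaped cross effectively defines a local coordinate frame: the crossing point is the origin, and the thumb and index finger give two axes. A minimal sketch of how a pointing position could be expressed in that frame, assuming a tracker already supplies 2D positions for the origin, the thumb tip, the index fingertip, and the dominant hand's pointing position (all names and the tracking input are illustrative assumptions, not the authors' implementation):

```python
def l_cross_coords(origin, thumb_tip, index_tip, point):
    """Express `point` in the L-cross frame: u runs along the thumb
    axis, v along the index-finger axis, each scaled so the fingertip
    sits at 1.0. All inputs are (x, y) tuples in camera space."""
    # Basis vectors of the imaginary space, anchored at the cross.
    ux, uy = thumb_tip[0] - origin[0], thumb_tip[1] - origin[1]
    vx, vy = index_tip[0] - origin[0], index_tip[1] - origin[1]
    px, py = point[0] - origin[0], point[1] - origin[1]
    # Solve p = u*U + v*V for (u, v) via Cramer's rule.
    det = ux * vy - uy * vx
    if abs(det) < 1e-9:
        raise ValueError("degenerate cross: thumb and index are collinear")
    u = (px * vy - py * vx) / det
    v = (ux * py - uy * px) / det
    return u, v

# A point halfway along the thumb and a quarter of the way up the
# index finger, with the cross held at an arbitrary screen position.
coords = l_cross_coords((1, 1), (3, 1), (1, 4), (2.0, 2.0))
```

Because the coordinates are relative to the hand itself, the mapping stays stable even as the user moves the cross around, which matches the paper's point that all interaction is done relative to the user's frame of reference.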
With three user studies, the authors investigated the question: to what extent can users interact spatially with a user interface that exists only in their imagination? Participants created simple drawings, annotated existing drawings, and pointed at locations described in imaginary space. The findings suggest that users' visual short-term memory can, in part, replace the feedback conventionally displayed on a screen. The authors also propose a design for such a device, which works by illuminating the user's hands with infrared light, applying a luminance threshold, and discerning the structures that comprise the imaginary interface.
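The thresholding step can be illustrated with a tiny sketch: under IR illumination, nearby hands appear much brighter to the camera than the background, so a per-pixel luminance cutoff separates them. The frame format (rows of 0-255 grayscale values) and the threshold value are assumptions for illustration, not details of the authors' device:

```python
def segment_hands(frame, threshold=128):
    """Binarize a grayscale frame: 1 marks a bright (hand) pixel,
    0 marks background. `frame` is a list of rows of 0-255 ints."""
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

# Toy 2x3 frame: the bright patch on the right stands in for an
# IR-lit hand against a darker background.
frame = [
    [10, 20, 200],
    [15, 220, 230],
]
mask = segment_hands(frame)  # -> [[0, 0, 1], [0, 1, 1]]
```

A real pipeline would then analyze the connected bright regions in the mask to recover the L-cross and the pointing fingertip, which is the "discerning the structures" step the summary mentions.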
Discussion:
The idea and the concept are very interesting. Such methods of interaction are gaining importance; we have already seen the famous Sixth Sense research done at MIT. However, users would have to remember hundreds of gestures to perform various tasks, which could be annoying. Gesture recognition also has to be very accurate: two gestures might look similar, and if the algorithm confuses one for the other, it will frustrate the user. Lastly, consider this situation ten years down the road: everyone wearing a computer and performing gestures in the air, waving and circling their hands. That would be a little weird, I think. Still, if it brings enough convenience, the rest may not really matter.