Tuesday, September 6, 2011

Blog - Paper Reading #3

Pen + Touch = New Tools

Authored by Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, and Bill Buxton

The authors of this paper are a collection of researchers from different backgrounds working at Microsoft. All of the above are researchers with Microsoft, with the exception of Koji Yatani, who was a PhD student at the University of Toronto. This paper was presented at the 23rd annual ACM Symposium on User Interface Software and Technology (UIST) in New York.

Summary

Hypothesis
The authors hypothesized that an effective tool for multimodal manipulation of text, images, and other content could be designed by assigning distinct roles to the dominant and non-dominant hands. Using this as a foundation, the team designed each of the major features of their pen+touch interface.

Methods
It was decided that the non-dominant hand would be responsible for "holding" while the preferred hand was responsible for mode changes. By default, the stylus simply writes on the interactive surface. However, when the off hand is used to hold down a document or other object, new commands become available by way of the stylus. Effectively, the off hand selects the subject on which the pen invokes a tool. The foundation of this paradigm came from a user study in which the team observed people working with pen, paper, and clippings in a real-life setting. They then used the behaviors they observed to craft their pen+touch features.
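To make this division of labor concrete, here is a minimal sketch in Python of how such a dispatcher might behave. The class, event names, and tool hooks are my own invention for illustration, not the authors' actual implementation; the point is only that the off hand's hold state determines whether a pen stroke inks or invokes a tool.

```python
class PenTouchSurface:
    """Toy dispatcher illustrating the pen+touch division of labor:
    the non-dominant hand holds an object to select it, and the pen's
    behavior changes depending on whether something is held."""

    def __init__(self):
        self.held_object = None  # object pinned by the non-dominant hand

    def on_touch_down(self, obj):
        # The off hand holds an object, selecting it for tool use.
        self.held_object = obj

    def on_touch_up(self):
        # Lifting the off hand clears the selection; the pen reverts
        # to its default role as a plain stylus.
        self.held_object = None

    def on_pen_stroke(self, stroke):
        if self.held_object is None:
            # Default mode: the pen simply writes on the surface.
            self.ink(stroke)
        else:
            # Hold + stroke: the pen acts as a tool on the held object,
            # e.g. cutting, copying, or tracing along a straightedge.
            self.apply_tool(self.held_object, stroke)

    def ink(self, stroke):
        print(f"inking stroke {stroke}")

    def apply_tool(self, obj, stroke):
        print(f"applying tool stroke {stroke} to {obj}")


surface = PenTouchSurface()
surface.on_pen_stroke("s1")     # writes by default
surface.on_touch_down("photo")  # hold a photo with the off hand
surface.on_pen_stroke("s2")     # stroke now acts as a tool on the photo
surface.on_touch_up()           # lift: the pen reverts to plain inking
surface.on_pen_stroke("s3")     # writes again
```

Note how releasing the hold is the only "escape" needed; there is no persistent mode for the user to back out of, which is exactly the property praised in the discussion below.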

Results
The result appears to be a very intuitive system for document manipulation on the Microsoft Surface. The suite comes complete with all of the typical means of manipulation, including copying, cropping, and even the ability to use documents as straightedges. The authors note that during testing, many users felt they acquired proficiency with the software very naturally.

Discussion

I found this design highly intriguing, particularly in the simple but intelligent design paradigm behind it. Thinking about my own writing habits, I now recognize that using the off hand to manipulate documents is common to nearly all people. Additionally, the idea of holding an object to select it for tool use is clever in that one can never get stuck in any kind of context menu. It would be virtually impossible to become lost with this software; all one has to do is lift the non-dominant hand to revert the pen to its standard function as a stylus. It stands in contrast to the less intuitive attempts in Hands-On Math to implement bimanual manipulation.
The descriptions of each feature all suggest a potentially excellent piece of software. However, it has only been tested on eleven users, and I expect more rigorous evaluation would be necessary to prove that it does in fact improve productivity. Additionally, the authors identify one problem that persists across most gesture-based systems: while the system feels natural once learned, the operations themselves are not "self-revealing," meaning users require some level of prior instruction.
I feel that this paper takes the right approach. Rather than attempting to weave in features that the designers feel would be useful, they try to create an environment that allows the user to operate naturally. It would be interesting to see whether the designers can somehow increase the intuitiveness of the gestures themselves without compromising the backbone of the project.
