Sensing Foot Gestures from the Pocket
Authored by Jeremy Scott, David Dearman, Koji Yatani, and Khai N. Truong.
The authors are students and faculty at the University of Toronto. The paper was published in UIST '10, the Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology.
Summary
Hypothesis
The authors hypothesize that foot gestures constitute a robust input method in certain hands-free applications.
Methods
There were two studies conducted over the course of this experiment. The first was a calibration study and proof of concept. Participants were asked to indicate when they felt their foot had reached different target angles for each of the experiment's four types of foot motion (plantar flexion, dorsiflexion, heel rotation, and toe rotation). Each participant was fitted with a "rigid foot model" that was tracked by six motion-capture cameras, and the test measured both how accurately participants could hit the target angles and their general range of motion. The second study replaced the battery of cameras with the accelerometer in a mobile phone: six participants worked through a variety of selection scenarios while their motion data was recorded, roughly as sketched below.
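The paper does not publish its recording pipeline, so purely as an illustration, here is a minimal sketch of the kind of processing the second study implies: turning a window of raw accelerometer samples into a fixed-length feature vector. The feature choices and function names are my own assumptions, not the authors'.

```python
import numpy as np

def extract_features(window):
    """Summarize one window of accelerometer samples (an N x 3 array of
    x/y/z readings) as a fixed-length feature vector. The specific
    features (per-axis mean, variance, min, max) are a guess at what a
    gesture recognizer might use, not what the paper used."""
    window = np.asarray(window, dtype=float)
    return np.concatenate([
        window.mean(axis=0),  # average orientation per axis
        window.var(axis=0),   # signal energy per axis
        window.min(axis=0),
        window.max(axis=0),
    ])

# Example: a 2-second gesture sampled at 100 Hz gives 200 samples,
# which collapse to a 12-dimensional feature vector.
rng = np.random.default_rng(0)
gesture = rng.normal(size=(200, 3))
print(extract_features(gesture).shape)  # (12,)
```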
Content
This paper attempts to demonstrate the feasibility of foot gestures as a mode of hands- and eyes-free interaction with our mobile devices. The text describes two experiments. The first examined users' ability to perform foot gestures with a functional level of accuracy; heel rotation and plantar flexion (angling the foot toward the floor) were found to be the most effective. The second examined whether a mobile phone's accelerometer could be used to recognize the foot gestures. Using a Naive Bayes classifier on the accelerometer data, the authors achieved around 86% accuracy.
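The paper names the classifier but not an implementation; as a rough sketch of what such gesture classification could look like, assuming feature vectors like the ones above and using scikit-learn's GaussianNB (my choice of library, not the authors'):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Hypothetical stand-in data: one feature vector per recorded gesture,
# labeled with its gesture class (e.g. 0 = heel rotation, 1 = plantar
# flexion, ...). Real data would come from the phone's accelerometer.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 12))    # 120 gestures, 12 features each
y = rng.integers(0, 4, size=120)  # 4 gesture classes

# Naive Bayes with Gaussian feature likelihoods, scored with 10-fold
# cross-validation. On this random data accuracy hovers near chance;
# the paper reports roughly 86% on real gesture data.
clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean cross-validated accuracy: {scores.mean():.2%}")
```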
Discussion
I believe the authors achieved their purpose in this paper: they demonstrated that foot gestures can be used as input to a handheld device. I am not convinced that it would receive any market attention, though. While the idea is to keep the hands free, the feet are just as often in use. Additionally, while the users in the study operated with a reasonable degree of accuracy, there was nothing to indicate that they could navigate an imaginary radial menu using their feet.
The only way I could see myself using something like this is if both foot and hand gestures were optionally available. I do not like the thought of having to stop walking to input a gesture to my iPod. And while foot gestures might be useful in the car, as described in the paper, driving would force you to select with the non-dominant foot, which would be far less accurate and comfortable. This circumstance was specifically not tested, and I believe the results for the left foot would have produced a substantially different data set.