GestureWiz is a rapid prototyping environment for designers to create and test arbitrary, multi-modal gestures without the need for programming or training data.

A research paper about GestureWiz was published at the 2018 ACM Conference on Human Factors in Computing Systems (CHI 2018).
University of Michigan
Team of 3
Mar–May 2017
My roles
Team/Project Lead
User Research
System Design
How the process unfolded
Literature Review
Competitive Analysis
Iterative Implementation (2×)
High-Fidelity Prototype
Crowd Worker Experiments
User Study
Human-Powered Gestures.
Designing and testing gesture-based applications is far from trivial, mostly because there is no easy way to create arbitrary, multi-modal gestures. Existing solutions usually require coding skills or large amounts of training data, or are restricted to a single modality (e.g., touch).

GestureWiz lets designers create and test gestures without training data or the need to write code. Say a designer wants to prototype mid-air gestures for a HoloLens application. First, she records and saves a set of template gestures using a webcam or Microsoft Kinect. Then she records a gesture that should be recognized, which is automatically posted to Amazon Mechanical Turk. The recruited crowd workers compare the gesture to the template set and select the correct match. Simple as that.
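GestureWiz relies on people rather than algorithms for recognition, but the selections of multiple crowd workers still have to be combined into one answer. A minimal sketch of one plausible aggregation step, a simple majority vote (the function and gesture names are illustrative, not taken from the actual system):

```python
from collections import Counter

def aggregate_votes(worker_answers):
    """Majority vote over the template IDs selected by crowd workers.

    worker_answers: list of template IDs, one per worker.
    Returns the winning template ID (ties broken by first occurrence).
    """
    counts = Counter(worker_answers)
    winner, _ = counts.most_common(1)[0]
    return winner

# Hypothetical example: three workers compare the recorded query gesture
# against the template set and each select the template they believe matches.
answers = ["swipe-left", "swipe-left", "circle"]
print(aggregate_votes(answers))  # prints "swipe-left"
```

With more workers per gesture, the same vote also yields a confidence signal (share of workers agreeing with the winner), which could feed the accuracy numbers shown in the designer UI.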

We discovered the need for such a system through a literature review and competitive analysis of solutions like $1, JackKnife, and Zensors, none of which supports arbitrary, multi-modal gestures without programming or training data.
The UI for designers to record and test sets of multi-modal gestures. The gestures on the right are shown as animated GIFs; the numbers below them indicate recognition accuracy and latency. The star on the left is a touch gesture to be recognized. The green outline shows the selection of the human recognizer.
An Interface for Crowd Workers.
We created sketches, storyboards, and wireframes of potential user interfaces for the crowd workers on Amazon Mechanical Turk as well as for the designers creating the gestures. The former turned out to be the trickier part; the solution we arrived at was an interface that showed the whole template set on the right and a live-streamed gesture to be recognized on the left. To evaluate our designs, we created a first high-fidelity prototype and conducted experiments with remote crowd workers and three different gesture sets. We also carried out an in-lab user study with 11 interaction designers, whom we asked to create and test a gesture set for a slideshow application.
The UI for the human recognizers—from sketch to high-fidelity prototype. It introduces a gamification feature as a response to relatively poor accuracy and latency in our first round of studies.
Improving GestureWiz.
Based on the first round of studies, we concluded that we needed a different worker UI that shows only one gesture template at a time rather than the whole gesture set. The latter was confusing for crowd workers, since templates are animated GIFs, and resulted in suboptimal recognition performance. We also decided to add a gamification component to the system to further improve recognition quality: crowd workers can earn a bonus by being especially quick and accurate. We validated these new design decisions in a second round of crowd worker experiments with an improved prototype. Additionally, we conducted a second user study with another 12 interaction designers, who were again asked to design gestures for a slideshow application, this time in a co-creation setting.
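The gamification rule can be thought of as a simple predicate over correctness and speed. A hypothetical sketch, assuming the bonus requires agreeing with the consensus answer within a time limit (the threshold value and function names are illustrative; the source does not specify the actual bonus criteria):

```python
def qualifies_for_bonus(selected, consensus, latency_s, max_latency_s=2.0):
    """Return True when a worker's selection matches the consensus
    answer AND arrived within the latency threshold (in seconds)."""
    return selected == consensus and latency_s <= max_latency_s

# Fast and correct -> bonus; correct template but too slow -> no bonus.
print(qualifies_for_bonus("swipe-left", "swipe-left", 1.4))  # prints True
print(qualifies_for_bonus("swipe-left", "swipe-left", 5.0))  # prints False
```

Tying the bonus to both speed and agreement nudges workers toward exactly the two metrics the designer UI surfaces: latency and accuracy.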

The interaction designers we tested with appreciated that GestureWiz is quick and easy to use, that it is well suited for resolving conflicts between ambiguous gestures, and that it can support the prototyping process for gesture-based apps.