TouchTools: Leveraging Familiarity and Skill with Physical Tools to Augment Touch Interaction

The core idea behind TouchTools is to draw upon users' familiarity and motor skill with tools from the real world, and to bring them to interactive use on computers. Specifically, users replicate a tool's corresponding real-world grasp and press it to the screen as though the tool were physically present. The system recognizes this pose and instantiates the virtual tool as if it were being grasped at that position (Figure 1 and Video Figure). Users can then translate, rotate, and otherwise manipulate the tool as they would its physical counterpart. For example, a marker can be moved to draw, and a camera's shutter button can be pressed to take a photograph.
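One way to picture the recognition step is as template matching over the contact pattern a grasp leaves on the screen. The sketch below is purely illustrative and is not the paper's classifier: the tool names, touch counts, and spread values are assumptions chosen for the example.

```python
import math

# Hypothetical tool templates (illustrative values, not from the paper):
# each maps a tool to the number of screen contacts its grasp produces
# and the approximate spread (max pairwise distance) of those contacts.
TOOL_TEMPLATES = {
    "marker": {"touches": 1, "spread": 0.0},   # pen grip: single contact
    "camera": {"touches": 2, "spread": 90.0},  # simplified two-hand framing
    "eraser": {"touches": 4, "spread": 60.0},  # flat-hand wipe
}

def spread(points):
    """Maximum pairwise distance between touch points."""
    if len(points) < 2:
        return 0.0
    return max(math.dist(a, b) for a in points for b in points)

def classify_grasp(points):
    """Return the template whose touch count matches the observed
    contacts and whose spread is closest to the observed spread."""
    candidates = [(name, t) for name, t in TOOL_TEMPLATES.items()
                  if t["touches"] == len(points)]
    if not candidates:
        return None
    observed = spread(points)
    return min(candidates, key=lambda c: abs(c[1]["spread"] - observed))[0]

# Four clustered contacts, as from a flat hand pressed to the screen:
print(classify_grasp([(0, 0), (20, 5), (40, 8), (55, 10)]))  # eraser
```

Once a tool is classified, its virtual counterpart would be rendered at the centroid of the contacts and updated as they translate and rotate.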

As when we use our hands in the real world, this approach provides fast and fluid mode switching, which is generally cumbersome in today's interactive environments. Contemporary applications often expose a toolbar that lets users toggle between modes (e.g., pointer, pen, and eraser modes) or require a special physical tool, such as a stylus. TouchTools leverages the natural modality of our hands, rendering these accessories superfluous.

Further, the gestures employed on today's touch devices are relatively simplistic. Most pervasive is the chording of the fingers. For example, a "right click" can be triggered with a two-finger tap. On some platforms, moving the cursor vs. scrolling is achieved with one- or two-finger translations, respectively. On some Apple products, four-finger swipes let users switch between desktops or applications. Other combinations of finger gestures exist, but they generally share one commonality: the number of fingers parameterizes the action. This should be rather startling, as very few actions we perform in the real world rely on poking with different numbers of fingers.
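The pattern being critiqued here can be made concrete in a few lines: in conventional touch interfaces, the finger count is essentially the only parameter distinguishing one action from another. The mappings below are a simplified sketch of common platform conventions, not any specific vendor's API.

```python
# Simplified sketch of finger-count-parameterized gesture dispatch,
# the convention the text critiques. Mappings are illustrative.
GESTURES = {
    (1, "tap"):  "select",
    (2, "tap"):  "right_click",
    (1, "drag"): "move_cursor",
    (2, "drag"): "scroll",
    (4, "drag"): "switch_desktop",
}

def dispatch(finger_count, motion):
    """Resolve an action purely from finger count and motion type."""
    return GESTURES.get((finger_count, motion), "ignored")

print(dispatch(2, "tap"))   # right_click
print(dispatch(2, "drag"))  # scroll
```

Note that the hand's shape, orientation, and grasp carry no information in this scheme; TouchTools' point is that this discards most of the dexterity users already have.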

Research Team: Chris Harrison, Robert Xiao, Julia Schwarz, and Scott E. Hudson


Chris Harrison, Robert Xiao, Julia Schwarz, and Scott E. Hudson. 2014. TouchTools: leveraging familiarity and skill with physical tools to augment touch interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). Association for Computing Machinery, New York, NY, USA, 2913–2916. DOI:https://doi.org/10.1145/2556288.2557012

Additional Media

Other than the paper PDF, all media on this page is shared under a Creative Commons BY 3.0 license.