The Future Interfaces Group (FIG) is an interdisciplinary research lab within the Human-Computer Interaction Institute at Carnegie Mellon University. We create new sensing and interface technologies that aim to make interactions between humans and computers more fluid, intuitive and powerful. These efforts often lie in emerging use modalities, such as wearable computing, touch interaction and gestural interfaces.
A new method that estimates a finger's angle relative to the screen. Our approach works in tandem with conventional multitouch finger tracking, offering two additional analog degrees of freedom for a single touch point. We prototyped our solution on two platforms, a smartphone and a smartwatch, each fully self-contained and operating in real time.
Xiao, R., Schwarz, J. and Harrison, C. 2015. Estimating 3D Finger Angle on Commodity Touchscreens. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (Madeira, Portugal, November 15 - 18, 2015). ITS '15. ACM, New York, NY.
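To make the intuition concrete, here is a minimal sketch (not the pipeline from the paper): a finger's yaw can be read from the second-order moments of the capacitive touch blob, since a tilted finger produces an elongated contact patch, and the blob's elongation gives a rough hint of pitch.

```python
import numpy as np

def blob_angle(cap_img):
    """Estimate touch-blob yaw (degrees) from weighted image moments.
    cap_img: small 2D array of capacitance values around one touch.
    Illustrative only; the published method differs."""
    ys, xs = np.nonzero(cap_img > 0)
    w = cap_img[ys, xs].astype(float)
    cx, cy = np.average(xs, weights=w), np.average(ys, weights=w)
    mu20 = np.average((xs - cx) ** 2, weights=w)
    mu02 = np.average((ys - cy) ** 2, weights=w)
    mu11 = np.average((xs - cx) * (ys - cy), weights=w)
    yaw = 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))
    # A steep (near-vertical) finger yields a round blob, a shallow one
    # an elongated blob, so elongation is a rough proxy for pitch.
    eigs = np.linalg.eigvalsh([[mu20, mu11], [mu11, mu02]])
    elongation = eigs[1] / max(eigs[0], 1e-9)
    return yaw, elongation
```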
A technique that uses existing, low-level touchscreen data, combined with machine learning classifiers, to provide real-time authentication and even identification of users. In our user study, authentication across twenty participants reached 99.6% accuracy. For identification, the software achieved 94.0% accuracy across all twenty users, and 98.2% on groups of four, simulating family use.
Guo, A., Xiao, R. and Harrison, C. 2015. CapAuth: Identifying and Differentiating User Handprints on Commodity Capacitive Touchscreens. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (Madeira, Portugal, November 15 - 18, 2015). ITS '15. ACM, New York, NY. 59-62.
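A toy sketch of the classification step follows, with made-up data and a hypothetical 15 x 27 sensor grid; the study's actual features and classifier may differ.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical enrollment data: flattened capacitive handprint images,
# one label per enrolled user. Real data would come from the touchscreen
# controller's raw capacitive frames.
X_train = np.random.rand(200, 15 * 27)   # 200 handprints, 15x27 grid
y_train = np.random.randint(0, 20, 200)  # 20 enrolled users

clf = SVC(probability=True).fit(X_train, y_train)

def authenticate(handprint, claimed_user, threshold=0.9):
    """Accept only if the model is confident the handprint matches the
    claimed user; lowering the threshold trades security for convenience."""
    probs = clf.predict_proba(handprint.reshape(1, -1))[0]
    return probs[list(clf.classes_).index(claimed_user)] >= threshold
```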
By fusing gaze and gesture into a unified and fluid interaction modality, we can enable rapid, precise and expressive free-space interactions that mirror natural use. Although both approaches are independently poor for pointing tasks, combining them can achieve pointing performance superior to either method alone. This opens new interaction opportunities for gaze and gesture systems alike.
Chatterjee, I., Xiao, R. and Harrison, C. 2015. Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions. In Proceedings of the 17th ACM International Conference on Multimodal Interaction (Seattle, Washington, November 9 - 13, 2015). ICMI '15. ACM, New York, NY. 131-138.
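The division of labor is easy to sketch (hypothetical coordinates and gain; not the study's exact technique): gaze coarsely positions the cursor, and relative hand motion, measured from where the gesture began, refines it.

```python
def fused_cursor(gaze_xy, hand_xy, hand_anchor_xy, gain=0.3):
    """Coarse-to-fine pointing: gaze snaps the cursor near the target,
    then hand displacement since the pinch began nudges it precisely.
    A gain below 1 turns large, comfortable hand motions into fine
    on-screen cursor motion."""
    dx = (hand_xy[0] - hand_anchor_xy[0]) * gain
    dy = (hand_xy[1] - hand_anchor_xy[1]) * gain
    return gaze_xy[0] + dx, gaze_xy[1] + dy
```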
A sensing technology that allows a smartwatch to know what object the user is touching. When the user operates an electrical or electromechanical object, the object's characteristic electromagnetic (EM) signals propagate through the user's body, where the watch can detect them and use them to identify the object on touch.
Laput, G., Yang, C., Xiao, R., Sample, A. and Harrison, C. 2015. EM-Sense: Touch Recognition of Uninstrumented, Electrical and Electromechanical Objects. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (Charlotte, North Carolina, November 8 - 11, 2015). UIST '15. ACM, New York, NY. 157-166.
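A minimal sketch of the recognition idea (illustrative spectral features and nearest-neighbor matching; the published system's signal chain and classifier differ):

```python
import numpy as np

def em_features(samples, bins=64):
    """Coarse log-magnitude spectrum of the EM signal sensed through the
    body; pooling into bands makes the feature tolerant of small shifts."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return np.log1p([band.mean() for band in np.array_split(spectrum, bins)])

def recognize(samples, templates):
    """templates: dict mapping object name -> enrolled feature vector."""
    f = em_features(samples)
    return min(templates, key=lambda name: np.linalg.norm(f - templates[name]))
```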
Tomo is a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user’s arm. We ultimately envision this technique being integrated into future smartwatches, allowing hand gestures and direct touch manipulation to work synergistically to support interactive tasks on small screens.
Zhang, Y. and Harrison, C. 2015. Tomo: Wearable, Low-Cost, Electrical Impedance Tomography for Hand Gesture Recognition. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (Charlotte, North Carolina, November 8 - 11, 2015). UIST '15. ACM, New York, NY. 167-173.
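Downstream of the EIT hardware, gesture recognition can be sketched as classification over the vector of cross-sectional impedance measurements (made-up data; an eight-electrode band is assumed):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

N_ELECTRODES = 8
# Adjacent-drive EIT yields N*(N-3) usable voltage measurements per frame.
N_MEAS = N_ELECTRODES * (N_ELECTRODES - 3)

# Hypothetical enrollment: impedance frames labeled with the hand gesture
# the wearer held while each frame was captured.
X = np.random.rand(100, N_MEAS)
y = np.random.choice(["fist", "open", "pinch", "point"], size=100)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(np.random.rand(1, N_MEAS)))   # e.g., ['pinch']
```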
3D-Printed Hair (2015)
A technique for 3D printing hair, fibers and bristles by exploiting the stringing phenomenon inherent in 3D printers that use fused deposition modeling. This technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware.
Laput, G., Chen, X. and Harrison, C. 2015. 3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers and Bristles. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (Charlotte, North Carolina, November 8 - 11, 2015). UIST '15. ACM, New York, NY.
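The trick can be sketched directly in G-code: deposit a small molten anchor blob, then make a fast, barely-extruding move away so the filament strings out into a strand. All parameter values below are illustrative and printer-specific; relative extrusion (M83) is assumed.

```python
def hair_strand_gcode(x, y, z, length=12.0, blob_e=0.3, wisp_e=0.02,
                      pull_speed=4000):
    """Emit G-code for one strand rooted at (x, y, z). Assumes relative
    extrusion (M83); tune extrusion amounts and speeds per printer."""
    return "\n".join([
        f"G1 X{x:.2f} Y{y:.2f} Z{z:.2f} F1500",   # travel to the strand root
        f"G1 E{blob_e:.3f} F120",                  # extrude an anchor blob
        # The rapid pull exploits stringing: the blob stays put while a
        # thin fiber is drawn upward behind the departing nozzle.
        f"G1 Z{z + length:.2f} E{wisp_e:.3f} F{pull_speed}",
    ])

print(hair_strand_gcode(50.0, 50.0, 0.2))
```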
Zensors is a new sensing approach that fuses real-time human intelligence from online crowd workers with automatic approaches to provide robust, adaptive, and readily deployable intelligent sensors. With Zensors, users can go from question to live sensor feed in less than 60 seconds. Through our API, Zensors can enable a variety of rich end-user applications and moves us closer to the vision of responsive, intelligent environments.
Laput, G., Lasecki, W., Wiese, J., Xiao, R., Bigham, J. and Harrison, C. 2015. Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds. In Proceedings of the 33rd Annual SIGCHI Conference on Human Factors in Computing Systems (Seoul, Korea, April 18 - 23, 2015). CHI '15. ACM, New York, NY. 1935-1944.
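The crowd-to-classifier handoff at the heart of this approach can be sketched as follows; the classifier and crowd objects are hypothetical stand-ins for the real services.

```python
def zensor_answer(image, question, classifier, crowd, confidence=0.9):
    """Answer one sensor query. While the learned model is unsure, fall
    back to crowd workers, and feed their answers back as training
    labels so the sensor gradually becomes fully automatic."""
    label, p = classifier.predict(image)   # assumed: returns (label, confidence)
    if p >= confidence:
        return label
    label = crowd.ask(image, question)     # assumed crowd-worker API call
    classifier.train(image, label)         # crowd answers bootstrap the model
    return label
```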
Acoustruments are low-cost, passive, and powerless mechanisms, made from plastic, that can bring tangible functionality to handheld devices. The operational principles were inspired by wind instruments, which produce expressive musical output despite being simple in physical design. Through a structured exploration, we built an expansive vocabulary of design primitives, providing building blocks for the construction of tangible interfaces utilizing smartphones' existing audio functionality (the speaker and microphone).
Laput, G., Brockmeyer, E., Hudson, S. and Harrison, C. 2015. Acoustruments: Passive, Acoustically-Driven, Interactive Controls for Handheld Devices. In Proceedings of the 33rd Annual SIGCHI Conference on Human Factors in Computing Systems (Seoul, Korea, April 18 - 23, 2015). CHI '15. ACM, New York, NY. 2161-2170.
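One way to sense a control's state acoustically (a sketch under simplified assumptions, not the paper's exact signal design) is to drive a frequency sweep through the speaker and match the spectrum that reaches the microphone against per-state templates:

```python
import numpy as np

def probe_sweep(sr=44100, f0=8000, f1=20000, dur=0.1):
    """Linear frequency sweep to play through the speaker into the duct."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    return np.sin(2 * np.pi * (f0 + (f1 - f0) * t / (2 * dur)) * t)

def control_state(recorded, templates):
    """Each mechanical state (button pressed, slider position, ...) filters
    the sweep differently; pick the enrolled template nearest in spectrum.
    recorded: microphone samples captured while the sweep played."""
    spectrum = np.abs(np.fft.rfft(recorded * np.hanning(len(recorded))))
    return min(templates, key=lambda s: np.linalg.norm(spectrum - templates[s]))
```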
Skin Buttons (2014)
Tiny projectors integrated into the smartwatch to render icons on the user’s skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size. Through a series of experiments, we show that these “skin buttons” can have high touch accuracy and recognizability, while being low cost and power-efficient.
Laput, G., Xiao, R., Chen, X., Hudson, S. and Harrison, C. 2014. Skin Buttons: Cheap, Small, Low-Power and Clickable Fixed-Icon Laser Projections. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, Hawaii, October 5 - 8, 2014). UIST '14. ACM, New York, NY. 389-394.
Air+Touch is a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality more expressive than either alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events.
Chen, X., Schwarz, J., Harrison, C., Mankoff, J. and Hudson, S. 2014. Air+Touch: Interweaving Touch & In-Air Gestures. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, Hawaii, October 5 - 8, 2014). UIST '14. ACM, New York, NY. 519-525.
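A sketch of the segmentation logic for one case, in-air motion followed by touch (the hover tracker and path recognizer are hypothetical):

```python
class AirThenTouch:
    """Touch events segment in-air input: the hover path recorded before
    a touch-down is interpreted together with that touch, e.g., circling
    above the screen before tapping could trigger a secondary action."""

    def __init__(self, recognize_path):
        self.recognize_path = recognize_path   # hypothetical: path -> gesture
        self.path = []

    def on_hover(self, x, y, z):
        self.path.append((x, y, z))            # finger tracked above screen

    def on_touch_down(self, x, y):
        gesture = self.recognize_path(self.path)
        self.path = []                          # the touch closes the segment
        return ("touch", x, y, gesture)
```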
Toffee is a sensing approach that extends touch interaction beyond the small confines of a mobile device and onto ad hoc adjacent surfaces, most notably tabletops. This is achieved using a novel application of acoustic time differences of arrival (TDOA) correlation. This enables radial interactions in an area many times larger than a mobile device.
Xiao, R., Lew, G., Marsanico, J., Hariharan, D., Hudson, S. and Harrison, C. 2014. Toffee: Enabling Ad Hoc, Around-Device Interaction with Acoustic Time-of-Arrival Correlation. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices and Services (Toronto, Canada, September 23 - 26, 2014). MobileHCI '14. ACM, New York, NY. 67-76.
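The core measurement is easy to sketch: cross-correlate the signals from two piezo sensors to find a tap's arrival-time difference, then convert it to a bearing under a far-field assumption. The 900 m/s wave speed below is an illustrative, material-dependent value.

```python
import numpy as np

def tdoa(sig_a, sig_b, sr=96000):
    """Arrival-time difference between two sensor channels via
    cross-correlation. A positive lag means channel A is the delayed
    copy, i.e., the tap landed closer to sensor B."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / sr

def bearing_deg(delta_t, spacing=0.10, wave_speed=900.0):
    """Far-field bearing from one sensor pair: sin(theta) = v * dt / d,
    with d the sensor spacing in meters."""
    s = np.clip(wave_speed * delta_t / spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```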
We propose that touch gesture design be inspired by the manipulation of physical tools from the real world. In this way, we can leverage user familiarity and fluency with such tools to build a rich set of gestures for touch interaction. With only a few minutes of training on a proof-of-concept system, users were able to summon a variety of virtual tools by replicating their corresponding real-world grasps.
Harrison, C., Xiao, R., Schwarz, J. and Hudson, S. 2014. TouchTools: Leveraging Familiarity and Skill with Physical Tools to Augment Touch Interaction. In Proceedings of the 32nd Annual SIGCHI Conference on Human Factors in Computing Systems (Toronto, Canada, April 26 - May 1, 2014). CHI '14. ACM, New York, NY. 2913-2916.
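A toy version of the grasp-matching step (the recognizer in the paper is more robust; the signature and lookup table here are illustrative):

```python
import numpy as np

def grasp_signature(points):
    """Translation- and rotation-invariant summary of a multi-finger
    touch: contact count plus coarsely binned pairwise distances.
    Distinct grasps (pen pinch, flat palm, two-finger eraser press)
    land in distinct bins."""
    pts = np.asarray(points, dtype=float)
    dists = sorted(np.linalg.norm(pts[i] - pts[j])
                   for i in range(len(pts)) for j in range(i + 1, len(pts)))
    return (len(pts), tuple(np.round(dists, -1)))  # bin to nearest 10 px

TOOLS = {}                                          # signature -> virtual tool

def enroll(points, tool):
    TOOLS[grasp_signature(points)] = tool

def summon(points):
    # Exact-match lookup keeps the sketch short; a learned classifier
    # would tolerate grasp variation far better.
    return TOOLS.get(grasp_signature(points), "default: finger")
```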
Smartwatch 5DOF (2014)
We propose using the face of a smartwatch as a multi-degree-of-freedom mechanical interface. This enables rich interaction without occluding the screen with fingers, and can operate in concert with touch interaction and physical buttons. We developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices.
Xiao, R., Laput, G. and Harrison, C. 2014. Expanding the Input Expressivity of Smartwatches with Mechanical Pan, Twist, Tilt and Click. In Proceedings of the 32nd Annual SIGCHI Conference on Human Factors in Computing Systems (Toronto, Canada, April 26 - May 1, 2014). CHI '14. ACM, New York, NY. 193-196.
TapSense enables touchscreens to know how users are touching the screen, whether with a fingertip, knuckle, nail, or even a passive stylus. Rather than simply counting the number of fingers on the screen, TapSense distinguishes between different parts of the hand, allowing a single touch location to trigger different actions and making touch interaction on mobile devices more expressive.
Harrison, C., Schwarz, J. and Hudson, S. 2011. TapSense: Enhancing Finger Interaction on Touch Surfaces. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology. UIST '11. ACM, New York, NY. 627-636.
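A minimal sketch of acoustic touch-type classification (synthetic data; the deployed system's features and classifier differ):

```python
import numpy as np
from sklearn.svm import SVC

def impact_features(samples):
    """Banded log-spectrum of the brief impact sound; fingertips,
    knuckles and nails excite the screen differently (knuckles, for
    instance, skew toward low frequencies)."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return np.log1p([b.mean() for b in np.array_split(spectrum, 32)])

# Hypothetical training clips: short audio windows around touch events.
X = [impact_features(np.random.randn(1024)) for _ in range(80)]
y = ["tip", "knuckle", "nail", "stylus"] * 20

clf = SVC().fit(X, y)
print(clf.predict([impact_features(np.random.randn(1024))]))
```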
TeslaTouch brings rich, dynamic physical feedback to otherwise flat, featureless touchscreens. The technology is based on the electrovibration principle, which can programmatically vary the electrostatic friction between fingers and a touch panel. When combined with an interactive graphical display, this approach enables touch experiences with rich textures and physical affordances.
Bau, O., Poupyrev, I., Israr, A. and Harrison, C. 2010. TeslaTouch: Electrovibration for Touch Surfaces. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (New York, New York, October 3 - 6, 2010). UIST '10. ACM, New York, NY. 283-292.
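The programmability lives in the drive waveform. A sketch, with an illustrative peak voltage and mapping rather than the published hardware's parameters: amplitude sets perceived friction and frequency sets the texture's character, with a square root applied because electrostatic force grows with the square of the voltage.

```python
import numpy as np

def electrovibration_drive(friction, texture_hz, sr=44100, dur=0.05,
                           v_peak=115.0):
    """Periodic drive voltage for the transparent electrode. friction in
    [0, 1] maps to amplitude; texture_hz shapes how the surface feels
    (lower frequencies tend to read as rougher)."""
    t = np.arange(int(sr * dur)) / sr
    # Electrostatic force ~ V^2, so take a square root to make perceived
    # intensity scale roughly linearly with `friction`.
    amplitude = v_peak * np.sqrt(np.clip(friction, 0.0, 1.0))
    return amplitude * np.sin(2 * np.pi * texture_hz * t)
```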