The Future Interfaces Group (FIG) is an interdisciplinary research laboratory within the Human-Computer Interaction Institute at Carnegie Mellon University. We create new sensing and interface technologies that aim to make interactions between humans and computers more fluid, intuitive, and powerful. These efforts often lie in emerging modalities, such as mobile computing, touch interfaces, and gestural interaction.
Tablet computers are often called upon to emulate classical pen-and-paper input. However, touchscreens typically cannot distinguish legitimate stylus and finger touches from incidental touches by the palm or other parts of the hand. This forces users to rest their palms elsewhere or hover above the screen, resulting in ergonomic and usability problems. We present a probabilistic touch-filtering approach that uses the temporal evolution of touch contacts to reject palms. Our system improves upon previous approaches, reducing accidental palm inputs to 0.016 per pen stroke while correctly passing 98% of stylus inputs. Published at CHI 2014.
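The temporal idea can be sketched in miniature: each contact accumulates samples over time, a per-window classifier votes on each update, and the running majority gives the current label. The features and thresholds below (contact radius, nearby-contact count) and the `Touch`/`vote` helpers are illustrative inventions, not the published classifier, which was trained on recorded stroke data:

```python
from dataclasses import dataclass, field

@dataclass
class Touch:
    # (t, x, y, radius) samples accumulated for this contact over time
    samples: list = field(default_factory=list)
    votes_palm: int = 0
    votes_stylus: int = 0

def classify_window(touch, nearby_count):
    """Toy per-window classifier: a large contact radius or many
    nearby contacts suggests a palm. Thresholds are illustrative."""
    max_r = max(r for (_, _, _, r) in touch.samples)
    return "palm" if max_r > 8.0 or nearby_count >= 3 else "stylus"

def vote(touch, nearby_count):
    """Cast one vote for the current window and return the running
    majority label; later votes can overturn an early misclassification."""
    if classify_window(touch, nearby_count) == "palm":
        touch.votes_palm += 1
    else:
        touch.votes_stylus += 1
    return "palm" if touch.votes_palm > touch.votes_stylus else "stylus"
```

Deferring the final decision until several windows have voted is what lets the filter exploit how a contact evolves, rather than judging it from its first frame alone.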
We propose using the face of a smartwatch as a multi-degree-of-freedom mechanical interface. This enables rich interaction without occluding the screen with fingers, and can operate in concert with touch interaction and physical buttons. We built a proof-of-concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. To illustrate the potential of our approach, we developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices. Published at CHI 2014.
The average person can skillfully manipulate a plethora of tools, from hammers to tweezers. However, despite this remarkable dexterity, gestures on today’s touch devices are simplistic, relying primarily on the chording of fingers: one-finger pan, two-finger pinch, four-finger swipe and similar. We propose that touch gesture design be inspired by the manipulation of physical tools from the real world. In this way, we can leverage user familiarity and fluency with such tools to build a rich set of gestures for touch interaction. With only a few minutes of training on a proof-of-concept system, users were able to summon a variety of virtual tools by replicating their corresponding real-world grasps. Published at CHI 2014.
Lumitrack is a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six-degree-of-freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive subsequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed (500+ FPS), high-precision (sub-millimeter), and low-cost motion tracking for a wide range of interactive applications. Published at UIST 2013.
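The uniqueness property of m-sequences is what makes position recovery possible: a sensor that observes any m consecutive bits of the projected pattern can look up exactly where in the sequence it sits. A minimal sketch, generating an m-sequence with a standard Fibonacci LFSR and building the window-to-position table (function names are ours, not Lumitrack's):

```python
def lfsr_msequence(taps, m):
    """Generate a maximal-length binary sequence (m-sequence) of
    length 2**m - 1 with a Fibonacci linear-feedback shift register.
    `taps` are 1-indexed register positions of a primitive polynomial,
    e.g. (5, 3) for x^5 + x^3 + 1."""
    state = [1] * m          # any nonzero seed works
    out = []
    for _ in range(2 ** m - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

def build_lookup(seq, m):
    """Map every length-m window (cyclic) to its start index.
    The m-sequence property guarantees each window is unique."""
    table = {}
    n = len(seq)
    for i in range(n):
        window = tuple(seq[(i + j) % n] for j in range(m))
        assert window not in table   # uniqueness property
        table[window] = i
    return table
```

A linear sensor that reads any m-bit slice of the pattern can then recover its 1D position with a single dictionary lookup; crossing two such patterns yields 2D location.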
WorldKit makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified “each time we sit down” by “painting” them next to us. From the programmer’s perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Published at CHI 2013.
The proliferation of touchscreen devices has made soft keyboards a routine part of life. However, ultra-small computing platforms like the Sony SmartWatch and Apple iPod Nano lack a means of text entry, which limits their potential despite the fact that they are otherwise capable computers. We present a soft keyboard interaction technique called ZoomBoard that enables text entry on ultra-small devices. Our approach uses iterative zooming to enlarge otherwise impossibly tiny keys to a comfortable size. We ran a text entry experiment on a keyboard measuring just 16 x 6 mm – smaller than a US penny. Users achieved roughly 10 words per minute, entering short phrases both quickly and quietly. Published at CHI 2013.
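One zoom iteration can be viewed as a viewport transform: each tap shrinks the visible keyboard region around the tap point until keys are large enough to hit reliably, after which the final tap selects a key. The function below is a hypothetical sketch of that step, not the published implementation:

```python
def zoom_step(rect, tap, factor=2.0):
    """One iterative-zoom step. Given the current keyboard viewport
    rect = (x, y, w, h) and a tap point (tx, ty), return a viewport
    1/factor the size, centered on the tap and clamped so it stays
    inside the original rectangle."""
    x, y, w, h = rect
    nw, nh = w / factor, h / factor
    nx = min(max(tap[0] - nw / 2, x), x + w - nw)
    ny = min(max(tap[1] - nh / 2, y), y + h - nh)
    return (nx, ny, nw, nh)
```

Repeating the step doubles effective key size each tap, so even a 16 x 6 mm keyboard reaches comfortably sized keys after one or two zooms.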
We present Acoustic Barcodes, structured patterns of physical notches that, when swiped with, e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic Barcodes could be used for information retrieval or to trigger interactive functions. They are passive, durable, and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass, and stone. Published at UIST 2012.
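For intuition, here is one simple way velocity invariance could work: if bits are encoded in the relative spacing of notches, then normalizing each inter-notch gap by the median gap cancels out overall swipe speed. The wide-gap/narrow-gap encoding and the threshold below are assumptions for illustration, not the published decoder:

```python
import statistics

def decode_gaps(onsets, threshold=1.25):
    """Decode detected notch onset times (seconds) into bits,
    assuming a wide gap encodes 1 and a narrow gap encodes 0.
    Dividing each gap by the median gap makes the decision
    independent of how fast the barcode was swiped."""
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    med = statistics.median(gaps)
    return [1 if g / med > threshold else 0 for g in gaps]
```

A swipe twice as fast halves every gap, but the ratios to the median are unchanged, so the same bit string is recovered.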
Chris is an Assistant Professor of Human-Computer Interaction at Carnegie Mellon University. He broadly investigates novel sensing technologies and interaction techniques, especially those that empower people to interact with “small devices in big ways.”
Gierad is a 1st-year PhD student in the Human-Computer Interaction Institute at Carnegie Mellon. He combines electrical engineering and computer science to empower people to use computing in more fluid and expressive ways.
Robert is a 3rd-year PhD student with extensive experience in prototyping and developing hardware and software interaction technologies. He combines computer science, mathematics, and electronics to create novel interactive experiences.
The FIGLAB is a new, state-of-the-art facility located on bustling Craig Street, at the western edge of Carnegie Mellon’s campus. The building contains three studios for rapid ideation and prototyping, encompassing more than 1,500 square feet of shop space. Two studios are geared towards physical fabrication, primarily wood and plastics, but also textiles and metalwork. A third studio is dedicated to electronics prototyping and development. Equipment includes a large CNC milling machine, several 3D printers, a laser cutter, a vacuum former, saws, drills, sanders, and a variety of hand tools. Materials are also stocked for use.
Like our research? Consider becoming a lab sponsor. Corporate collaborators have unique access to research conducted in the lab, including early previews of new technologies, as well as sponsor-only workshops.
We’re always interested in working with new students, researchers, and collaborators. For independent studies and undergraduate research opportunities, please contact Professor Harrison. The lab does not accept master’s and Ph.D. students directly; admissions are handled by our parent department, the Human-Computer Interaction Institute. Application materials can be found here: