The Future Interfaces Group (FIG) is an interdisciplinary research laboratory within the Human-Computer Interaction Institute at Carnegie Mellon University. We create new sensing and interface technologies that aim to make interactions between humans and computers more fluid, intuitive, and powerful. These efforts often lie in emerging use modalities, such as mobile computing, touch interfaces, and gestural interaction.

Latest Research

Skin Buttons: Cheap, Small, Low-Powered and Clickable Fixed-Icon Laser Projectors

Smartwatches are a promising new interactive platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches that expand the interactive envelope around smartwatches, allowing human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user’s skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size. Through a series of experiments, we show that these “skin buttons” can have high touch accuracy and recognizability, while being low cost and power-efficient. Published at UIST 2014.

Air+Touch: Interweaving Touch & In-Air Gestures

Air+Touch is a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than either modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, do a finger ‘high jump’ between two touches to select a region of text, or draw an in-air ‘pigtail’ to copy text. Published at UIST 2014.
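
To make the segmentation idea concrete, here is a toy sketch (the event format and the 0.5-second pairing window are assumptions for illustration, not values from the paper): each recognized in-air stroke is attached to a nearby touch, yielding the before/between/after relationships the interactions build on.

```python
def pair_air_with_touch(touch_times, air_strokes, window=0.5):
    """Attach each in-air stroke (start_t, end_t, shape) to touches that
    occur within `window` seconds of it; touch events thus segment the
    otherwise continuous stream of in-air motion."""
    paired = []
    for start, end, shape in air_strokes:
        before = [t for t in touch_times if 0 <= start - t <= window]
        after = [t for t in touch_times if 0 <= t - end <= window]
        if before and after:
            rel = "between-touches"     # e.g. the 'high jump' selection
        elif after:
            rel = "before-touch"        # e.g. circle-then-tap for a menu
        elif before:
            rel = "after-touch"         # e.g. touch-then-pigtail to copy
        else:
            continue                    # unsegmented air motion: ignored
        paired.append((shape, rel))
    return paired
```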

Toffee: Enabling Ad Hoc, Around-Device Interaction with Acoustic Time-of-Arrival Correlation

Toffee is a sensing approach that extends touch interaction beyond the small confines of a mobile device and onto ad hoc adjacent surfaces, most notably tabletops. This is achieved using a novel application of acoustic time differences of arrival (TDOA) correlation. Our approach requires only a hard tabletop and gravity – the latter acoustically couples mobile devices to surfaces. We conducted an evaluation, which shows that Toffee can accurately resolve the bearings of touch events (mean error of 4.3° with a laptop prototype). This enables radial interactions in an area many times larger than a mobile device; for example, virtual buttons that lie above, below, and to the left and right of the device. Published at MobileHCI 2014.
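
The underlying TDOA idea can be sketched in a few lines. The following is a minimal illustration, not the Toffee implementation: it assumes two acoustic sensors a known distance apart, a far-field source, and a known propagation speed for the surface material (which in practice would have to be calibrated).

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Time difference of arrival (seconds) between two sensor signals,
    estimated by cross-correlation. A positive lag means sig_a is a
    delayed copy of sig_b, i.e. the sound reached sensor B first."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # peak offset, in samples
    return lag / fs

def bearing(delay, spacing, speed):
    """Far-field approximation: a path difference of spacing * sin(theta)
    across two sensors `spacing` metres apart yields bearing theta."""
    s = np.clip(delay * speed / spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```

Combining estimates from several sensor pairs narrows the bearing of a tap, which is what enables the radial button layouts described above.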

Probabilistic Palm Rejection Using Spatiotemporal Touch Features and Iterative Classification

Tablet computers are often called upon to emulate classical pen-and-paper input. However, touchscreens typically lack the means to distinguish legitimate stylus and finger touches from incidental touches by the palm or other parts of the hand. This forces users to rest their palms elsewhere or hover above the screen, resulting in ergonomic and usability problems. We present a probabilistic touch filtering approach that uses the temporal evolution of touch contacts to reject palms. Our system improves upon previous approaches, reducing accidental palm inputs to 0.016 per pen stroke, while correctly passing 98% of stylus inputs. Published at CHI 2014.
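
A minimal sketch of the iterative idea follows; the features and window sizes below are invented for illustration and differ from the paper's actual feature set and classifiers. Each contact is re-classified as more of its movement history arrives, and later, better-informed decisions override earlier ones.

```python
from dataclasses import dataclass

@dataclass
class TouchSample:
    t: float     # seconds since this contact touched down
    x: float
    y: float
    size: float  # reported contact area

def features(samples):
    """Illustrative spatiotemporal features of a contact so far:
    mean reported size, total path length, and age."""
    path = sum(((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
               for a, b in zip(samples, samples[1:]))
    mean_size = sum(s.size for s in samples) / len(samples)
    return [mean_size, path, samples[-1].t]

def iterative_classify(samples, classify, windows=(0.025, 0.05, 0.1, 0.2, 0.5)):
    """Re-classify the contact at several points in its lifetime; later
    decisions, made with more evidence, override earlier ones.
    `classify` is a hypothetical trained model returning 'stylus'/'palm'."""
    decision = "stylus"  # optimistic default so ink can appear immediately
    for w in windows:
        seen = [s for s in samples if s.t <= w]
        if seen:
            decision = classify(features(seen))
    return decision
```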

Expanding the Input Expressivity of Smartwatches with Mechanical Pan, Twist, Tilt and Click

We propose using the face of a smartwatch as a multi-degree-of-freedom mechanical interface. This enables rich interaction without occluding the screen with fingers, and can operate in concert with touch interaction and physical buttons. We built a proof-of-concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. To illustrate the potential of our approach, we developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices. Published at CHI 2014.

TouchTools: Leveraging Familiarity and Skill with Physical Tools to Augment Touch Interaction

The average person can skillfully manipulate a plethora of tools, from hammers to tweezers. However, despite this remarkable dexterity, gestures on today’s touch devices are simplistic, relying primarily on the chording of fingers: one-finger pan, two-finger pinch, four-finger swipe and similar. We propose that touch gesture design be inspired by the manipulation of physical tools from the real world. In this way, we can leverage user familiarity and fluency with such tools to build a rich set of gestures for touch interaction. With only a few minutes of training on a proof-of-concept system, users were able to summon a variety of virtual tools by replicating their corresponding real-world grasps. Published at CHI 2014.
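
As a toy illustration of grasp-driven tool selection (the tool names, features, and thresholds below are invented; the actual system's recognition is more sophisticated), one can map the number and spread of touch points to a virtual tool:

```python
import math

def classify_grasp(points):
    """Pick a virtual tool from the number of touch points and their
    spread about the centroid (coordinates assumed to be in mm)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    spread = max(math.hypot(p[0] - cx, p[1] - cy) for p in points)
    if n == 1:
        return "marker"                   # pen-like, single contact
    if n == 2:
        return "tweezers" if spread < 25 else "tape measure"
    return "eraser" if spread > 40 else "camera"
```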

Lumitrack: Low Cost, High Precision and High Speed Tracking with Projected m-Sequences

Lumitrack is a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six-degree-of-freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive subsequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed (500+ FPS), high precision (sub-millimeter), and low-cost motion tracking for a wide range of interactive applications. Published at UIST 2013.
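
The windowed-uniqueness property of m-sequences is easy to demonstrate. Below is a small sketch, not Lumitrack's decoder: a Fibonacci LFSR generates the sequence, and any m consecutive bits then identify a unique 1D position in the projected pattern.

```python
def msequence(n=7, taps=(7, 6)):
    """Binary m-sequence of length 2**n - 1 from a Fibonacci LFSR.
    Taps (7, 6) correspond to x^7 + x^6 + 1, a primitive polynomial."""
    state = [1] * n                      # any non-zero seed works
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return out

def locate(window, seq):
    """Any len(window) consecutive bits occur exactly once per period,
    so the bits a linear sensor reads pin down its 1D position."""
    ext = seq + seq[:len(window) - 1]    # the pattern is cyclic
    for i in range(len(seq)):
        if ext[i:i + len(window)] == window:
            return i
    return None

seq = msequence()
assert locate(seq[40:47], seq) == 40     # a 7-bit window is unambiguous
```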

WorldKit: Ad Hoc Interactive Applications on Everyday Surfaces

WorldKit makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified “each time we sit down” by “painting” them next to us. From the programmer’s perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Published at CHI 2013.
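
To convey the flavor of such an abstraction, here is a purely hypothetical sketch; all names are invented and this is not WorldKit's actual API. The point it illustrates is that behavior is declared up front in code, while geometry is bound later, when the user paints the interactor onto a surface.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Button:
    """An interactor is declared by behaviour only; its location on a
    real-world surface is bound at runtime, when the user paints it."""
    label: str
    handlers: List[Callable[[], None]] = field(default_factory=list)

    def on_touch(self, fn: Callable[[], None]) -> None:
        self.handlers.append(fn)

    def fire(self) -> None:             # invoked by the sensing layer
        for fn in self.handlers:
            fn()

lights = Button(label="Lights")
lights.on_touch(lambda: print("toggle the lights"))
# worldkit.paint(lights)  # the user would now sweep a hand over the
lights.fire()             # desk to give the button a physical home
```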

ZoomBoard: A Diminutive QWERTY Keyboard for Ultra-Small Devices

The proliferation of touchscreen devices has made soft keyboards a routine part of life. However, ultra-small computing platforms like the Sony SmartWatch and Apple iPod Nano lack a means of text entry. This limits their potential, despite the fact that they are capable computers. We present a soft keyboard interaction technique called ZoomBoard that enables text entry on ultra-small devices. Our approach uses iterative zooming to enlarge otherwise impossibly tiny keys to a comfortable size. We ran a text entry experiment on a keyboard measuring just 16 × 6 mm – smaller than a US penny. Users achieved roughly 10 words per minute, enabling short phrases to be entered both quickly and quietly. Published at CHI 2013.
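
The zoom-and-select mapping can be sketched as follows. This is an illustration under assumptions: a single zoom step and a zoom factor of 3 (the real technique may iterate); the keyboard dimensions follow the 16 × 6 mm prototype.

```python
def zoom_select(tap1, tap2, kb_w=16.0, kb_h=6.0, zoom=3.0):
    """Map a two-tap sequence to a point on the tiny keyboard: the first
    tap magnifies the region around it, and the second tap (made on the
    zoomed view) is projected back into keyboard coordinates."""
    w, h = kb_w / zoom, kb_h / zoom                 # magnified window size
    x0 = min(max(tap1[0] - w / 2, 0.0), kb_w - w)   # clamp the window so
    y0 = min(max(tap1[1] - h / 2, 0.0), kb_h - h)   # it stays on-keyboard
    return (x0 + tap2[0] / zoom, y0 + tap2[1] / zoom)

# A tap near the centre, then a tap at the zoomed view's top-left corner:
print(zoom_select((8.0, 3.0), (0.0, 0.0)))          # -> (5.33..., 2.0)
```

The resulting point is then resolved against the key geometry to emit a character.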

Acoustic Barcodes: Passive, Durable and Inexpensive Notched Identification Tags

We present Acoustic Barcodes, structured patterns of physical notches that, when swiped with, e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic Barcodes could be used for information retrieval or to trigger interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. Published at UIST 2012.
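
As a rough sketch of velocity-tolerant decoding (the wide-gap/narrow-gap encoding and median normalization here are assumptions for illustration; the paper's scheme is more robust), one can normalize the intervals between detected notch impulses so that overall swipe speed cancels out:

```python
import numpy as np

def decode_swipe(tick_times):
    """Convert the impulse times heard during one swipe into bits.
    Dividing each inter-notch gap by the median gap removes the overall
    swipe speed; a wide gap then reads as 1 and a narrow one as 0."""
    gaps = np.diff(np.asarray(tick_times, dtype=float))
    norm = gaps / np.median(gaps)       # speed-invariant spacing
    return [1 if g > 1.0 else 0 for g in norm]

# The same pattern decodes identically whether swiped fast or slow,
# since only the ratios between gaps matter:
print(decode_swipe([0.00, 0.02, 0.06, 0.08, 0.12]))  # -> [0, 1, 0, 1]
```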

Team

Chris Harrison

Chris is an Assistant Professor of Human-Computer Interaction at Carnegie Mellon University. He broadly investigates novel sensing technologies and interaction techniques, especially those that empower people to interact with “small devices in big ways.”

Website CV Email: chris.harrison@cs.cmu.edu Publications on Google Scholar

Gierad Laput

Gierad is a 2nd year PhD student in the Human-Computer Interaction Institute at Carnegie Mellon. He combines electrical engineering and computer science to empower people to use computing in more fluid and expressive ways.

Website CV Email: gierad.laput@cs.cmu.edu Publications on Google Scholar

Robert Xiao

Robert is a 4th year PhD student with extensive experience in prototyping and developing hardware and software interaction technologies. He combines computer science, mathematics, and electronics to create novel interactive experiences.

Website CV Email: brx@cs.cmu.edu Publications on Google Scholar

JaRon Pitts

JaRon is an administrative coordinator at Carnegie Mellon. He handles many of the key administrative and business responsibilities at the FIGLAB, working closely with collaborators and sponsors. He is also a commissioner of PR & Marketing for a local non-profit.

Phone: 412-268-8416 Email: jpitts@cs.cmu.edu

Facilities

The FIGLAB is a new, state-of-the-art facility located on bustling Craig Street, at the western edge of Carnegie Mellon’s campus. The building contains three studios for rapid ideation and prototyping, which together encompass more than 1,500 square feet of shop space. Two studios are geared towards physical fabrication, primarily wood and plastics, but also textiles and metalwork. A third lab is dedicated to electronics prototyping and development. Equipment includes a large CNC milling machine, several 3D printers, a laser cutter, a vacuum former, saws, drills, sanders, and a variety of hand tools. Materials are also stocked for use.

Contact

Future Interfaces Group
407 Craig Street
Pittsburgh, PA 15213

info@figlab.com
tel: 417.263.9999
fax: 412.268.1266
Find us on Google+


Sponsor the lab

Like our research? Consider becoming a lab sponsor. Corporate collaborators have unique access to research conducted in the lab, including early previews of new technologies, as well as sponsor-only workshops.

Email us for more information

Apply

We’re always interested in working with new students, researchers and collaborators. For independent studies and undergraduate research opportunities, please contact Professor Harrison. The lab does not admit Masters and Ph.D. students directly; admissions are handled by our parent department, the Human-Computer Interaction Institute. Application materials can be found here:

Undergrad Minor/Major in HCI
Masters of HCI
Ph.D. in HCI