Systems for providing mixed physical-virtual interaction on desktop surfaces have been proposed for decades, though no such systems have achieved widespread use. One major factor contributing to this lack of acceptance may be that these systems are not designed for the variety and complexity of actual work surfaces, which are often in flux and cluttered with physical objects. In this project, we use an elicitation study and interviews to synthesize a list of ten interactive behaviors that desk-bound, digital interfaces should implement to support responsive cohabitation with physical objects. As a proof of concept, we implemented these interactive behaviors in a working augmented desk system, demonstrating their imminent feasibility. Published at EICS 2017.

Homes, offices and many other environments will be increasingly saturated with connected, computational appliances, forming the “Internet of Things” (IoT). At present, most of these devices rely on mechanical inputs, webpages, or smartphone apps for control. However, as IoT devices proliferate, these existing interaction methods will become increasingly cumbersome. We propose an approach where users simply tap a smartphone to an appliance to discover and rapidly utilize contextual functionality. To achieve this, our prototype smartphone recognizes physical contact with uninstrumented appliances, and summons appliance-specific interfaces and contextually relevant functionality. Published at CHI 2017.

Electrick is a low-cost and versatile sensing technique that enables touch input on a wide variety of objects and surfaces, whether small or large, flat or irregular. This is achieved by using electric field tomography in concert with an electrically conductive material, which can be easily and cheaply added to objects and surfaces. We show that our technique is compatible with commonplace manufacturing methods, such as spray/brush coating, vacuum forming, and casting/molding – enabling a wide range of possible uses and outputs. Our technique can also bring touch interactivity to rapidly fabricated objects, including those that are laser cut or 3D printed. Published at CHI 2017.

The promise of smart environments and the Internet of Things (IoT) relies on robust sensing of diverse environmental facets. Traditional approaches rely on direct or distributed sensing, most often by measuring one particular aspect of an environment with special-purpose sensors. In this work, we explore the notion of general-purpose sensing, wherein a single, highly capable sensor can indirectly monitor a large context, without direct instrumentation of objects. Further, through what we call Synthetic Sensors, we can virtualize raw sensor data into actionable feeds, whilst simultaneously mitigating immediate privacy issues. We used a series of structured, formative studies to inform the development of new sensor hardware and accompanying information architecture. Published at CHI 2017.

Small, local groups who share resources have unmet authentication needs. For these groups, existing authentication strategies either create unnecessary social divisions, do not identify individuals, do not equitably distribute security responsibility, or make it difficult to share or revoke access. To explore an alternative, we designed Thumprint: inclusive group authentication with shared secret knocks. All group members share one secret knock, but individual expressions of the secret are discernible. We evaluated the usability and security of our concept, which suggest that individuals who enter the same shared thumprint are distinguishable from one another, that people can enter thumprints consistently over time, and that thumprints are resilient to casual adversaries. Published at CHI 2017.


Many research systems have demonstrated that depth cameras, combined with projectors for output, can turn nearly any reasonably flat surface into an ad hoc, touch-sensitive display. However, even with the latest generation of depth cameras, it has been difficult to obtain sufficient sensing fidelity across a table-sized surface to get much beyond a proof-of-concept demonstration. In this research, we present DIRECT, a novel touch-tracking algorithm that merges depth and infrared imagery captured by a commodity sensor. Our results show that our technique boosts touch detection accuracy by 15% and reduces positional error by 55% compared to the next best-performing technique in the literature. Published at ISS 2016. 

Smartwatches and wearables are unique in that they reside on the body, presenting great potential for always-available input and interaction. Additionally, their position on the wrist makes them ideal for capturing bio-acoustic signals. We developed a custom smartwatch kernel that boosts the sampling rate of a smartwatch’s existing accelerometer, enabling many new applications. For example, we can use bio-acoustic data to classify hand gestures such as flicks, claps, scratches, and taps. Bio-acoustic sensing can also detect the vibrations of grasped mechanical or motor-powered objects, enabling object recognition. Finally, we can generate structured vibrations using a transducer, and show that data can be transmitted through the human body. Published at UIST 2016. 

AuraSense enables rich, around-device, smartwatch interactions using electric field sensing. To explore how this sensing approach could enhance smartwatch interactions, we considered different antenna configurations and how they could enable useful interaction modalities. We identified four configurations that can support six well-known modalities of particular interest and utility, including gestures above the watchface and touchscreen-like finger tracking on the skin. We quantify the feasibility of these input modalities in a series of user studies, which suggest that AuraSense can be low latency and robust across both users and environments. Published at UIST 2016. 

We recently used Electrical Impedance Tomography (EIT) to detect hand gestures using an instrumented smartwatch (see Tomo Project). This prior work demonstrated great promise for non-invasive, high accuracy recognition of gestures for interactive control. In this research, we introduce a new system that offers improved sampling speed and resolution. This, in turn, enables superior interior reconstruction and gesture recognition. More importantly, we use our new system as a vehicle for experimentation – we compare two EIT sensing methods and three different electrode resolutions. Results from in-depth empirical evaluations and a user study shed light on the future feasibility of EIT for sensing human input. Published at UIST 2016. 

SkinTrack is a wearable system that enables continuous touch tracking on the skin. It consists of a ring, which emits a continuous high frequency AC signal, and a sensing wristband with multiple electrodes. Due to the phase delay inherent in a high-frequency AC signal propagating through the body, a phase difference can be observed between pairs of electrodes. SkinTrack measures these phase differences to compute a 2D finger touch coordinate. Our approach can segment touch events at 99% accuracy, and resolve the 2D location of touches with a mean error of 7.6mm. As our approach is compact, non-invasive, low-cost and low-powered, we envision the technology being integrated into future smartwatches, supporting rich touch interactions beyond the confines of the small touchscreen. Published at CHI 2016. 
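
The phase-to-coordinate computation can be sketched in a minimal 1D form. The constants below (carrier frequency, propagation speed) are assumed for illustration, not taken from the paper; the real system fuses multiple electrode pairs to recover a full 2D coordinate:

```python
import math

# Assumed values, for illustration only: the carrier frequency and the
# effective propagation speed through tissue set the signal wavelength.
FREQ_HZ = 80e6
PROP_SPEED = 1.5e8  # metres/second, assumed
WAVELENGTH = PROP_SPEED / FREQ_HZ

def touch_offset(phase_a, phase_b, electrode_gap):
    """Estimate a 1D touch coordinate (metres from electrode A), given
    the signal phase measured at two electrodes a known gap apart.

    The phase difference is proportional to the difference in path
    length from the touch point to each electrode.
    """
    delta_phase = phase_b - phase_a  # radians; positive if B's path is longer
    path_diff = delta_phase / (2 * math.pi) * WAVELENGTH  # d_B - d_A
    # With d_A = x and d_B = gap - x, path_diff = gap - 2x, so:
    return (electrode_gap - path_diff) / 2
```

Equal phases at both electrodes place the touch at the midpoint; a larger phase delay at B pulls the estimate toward A.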

Devices can be made more intelligent if they have the ability to sense their surroundings and physical configuration. However, adding extra, special purpose sensors increases size, price and build complexity. Instead, we use speakers and microphones already present in a wide variety of devices to open new sensing opportunities. Our technique sweeps through a range of inaudible frequencies and measures the intensity of reflected sound to deduce information about the immediate environment, chiefly the materials and geometry of proximate surfaces. We offer several example uses, two of which we implemented as self-contained demos, and conclude with an evaluation that quantifies their performance and demonstrates high accuracy. Published at IUI 2016. 
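
The material-recognition step can be viewed as nearest-neighbour matching over reflected-intensity profiles. A hedged sketch, with made-up signature values:

```python
def classify_surface(response, signatures):
    """Match a measured reflection profile to the closest known material.

    `response` is a list of reflected-sound intensities, one per swept
    frequency bin; `signatures` maps material names to profiles captured
    earlier. A nearest-neighbour sketch; names and values are illustrative.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda name: sq_dist(response, signatures[name]))
```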


We describe a new method that estimates a finger’s angle relative to the screen. The angular vector is described using two angles – altitude and azimuth – more colloquially referred to as pitch and yaw. Our approach works in tandem with conventional multitouch finger tracking, offering two additional analog degrees of freedom for a single touch point. Uniquely, our approach only needs data provided by commodity touchscreen devices, requiring no additional hardware or sensors. We prototyped our solution on two platforms – a smartphone and smartwatch – each fully self-contained and operating in real-time. We quantified the accuracy of our technique through a user study, and explored the feasibility of our approach through example applications and interactions. Published at ITS 2015. 

User identification and differentiation have implications in many application domains, including security, personalization, and co-located multiuser systems. In response, dozens of approaches have been developed, from fingerprint and retinal scans, to hand gestures and RFID tags. We propose CapAuth, a technique that uses existing, low-level touchscreen data, combined with machine learning classifiers, to provide real-time authentication and even identification of users. As a proof-of-concept, we ran our software on an off-the-shelf Nexus 5 smartphone. Our user study demonstrates twenty-participant authentication accuracies of 99.6%. For twenty-user identification, our software achieved 94.0% accuracy and 98.2% on groups of four, simulating family use. Published at ITS 2015. 
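
A minimal sketch of the classification idea, assuming a nearest-centroid classifier over per-touch feature vectors (the actual system uses more capable machine learning classifiers; features here are hypothetical):

```python
def train_centroids(samples):
    """Average each user's touch feature vectors into one centroid.
    Features might be per-electrode capacitance values; illustrative."""
    return {user: [sum(col) / len(vecs) for col in zip(*vecs)]
            for user, vecs in samples.items()}

def identify(centroids, features):
    """Attribute a touch to the user with the nearest centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda u: sq_dist(centroids[u], features))
```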

Gaze interaction is particularly well suited to rapid, coarse, absolute pointing, but lacks natural and expressive mechanisms to support modal actions. Conversely, free space hand gesturing is slow and imprecise for pointing, but has unparalleled strength in gesturing, which can be used to trigger a wide variety of interactive functions. Thus, these two modalities are highly complementary. By fusing gaze and gesture into a unified and fluid interaction modality, we can enable rapid, precise and expressive free-space interactions that mirror natural use. Moreover, although both approaches are independently poor for pointing tasks, combining them can achieve pointing performance superior to either method alone. This opens new interaction opportunities for gaze and gesture systems alike. Published at ICMI 2015. 

Most everyday electrical and electromechanical objects emit small amounts of electromagnetic (EM) noise during regular operation. When a user makes physical contact with such an object, this EM signal propagates through the user, owing to the conductivity of the human body. By modifying a small, low-cost, software-defined radio, we can detect and classify these signals in real-time, enabling robust on-touch object detection. Unlike prior work, our approach requires no instrumentation of objects or the environment; our sensor is self-contained and can be worn unobtrusively on the body. We call our technique EM-Sense and built a proof-of-concept smartwatch implementation. Our studies show that discrimination between dozens of objects is feasible, independent of wearer, time and local environment. Published at UIST 2015. 

Tomo is a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user’s arm. This is achieved by measuring the cross-sectional impedances between all pairs of eight electrodes resting on a user’s skin. Our approach is sufficiently compact and low-powered that we integrated the technology into a prototype wrist- and armband, which can monitor and classify hand gestures in real-time. We ultimately envision this technique being integrated into future smartwatches, allowing hand gestures and direct touch manipulation to work synergistically to support interactive tasks on small screens. Published at UIST 2015. 
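
The all-pairs measurement scheme is easy to enumerate; a small sketch:

```python
from itertools import combinations

def measurement_schedule(n_electrodes=8):
    """Enumerate every electrode pair measured in one EIT frame.
    Eight electrodes yield 28 cross-sectional impedance measurements."""
    return list(combinations(range(n_electrodes), 2))
```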

We introduce a technique for 3D printed hair, fibers and bristles, by exploiting the stringing phenomenon inherent in fused deposition modeling 3D printers. Our approach offers a range of design parameters for controlling the properties of single strands and also of hair bundles. We further detail a list of post-processing techniques for refining the behavior and appearance of printed strands. We provide several examples of output, demonstrating the immediate feasibility of our approach using a low cost, commodity printer. Overall, this technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware. Published at UIST 2015. 

The promise of “smart” homes, workplaces, schools, and other environments has long been championed. Unattractive, however, has been the cost to run wires and install sensors. More critically, raw sensor data tends not to align with the types of questions humans wish to ask, e.g., do I need to restock my pantry? In response, we built Zensors, a new sensing approach that fuses real-time human intelligence from online crowd workers with automatic approaches to provide robust, adaptive, and readily deployable intelligent sensors. With Zensors, users can go from question to live sensor feed in less than 60 seconds. Through our API, Zensors can enable a variety of rich end-user applications and moves us closer to the vision of responsive, intelligent environments. Published at CHI 2015. 

Acoustruments are low-cost, passive, and powerless mechanisms, made from plastic, that can bring rich, tangible functionality to handheld devices. Through a structured exploration, we identified an expansive vocabulary of design primitives, providing building blocks for the construction of tangible interfaces utilizing smartphones’ existing audio functionality. By combining these design primitives, a wide variety of familiar physical mechanisms can be constructed. On top of these, we can create end-user applications with rich, tangible interactive functionalities. Acoustruments adds a new method to the toolbox HCI practitioners and researchers can draw upon, while introducing a cheap and passive method for adding interactive controls to consumer products. Published at CHI 2015. 


Smartwatches are a promising new interactive platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches that expand the interactive envelope around smartwatches, allowing human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user’s skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size. Through a series of experiments, we show that these “skin buttons” can have high touch accuracy and recognizability, while being low cost and power-efficient. Published at UIST 2014.

Air+Touch is a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than each input modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, do a finger 'high jump' between two touches to select a region of text, or drag and in-air ‘pigtail’ to copy text to the clipboard. Published at UIST 2014.

Toffee is a sensing approach that extends touch interaction beyond the small confines of a mobile device and onto ad hoc adjacent surfaces, most notably tabletops. This is achieved using a novel application of acoustic time differences of arrival (TDOA) correlation. Our approach requires only a hard tabletop and gravity – the latter acoustically couples mobile devices to surfaces. We conducted an evaluation, which shows that Toffee can accurately resolve the bearings of touch events (mean error of 4.3° with a laptop prototype). This enables radial interactions in an area many times larger than a mobile device; for example, virtual buttons that lie above, below and to the left and right. Published at MobileHCI 2014.
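
The bearing computation can be sketched under a far-field assumption with two sensors (the in-surface speed of sound below is an assumed value, not a measured one):

```python
import math

# Assumed speed of sound in the tabletop material (m/s); illustrative.
SPEED_IN_SURFACE = 3000.0

def bearing_from_tdoa(delta_t, sensor_spacing):
    """Far-field bearing (radians) of a tap, from the difference in
    arrival times at two acoustic sensors a known distance apart."""
    cos_theta = delta_t * SPEED_IN_SURFACE / sensor_spacing
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp numerical noise
    return math.acos(cos_theta)
```

A zero time difference means the tap is broadside to the sensor pair; the maximum difference means it lies along the sensor axis.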

The space around the body provides a large interaction volume that can allow for 'big' interactions on 'small' mobile devices. Prior work has primarily focused on distributing information in the space around a user's body. We extend this by demonstrating three new types of around-body interaction: canvas, modal and context-aware. We also present a sensing solution that uses standard smartphone hardware: a phone's front camera, accelerometer and inertial measurement units. Published at MobileHCI 2014.

Research into on-body projected interfaces has primarily focused on the fundamental question of whether or not it was technologically possible. Although considerable work remains, these systems are no longer artifacts of science fiction — prototypes have been successfully demonstrated and tested on hundreds of people. Our aim in this work is to begin shifting the question away from how, and towards where. To better understand and explore this expansive design space, we employed a mixed-methods research process involving more than two thousand individuals. This started with high-resolution, but low-detail crowdsourced data. We then combined this with rich, expert interviews, exploring aspects ranging from aesthetics to kinesthetics. Published at DIS 2014.

We propose using the face of a smartwatch as a multi-degree-of-freedom mechanical interface. This enables rich interaction without occluding the screen with fingers, and can operate in concert with touch interaction and physical buttons. We built a proof-of-concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. To illustrate the potential of our approach, we developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices. Published at CHI 2014.

The average person can skillfully manipulate a plethora of tools, from hammers to tweezers. However, despite this remarkable dexterity, gestures on today’s touch devices are simplistic, relying primarily on the chording of fingers: one-finger pan, two-finger pinch, four-finger swipe and similar. We propose that touch gesture design be inspired by the manipulation of physical tools from the real world. In this way, we can leverage user familiarity and fluency with such tools to build a rich set of gestures for touch interaction. With only a few minutes of training on a proof-of-concept system, users were able to summon a variety of virtual tools by replicating their corresponding real-world grasps. Published at CHI 2014.

Tablet computers are often called upon to emulate classical pen-and-paper input. However, touchscreens typically lack the means to distinguish legitimate stylus and finger touches from touches with the palm or other parts of the hand. This forces users to rest their palms elsewhere or hover above the screen, resulting in ergonomic and usability problems. We present a probabilistic touch filtering approach that uses the temporal evolution of touch contacts to reject palms. Our system improves upon previous approaches, reducing accidental palm inputs to 0.016 per pen stroke, while correctly passing 98% of stylus inputs. Published at CHI 2014.
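
A toy sketch of the temporal idea: withhold a new touch for its first few frames, then vote on contact area (the thresholds are illustrative placeholders, not the paper's trained parameters):

```python
def classify_touch(area_history, area_threshold=400.0, min_frames=3):
    """Decide palm vs. stylus from a touch's first few frames.

    Returns None while evidence is insufficient (the touch is withheld),
    then True for palm, False for stylus. Thresholds are illustrative.
    """
    if len(area_history) < min_frames:
        return None
    votes = sum(1 for area in area_history[:min_frames]
                if area > area_threshold)
    return votes >= 2
```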


Lumitrack is a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six-degree-of-freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive subsequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed (500+ FPS), high precision (sub-millimeter), and low-cost motion tracking for a wide range of interactive applications. Published at UIST 2013.
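
The windowed-uniqueness property of m-sequences can be demonstrated with a small LFSR. The 4-bit tap configuration below is a standard textbook choice, not Lumitrack's actual pattern:

```python
def m_sequence(nbits, taps, seed, length):
    """Emit bits from a Fibonacci LFSR. With maximal-length taps, every
    window of `nbits` consecutive bits within one period is unique."""
    state, bits = seed, []
    for _ in range(length):
        bits.append(state & 1)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))
    return bits

def window_index(bits, m):
    """Map each m-bit window to its position: conceptually how a linear
    optical sensor recovers its location within the projected pattern."""
    return {tuple(bits[i:i + m]): i for i in range(len(bits) - m + 1)}
```

Because every window is unique, reading any m consecutive bits immediately yields an absolute position.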

WorldKit makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified “each time we sit down” by “painting” them next to us. From the programmer’s perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Published at CHI 2013.

The proliferation of touchscreen devices has made soft keyboards a routine part of life. However, ultra-small computing platforms like the Sony SmartWatch and Apple iPod Nano lack a means of text entry. This limits their potential, despite the fact they are capable computers. We present a soft keyboard interaction technique called ZoomBoard that enables text entry on ultra-small devices. Our approach uses iterative zooming to enlarge otherwise impossibly tiny keys to comfortable size. We ran a text entry experiment on a keyboard measuring just 16 x 6mm – smaller than a US penny. Users achieved roughly 10 words per minute, enabling them to enter short phrases both quickly and quietly. Published at CHI 2013.
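
The iterative zoom can be modeled in 1D: each tap narrows the candidate region until a single key remains. A toy model of the idea; the real system zooms a 2D QWERTY layout:

```python
def select_key(keys, taps):
    """Resolve a key on a tiny 1D keyboard via iterative zooming.

    Each tap (a 0..1 coordinate along the keyboard) zooms to roughly a
    third of the current region, centred on the tap, until one key is left.
    """
    region = list(keys)
    for tap_x in taps:
        if len(region) <= 1:
            break
        span = max(1, len(region) // 3)
        start = min(max(0, int(tap_x * len(region)) - span // 2),
                    len(region) - span)
        region = region[start:start + span]
    return region[0]
```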


At present, touchscreens can differentiate multiple points of contact, but not who is touching the device. We propose a novel sensing approach based on Swept Frequency Capacitive Sensing that enables touchscreens to attribute touch events to a particular user. This is achieved by measuring the impedance of a user to the environment (earth ground) across a range of AC frequencies. Natural variation in bone density, muscle mass and other biological factors, as well as clothing, impacts a user's impedance profile. This is often sufficiently unique to enable user differentiation. This project has significant implications in the design of touch-centric games and collaborative applications. Published at UIST 2012.

We present Acoustic Barcodes, structured patterns of physical notches that, when swiped with e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic Barcodes could be used for information retrieval or to trigger interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. Published at UIST 2012.
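
The velocity-invariance idea can be sketched by normalising notch gaps against their mean, so the same barcode decodes identically at different swipe speeds (a simplification of the paper's actual decoder):

```python
def decode_swipe(pulse_times):
    """Recover a bit string from the timestamps of notch impulses.

    Bits are read from the gaps between notches (wide vs. narrow).
    Comparing each gap against the mean gap makes decoding largely
    independent of swipe velocity.
    """
    gaps = [b - a for a, b in zip(pulse_times, pulse_times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return "".join("1" if g > mean_gap else "0" for g in gaps)
```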

Touché proposes a novel form of capacitive touch sensing that we call Swept Frequency Capacitive Sensing (SFCS). This technology can infuse rich touch and gesture sensitivity into a variety of analogue and digital objects. For example, Touché can not only detect touch events, but also recognize complex configurations of the hands and body. Such contextual information can enhance a broad range of applications, from conventional touchscreens to unique contexts and materials, including the human body and liquids. Finally, Touché is inexpensive, safe, low power and compact; it can be easily embedded or temporarily attached anywhere touch and gesture sensitivity is desired. Published at CHI 2012.

Touch input is constrained, typically only providing finger X/Y coordinates. We suggest augmenting touchscreens with a largely unutilized input dimension: shear (force tangential to a screen’s surface). Similar to pressure, shear can be used in concert with conventional finger positional input. However, unlike pressure, shear provides a rich, analog 2D input space, which has many powerful uses. We put forward five classes of advanced interaction that considerably expands the envelope of interaction possible on touchscreens. Published at CHI 2012.

Since the advent of the electronic age, devices have incorporated small point lights for communication purposes. This has afforded devices a simple, but reliable communication channel without the complication or expense of e.g., a screen. For example, a simple light can let a user know their stove is on, a car door is ajar, the alarm system is active, or that a battery has finished charging. Unfortunately, very few products seem to take full advantage of the expressive capability simple lights can provide. The most commonly encountered light behaviors are quite simple: light on, light off, and light blinking. Not only is this vocabulary incredibly small, but these behaviors are not particularly iconic. Published at CHI 2012.

Phone as a Pixel is a scalable, synchronization-free, platform-independent system for creating large, ad-hoc displays from a collection of smaller devices, such as smartphones. To participate, devices need only a web browser. We employ a color-transition scheme to identify and locate displays. This approach has several advantages: devices can be arbitrarily arranged (i.e., not in a grid) and infrastructure consists of a single conventional camera. Further, additional devices can join at any time without re-calibration. These are desirable properties to enable collective displays in contexts like sporting events, concerts and political rallies. Published at CHI 2012.

Researchers and practitioners can now draw upon a large suite of sensing technologies for their work. Relying on thermal, chemical, electromagnetic, optical, acoustic, mechanical, and other means, these sensors can detect faces, hand gestures, humidity, blood pressure, proximities, and many other aspects of our state and environment. We present an overview of our work on an ultrasonic Doppler sensor. This technique has unique qualities that we believe make it a valuable addition to the suite of sensing approaches HCI researchers and practitioners should consider in their applications. Published in IEEE Pervasive.

We consider how the arms and hands can be used to enhance on-body interactions, which are typically finger-input centric. To explore this opportunity, we developed Armura, a novel interactive on-body system, supporting both input and graphical output. Using this platform as a vehicle for exploration, we prototyped a series of applications and interactions. This helped to confirm chief use modalities, identify fruitful interaction approaches, and in general, better understand how interfaces operate on the body. This paper is the first to consider and prototype how conventional interaction issues, such as cursor control and clutching, apply to the on-body domain. Additionally, we bring to light several new and unique interaction techniques. Published at TEI 2012.


TapSense is an enhancement to touch interaction that allows conventional screens to identify how the finger is being used for input. Our system can recognize different finger locations – including the tip, pad, nail and knuckle – without the user having to wear any electronics. This opens several new and powerful interaction opportunities for touch input, especially in mobile devices, where input bandwidth is limited due to small screens and fat fingers. For example, a knuckle tap could serve as a “right click” for mobile device touch interaction, effectively doubling input bandwidth. Published at UIST 2011.

OmniTouch is a novel wearable system that enables graphical, interactive, multitouch input on arbitrary, everyday surfaces. Our body-worn implementation allows users to manipulate interfaces projected onto the environment (e.g., walls, tables), held objects (e.g., notepads, books), and their own bodies (e.g., hands, lap). A key contribution is our depth-driven fuzzy template matching and clustering approach to multitouch finger tracking. This enables on-the-go interactive capabilities, with no calibration, training or instrumentation of the environment or the user, creating an always-available projected multitouch interface. Published at UIST 2011.

Modern mobile devices are sophisticated computing platforms, enabling people to handle phone calls, listen to music, surf the web, reply to emails, compose text messages, and much more. These devices are often stored in pockets or bags, requiring users to remove them in order to access even basic functionality. This demands a high level of attention - both cognitively and visually - and is often socially disruptive. Further, physically retrieving the device incurs a non-trivial time cost, and can constitute a significant fraction of a simple operation’s total time. We developed a novel method for through-pocket interaction called PocketTouch. Published at UIST 2011.

When viewing LCD monitors from an oblique angle, it is not uncommon to witness a dramatic color shift. Engineers and designers have sought to reduce these effects for more than two decades. We take an opposite stance, embracing these optical peculiarities, and consider how they can be used in productive ways. Our paper discusses how a special palette of colors can yield visual elements that are invisible when viewed straight-on, but visible at oblique angles. In essence, this allows conventional, unmodified LCD screens to output two images simultaneously – a feature normally only available in far more complex setups. Published at UIST 2011.

In this paper, we define a new type of iconographic scheme for graphical user interfaces based on motion. We refer to these “kinetic icons” as kineticons. In contrast to static graphical icons and icons with animated graphics, kineticons do not alter the visual content of a graphical element. Although kineticons are not new – indeed, they are seen in several popular systems – we formalize their scope and utility. One powerful quality is their ability to be applied to GUI elements of varying size and shape – from something as small as a close button to something as large as a dialog box, or even the entire desktop. This allows a suite of system-wide kinetic behaviors to be reused for a variety of purposes. Published at CHI 2011.

SurfaceMouse is a virtual mouse implementation for multi-touch surfaces. A key design objective was to leverage as much pre-existing knowledge (and potentially muscle memory) regarding mice as possible, making interactions immediately familiar. To invoke SurfaceMouse, a user simply places their hand on an interactive surface as if there was a mouse present. The system recognizes this characteristic gesture and renders a virtual mouse under the hand, which can be used like a real mouse. In addition to two-dimensional movement (X and Y axes), our proof-of-concept implementation supports left and right clicking, as well as up/down scrolling. Published at TEI 2011.

We developed Pediluma, a shoe accessory designed to encourage opportunistic physical activity. It features a light that brightens the more the wearer walks and slowly dims when the wearer remains stationary. This interaction was purposely simple so as to remain lightweight, both visually and cognitively. Even simple, personal pedometers have been shown to promote walking. Pediluma takes this a step further, attempting to engage people around the wearer to elicit social effects. Although lights have been previously incorporated into shoes, this is the first time they have been used to display motivational information. Published at TEI 2011.

TeslaTouch infuses finger-driven interfaces with physical feedback. The technology is based on the electrovibration principle, which can programmatically vary the electrostatic friction between fingers and a touch panel. Importantly, there are no moving parts, unlike most tactile feedback technologies, which typically use mechanical actuators. This allows for different fingers to feel different sensations. When combined with an interactive graphical display, TeslaTouch enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. Published at UIST 2010.

Devices with significant computational power and capability can now be easily carried with us. These devices have tremendous potential to bring the power of information, creation, and communication to a wider audience and to more aspects of our lives. However, with this potential come new challenges for interaction design. For example, we have yet to figure out a good way to miniaturize devices without simultaneously shrinking their interactive surface area. This has led to diminutive screens, cramped keyboards, and tiny jog wheels – all of which diminish usability and prevent us from realizing the full potential of mobile computing. Published in IEEE Computer Magazine.

Skinput is a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as a finger input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always-available, naturally-portable, and on-body interactive surface. To illustrate the potential of our approach, we developed several proof-of-concept applications on top of our sensing and classification system. Published at CHI 2010.
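
At a high level, such a pipeline extracts features from each tap's vibration waveform and matches them against trained examples. The toy sketch below (a single waveform, two hand-picked features, nearest-centroid matching) only illustrates the idea; Skinput's actual system uses a multi-sensor armband and a trained machine-learning classifier.

```python
import math

def tap_features(samples):
    """Toy feature vector for one tap's vibration waveform:
    peak amplitude plus zero-crossing rate (a crude frequency cue)."""
    amplitude = max(abs(s) for s in samples)
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return (amplitude, crossings / len(samples))

def classify_tap(features, centroids):
    """Nearest-centroid classification of a tap location, given a dict
    mapping location labels to representative feature vectors."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))
```

The labels and feature choices here are hypothetical; the point is simply that different body locations produce acoustically distinguishable signals.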

Minput is a sensing and input method that enables intuitive and accurate interaction on very small devices – ones too small for practical touchscreen use, with limited space to accommodate physical buttons. We achieve this by adding two inexpensive, high-precision optical sensors (like those found in optical mice) to the underside of the device. This allows the entire device, rather than the screen, to be used as an input mechanism, avoiding occlusion by fingers. In addition to x/y translation, our system also captures twisting motion, enabling many interesting interaction opportunities typically found in larger and far more complex systems. Published at CHI 2010.
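
The two-sensor arrangement lends itself to a simple decomposition: translation is the average of the two sensors' optical-flow deltas, while twist falls out of their differential motion. A minimal sketch, assuming a hypothetical sensor layout and spacing (not taken from the paper):

```python
# Assumed geometry: two optical sensors mounted along the device's
# x-axis, a fixed distance apart (value is illustrative).
SENSOR_SEPARATION = 40.0  # mm

def device_motion(delta_a, delta_b):
    """Combine (dx, dy) deltas from two optical sensors into a single
    translation estimate plus a twist (rotation) estimate."""
    ax, ay = delta_a
    bx, by = delta_b
    # Translation: the common motion seen by both sensors.
    tx = (ax + bx) / 2.0
    ty = (ay + by) / 2.0
    # Twist: rotation appears as opposite y-deltas at the two sensors;
    # small-angle approximation, in radians.
    twist = (by - ay) / SENSOR_SEPARATION
    return (tx, ty), twist
```

Sliding the device moves both sensors identically (pure translation, zero twist), while rotating it about its center produces opposing deltas (pure twist).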

Human perception of time is fluid, and can be manipulated in purposeful and productive ways. We propose and evaluate variations on two visual designs for progress bars that alter users’ perception of time passing, making them “appear” faster when in fact they are not. In a series of direct comparison tests, we are able to rank how different augmentations compare to one another. We then show that these designs yield statistically significantly shorter perceived durations than conventional progress bars. Progress bars with animated ribbing that moves backwards in a decelerating manner proved to have the strongest effect. We measure the effect of this particular design and show it reduces perceived duration by 11%. Published at CHI 2010.
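
The strongest design's motion can be sketched as an offset function whose backward speed decays over time: the ribbing texture drifts against the fill direction, fast at first and progressively slower. A toy illustration with made-up constants, not the parameters used in the paper:

```python
import math

def ribbing_offset(t, speed0=60.0, decay=0.5):
    """Horizontal offset (pixels) of a progress bar's ribbing texture
    at time t (seconds). Negative values move the ribbing backwards,
    against the fill direction; an exponentially decaying speed gives
    the decelerating motion. Constants are illustrative only.

    Closed form: integral of -speed0 * exp(-decay * u) du from 0 to t.
    """
    return -speed0 / decay * (1.0 - math.exp(-decay * t))
```

Evaluating this per frame and drawing the ribbing at the returned offset reproduces the backwards, decelerating animation described above.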

A cord, although simple in form, has many interesting physical affordances that make it powerful as an input device. Not only can a length of cord be grasped in different locations, but also pulled, twisted and bent — four distinct and expressive dimensions that could potentially act in concert. Such an input mechanism could be readily integrated into headphones, backpacks, and clothing. Once grasped in the hand, a cord can be used in an eyes-free manner to control mobile devices, which often feature small screens and cramped buttons. We built a proof-of-concept cord-based sensor, which senses three of the four input dimensions we propose. Published at CHI 2010.

Although network bandwidth has increased dramatically, high-resolution images often take several seconds to load, and considerably longer on mobile devices over wireless connections. Progressive image loading techniques allow for some visual content to be displayed prior to the whole file being downloaded. In this note, we present an empirical evaluation of popular progressive image loading methods, and derive one novel technique from our findings. Results suggest a spiral variation of bilinear interlacing can yield an improvement in content recognition time. Published at CHI 2010.
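
One way to picture a spiral variant of interlacing is as a pixel transmission order that grows outward from the image center, so central (often most recognizable) content arrives first. The sketch below is a simple square spiral for illustration, not necessarily the paper's exact scheme:

```python
def spiral_order(width, height):
    """Return (x, y) pixel coordinates ordered in an outward square
    spiral from the image center; out-of-bounds positions are skipped.
    Transmitting (or refining) pixels in this order prioritizes the
    center of the image."""
    x, y = width // 2, height // 2
    order = [(x, y)]
    dx, dy = 1, 0   # current heading
    step = 1        # current leg length
    while len(order) < width * height:
        for _ in range(2):              # two legs per leg-length increase
            for _ in range(step):
                x, y = x + dx, y + dy
                if 0 <= x < width and 0 <= y < height:
                    order.append((x, y))
            dx, dy = -dy, dx            # turn 90 degrees
        step += 1
    return order
```

In an actual loader, each transmitted pixel would seed a bilinear interpolation pass so that coarse central content appears before the file finishes downloading.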

Mark Weiser envisioned a third wave of computing, one with “hundreds of wireless computers in every office,” which would come about as the cost of electronics fell. Two decades later, some in the Ubiquitous Computing community point to the pervasiveness of microprocessors as a realization of this dream. Without a doubt, many of the objects we interact with on a daily basis are digitally augmented – they contain microchips, buttons and even screens. But is this the one-to-many relationship of people-to-computers that Weiser envisioned? Published in IEEE Multimedia.

Whack Gestures seeks to provide a simple means to interact with devices with minimal attention from the user – in particular, without the use of fine motor skills or detailed visual attention (requirements found in nearly all conventional interaction techniques). For mobile devices, this could enable interaction without “getting it out,” grasping, or even glancing at the device. Users can simply interact with a device by striking it with an open palm or the heel of the hand. Published at TEI 2010.