EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices

As smartphone screens have grown in size, single-handed use has become more cumbersome. It is now common for interactive targets to be out of reach unless the user adjusts their grip. Many smartphones have no doubt been dropped during such grip changes, especially when the user is walking or otherwise occupied. For this reason, many users add aftermarket rings and grips to the rear of their phones, adding weight and bulk to devices that we wish to be as thin as possible.

We present EyeMU interactions, a set of intuitive and rapid gestural actions for mobile phones, powered by a combination of state-of-the-art gaze estimation and IMU-tracked motion gestures. Importantly, our interactions require no grip change or touch input. To avoid false-positive and accidental activations, and to eliminate unnecessary computation, our system activates only when a series of conditions is met. First, a user must be present in the camera's view; they must then attend to the screen, fixate on a widget, and finally, while maintaining that fixation, perform a motion gesture. We note that our technique is highly complementary to conventional touch input, and can serve to alleviate reach issues as well as expose advanced functionality typically buried in long presses and menus.
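
To make the staged gating concrete, below is a minimal Python sketch of an activation gate in this style. Everything in it is illustrative: the Frame fields assume an upstream face detector, gaze estimator, and IMU gesture classifier, and the dispersion-based fixation test and its thresholds are stand-ins rather than the system's actual implementation.

    from __future__ import annotations

    from collections import deque
    from dataclasses import dataclass
    from enum import Enum, auto


    class State(Enum):
        NO_USER = auto()     # no face in the camera's view
        ATTENDING = auto()   # user present and looking at the screen
        FIXATING = auto()    # gaze has settled on one screen location
        TRIGGERED = auto()   # motion gesture performed during a fixation


    @dataclass
    class Frame:
        face_detected: bool      # from an upstream face detector (assumed)
        gaze_xy: tuple | None    # on-screen gaze estimate in px, or None
        imu_gesture: str | None  # label from an IMU gesture classifier


    class EyeMUGate:
        """Staged gate: each check runs only if the cheaper ones before it pass."""

        def __init__(self, fixation_radius=50.0, fixation_frames=10):
            self.fixation_radius = fixation_radius  # px; illustrative threshold
            self.history = deque(maxlen=fixation_frames)
            self.state = State.NO_USER

        def _is_fixation(self):
            # Dispersion test: recent gaze points confined to a small region.
            if len(self.history) < self.history.maxlen:
                return False
            xs = [p[0] for p in self.history]
            ys = [p[1] for p in self.history]
            return (max(xs) - min(xs)) + (max(ys) - min(ys)) < self.fixation_radius

        def update(self, frame):
            if not frame.face_detected or frame.gaze_xy is None:
                self.history.clear()           # user absent or not attending:
                self.state = State.NO_USER     # skip all downstream work
                return None
            self.history.append(frame.gaze_xy)
            if not self._is_fixation():
                self.state = State.ATTENDING   # looking, but not yet fixating
                return None
            self.state = State.FIXATING
            if frame.imu_gesture is not None:  # gesture during fixation fires
                self.state = State.TRIGGERED
                return frame.imu_gesture
            return None

A short usage example, again with hypothetical inputs:

    gate = EyeMUGate()
    # Ten frames fixating on a widget, then a left flick while still fixating.
    frames = [Frame(True, (120.0, 300.0), None)] * 10
    frames.append(Frame(True, (121.0, 301.0), "flick_left"))
    for f in frames:
        action = gate.update(f)
        if action:
            print("activate:", action)   # -> activate: flick_left

Because the presence check short-circuits everything downstream, the fixation test and gesture handling never run when no one is attending to the screen; in a full pipeline, the same pattern lets the expensive gaze model run only when a face is present, which is what guards against accidental (Midas-touch) activations while also avoiding wasted computation.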

Research Team: Andy Kong, Karan Ahuja, Mayank Goel, and Chris Harrison

Citation

Andy Kong, Karan Ahuja, Mayank Goel, and Chris Harrison. 2021. EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices. In Proceedings of the 2021 International Conference on Multimodal Interaction (ICMI '21). Association for Computing Machinery, New York, NY, USA, 577–585. https://doi.org/10.1145/3462244.3479938