We present a new and accurate approach for gaze estimation on consumer computing devices. We take advantage of continued improvements in the quality of user-facing cameras found in smartphones, laptops, and desktops (4K or greater in high-end devices), such that it is now possible to capture the 2D reflection of a device's screen in the user's eyes. This alone is insufficient for accurate gaze tracking due to the near-infinite variety of screen content. Crucially, however, the device knows what is being displayed on its own screen. In this work, we show that this knowledge enables robust segmentation of the reflection, whose location and size encode the user's screen-relative gaze target. We explore several strategies to leverage this useful signal, quantifying performance in a user study. Our best-performing model reduces mean tracking error by 18% compared to a baseline appearance-based model. A supplemental study reveals an additional 10-20% improvement if the gaze-tracking camera is located at the bottom of the device.
Research Team: Taejun Kim, Vimal Mollyn, Riku Arakawa, Chris Harrison
Awards: Best Paper Honorable Mention
Taejun Kim, Vimal Mollyn, Riku Arakawa, and Chris Harrison. 2026. HiFiGaze: Improving Eye Tracking Accuracy Using Screen Content Knowledge. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26), April 13–17, 2026, Barcelona, Spain. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3772318.3791339