LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces
Augmented reality (AR) allows digital information and interactive content to be overlaid onto scenes and objects. To tightly register data to objects in a scene, markers are most commonly employed. Innumerable visual tagging strategies have been investigated in both academia and industry (e.g., retroreflective stickers, barcodes, ARToolKit markers, ARTags, AprilTags, QR codes and ArUco markers).
In this paper, we present LightAnchors, a new method to display spatially-anchored data in augmented reality applications. Unlike most prior tracking methods, which instrument objects with markers (often large and/or obtrusive), we take advantage of point lights already found in many objects and environments. For example, most electrical appliances now feature small (LED) status lights, and light bulbs are common in indoor and outdoor settings. In addition to leveraging these point lights for in-view anchoring (i.e., attaching information and interfaces to specific objects), we also co-opt these lights for data transmission, blinking them rapidly to encode binary data.
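To make the data-transmission idea concrete, here is a minimal sketch of how binary data might be encoded as on/off blinks of a status LED and recovered from per-frame brightness samples. The preamble pattern, MSB-first framing, and threshold value below are illustrative assumptions, not the modulation scheme described in the paper.

```python
# Hypothetical sketch: a fixed preamble locates the start of a packet,
# followed by the payload bits, MSB first. (Illustrative framing only.)
PREAMBLE = [1, 0, 1, 0, 1, 0, 1, 1]

def encode(data: bytes) -> list[int]:
    """Turn a payload into a sequence of LED states (1 = on, 0 = off)."""
    bits = []
    for byte in data:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))  # MSB first
    return PREAMBLE + bits

def decode(samples: list[float], n_bytes: int, threshold: float = 0.5) -> bytes:
    """Threshold brightness samples, find the preamble, unpack the payload."""
    bits = [1 if s > threshold else 0 for s in samples]
    need = len(PREAMBLE) + 8 * n_bytes
    for start in range(len(bits) - need + 1):
        if bits[start:start + len(PREAMBLE)] == PREAMBLE:
            payload = bits[start + len(PREAMBLE):start + need]
            out = bytearray()
            for i in range(n_bytes):
                byte = 0
                for b in payload[8 * i:8 * (i + 1)]:
                    byte = (byte << 1) | b
                out.append(byte)
            return bytes(out)
    raise ValueError("preamble not found")

# Simulate a camera observing the blinking light with mild brightness noise.
states = encode(b"Hi")
samples = [0.9 if s else 0.1 for s in states]
print(decode(samples, 2))  # -> b'Hi'
```

In practice the camera must also track the light's position across frames and cope with varying ambient brightness, which is where the anchoring and adaptive thresholding become the interesting part of the problem.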
Research Team: Karan Ahuja, Sujeath Pareddy, Robert Xiao, Mayank Goel, and Chris Harrison
Karan Ahuja, Sujeath Pareddy, Robert Xiao, Mayank Goel, and Chris Harrison. 2019. LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST '19). ACM, New York, NY, USA, 189-196. DOI: https://doi.org/10.1145/3332165.3347884