Pull Gestures with Coordinated Graphics on Dual-Screen Devices

Smartphone screens have increased in size over the years to accommodate consumer demand for high-quality interactivity and media consumption on the go. To maintain "pocketability" while still increasing screen real estate, smartphone manufacturers are now releasing single-screened devices that fold (such as the Samsung Galaxy Z Flip and Z Fold) or dual-screened devices that hinge (such as the Microsoft Surface Duo). Simultaneously, we are also seeing dual-screened laptops emerge, such as the ASUS ZenBook Duo.


Across all of these device categories, the interactive experience is simply that of two conventional touchscreens (i.e., 2D finger input). We believe this is a missed opportunity: this unique and emerging form factor opens up interesting interactions in the 3D niche between the two screens, which can work synergistically with conventional touch gestures such as tap and pinch. Moreover, the near-orthogonal arrangement of the two screens means that out-of-plane interactions on one screen can be naturally supported with coordinated graphics on the other.
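To make this geometric relationship concrete, the sketch below shows one way a fingertip tracked above the flat screen could be projected onto the hinged screen for coordinated feedback. It is illustrative only: the hinge angle, display metrics, and function names are our own assumptions, not details of any shipping device or of the paper's implementation.

```python
import math

# Illustrative geometry sketch (our assumptions, not the paper's code):
# a fingertip hovers at height z_mm above the flat "deck" screen, at
# lateral offset x_mm along the hinge. Because the second screen rises
# from that same hinge, height above the deck maps naturally onto the
# upright screen's vertical axis, where coordinated graphics can track it.

HINGE_ANGLE_DEG = 110.0     # assumed opening angle between the two screens
PX_PER_MM = 8.0             # assumed pixel density of the upright screen
UPRIGHT_HEIGHT_PX = 1800    # assumed vertical resolution of the upright screen

def project_to_upright(x_mm: float, z_mm: float) -> tuple:
    """Map a hovering fingertip to (u, v) pixels on the upright screen.

    Simplification: the fingertip is assumed to sit directly above the
    hinge line, so its distance along the upright screen is z * sin(angle).
    """
    along_screen_mm = z_mm * math.sin(math.radians(HINGE_ANGLE_DEG))
    u = x_mm * PX_PER_MM                                  # horizontal pixel
    v = UPRIGHT_HEIGHT_PX - along_screen_mm * PX_PER_MM   # rows grow downward
    return (u, v)

# A fingertip 30 mm above the deck, 60 mm along the hinge:
print(project_to_upright(60.0, 30.0))   # approx. (480.0, 1574.5)
```

In practice a fingertip is rarely directly above the hinge, so a full implementation would project onto the screen plane properly; the point of the sketch is simply that height above one screen becomes a first-class input axis for the other.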


Through an elicitation study, we identified a new multimodal interaction that takes advantage of the unique geometry of these angled, dual-screen devices. We call this gesture a "pull": a straightforward way to couple a conventional tap or pinch gesture with an out-of-plane manipulation, either discrete or continuous. For example, a user could eject a USB drive by pulling it from the screen's surface, rather than long-pressing to access a menu. A drop-down menu could be tapped as usual to open it in place, or pulled from the screen to open it on the orthogonal screen, affording users greater flexibility in layout. In both cases, the orthogonal screen is perfectly placed to provide in-situ, coordinated visual feedback that facilitates user manipulation. This coordinated feedback is what differentiates our work from previous above-single-screen and multi-device interactions, which are multi-screen but lack coordinated visual feedback along orthogonal screen axes.
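To sketch how such a gesture might be recognized, the toy state machine below couples an ordinary touch-down with subsequent lift-off tracking. It assumes some upstream sensor reports fingertip height above the touched screen; the thresholds, event names, and PullRecognizer class are all hypothetical, not the paper's implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Placeholder thresholds -- tuned values would come from user testing.
TOUCH_HEIGHT_MM = 2.0    # at or below this, the finger counts as touching
PULL_COMMIT_MM = 40.0    # lifting past this commits a discrete pull

class State(Enum):
    IDLE = auto()
    TOUCHING = auto()
    PULLING = auto()

@dataclass
class PullRecognizer:
    state: State = State.IDLE

    def update(self, height_mm: float) -> Optional[str]:
        """Feed one fingertip-height sample; return an event name or None."""
        if self.state is State.IDLE and height_mm <= TOUCH_HEIGHT_MM:
            self.state = State.TOUCHING
            return "touch_down"        # indistinguishable from a normal tap
        if self.state is State.TOUCHING and height_mm > TOUCH_HEIGHT_MM:
            self.state = State.PULLING
            return "pull_start"        # finger lifted but still tracked in 3D
        if self.state is State.PULLING:
            if height_mm >= PULL_COMMIT_MM:
                self.state = State.IDLE
                return "pull_commit"   # discrete action, e.g. eject the drive
            if height_mm <= TOUCH_HEIGHT_MM:
                self.state = State.TOUCHING
                return "pull_cancel"   # finger returned to the surface
            # Between the thresholds, height drives a continuous parameter.
            return f"pull_update:{height_mm:.1f}mm"
        return None

# Simulated samples: touch, lift, hover, then a committed pull.
rec = PullRecognizer()
for h in [1.0, 1.5, 10.0, 25.0, 45.0]:
    print(rec.update(h))
```

The same continuous height parameter that drives "pull_update" events is what the orthogonal screen would visualize, for example by animating the pulled menu rising along its vertical axis.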


The contributions of this paper are threefold. First and foremost, this work presents a small but novel interaction technique, building on ideas in prior work and extending them into a new multi-screen device context. Our proposed "pull" gestures were inspired and motivated by a 13-participant elicitation study, in which this manipulation was the highest-rated new interaction modality. Second, to demonstrate that such sensing is feasible in practice, we created a proof-of-concept implementation. Finally, to help illustrate how pull gestures might work, we used our proof-of-concept platform to build working demos of ten interactions across three use categories.


Research Team: Vivian Shen, Chris Harrison

Citation

Vivian Shen and Chris Harrison. 2022. Pull Gestures with Coordinated Graphics on Dual-Screen Devices. In Proceedings of the 2022 International Conference on Multimodal Interaction (ICMI '22). Association for Computing Machinery, New York, NY, USA, 270–277. https://doi.org/10.1145/3536221.3556620

Additional Media

Other than the paper PDF, all media on this page is shared under Creative Commons BY 3.0.