Session B3 - Interaction in VR/AR

Date: Apr 12, 2025, 3:30 PM — 5:30 PM
Location: Yeung LT-18 (Zoom Link)

Session Host: Yuan Xu

GazePuffer: Hands-Free Input Method Leveraging Puff Cheeks for VR [Guest Talk]

Speaker: Minghui Sun, Jilin University

Abstract: TBD

AiGet: Transforming Everyday Moments into Hidden Knowledge Discovery with AI Assistance on Smart Glasses

Speaker: Runze Cai, National University of Singapore

Abstract: Unlike the free exploration of childhood, the demands of daily life reduce our motivation to explore our surroundings, leading to missed opportunities for informal learning. Traditional tools for knowledge acquisition are reactive, relying on user initiative and limiting their ability to uncover hidden interests. Informed by formative studies, we introduce AiGet, a proactive AI assistant integrated with AR smart glasses and designed to seamlessly embed informal learning into low-demand daily activities (e.g., casual walking and shopping). AiGet analyzes real-time user gaze patterns, environmental context, and user profiles, leveraging large language models to deliver personalized, context-aware knowledge with low disruption to primary tasks. In-lab evaluations and real-world testing, including continued use over multiple days, demonstrate AiGet's effectiveness in uncovering overlooked yet surprising interests, enhancing primary task enjoyment, reviving curiosity, and deepening connections with the environment. We further propose design guidelines for AI-assisted informal learning, focused on transforming everyday moments into enriching learning experiences.
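The system description above (gaze target, scene context, and user profile fed to a large language model, with delivery gated by how demanding the primary task is) can be read as a simple proactive pipeline. The sketch below is a hypothetical illustration of that loop; every name in it (GlassesContext, attention_is_low, query_llm) is an assumption made for exposition, not AiGet's actual API.

```python
from dataclasses import dataclass


@dataclass
class GlassesContext:
    """Illustrative bundle of the signals the abstract mentions (not AiGet's real schema)."""
    gaze_target: str           # object the wearer is currently looking at
    scene_description: str     # coarse description of the surroundings
    user_interests: list[str]  # profile of known interest areas


def attention_is_low(context: GlassesContext) -> bool:
    # Placeholder for the "low disruption" gate: only deliver knowledge during
    # low-demand moments (e.g., casual walking). Real logic would use task and
    # gaze dynamics; this sketch simply always allows delivery.
    return True


def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a large language model call; returns a canned
    # string so the sketch runs without any external service.
    return "Did you know? (LLM response would appear here.)"


def maybe_deliver_knowledge(context: GlassesContext) -> str | None:
    """Proactively surface one short, personalized fact about what the user sees."""
    if not attention_is_low(context):
        return None
    prompt = (
        f"The wearer is looking at: {context.gaze_target}.\n"
        f"Scene: {context.scene_description}.\n"
        f"Known interests: {', '.join(context.user_interests)}.\n"
        "Offer one surprising, concise fact tied to this moment."
    )
    return query_llm(prompt)
```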

Augmenting Realistic Charts with Virtual Overlays

Speaker: Yao Shi, The Hong Kong University of Science and Technology (Guangzhou)

Abstract: In this paper, we introduce the concept of realistic charts, referring to charts in the real world that cannot be digitally altered, such as those printed in newspapers or used in presentations. By adding virtual graphical overlays, we enable interaction with and graphical enhancement of these realistic charts as if they were digital, effectively turning them into “digital charts”. To achieve this, we identify 33 overlay strategies (e.g., highlights and trendlines) for five widely used chart types (e.g., line charts) through systematic exploration and a formative study. To simplify overlay creation, we introduce a new grammar named Vega-Overlay. Leveraging this design space and grammar, we develop a system called HARVis, which allows users to generate virtual overlays through augmented reality devices using speech and optional gestures. A user study involving 33 participants from diverse fields, across 17 tasks, demonstrates the effectiveness and usability of HARVis.

Modeling Locomotion with Body Angular Movements in Virtual Reality

Speaker: Zijun Mai, Jinan University

Abstract: This study proposes a time prediction model for locomotion along a polyline path with body angular movements in Virtual Reality (VR). We divide such locomotion into two components: navigating multiple line-segment paths and turning at line-segment intersections. In the first component, locomotion along each line segment consists of acceleration, maximum-velocity, and deceleration phases; we formulate equations to estimate the locomotion time for each phase and accumulate them to model the total time. In the second component, we reveal a linear relationship between task time and turning angle. We establish an integrated model from the equations of the two components and verify its effectiveness with three experiments. The results indicate that our model outperformed two baseline models, with a greater R² and a smaller gap between predicted and actual time. Our study benefits VR locomotion design with body angular movements.
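For intuition, the two-component description above amounts to a model of roughly the following form. The notation is an illustrative reconstruction rather than the paper's exact equations; the per-phase times and the linear coefficients a and b would be fitted from data.

```latex
% Illustrative form only; symbols are assumed, not taken from the paper.
% Segment i contributes acceleration, maximum-velocity, and deceleration phases;
% each turn j through angle \theta_j adds a linearly scaled cost.
T_{\text{total}} = \sum_{i=1}^{n}\left(t^{\mathrm{acc}}_{i} + t^{\mathrm{max}}_{i} + t^{\mathrm{dec}}_{i}\right)
                 + \sum_{j=1}^{n-1}\left(a\,\theta_{j} + b\right)
```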

VRCaptions: Design Captions for DHH Users in Multiplayer Communication in VR

Speaker: Tianze Xie, Southern University of Science and Technology

Abstract: Accessing auditory information remains challenging for DHH (deaf and hard-of-hearing) individuals in real-world situations and multiplayer VR interactions. To improve this, we investigated caption designs tailored to the needs of DHH users in multiplayer VR settings. First, we conducted three co-design workshops with DHH participants, social workers, and designers to gather insights into the specific needs and design directions for DHH users in the context of a VR room escape game. We then refined our designs with 13 DHH users to determine the most preferred features. Based on this, we developed VRCaptions, a caption prototype that helps DHH users better experience multiplayer conversations in VR. Lastly, we invited two mixed-hearing groups to play the VR room escape game with VRCaptions to validate the design. The results demonstrate that VRCaptions can enhance DHH participants' ability to access information and reduce communication barriers in VR.

ReachPad: Interacting with Multiple Virtual Screens using a Single Physical Pad through Haptic Retargeting

Speaker: Han Shi, Southern University of Science and Technology

Abstract: The advancement of Virtual Reality (VR) has expanded 2D user interfaces into 3D space. This change has introduced richer interaction modalities but also brought challenges, especially the lack of haptic feedback in mid-air interactions. Previous research has explored various methods to provide feedback for interface interactions, but most approaches require specialized haptic devices. We introduce haptic retargeting to enable users to control multiple virtual screens in VR using a simple flat pad, which serves as a single physical proxy to support seamless interaction across multiple virtual screens. We conducted user studies to explore the appropriate virtual screen size and positioning under our retargeting method and then compared various drag-and-drop methods for cross-screen interaction. Finally, we compared our method with controller-based interaction in application scenarios.
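Haptic retargeting here refers to the general body-warping idea of subtly offsetting the rendered (virtual) hand so that reaches toward different virtual screens all land on the same physical pad. The sketch below shows a standard incremental-offset formulation of that warp, assuming the reach is tracked from a known start position; it is illustrative only and not ReachPad's actual implementation.

```python
import numpy as np


def retargeted_hand(
    hand: np.ndarray,             # current physical hand position, shape (3,)
    start: np.ndarray,            # hand position when the reach began, shape (3,)
    physical_target: np.ndarray,  # the one real pad the hand will touch, shape (3,)
    virtual_target: np.ndarray,   # the virtual screen the user intends to touch, shape (3,)
) -> np.ndarray:
    """Return the rendered (virtual) hand position under a simple body-warping retarget.

    As the physical hand travels from `start` toward `physical_target`, the rendered
    hand is shifted by a growing fraction of the offset between the virtual and
    physical targets, so the virtual hand arrives at the virtual screen exactly when
    the real hand reaches the pad.
    """
    total = np.linalg.norm(physical_target - start)
    remaining = np.linalg.norm(physical_target - hand)
    # Warp ratio: 0 at the start of the reach, 1 at contact with the pad.
    alpha = 0.0 if total < 1e-6 else float(np.clip(1.0 - remaining / total, 0.0, 1.0))
    return hand + alpha * (virtual_target - physical_target)
```

At alpha = 1 the rendered hand coincides with the chosen virtual screen while the real hand rests on the pad, which is what allows a single physical proxy to serve several virtual screens.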