Session A2 - Interactive Systems and Data Visualization
Session Host: Xi Zheng
Making Invisible Dynamics Visible: Case Studies in Visualizing Computational and Behavioral Processes [Guest Talk]
Speaker: Yuxin Ma, Southern University of Science and Technology
Abstract: TBD
VIzTA: Enhancing Comprehension of Distributional Visualization with Visual-Lexical Fused Conversational Interface
Speaker: Liangwei Wang, The Hong Kong University of Science and Technology (Guangzhou)
Abstract: Comprehending visualizations requires readers to actively interpret visual encodings and their underlying meanings. This poses challenges for visualization novices, particularly when interpreting distributional visualizations that depict statistical uncertainty. Advancements in LLM-based conversational interfaces show promise for promoting visualization comprehension. However, they fail to provide contextual explanations at a fine granularity, and chart readers must still mentally bridge visual information and textual explanations during conversations. Our formative study highlights the expectation for both lexical and visual feedback, as well as the importance of explicitly linking these two modalities throughout the conversation. The findings motivate the design of VIzTA, a visualization teaching assistant that fuses visual and lexical feedback to help readers better comprehend visualizations. VIzTA features a semantic-aware conversational agent capable of explaining contextual information within visualizations and employs a visual-lexical fusion design to facilitate chart-centered conversation. A between-subjects study with 24 participants demonstrates the effectiveness of VIzTA in supporting understanding and reasoning tasks on distributional visualizations across multiple scenarios.
InsightBridge: Enhancing Empathizing with Users through Real-Time Information Synthesis and Visual Communication
Speaker: Yue Zhang, The Hong Kong University of Science and Technology
Abstract: User-centered design requires researchers to deeply understand target users throughout the design process. However, during early-stage user interviews, researchers may misinterpret users due to time constraints, incorrect assumptions, and communication barriers. To address this challenge, we introduce InsightBridge, a tool that supports real-time, AI-assisted information synthesis and visual-based verification. InsightBridge automatically organizes relevant information from ongoing interview conversations into an empathy map. It further allows researchers to specify elements to generate visual abstracts depicting the selected information, and then review these visuals with users, refining them as needed. We evaluated the effectiveness of InsightBridge through a within-subjects study (N=32) from both the researchers’ and users’ perspectives. Our findings indicate that InsightBridge assists researchers in note-taking and organization, as well as timely visual checking, thereby enhancing mutual understanding with users. Additionally, discussing the visuals prompts users to recall overlooked details and scenarios, leading to more insightful ideas.
InterLink: Linking Text with Code and Outputs in Computational Notebooks
Speaker: Yanna Lin, The Hong Kong University of Science and Technology
Abstract: Computational notebooks, widely used for ad-hoc analysis and often shared with others, can be difficult to understand because the standard linear layout is not optimized for reading. In particular, related text, code, and outputs may be spread across the UI, making it difficult to draw connections between them. In response, we introduce InterLink, a plugin designed to present the relationships between text, code, and outputs, thereby making notebooks easier to understand. In a formative study, we identify pain points and derive design requirements for identifying and navigating relationships among the various pieces of information within notebooks. Based on these requirements, InterLink features a new layout that separates text from code and outputs into two columns. It uses visual links to signal relationships between text and the associated code and outputs, and it offers interactions for navigating related pieces of information. In a user study with 12 participants, those using InterLink were 13.6% more accurate at finding and integrating information from complex analyses in computational notebooks. These results demonstrate the potential of notebook layouts designed for easier comprehension.
Creative Blends of Visual Concepts
Speaker: Yaozhen Zhang, Shenzhen University
Abstract: Visual blends combine elements from two distinct visual concepts into a single, integrated image, with the goal of conveying ideas through imaginative and often thought-provoking visuals. Communicating abstract concepts through visual blends poses a series of conceptual and technical challenges. To address these challenges, we introduce Creative Blends, an AI-assisted design system that leverages metaphors to visually symbolize abstract concepts by blending disparate objects. Our method harnesses commonsense knowledge bases and large language models to align designers’ conceptual intent with expressive concrete objects. Additionally, we employ generative text-to-image techniques to blend visual elements through their overlapping attributes. A user study (N=24) demonstrated that our approach reduces participants’ cognitive load, fosters creativity, and enhances the metaphorical richness of visual blend ideation. We explore the potential of our method to expand visual blends to include multiple object blending and discuss the insights gained from designing with generative AI.
GenColor: Generative Color-Concept Association in Visual Design
Speaker: Yihan Hou, The Hong Kong University of Science and Technology (Guangzhou)
Abstract: Existing approaches to color-concept association typically rely on query-based image referencing and extracting colors from the referenced images. However, these approaches are effective only for common concepts and are vulnerable to unstable image referencing and varying image conditions. Our formative study with designers underscores the need for primary-accent color compositions and context-dependent colors (e.g., ‘clear’ vs. ‘polluted’ sky) in design. In response, we introduce a generative approach for mining semantically resonant colors that leverages images generated by text-to-image models. Our insight is that contemporary text-to-image models reproduce visual patterns learned from large-scale real-world data. The framework comprises three stages: concept instancing produces generative samples using diffusion models, text-guided image segmentation identifies concept-relevant regions within each image, and color association extracts primary colors accompanied by accent colors. Quantitative comparisons with expert designs validate the approach’s effectiveness, and we demonstrate its applicability through cases in various design scenarios and a gallery.
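The abstract describes the three-stage framework only at a high level. As a rough, non-authoritative illustration of what the final color-association stage might look like, the Python sketch below quantizes pixels from an assumed concept-relevant region and ranks them by frequency to pick a primary color plus accent colors; the function names, quantization step, and frequency-ranking heuristic are assumptions for illustration, not the paper's method.

from collections import Counter
from typing import List, Tuple

RGB = Tuple[int, int, int]

def quantize(color: RGB, step: int = 32) -> RGB:
    # Snap a color onto a coarse grid so near-identical shades pool together.
    return tuple((channel // step) * step for channel in color)

def primary_and_accents(pixels: List[RGB], n_accents: int = 2) -> Tuple[RGB, List[RGB]]:
    # Rank quantized colors by frequency: the most frequent is treated as the
    # primary color, the next few as accents. Hypothetical stand-in for the
    # paper's color-association stage, which the abstract does not detail.
    counts = Counter(quantize(p) for p in pixels)
    ranked = [color for color, _ in counts.most_common(1 + n_accents)]
    return ranked[0], ranked[1:]

if __name__ == "__main__":
    # Toy input: pixels from an assumed concept-relevant region (e.g., a segmented sky).
    sky = [(72, 144, 224)] * 60 + [(255, 255, 240)] * 25 + [(200, 200, 210)] * 15
    primary, accents = primary_and_accents(sky)
    print("primary:", primary, "accents:", accents)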