Computer Science
Human-Computer Interaction
Virtual Machine
Real-time Computing
Programming Language
Authors
Jingyu Wu, Pengchen Chen, Shi Chen, Wei Xiang, Lingyun Sun
Identifier
DOI:10.1080/10447318.2024.2387398
Abstract
The environment plays an important role in non-verbal communication during human-virtual agent interaction. Existing research has explored how an agent's appearance and attributes can enhance human-virtual agent communication, but there is no common practice for dynamically adjusting the virtual agent's surrounding environment. In this paper, we introduce a real-time virtual agent environment generation system (VAEnvGen), which contributes to the field by enhancing users' content perception and improving task performance through dynamic environment adjustment. According to the current context, the system identifies an appropriate communication environment and filters out the key information to present. Leveraging Large Language Models, it generates a pseudo-3D background space to create an engaging atmosphere and a dynamic foreground content space that displays key information vividly, thereby significantly enhancing content perception. For widespread adoption and flexibility, VAEnvGen is developed as a web application. We further evaluate the impact of VAEnvGen on content perception, user attention, and subjective satisfaction through a mixed-design user study with 50 participants. Quantitative and qualitative results reveal significant improvements in content perception, task completion time, and user satisfaction when using VAEnvGen. The system effectively redistributes user attention from the subtitles and the virtual agent itself to the dynamically generated background and key foreground information, leading to a more immersive and less fatiguing user experience.
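The abstract describes a context-analysis step in which a Large Language Model selects a fitting environment and extracts the key information to surface, feeding a pseudo-3D background layer and a foreground content layer. The paper's implementation is not reproduced here; the sketch below only illustrates what such a step could look like, and the `call_llm` stub, the prompt wording, and the `EnvironmentSpec` fields are illustrative assumptions rather than the authors' actual design.

```python
import json
from dataclasses import dataclass, field


@dataclass
class EnvironmentSpec:
    """Hypothetical output of the context-analysis step (fields are assumptions)."""
    background_theme: str                                  # scene label driving the pseudo-3D backdrop
    key_points: list[str] = field(default_factory=list)    # facts to surface in the foreground layer


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a canned JSON response for illustration."""
    return json.dumps({
        "background_theme": "museum gallery",
        "key_points": ["Exhibit opens at 9 am", "Guided tours run every hour"],
    })


def analyze_context(dialogue_turn: str) -> EnvironmentSpec:
    # Ask the model to pick a fitting scene and extract key facts from the agent's next utterance.
    prompt = (
        "Given the virtual agent's next utterance, reply with JSON containing "
        '"background_theme" (a scene label) and "key_points" (facts to highlight):\n'
        f"{dialogue_turn}"
    )
    data = json.loads(call_llm(prompt))
    return EnvironmentSpec(data["background_theme"], data["key_points"])


if __name__ == "__main__":
    spec = analyze_context("Welcome! Let me tell you about today's exhibition schedule.")
    print(spec.background_theme)   # would drive background generation
    print(spec.key_points)         # would drive foreground content rendering
```

In an actual deployment the stub would be replaced by a real LLM API call, and the returned specification would drive the web front end's background and foreground rendering.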