Computer science
Human–computer interaction
Variety (cybernetics)
Perspective (graphical)
Ask price
Space (punctuation)
User experience design
Multimedia
Artificial intelligence
Operating system
Economics
Authors
Ziang Xiao, Sarah Mennicken, Beate Huber, Adam Shonkoff, Jennifer S. Thom
Abstract
Voice assistants offer users access to an increasing variety of personalized functionalities. Researchers and engineers who build these experiences rely on various signals from users to create the machine learning models powering them. One type of signal is explicit feedback. While collecting explicit user feedback in situ via voice assistants would help improve and inspect the underlying models, from a user perspective it can be disruptive to the overall experience, and the user might not feel compelled to respond. However, careful design can help alleviate this friction. In this paper, we explore the opportunities and the design space for eliciting explicit feedback through voice assistants. First, we present four usage categories of in-situ explicit feedback for model evaluation and improvement, derived from interviews with machine learning practitioners. Then, using realistic scenarios generated for each category, we conducted an online study to evaluate multiple voice assistant designs. Our results reveal that when the voice assistant was introduced as a learner or a collaborator, users were more willing to respond to its requests for feedback and perceived them as less disruptive. In addition, giving users instructions on how to initiate feedback themselves can reduce the perceived disruptiveness compared to asking users for feedback directly. Based on our findings, we discuss the implications and potential future directions for designing voice assistants that elicit user feedback for personalized voice experiences.