Computer science
Scope (computer science)
Product (mathematics)
Conjoint analysis
Perspective (graphical)
Preference
Service (business)
Recommender system
Artificial intelligence
Value (mathematics)
Visualization
Machine learning
Marketing
Business
Statistics
Programming language
Mathematics
Geometry
Authors
Doha Kim,Yeosol Song,Songyie Kim,Sewang Lee,Yanqin Wu,Jungwoo Shin,Daeho Lee
Identifier
DOI:10.1016/j.techfore.2023.122343
Abstract
Artificial intelligence (AI) has become part of our everyday lives, and its presence and influence are expected to grow exponentially. Despite this expanding impact, the opaque algorithms and processes that drive an AI's decisions and outputs can erode trust and thus impede the adoption of future AI services. Explainable AI (XAI) in recommender systems has emerged as a solution that can help users understand how and why an AI recommended a specific product or service. However, there is no standardized explanation method that satisfies users' preferences and needs. The main objective of this study is therefore to explore a unified explanation method centered on the human perspective. This study examines preferences for AI interfaces by investigating the components of user-centered explainability, including scope (global and local) and format (text and visualization). A mixed logit model is used to analyze data collected through a conjoint survey. Results show that local explanations and visualization are preferred, and that users dislike lengthy textual interfaces. Our findings also include the monetary value extracted for each attribute.
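The abstract's last step, extracting a monetary value for each interface attribute from discrete-choice estimates, is commonly done by dividing each attribute's part-worth utility by the price coefficient. A minimal sketch of that computation, using entirely hypothetical coefficient values (the attribute names mirror the abstract's factors but the numbers are illustrative placeholders, not results from the paper):

```python
# Hedged sketch: deriving willingness-to-pay (monetary value) per attribute
# from logit-style conjoint coefficients. All numbers are hypothetical.
import math

# Hypothetical part-worth utilities for explanation-interface attributes
# (positive = preferred relative to the baseline level).
betas = {
    "local_explanation": 0.8,   # vs. global explanation (baseline)
    "visualization": 0.6,       # vs. textual format (baseline)
    "long_text": -0.5,          # penalty for a lengthy textual interface
}
beta_price = -0.04  # utility lost per unit of price (hypothetical)

def willingness_to_pay(beta_attr: float, beta_price: float) -> float:
    """Monetary value of an attribute: -beta_attr / beta_price."""
    return -beta_attr / beta_price

def choice_probabilities(utilities):
    """Multinomial-logit choice probabilities (softmax over utilities)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Monetary value of each attribute under these placeholder coefficients.
wtp = {name: willingness_to_pay(b, beta_price) for name, b in betas.items()}
```

A full mixed logit additionally lets the betas vary randomly across respondents and is estimated by simulation; the ratio-based monetary-value step shown here is then applied to the estimated mean coefficients.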