Computer science
Artificial intelligence
Transparency (behavior)
Machine learning
Process (computing)
Selection (genetic algorithm)
Feature selection
Expert system
Feature (linguistics)
Domain (mathematical analysis)
Mathematics
Computer security
Linguistics
Operating system
Mathematical analysis
Philosophy
Authors
Jaroslaw Kornowicz, Kirsten Thommes
Source
Journal: PLOS ONE (Public Library of Science)
Date: 2025-03-07
Volume/Issue: 20(3): e0318874
Identifier
DOI: 10.1371/journal.pone.0318874
Abstract
The integration of users and experts in machine learning is a widely studied topic in artificial intelligence literature. Similarly, human-computer interaction research extensively explores the factors that influence the acceptance of AI as a decision support system. In this experimental study, we investigate users' preferences regarding the integration of experts in the development of such systems and how this affects their reliance on these systems. Specifically, we focus on the process of feature selection, an element that is gaining importance due to the growing demand for transparency in machine learning models. We differentiate between three feature selection methods: algorithm-based, expert-based, and a combined approach. In the first treatment, we analyze users' preferences for these methods. In the second treatment, we randomly assign users to one of the three methods and analyze whether the method affects advice reliance. Users prefer the combined method, followed by the expert-based and algorithm-based methods. However, the users in the second treatment rely equally on all methods. Thus, we find a remarkable difference between stated preferences and actual usage, revealing a significant attitude-behavior gap. Moreover, allowing users to choose their preferred method had no effect, and both the preferences and the extent of reliance were domain-specific. The findings underscore the importance of understanding cognitive processes in AI-supported decisions and the need for behavioral experiments in human-AI interactions.
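The three feature-selection conditions contrasted in the abstract can be sketched in code. This is a hypothetical illustration, not the authors' implementation: the scoring dictionary, feature names, and the fill-up rule for the combined method are all assumptions made for the example.

```python
# Illustrative sketch of three feature-selection strategies:
# purely algorithmic, purely expert-driven, and a combination.

def algorithm_based(scores, k):
    """Select the k features with the highest algorithmic relevance score."""
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

def expert_based(expert_choices):
    """Select exactly the features nominated by a domain expert."""
    return set(expert_choices)

def combined(scores, expert_choices, k):
    """Keep every expert-nominated feature, then fill up to k by score."""
    selected = set(expert_choices)
    for feat in sorted(scores, key=scores.get, reverse=True):
        if len(selected) >= k:
            break
        selected.add(feat)
    return selected

# Hypothetical relevance scores and expert nominations.
scores = {"age": 0.9, "income": 0.7, "zip": 0.2, "tenure": 0.5}
experts = ["zip", "tenure"]

print(sorted(algorithm_based(scores, 2)))   # ['age', 'income']
print(sorted(expert_based(experts)))        # ['tenure', 'zip']
print(sorted(combined(scores, experts, 3))) # ['age', 'tenure', 'zip']
```

In this sketch the combined method privileges expert knowledge and uses the algorithm only to fill the remaining slots; other merging rules (e.g., intersection or weighted voting) are equally plausible readings of "combined."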