Distrust
Computer Science
Artificial Intelligence
Data Science
Knowledge Management
Human-Computer Interaction
Machine Learning
Law
Political Science
Authors
Maria Virvou, George A. Tsihrintzis, Evangelia-Aikaterini Tsichrintzi
Identifier
DOI: 10.1016/j.ins.2024.120759
Abstract
The rapid integration of intelligent processes and methods into information systems in the Artificial Intelligence (AI) era has led to a substantial shift towards autonomous software decision-making. This evolution necessitates robust human oversight, especially in critical domains such as healthcare, education, and energy. Human trust in AI plays a vital role in the decision-making of users who interact with AI. This paper presents VIRTSI (Variability and Impact of Reciprocal Trust States towards Intelligent systems), a novel, rigorous computational model of human-AI interaction. VIRTSI simulates human trust states, spanning from overtrust to distrust, through user modelling. It comprises: (1) a representational model of trust dynamics based on Deterministic Finite State Automata (DFAs), which captures transitions among cognitive trust states in response to AI-generated replies; and (2) a trust evaluation model based on confusion matrices and accuracy metrics from machine learning, which provides a quantitative framework for analysing human trust dynamics. As a result, this is the first time that trust dynamics have been thoroughly traced in a representational model and that a method has been developed to assess the impact of potentially harmful states such as overtrust and distrust. An empirical study of ChatGPT (version 3.5), a recently launched generative-AI Large Language Model and a largely underexplored platform, evaluates human-AI interaction through VIRTSI. The study involved 1200 interactions of real users, together with AI experts and domain experts in two very different evaluation domains, namely software engineering and poetry, and it traces trust dynamics and the emerging human-AI interaction in concrete examples of real user synergies with generative AI. The research reveals the vital role of maintaining normal trust states for optimal human-AI interaction, and shows that both AI systems and their human users need further steps towards this goal. These findings can guide the design and evaluation of AI user interfaces and the incorporation of trust-related functionality into generative AI chatbots, by providing a new, rigorous DFA-based representation of trust dynamics and a corresponding confusion-matrix method for evaluating the impact of those dynamics on the efficiency of human-AI dialogues.
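The paper does not release executable code for VIRTSI; the sketch below is only a minimal illustration of the two components the abstract names, under assumed details: a three-state automaton (distrust, normal trust, overtrust) whose input alphabet is a binary judgement of AI reply quality, and a confusion-matrix evaluation of the user's accept/reject decisions against the actual correctness of the replies. The state set, transition table, and helper functions are all hypothetical, not the automaton or metrics published in the paper.

```python
# Minimal sketch of a VIRTSI-style trust-dynamics DFA plus a confusion-matrix
# evaluation. All names and the transition table are illustrative assumptions,
# not the model published in the paper.
from enum import Enum

class Trust(Enum):
    DISTRUST = 0   # user tends to reject even correct AI replies
    NORMAL = 1     # user calibrates acceptance to reply quality
    OVERTRUST = 2  # user tends to accept even incorrect AI replies

# DFA transition function delta(state, input). The input symbol is the
# observed quality of an AI-generated reply ("good" or "bad"). Assumed
# dynamics: good replies push trust upward, bad replies push it downward.
DELTA = {
    (Trust.DISTRUST, "good"): Trust.NORMAL,
    (Trust.DISTRUST, "bad"): Trust.DISTRUST,
    (Trust.NORMAL, "good"): Trust.OVERTRUST,
    (Trust.NORMAL, "bad"): Trust.DISTRUST,
    (Trust.OVERTRUST, "good"): Trust.OVERTRUST,
    (Trust.OVERTRUST, "bad"): Trust.NORMAL,
}

def trace(replies, start=Trust.NORMAL):
    """Run the DFA over one dialogue; return the sequence of trust states."""
    states = [start]
    for quality in replies:
        states.append(DELTA[(states[-1], quality)])
    return states

def confusion(accepted, correct):
    """Confusion matrix over dialogue turns: did the user accept the reply
    (accepted[i]) vs. was the reply actually correct (correct[i])?"""
    tp = sum(a and c for a, c in zip(accepted, correct))      # warranted trust
    fp = sum(a and not c for a, c in zip(accepted, correct))  # overtrust errors
    fn = sum(c and not a for a, c in zip(accepted, correct))  # distrust errors
    tn = sum(not a and not c for a, c in zip(accepted, correct))
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn,
            "accuracy": (tp + tn) / len(accepted)}

if __name__ == "__main__":
    # A five-turn dialogue: trace the trust states the DFA visits ...
    print([s.name for s in trace(["good", "good", "bad", "good", "bad"])])
    # ... then score the user's per-turn decisions against reply correctness.
    print(confusion(accepted=[True, True, True, True, False],
                    correct=[True, True, False, True, False]))
```

In this framing, overtrust surfaces as inflated false positives (accepting incorrect replies) and distrust as inflated false negatives (rejecting correct ones), which is what lets standard accuracy metrics serve as a quantitative lens on harmful trust states, as the abstract proposes.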