Perception
Psychological intervention
Healthcare
Medical decision-making
Computer science
Scalability
Psychology
Data science
Artificial intelligence
Medicine
Political science
Family medicine
Database
Neuroscience
Psychiatry
Law
Authors
Romain Cadario, Chiara Longoni, Carey K. Morewedge
Identifier
DOI:10.31234/osf.io/4kwap
Abstract
Medical artificial intelligence is cost-effective, scalable, and often outperforms human providers. One important barrier to its adoption is the perception that algorithms are a “black box”—people do not subjectively understand how algorithms make medical decisions, and we find this impairs their utilization. We argue a second barrier is that people also overestimate their objective understanding of medical decisions made by human healthcare providers. In five pre-registered experiments with convenience and nationally representative samples (N = 2,699), we find that people exhibit such an illusory understanding of human medical decision making (Study 1). This leads people to claim greater understanding of decisions made by human than algorithmic healthcare providers (Studies 2A-B), which makes people more reluctant to utilize algorithmic providers (Studies 3A-B). Fortunately, we find that asking people to explain the mechanisms underlying medical decision making reduces this illusory gap in subjective understanding (Study 1). Moreover, we test brief interventions that, by increasing subjective understanding of algorithmic decision processes, increase willingness to utilize algorithmic healthcare providers without undermining utilization of human providers (Studies 3A-B). Corroborating these results, a study on Google testing ads for an algorithmic skin cancer detection app shows that interventions that increase subjective understanding of algorithmic decision processes lead to a higher ad click-through rate (Study 4). Our findings show how reluctance to utilize medical algorithms is driven both by the difficulty of understanding algorithms, and an illusory understanding of human decision making.