Normativity
Relevance (law)
Agency (philosophy)
Health care
Thematic analysis
Quality (philosophy)
Computer science
Artificial intelligence
Psychology
Empirical research
Knowledge management
Qualitative research
Epistemology
Sociology
Social science
Philosophy
Political science
Law
Economics
Economic growth
Authors
Kristin M. Kostick, Benjamin Lang, Jared N. Smith, Meghan E. Hurley, Jennifer Blumenthal-Barby
Identifier
DOI:10.1136/jme-2023-109338
Abstract
Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high-stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool’s computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can lead to over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand how variability in trust criteria across stakeholders, settings, tools and use cases may influence approaches to using AI/ML tools in real settings. As part of a 5-year, multi-institutional Agency for Healthcare Research and Quality-funded study, we identify trust criteria for a survival prediction algorithm intended to support clinical decision-making for left ventricular assist device therapy, using semistructured interviews (n=40) with patients and physicians, analysed via thematic analysis. Findings suggest that physicians and patients share similar empirical considerations for trust, which were primarily epistemic in nature, focused on accuracy and validity of AI/ML estimates. Trust evaluations considered the nature, integrity and relevance of training data rather than the computational nature of algorithms themselves, suggesting a need to distinguish ‘source’ from ‘functional’ explainability. To a lesser extent, trust criteria were also relational (endorsement from others) and sometimes based on personal beliefs and experience. We discuss implications for promoting appropriate and responsible trust calibration for clinical decision-making using AI/ML.