Large Language Model Influence on Diagnostic Reasoning

Subject tags: Medicine; Randomized controlled trial; MEDLINE; Intervention (counseling); Physical therapy; Family medicine; Medical physics; Psychology; Nursing; Pathology; Mathematics education; Political science; Law
Authors
Ethan Goh, Robert Gallo, Jason Hom, Eric Strong, Yingjie Weng, Hannah Kerman, Joséphine Cool, Zahir Kanjee, Andrew S. Parsons, Neera Ahuja, Eric Horvitz, Daniel X. Yang, Arnold Milstein, Andrew Olson, Adam Rodman, Jonathan H. Chen
Source
Journal: JAMA Network Open [American Medical Association]
Volume (Issue): 7 (10), e2440969; Cited by: 102
Identifier
DOI: 10.1001/jamanetworkopen.2024.40969
Abstract

Importance: Large language models (LLMs) have shown promise in their performance on both multiple-choice and open-ended medical reasoning examinations, but it remains unknown whether the use of such tools improves physician diagnostic reasoning.

Objective: To assess the effect of an LLM on physicians' diagnostic reasoning compared with conventional resources.

Design, Setting, and Participants: A single-blind randomized clinical trial was conducted from November 29 to December 29, 2023. Using remote video conferencing and in-person participation across multiple academic medical institutions, physicians with training in family medicine, internal medicine, or emergency medicine were recruited.

Intervention: Participants were randomized to either access the LLM in addition to conventional diagnostic resources or conventional resources only, stratified by career stage. Participants were allocated 60 minutes to review up to 6 clinical vignettes.

Main Outcomes and Measures: The primary outcome was performance on a standardized rubric of diagnostic performance based on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps, validated and graded via blinded expert consensus. Secondary outcomes included time spent per case (in seconds) and final diagnosis accuracy. All analyses followed the intention-to-treat principle. A secondary exploratory analysis evaluated the standalone performance of the LLM by comparing the primary outcomes between the LLM alone group and the conventional resource group.

Results: Fifty physicians (26 attendings, 24 residents; median years in practice, 3 [IQR, 2-8]) participated virtually as well as at 1 in-person site. The median diagnostic reasoning score per case was 76% (IQR, 66%-87%) for the LLM group and 74% (IQR, 63%-84%) for the conventional resources-only group, with an adjusted difference of 2 percentage points (95% CI, −4 to 8 percentage points; P = .60). The median time spent per case for the LLM group was 519 (IQR, 371-668) seconds, compared with 565 (IQR, 456-788) seconds for the conventional resources group, with a time difference of −82 (95% CI, −195 to 31; P = .20) seconds. The LLM alone scored 16 percentage points (95% CI, 2-30 percentage points; P = .03) higher than the conventional resources group.

Conclusions and Relevance: In this trial, the availability of an LLM to physicians as a diagnostic aid did not significantly improve clinical reasoning compared with conventional resources. The LLM alone demonstrated higher performance than both physician groups, indicating the need for technology and workforce development to realize the potential of physician-artificial intelligence collaboration in clinical practice.

Trial Registration: ClinicalTrials.gov Identifier: NCT06157944
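To make the reported summary statistics concrete, the following is a minimal Python sketch of how per-case diagnostic reasoning scores for two groups could be summarized as a median with interquartile range and how a between-group difference with a confidence interval could be estimated by bootstrap. This is an illustration only: the score arrays are hypothetical placeholders, not trial data, and the trial's reported figure is an adjusted difference from the authors' own analysis, not a simple bootstrap of raw scores.

```python
# Illustrative sketch (not the authors' analysis): median (IQR) per group and a
# percentile-bootstrap CI for the between-group difference.
# The score arrays below are hypothetical placeholders, not trial data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-case diagnostic reasoning scores (0-100%), for illustration only.
llm_group = np.array([76, 66, 87, 80, 71, 90, 62, 78, 84, 69], dtype=float)
conventional_group = np.array([74, 63, 84, 70, 77, 58, 81, 72, 65, 79], dtype=float)

def summarize(scores):
    """Return median and interquartile range, as reported in the abstract."""
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    return med, (q1, q3)

def bootstrap_diff_ci(a, b, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for the difference in group means (a minus b)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = rng.choice(a, size=a.size).mean() - rng.choice(b, size=b.size).mean()
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

for name, scores in [("LLM group", llm_group), ("Conventional group", conventional_group)]:
    med, (q1, q3) = summarize(scores)
    print(f"{name}: median {med:.0f}% (IQR, {q1:.0f}%-{q3:.0f}%)")

lo, hi = bootstrap_diff_ci(llm_group, conventional_group)
print(f"Difference (LLM - conventional): 95% CI {lo:.1f} to {hi:.1f} percentage points")
```

The sketch only reproduces the descriptive style of reporting (median, IQR, difference with 95% CI); the trial's primary comparison additionally adjusted for covariates such as career stage, which a simple bootstrap of pooled scores does not capture.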