Performance of ChatGPT, GPT-4, and Google Bard on a Neurosurgery Oral Boards Preparation Question Bank

Tags: Neurosurgery, Medicine, Odds ratio, Logistic regression, Order (exchange), Internal medicine, Surgery, Finance, Economics
Authors
Rohaid Ali, Oliver Y. Tang, Ian D. Connolly, Jared Fridley, John H. Shin, Patricia L. Zadnik Sullivan, Deus Cielo, Adetokunbo A. Oyelese, Curtis E. Doberstein, Albert E. Telfeian, Ziya L. Gokaslan, Wael Asaad
Source
Journal: Neurosurgery [Oxford University Press]
Volume/Issue: 93 (5): 1090-1098 | Cited by: 46
Identifier
DOI: 10.1227/neu.0000000000002551
Abstract

General large language models (LLMs), such as ChatGPT (GPT-3.5), have demonstrated the capability to pass multiple-choice medical board examinations. However, the comparative accuracy of different LLMs, and LLM performance on assessments composed predominantly of higher-order management questions, are poorly understood. We aimed to assess the performance of 3 LLMs (GPT-3.5, GPT-4, and Google Bard) on a question bank designed specifically for neurosurgery oral boards examination preparation.

The 149-question Self-Assessment Neurosurgery Examination Indications Examination was used to query LLM accuracy. Questions were input in a single-best-answer, multiple-choice format. χ2, Fisher exact, and univariable logistic regression tests assessed differences in performance by question characteristics.

On a question bank composed predominantly of higher-order questions (85.2%), ChatGPT (GPT-3.5) and GPT-4 answered 62.4% (95% CI: 54.1%-70.1%) and 82.6% (95% CI: 75.2%-88.1%) of questions correctly, respectively. By contrast, Bard scored 44.2% (66/149, 95% CI: 36.2%-52.6%). GPT-3.5 and GPT-4 demonstrated significantly higher scores than Bard (both P < .01), and GPT-4 outperformed GPT-3.5 (P = .023). Among 6 subspecialties, GPT-4 had significantly higher accuracy in the Spine category relative to GPT-3.5 and in 4 categories relative to Bard (all P < .01). Incorporation of higher-order problem solving was associated with lower question accuracy for GPT-3.5 (odds ratio [OR] = 0.80, P = .042) and Bard (OR = 0.76, P = .014), but not GPT-4 (OR = 0.86, P = .085). GPT-4's performance on imaging-related questions surpassed GPT-3.5's (68.6% vs 47.1%, P = .044) and was comparable with Bard's (68.6% vs 66.7%, P = 1.000). However, GPT-4 demonstrated significantly lower rates of "hallucination" on imaging-related questions than both GPT-3.5 (2.3% vs 57.1%, P < .001) and Bard (2.3% vs 27.3%, P = .002). Lack of a question text description predicted significantly higher odds of hallucination for GPT-3.5 (OR = 1.45, P = .012) and Bard (OR = 2.09, P < .001).

On a question bank of predominantly higher-order management case scenarios for neurosurgery oral boards preparation, GPT-4 achieved a score of 82.6%, outperforming ChatGPT (GPT-3.5) and Google Bard.
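The abstract's headline numbers can be sanity-checked directly. Below is a minimal Python sketch (not the authors' published code) that recomputes each model's accuracy and 95% CI from the reported counts. Two assumptions are labeled in the code: the GPT-3.5 and GPT-4 counts (93/149 and 123/149) are back-calculated from the quoted percentages, since only Bard's 66/149 is stated explicitly, and the exact-binomial (Clopper-Pearson) interval is inferred because it closely reproduces the quoted bounds; the abstract does not name the CI method.

```python
# Minimal sketch recomputing the abstract's per-model accuracies and CIs.
# ASSUMPTIONS: GPT-3.5 (93/149) and GPT-4 (123/149) counts are
# back-calculated from the quoted percentages; only Bard's 66/149 is
# stated directly. The Clopper-Pearson interval is inferred, not stated.
from statsmodels.stats.proportion import proportion_confint

N = 149  # questions in the indications question bank
correct = {"GPT-3.5": 93, "GPT-4": 123, "Bard": 66}

for model, k in correct.items():
    # method="beta" is statsmodels' Clopper-Pearson exact binomial interval
    lo, hi = proportion_confint(k, N, alpha=0.05, method="beta")
    print(f"{model}: {k}/{N} = {k / N:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The pairwise model comparisons (χ2, Fisher exact) and the odds ratios for higher-order problem solving and hallucination were computed over per-question characteristics, so reproducing those would require the item-level data, which the abstract does not provide.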