Comparing the Performance of Popular Large Language Models on the National Board of Medical Examiners Sample Questions

Authors
Abbas Abolghasemi, Maqsood Ur Rehman, Syed Shakil Ur Rehman
Source
Journal: Cureus [Cureus, Inc.]
Identifier
DOI: 10.7759/cureus.55991
Abstract

Background: Large language models (LLMs) have transformed various domains in medicine, aiding in complex tasks and clinical decision-making, with OpenAI's GPT-4, GPT-3.5, Google's Bard, and Anthropic's Claude among the most widely used. While GPT-4 has demonstrated superior performance in some studies, comprehensive comparisons among these models remain limited. Recognizing the significance of the National Board of Medical Examiners (NBME) exams in assessing the clinical knowledge of medical students, this study aims to compare the accuracy of popular LLMs on NBME clinical subject exam sample questions.

Methods: The questions used in this study were multiple-choice questions obtained from the official NBME website and are publicly available. Questions from the NBME subject exams in medicine, pediatrics, obstetrics and gynecology, clinical neurology, ambulatory care, family medicine, psychiatry, and surgery were used to query each LLM. The responses from GPT-4, GPT-3.5, Claude, and Bard were collected in October 2023. Each LLM's response was compared with the answer provided by the NBME and checked for accuracy. Statistical analysis was performed using one-way analysis of variance (ANOVA).

Results: A total of 163 questions were posed to each LLM. GPT-4 scored 163/163 (100%), GPT-3.5 scored 134/163 (82.2%), Claude scored 138/163 (84.7%), and Bard scored 123/163 (75.5%). The total performance of GPT-4 was statistically superior to that of GPT-3.5, Claude, and Bard by 17.8%, 15.3%, and 24.5%, respectively, while the total performances of GPT-3.5, Claude, and Bard did not differ significantly from one another. GPT-4 significantly outperformed Bard in specific subjects, including medicine, pediatrics, family medicine, and ambulatory care, and GPT-3.5 in ambulatory care and family medicine. Across all LLMs, the surgery exam had the highest average score (18.25/20), while the family medicine exam had the lowest average score (3.75/5).
Conclusion: GPT-4's superior performance on NBME clinical subject exam sample questions underscores its potential in medical education and practice. While LLMs exhibit promise, discernment in their application is crucial, considering occasional inaccuracies. As technological advancements continue, regular reassessments and refinements are imperative to maintain their reliability and relevance in medicine.
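As a minimal sketch, the abstract's overall accuracies can be recomputed from the reported counts, and the kind of one-way ANOVA the study describes can be illustrated in plain Python. Note the per-subject accuracy vectors below are hypothetical placeholders, since the abstract reports only overall counts and two per-exam averages, not the full per-exam breakdown:

```python
# Recompute overall accuracy from the correct-answer counts reported
# in the abstract (each model answered the same 163 questions).
correct = {"GPT-4": 163, "GPT-3.5": 134, "Claude": 138, "Bard": 123}
total = 163
accuracy = {model: n / total * 100 for model, n in correct.items()}
for model, acc in accuracy.items():
    print(f"{model}: {acc:.1f}%")

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA over the given groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# HYPOTHETICAL per-subject accuracies (%), one value per subject exam;
# these are illustrative placeholders, not the study's actual data.
gpt4   = [100, 100, 100, 100, 100, 100, 100, 100]
gpt35  = [85, 80, 78, 90, 75, 88, 82, 80]
claude = [88, 83, 80, 92, 78, 85, 86, 84]
bard   = [80, 72, 70, 85, 68, 78, 76, 75]
f_stat = one_way_anova_f(gpt4, gpt35, claude, bard)
print(f"F = {f_stat:.2f}")
```

In practice a library routine such as `scipy.stats.f_oneway` would also return the p-value used to judge significance; the hand-rolled version above only exposes the F statistic.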
