Evaluating Bard Gemini Pro and GPT-4 Vision Against Student Performance in Medical Visual Question Answering: Comparative Case Study

Keywords: Python (programming language), German, visualization, computer science, data science, medical education, artificial intelligence, medical diagnosis, medicine, software, medical imaging, psychology, natural language processing, interpretation (philosophy), curriculum, mathematics education, key (lock), comparative case, educational measurement, statistical analysis, training set
Authors
Jonas Roos,Ron Martin,Robert Kaczmarczyk
Source
Journal: JMIR Formative Research [JMIR Publications Inc.]
Volume/Issue: 8: e57592-e57592; Cited by: 8
Identifier
DOI:10.2196/57592
Abstract

Background: The rapid development of large language models (LLMs) such as OpenAI's ChatGPT has significantly impacted medical research and education. These models have shown potential in fields ranging from radiological imaging interpretation to medical licensing examination assistance. Recently, LLMs have been enhanced with image recognition capabilities.

Objective: This study aims to critically examine the effectiveness of these LLMs in medical diagnostics and training by assessing their accuracy and utility in answering image-based questions from medical licensing examinations.

Methods: This study analyzed 1070 image-based multiple-choice questions from the AMBOSS learning platform, divided into 605 in English and 465 in German. Customized prompts in both languages directed the models to interpret medical images and provide the most likely diagnosis. Student performance data were obtained from AMBOSS, including metrics such as the "student passed mean" and "majority vote." Statistical analysis was conducted using Python (Python Software Foundation), with key libraries for data manipulation and visualization.

Results: GPT-4 1106 Vision Preview (OpenAI) outperformed Bard Gemini Pro (Google), correctly answering 56.9% (609/1070) of questions compared with Bard's 44.6% (477/1070), a statistically significant difference (χ²₁=32.1, P<.001). However, GPT-4 1106 left 16.1% (172/1070) of questions unanswered, significantly more than Bard's 4.1% (44/1070; χ²₁=83.1, P<.001). When considering only answered questions, GPT-4 1106's accuracy increased to 67.8% (609/898), surpassing both Bard (477/1026, 46.5%; χ²₁=87.7, P<.001) and the student passed mean of 63.0% (674/1070, SE 1.48%; χ²₁=4.8, P=.03). Language-specific analysis revealed that both models performed better in German than in English, with GPT-4 1106 showing greater accuracy in German (282/465, 60.6% vs 327/605, 54.1%; χ²₁=4.4, P=.04) and Bard Gemini Pro exhibiting a similar trend (255/465, 54.8% vs 222/605, 36.7%; χ²₁=34.3, P<.001). The student majority vote achieved an overall accuracy of 94.5% (1011/1070), significantly outperforming both artificial intelligence models (GPT-4 1106: χ²₁=408.5, P<.001; Bard Gemini Pro: χ²₁=626.6, P<.001).

Conclusions: Our study shows that GPT-4 1106 Vision Preview and Bard Gemini Pro have potential in medical visual question-answering tasks and could serve as support tools for students. However, their performance varies depending on the language used, with a preference for German, and they also have limitations in responding to non-English content. The accuracy rates, particularly when compared with student responses, highlight the potential of these models in medical education, yet the need for further optimization and understanding of their limitations in diverse linguistic contexts remains critical.
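The abstract reports pairwise chi-square tests on answer counts but does not specify the analysis code. As a minimal sketch, assuming a standard 2x2 chi-square test with Yates continuity correction via scipy (the library and correction setting are assumptions, not stated in the abstract), the overall GPT-4 vs Bard comparison can be approximately reproduced from the reported counts:

# Illustrative sketch only: the study's exact analysis pipeline is not given in
# the abstract. This rebuilds the overall GPT-4 vs Bard comparison from the
# reported counts, assuming a 2x2 chi-square test with Yates continuity
# correction (scipy's default for 2x2 tables).
from scipy.stats import chi2_contingency

gpt4_correct, gpt4_total = 609, 1070   # GPT-4 1106 Vision Preview
bard_correct, bard_total = 477, 1070   # Bard Gemini Pro

# Rows = model, columns = (correct, incorrect)
table = [
    [gpt4_correct, gpt4_total - gpt4_correct],
    [bard_correct, bard_total - bard_correct],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, P = {p_value:.1e}")  # approx. chi2(1) = 32.1, P < .001

Substituting the other reported counts (for example, 172/1070 vs 44/1070 unanswered questions) reproduces the remaining pairwise comparisons in the same way.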
