Comparison of emergency medicine specialist, cardiologist, and chat-GPT in electrocardiography assessment

Keywords: Medicine, Electrocardiography, Cardiology, Internal Medicine, Emergency Medical Services, Emergency Medicine, Emergency Department, Psychiatry
Authors
Serkan Günay, Ahmet Öztürk, Hakan Özerol, Yavuz Yiğit, Ali Kemal Erenler
Source
Journal: American Journal of Emergency Medicine [Elsevier BV]
Volume/Issue: 80: 51-60; Cited by: 12
Identifier
DOI: 10.1016/j.ajem.2024.03.017
Abstract

ChatGPT, developed by OpenAI, represents the state of the art in its field with its latest model, GPT-4. Extensive research involving ChatGPT is being conducted across various domains, including cardiovascular disease. Nevertheless, studies addressing the proficiency of GPT-4 in making diagnoses from electrocardiography (ECG) data are lacking. The goal of this study is to evaluate the diagnostic accuracy of GPT-4 when provided with ECG data and to compare its performance with that of emergency medicine specialists and cardiologists. The study was approved by the Clinical Research Ethics Committee of Hitit University Medical Faculty on August 21, 2023 (decision no: 2023–91). Drawing on cases from the book "150 ECG Cases", 40 ECG cases were converted into multiple-choice questions (20 everyday and 20 more challenging ECG questions). The participant pool comprised 12 emergency medicine specialists and 12 cardiology specialists, and GPT-4 was administered the questions in 12 separate sessions. The responses of the three groups (cardiologists, emergency medicine specialists, and GPT-4) were evaluated separately. On the everyday ECG questions, GPT-4 outperformed both the emergency medicine specialists and the cardiologists (p < 0.001 and p = 0.001, respectively). On the more challenging ECG questions, GPT-4 outperformed the emergency medicine specialists (p < 0.001), while no statistically significant difference was found between GPT-4 and the cardiologists (p = 0.190). On overall accuracy across all ECG questions, GPT-4 was more successful than both the emergency medicine specialists and the cardiologists (p < 0.001 and p = 0.001, respectively). Our study shows that GPT-4 is more successful than emergency medicine specialists in evaluating both everyday and more challenging ECG questions. It also performed better than the cardiologists on everyday questions, but its performance converged with that of the cardiologists as question difficulty increased.
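The abstract reports pairwise group comparisons with p-values but does not state which statistical test was used. The sketch below is a minimal illustration of one plausible analysis under that assumption: a two-sided Mann-Whitney U test on per-rater accuracy scores (12 raters or sessions per group, each scored out of 40 questions). All score values are hypothetical placeholders, not data from the study.

```python
# Minimal sketch (not the authors' code): pairwise comparison of per-rater accuracy
# between groups with a nonparametric Mann-Whitney U test. Scores are hypothetical;
# each value is the number of correct answers out of 40 ECG questions.
from scipy.stats import mannwhitneyu

gpt4_sessions  = [34, 35, 33, 36, 34, 35, 33, 34, 36, 35, 34, 33]   # 12 GPT-4 sessions (hypothetical)
cardiologists  = [30, 31, 29, 32, 30, 28, 31, 30, 29, 32, 30, 31]   # 12 cardiologists (hypothetical)
em_specialists = [25, 27, 24, 26, 25, 27, 24, 26, 25, 27, 26, 24]   # 12 EM specialists (hypothetical)

# Two-sided test for each pairwise comparison against GPT-4
for name, group in [("cardiologists", cardiologists), ("EM specialists", em_specialists)]:
    stat, p = mannwhitneyu(gpt4_sessions, group, alternative="two-sided")
    print(f"GPT-4 vs {name}: U = {stat:.1f}, p = {p:.4f}")
```

A nonparametric test is chosen here only because the group sizes are small (n = 12 per group) and score distributions are unknown; the paper itself may have used a different method.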