
Privacy-ensuring Open-weights Large Language Models Are Competitive with Closed-weights GPT-4o in Extracting Chest Radiography Findings from Free-Text Reports

Subjects: Medicine · Radiography · Radiology
Authors
Sebastian Nowak, Benjamin Wulff, Yannik C. Layer, Maike Theis, Alexander Isaak, Babak Salam, Wolfgang Block, Daniel Kuetting, Claus C. Pieper, Julian A. Luetkens, Ulrike Attenberger, Alois M. Sprinkart
Source
Journal: Radiology [Radiological Society of North America]
Volume/Issue: 314 (1) · Cited by: 9
Identifier
DOI: 10.1148/radiol.240895
Abstract

Background: Large-scale secondary use of clinical databases requires automated tools for retrospective extraction of structured content from free-text radiology reports.

Purpose: To share data and insights on the application of privacy-preserving open-weights large language models (LLMs) for report content extraction, with comparison to standard rule-based systems and the closed-weights LLMs from OpenAI.

Materials and Methods: In this retrospective exploratory study conducted between May 2024 and September 2024, zero-shot prompting of 17 open-weights LLMs was performed. These LLMs, with model weights released under open licenses, were compared with rule-based annotation and with OpenAI's GPT-4o, GPT-4o-mini, GPT-4-turbo, and GPT-3.5-turbo on a manually annotated public English chest radiography dataset (Indiana University; 3927 patients and reports). An annotated nonpublic German chest radiography dataset (18 500 reports; 16 844 patients [10 340 male; mean age, 62.6 years ± 21.5 (SD)]) was used to compare local fine-tuning of all open-weights LLMs via low-rank adaptation and 4-bit quantization with bidirectional encoder representations from transformers (BERT), using different subsets of reports (from 10 to 14 580). Nonoverlapping 95% CIs of macro-averaged F1 scores were defined as relevant differences.

Results: For the English reports, the highest zero-shot macro-averaged F1 score was observed for GPT-4o (92.4% [95% CI: 87.9, 95.9]); GPT-4o outperformed the rule-based CheXpert labeler [Stanford University] (73.1% [95% CI: 65.1, 79.7]) but was comparable in performance to several open-weights LLMs (top three: Mistral-Large [Mistral AI], 92.6% [95% CI: 88.2, 96.0]; Llama-3.1-70b [Meta AI], 92.2% [95% CI: 87.1, 95.8]; and Llama-3.1-405b [Meta AI], 90.3% [95% CI: 84.6, 94.5]). For the German reports, Mistral-Large (91.6% [95% CI: 90.5, 92.7]) had the highest zero-shot macro-averaged F1 score among the open-weights LLMs evaluated and outperformed the rule-based annotation (74.8% [95% CI: 73.3, 76.1]). With 1000 reports used for fine-tuning, all LLMs (top three: Mistral-Large, 94.3% [95% CI: 93.5, 95.2]; OpenBioLLM-70b [Saama], 93.9% [95% CI: 92.9, 94.8]; and Mixtral-8×22b [Mistral AI], 93.8% [95% CI: 92.8, 94.7]) achieved significantly higher macro-averaged F1 scores than did BERT (86.7% [95% CI: 85.0, 88.3]); however, the differences were not relevant when 2000 or more reports were used for fine-tuning.

Conclusion: LLMs have the potential to outperform rule-based systems for zero-shot "out-of-the-box" structuring of report databases, with privacy-ensuring open-weights LLMs being competitive with closed-weights GPT-4o. Additionally, open-weights LLMs outperformed BERT when moderate numbers of reports were used for fine-tuning.

Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Gee and Yao in this issue.
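The comparisons above all rest on the macro-averaged F1 score: the F1 score is computed separately for each finding label and the unweighted mean is taken, so rare findings count as much as common ones. A minimal sketch, assuming findings are represented as one label set per report (the function names and data format here are illustrative, not the authors' code):

```python
def f1(tp, fp, fn):
    # F1 = 2*TP / (2*TP + FP + FN); defined as 0 when the label never occurs
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: unweighted mean of per-label F1 scores.

    y_true / y_pred: one set of finding labels per report (hypothetical format).
    """
    scores = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if label in t and label in p)
        fp = sum(1 for t, p in zip(y_true, y_pred) if label not in t and label in p)
        fn = sum(1 for t, p in zip(y_true, y_pred) if label in t and label not in p)
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)
```

Because the mean is unweighted, a model that misses a rare finding entirely is penalized heavily, which is why macro averaging is a common choice for imbalanced report-labeling tasks.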
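The study treats nonoverlapping 95% CIs of the macro-averaged F1 score as relevant differences. Such CIs are commonly obtained with a percentile bootstrap over reports; the sketch below assumes that procedure (the abstract does not state the authors' exact CI method), with the metric passed in as a function:

```python
import random

def bootstrap_ci(y_true, y_pred, labels, metric, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample reports with replacement, recompute the metric."""
    rng = random.Random(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resampled report indices
        stats.append(metric([y_true[i] for i in idx],
                            [y_pred[i] for i in idx], labels))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[min(n_boot - 1, int((1 - alpha / 2) * n_boot))]
    return lo, hi
```

Two models would then be called relevantly different only when the upper bound of one interval lies below the lower bound of the other, a deliberately conservative criterion.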