Deep Learning Approach for Negation and Speculation Detection for Automated Important Finding Flagging and Extraction in Radiology Report: Internal Validation and Technique Comparison Study

Keywords: computer science, artificial intelligence, natural language processing, negation, speculation, transformer, encoder, token, machine learning, F1 score
Authors
Kung-Hsun Weng, Chung-Feng Liu, Chia-Jung Chen
Source
Journal: JMIR Medical Informatics [JMIR Publications Inc.]
Volume/Issue: 11: e46348. Cited by: 8
Identifier
DOI:10.2196/46348
Abstract

Background: Negation and speculation unrelated to abnormal findings can lead to false-positive alarms when radiology reports are automatically highlighted or flagged by laboratory information systems.

Objective: This internal validation study evaluated the performance of natural language processing methods (NegEx, NegBio, NegBERT, and transformers) for detecting such statements.

Methods: We annotated all negative and speculative statements unrelated to abnormal findings in the reports. In experiment 1, we fine-tuned several transformer models (ALBERT [A Lite Bidirectional Encoder Representations from Transformers], BERT [Bidirectional Encoder Representations from Transformers], DeBERTa [Decoding-Enhanced BERT With Disentangled Attention], DistilBERT [Distilled version of BERT], ELECTRA [Efficiently Learning an Encoder That Classifies Token Replacements Accurately], ERNIE [Enhanced Representation through Knowledge Integration], RoBERTa [Robustly Optimized BERT Pretraining Approach], SpanBERT, and XLNet) and compared their performance using precision, recall, accuracy, and F1-scores. In experiment 2, we compared the best model from experiment 1 with 3 established negation- and speculation-detection algorithms (NegEx, NegBio, and NegBERT).

Results: We collected 6000 radiology reports from 3 branches of the Chi Mei Hospital, covering multiple imaging modalities and body parts. A total of 15.01% (105,755/704,512) of words and 39.45% (4529/11,480) of important diagnostic keywords occurred in negative or speculative statements unrelated to abnormal findings. In experiment 1, all models achieved an accuracy of >0.98 and an F1-score of >0.90 on the test data set; ALBERT performed best (accuracy=0.991; F1-score=0.958). In experiment 2, ALBERT outperformed the optimized NegEx, NegBio, and NegBERT methods in overall performance (accuracy=0.996; F1-score=0.991), in predicting whether diagnostic keywords occur in speculative statements unrelated to abnormal findings, and in improving keyword extraction (accuracy=0.996; F1-score=0.997).

Conclusions: The ALBERT deep learning method showed the best performance. Our results represent a significant advancement in the clinical applications of computer-aided notification systems.
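The abstract does not include implementation details, but a minimal sketch can illustrate the kind of transformer fine-tuning described in experiment 1. The example below assumes the Hugging Face transformers and datasets libraries, a public albert-base-v2 checkpoint, a binary sentence-level label (negative/speculative statement unrelated to abnormal findings vs. not), and hypothetical example sentences and hyperparameters; none of these choices are taken from the paper.

from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical annotated sentences: label 1 = negative/speculative statement
# unrelated to abnormal findings, label 0 = everything else.
train_data = Dataset.from_dict({
    "text": [
        "No evidence of intracranial hemorrhage.",
        "A 2 cm nodule is seen in the right upper lobe.",
    ],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained(
    "albert-base-v2", num_labels=2
)

def tokenize(batch):
    # Pad/truncate so every example has the same fixed length.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

train_data = train_data.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="albert-negation-speculation",  # hypothetical output directory
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_data)
trainer.train()

# At inference time, sentences predicted as label 1 can be filtered out before
# keyword flagging, so diagnostic keywords inside negated or speculative text
# do not trigger false-positive alerts.

For the rule-based baselines in experiment 2, NegEx-style trigger-term matching (for example, "no", "without", "cannot exclude") would be applied to the same annotated keywords; the paper compares against the authors' optimized configurations of NegEx, NegBio, and NegBERT, which are not reproduced here.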
