A survey of deep-learning-based radiology report generation using multimodal inputs

Deep learning · Artificial intelligence · Computer science · Machine learning · Computer vision · Medical physics · Medicine · Surgery
Authors
Xinyi Wang, Grazziela P. Figueredo, Ruizhe Li, Wei Emma Zhang, Weitong Chen, Xin Chen
Source
Journal: Medical Image Analysis [Elsevier]
Article number: 103627 · Cited by: 3
Identifier
DOI:10.1016/j.media.2025.103627
Abstract

Automatic radiology report generation can reduce physicians' workload and narrow regional disparities in medical resources, making it an important topic in medical image analysis. It is a challenging task, as the computational model must mimic physicians by extracting information from multimodal input data (e.g., medical images, clinical information, and medical knowledge) and producing comprehensive, accurate reports. Numerous recent works have addressed this problem with deep-learning-based methods such as transformers, contrastive learning, and knowledge-base construction. This survey summarizes the key techniques developed in the most recent works and proposes a general workflow for deep-learning-based report generation with five main components: multi-modality data acquisition, data preparation, feature learning, feature fusion and interaction, and report generation. The state-of-the-art methods for each of these components are highlighted. Additionally, we summarize the latest developments in large-model-based methods and model explainability, along with public datasets, evaluation methods, current challenges, and future directions in this field. We also conduct a quantitative comparison of different methods under the same experimental setting. This is the most up-to-date survey focusing on multi-modality inputs and data fusion for radiology report generation. It aims to provide comprehensive information for researchers interested in automatic clinical report generation and medical image analysis, especially with multimodal inputs, and to assist them in developing new algorithms to advance the field.
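The five-component workflow described in the abstract maps naturally onto an encoder–decoder design. The sketch below is illustrative only and is not taken from the surveyed paper: it assumes a toy CNN image encoder, an embedding layer for clinical-text tokens, simple concatenation as the fusion step, and a transformer decoder for report generation. All module names, vocabulary size, and dimensions are hypothetical placeholders.

```python
# A minimal, illustrative sketch (not the authors' implementation) of a
# multimodal report-generation pipeline: image + clinical text are encoded,
# fused into one memory sequence, and decoded into report tokens.
import torch
import torch.nn as nn


class MultimodalReportGenerator(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, n_heads=8, n_layers=3):
        super().__init__()
        # Feature learning: a small CNN stands in for the image encoder
        # (real systems typically use a pretrained CNN or ViT backbone).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((7, 7)),
        )
        # Clinical information (e.g., indication text) embedded as tokens.
        self.text_embedding = nn.Embedding(vocab_size, d_model)
        # Feature fusion and interaction: visual patches and clinical tokens
        # are concatenated into one memory sequence for cross-attention.
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=n_layers)
        self.report_embedding = nn.Embedding(vocab_size, d_model)
        self.output_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, clinical_tokens, report_tokens):
        visual = self.image_encoder(image)                 # (B, d, 7, 7)
        visual = visual.flatten(2).transpose(1, 2)         # (B, 49, d)
        clinical = self.text_embedding(clinical_tokens)    # (B, Tc, d)
        memory = torch.cat([visual, clinical], dim=1)      # fused memory
        tgt = self.report_embedding(report_tokens)         # (B, Tr, d)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(
            report_tokens.size(1))
        out = self.decoder(tgt, memory, tgt_mask=causal_mask)
        return self.output_head(out)                       # token logits


# Toy usage: one grayscale chest X-ray, ten clinical tokens, a 20-token report.
model = MultimodalReportGenerator()
image = torch.randn(1, 1, 224, 224)
clinical_tokens = torch.randint(0, 10000, (1, 10))
report_tokens = torch.randint(0, 10000, (1, 20))
logits = model(image, clinical_tokens, report_tokens)
print(logits.shape)  # torch.Size([1, 20, 10000])
```

In practice, published systems replace the toy CNN with a pretrained backbone and use richer fusion mechanisms such as cross-attention, contrastive alignment, or knowledge-graph conditioning; these are the kinds of design choices the survey categorizes and compares.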