Modality-level cross-connection and attentional feature fusion based deep neural network for multi-modal brain tumor segmentation

Keywords: Modality (human-computer interaction), Computer science, Feature (linguistics), Segmentation, Deep learning, Artificial intelligence, Artificial neural network, Feature learning, Block (permutation group theory), Encoder, Pattern recognition (psychology), Computer vision, Mathematics, Linguistics, Geometry, Operating system, Philosophy
Author
Tongxue Zhou
Source
Journal: Biomedical Signal Processing and Control [Elsevier BV]
Volume/Issue: 81: 104524 | Cited by: 13
Identifier
DOI: 10.1016/j.bspc.2022.104524
Abstract

Brain tumor segmentation from Magnetic Resonance Imaging (MRI) is essential for early diagnosis and treatment planning of brain cancers in clinical practice. However, existing brain tumor segmentation methods cannot sufficiently learn high-quality feature information for segmentation. To address this issue, a modality-level cross-connection and attentional feature fusion based deep neural network is proposed for multi-modal brain tumor segmentation. The proposed method can not only locate the whole tumor region but also accurately segment the sub-tumor regions. The network architecture is a multi-encoder based 3D U-Net. Inspired by the characteristics of the multi-modal MRI sequences, a modality-level cross-connection (MCC) is first proposed to exploit the complementary information between related modalities. Moreover, to enhance the feature learning capacity of the network, an attentional feature fusion module (AFFM) is proposed to fuse the multi-modal features and to extract useful feature representations for segmentation. It consists of two components: a multi-scale spatial feature fusion (MSFF) block and a dual-path channel feature fusion (DCFF) block, which learn multi-scale spatial contextual information and channel-wise feature information, respectively, to improve segmentation accuracy. In addition, the proposed fusion module can be easily integrated into other fusion models and deep neural network architectures. Comprehensive experiments on the BraTS 2018 dataset demonstrate that the proposed architecture effectively improves brain tumor segmentation performance compared with the baseline methods and state-of-the-art methods.
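
To make the fusion mechanism concrete, the snippet below is a minimal PyTorch sketch in the spirit of the AFFM described above: a multi-scale spatial block built from parallel dilated 3D convolutions and a dual-path channel block combining a global (pooled) path with a local (point-wise) path. The class names (MSFFBlock, DCFFBlock, AFFModule), the dilation rates, and the simple additive pre-fusion of the two modality branches are illustrative assumptions, not the authors' published implementation.

# Hypothetical sketch of an attention-based fusion block for two 3D feature maps.
# Names and design details are assumptions for illustration; the paper's exact
# implementation may differ.
import torch
import torch.nn as nn


class MSFFBlock(nn.Module):
    """Multi-scale spatial attention: parallel dilated 3D convs -> spatial weight map."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(channels, channels // 2, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.project = nn.Conv3d(len(dilations) * (channels // 2), 1, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.sigmoid(self.project(multi_scale))  # (B, 1, D, H, W)


class DCFFBlock(nn.Module):
    """Dual-path channel attention: a global (pooled) path plus a local (point-wise) path."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.global_path = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv3d(hidden, channels, 1),
        )
        self.local_path = nn.Sequential(
            nn.Conv3d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv3d(hidden, channels, 1),
        )

    def forward(self, x):
        # Broadcasting adds the (B, C, 1, 1, 1) global weights to the local map.
        return torch.sigmoid(self.global_path(x) + self.local_path(x))


class AFFModule(nn.Module):
    """Fuse two modality-specific feature maps with spatial then channel attention."""
    def __init__(self, channels):
        super().__init__()
        self.msff = MSFFBlock(channels)
        self.dcff = DCFFBlock(channels)

    def forward(self, feat_a, feat_b):
        fused = feat_a + feat_b            # simple pre-fusion of the two modality branches
        fused = fused * self.msff(fused)   # re-weight spatial locations
        fused = fused * self.dcff(fused)   # re-weight channels
        return fused


if __name__ == "__main__":
    # Two modality-specific encoder features, e.g. from T1ce and FLAIR branches.
    a = torch.randn(1, 16, 8, 32, 32)
    b = torch.randn(1, 16, 8, 32, 32)
    print(AFFModule(16)(a, b).shape)  # torch.Size([1, 16, 8, 32, 32])

The fused output keeps the input resolution, so a block like this could sit at each encoder level of a multi-encoder 3D U-Net before the features are passed to the shared decoder.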