Computer science
Artificial intelligence
Feature
Context
Encoder
Pattern recognition
Segmentation
Modal
Deep learning
Benchmark
Modality (human–computer interaction)
Machine learning
Authors
Guanghui Yue,Guibin Zhuo,Tianwei Zhou,Weide Liu,Tianfu Wang,Qiuping Jiang
Identifier
DOI: 10.1109/JBHI.2023.3347556
Abstract
In the context of contemporary artificial intelligence, an increasing number of deep learning (DL)-based segmentation methods have recently been proposed for brain tumor segmentation (BraTS) via analysis of multi-modal MRI. However, existing DL-based works usually fuse the information of different modalities directly at multiple stages without considering the gap between modalities, leaving much room for performance improvement. In this paper, we introduce a novel deep neural network, termed ACFNet, for accurately segmenting brain tumors in multi-modal MRI. Specifically, ACFNet has a parallel structure with three encoder-decoder streams. The upper and lower streams generate coarse predictions from individual modalities, while the middle stream integrates the complementary knowledge of different modalities and bridges the gap between them to yield a fine prediction. To effectively integrate the complementary information, we propose an adaptive cross-feature fusion (ACF) module at the encoder that first explores the correlation between the feature representations from the upper and lower streams and then refines the fused correlation information. To bridge the gap between the information from multi-modal data, we propose a prediction inconsistency guidance (PIG) module at the decoder that helps the network focus more on error-prone regions through a guidance strategy when incorporating the features from the encoder. The guidance is obtained by calculating the prediction inconsistency between the upper and lower streams and highlights the gap between multi-modal data. Extensive experiments on the BraTS 2020 dataset show that ACFNet is competent for the BraTS task, achieving promising results and outperforming six mainstream competing methods.
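The two modules described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the sigmoid gating in `acf_fuse` and the absolute-difference guidance in `pig_guidance` are assumptions chosen to mirror the abstract's description (cross-stream correlation followed by refinement; inconsistency between the two single-modality predictions highlighting error-prone regions).

```python
import numpy as np

def acf_fuse(f_up, f_low):
    """Hypothetical sketch of adaptive cross-feature fusion (ACF):
    gate the two streams' features by their cross-stream correlation.
    The gating scheme is an assumption, not the paper's exact design."""
    # channel-wise correlation between the two feature maps, shape (C, 1, 1)
    corr = np.sum(f_up * f_low, axis=(1, 2), keepdims=True)
    w = 1.0 / (1.0 + np.exp(-corr))        # sigmoid gate in [0, 1]
    return w * f_up + (1.0 - w) * f_low    # adaptively mixed features

def pig_guidance(p_up, p_low):
    """Hypothetical sketch of prediction inconsistency guidance (PIG):
    regions where the two single-modality predictions disagree are
    treated as error-prone and receive high guidance weight."""
    return np.abs(p_up - p_low)            # high where the streams conflict

# toy example: 1-channel 2x2 probability maps from the two streams
p_up  = np.array([[[0.9, 0.1], [0.8, 0.5]]])
p_low = np.array([[[0.9, 0.7], [0.2, 0.5]]])
guidance = pig_guidance(p_up, p_low)       # large at the two disputed pixels
fused = acf_fuse(p_up, p_low)              # same shape as the inputs
```

In a full network these operations would act on learned feature tensors inside the encoder and decoder; here plain arrays stand in to show the data flow.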