Computer science
Artificial intelligence
Segmentation
Schema therapy
Ground truth
Deep learning
Pattern recognition (psychology)
Pattern
Encoder
Modality (human–computer interaction)
Missing data
Multimodal
Machine learning
Medicine
Social science
Surgery
Sociology
World Wide Web
Operating system
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 28 (1): 89-99
Citations: 1
Identifiers
DOI:10.1109/jbhi.2023.3286689
Abstract
Accurate segmentation of brain tumors plays an important role in clinical diagnosis and treatment. Multimodal magnetic resonance imaging (MRI) provides rich, complementary information for accurate brain tumor segmentation. However, some modalities may be absent in clinical practice, and it remains challenging to integrate incomplete multimodal MRI data for accurate segmentation. In this paper, we propose a brain tumor segmentation method based on a multimodal transformer network for incomplete multimodal MRI data. The network follows a U-Net architecture consisting of modality-specific encoders, a multimodal transformer, and a multimodal shared-weight decoder. First, a convolutional encoder is built to extract the specific features of each modality. Then, a multimodal transformer is proposed to model the correlations among multimodal features and learn the features of missing modalities. Finally, a multimodal shared-weight decoder is proposed to progressively aggregate the multimodal and multi-level features with spatial and channel self-attention modules for brain tumor segmentation. A missing-full complementary learning strategy is used to explore the latent correlation between the missing and full modalities for feature compensation. For evaluation, our method is tested on the multimodal MRI data from the BraTS 2018, BraTS 2019, and BraTS 2020 datasets. Extensive results demonstrate that our method outperforms state-of-the-art methods for brain tumor segmentation on most subsets of missing modalities.
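The pipeline described in the abstract — per-modality convolutional encoders, a transformer that fuses modality features and substitutes learned representations for missing ones, and a shared decoder — can be sketched as a toy PyTorch model. This is a minimal illustration of the general idea, not the authors' network: the class name, toy dimensions, global-pooled modality tokens, and the learned missing-modality token are all assumptions for the sketch, and the real method operates on full spatial feature maps with U-Net skip connections and self-attention modules.

```python
import torch
import torch.nn as nn


class IncompleteMultimodalSeg(nn.Module):
    """Toy sketch (hypothetical, not the paper's code): modality-specific
    encoders, transformer fusion with a learned token for absent
    modalities, and a decoder shared across all modality subsets."""

    def __init__(self, n_modalities=4, channels=16, n_classes=2):
        super().__init__()
        # One small convolutional encoder per MRI modality.
        self.encoders = nn.ModuleList(
            nn.Conv3d(1, channels, kernel_size=3, padding=1)
            for _ in range(n_modalities)
        )
        # Learned placeholder token standing in for a missing modality.
        self.missing_token = nn.Parameter(torch.zeros(1, 1, channels))
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=4, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        # Shared-weight decoder head (1x1x1 conv over fused features).
        self.decoder = nn.Conv3d(channels, n_classes, kernel_size=1)

    def forward(self, scans, present):
        # scans: list of (B, 1, D, H, W) tensors, one per modality.
        # present: list of bools marking which modalities are available.
        feats = [
            enc(x) if ok else None
            for enc, x, ok in zip(self.encoders, scans, present)
        ]
        batch = next(f for f in feats if f is not None).shape[0]
        # One global token per modality; missing ones use the learned token.
        tokens = [
            f.mean(dim=(2, 3, 4)).unsqueeze(1) if f is not None
            else self.missing_token.expand(batch, -1, -1)
            for f in feats
        ]
        fused = self.fusion(torch.cat(tokens, dim=1))      # (B, M, C)
        context = fused.mean(dim=1)[:, :, None, None, None]  # (B, C, 1, 1, 1)
        # Aggregate available spatial features and add the fused context.
        spatial = torch.stack([f for f in feats if f is not None]).sum(dim=0)
        return self.decoder(spatial + context)               # (B, classes, D, H, W)
```

For example, segmenting with one of four modalities absent still yields a full-resolution prediction, because the transformer supplies a learned stand-in for the missing branch:

```python
model = IncompleteMultimodalSeg()
scans = [torch.randn(2, 1, 8, 8, 8) for _ in range(4)]
out = model(scans, present=[True, True, False, True])  # shape (2, 2, 8, 8, 8)
```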