Computer science
Representation (politics)
Pattern
Segmentation
Artificial intelligence
Deformation (meteorology)
Computer vision
Pattern recognition (psychology)
Geology
Sociology
Social science
Oceanography
Politics
Political science
Law
Authors
Zhiyuan Li, Yafei Zhang, Huafeng Li, Yi Chai, Yushi Yang
Identifier
DOI: 10.1016/j.bspc.2024.106012
Abstract
Multimodal magnetic resonance imaging (MRI) provides complementary information for brain tumor segmentation, and several methods leveraging full modalities have been proposed. However, capturing the full modality information is challenging due to commonplace data corruption, imperfect imaging protocols, and patient-related constraints. The unavailability of certain modalities can significantly undermine the performance of segmentation methods that rely on full-modality data. To address this issue, this paper proposes a deformation-aware and reconstruction-driven method for brain tumor segmentation in the presence of missing modalities. The proposed method introduces a local–global modeling module to enhance the intramodal feature representation ability of the modality-specific encoder. Considering the irregular shape of tumor regions, we develop a deformation-adaptive perceptual multimodal representation learning module that learns deformation information from an incomplete set of multimodal images, thereby guiding the network to accurately localize the tumor regions. Furthermore, we design a reconstruction-driven key-information mining module that recovers the original images from the features extracted by the encoder. This process further ensures that the encoder can extract the key tumor discriminative features. During the inference phase, the module is removed to mitigate additional computational burdens. Experimental results on two publicly available multimodal brain tumor benchmark datasets show that the proposed method outperforms existing brain tumor segmentation methods with missing modalities. The code is available at https://github.com/Linzy0227/SRMNet.
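The abstract describes fusing features from modality-specific encoders while tolerating missing MRI modalities. As an illustration only (not the authors' implementation; the function names and the simple masked-average fusion are assumptions), the core idea of fusing only the available modalities can be sketched in NumPy:

```python
import numpy as np

# The four standard BraTS MRI modalities.
MODALITIES = ["T1", "T1ce", "T2", "FLAIR"]

def fuse_features(features, available):
    """Average modality-specific feature vectors over the available
    modalities only, so missing modalities neither contribute zeros
    nor dilute the fused representation."""
    mask = np.array([1.0 if available[m] else 0.0 for m in MODALITIES])
    stacked = np.stack([features[m] for m in MODALITIES])  # shape (4, C)
    return (stacked * mask[:, None]).sum(axis=0) / mask.sum()

# Toy modality-specific features: constant vectors 1.0, 2.0, 3.0, 4.0.
feats = {m: np.full(8, i + 1.0) for i, m in enumerate(MODALITIES)}
avail = {"T1": True, "T1ce": False, "T2": True, "FLAIR": False}

fused = fuse_features(feats, avail)  # mean of T1 (1.0) and T2 (3.0) -> 2.0
```

In the paper's design, the reconstruction-driven module that regularizes such encoders is used only during training and is dropped at inference, so a fusion step like this is all that remains in the deployed forward pass.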