Image segmentation
Artificial intelligence
Computer science
Computer vision
Medical imaging
Modality (human–computer interaction)
Segmentation
Image (mathematics)
Scale-space segmentation
Pattern recognition (psychology)
Authors
Shenhai Zheng,Xin Ye,Chaohui Yang,Lei Yu,Weisheng Li,Xinbo Gao,Yue Zhao
Identifier
DOI:10.1109/tmi.2025.3526604
Abstract
Existing studies of multi-modality medical image segmentation tend to aggregate all modalities without discrimination and employ multiple symmetric encoders or decoders for feature extraction and fusion. They often overlook the differing contributions that individual modalities make to visual representation and decision-making. Motivated by this observation, this paper proposes an asymmetric adaptive heterogeneous network for multi-modality image feature extraction with modality discrimination and adaptive fusion. For feature extraction, it uses a heterogeneous two-stream asymmetric feature-bridging network to extract complementary features from auxiliary multi-modality and leading single-modality images, respectively. For adaptive feature fusion, the proposed Transformer-CNN Feature Alignment and Fusion (T-CFAF) module enhances the leading single-modality information, and the Cross-Modality Heterogeneous Graph Fusion (CMHGF) module further fuses multi-modality features adaptively at a high-level semantic layer. Comparative evaluation against ten segmentation models on six datasets demonstrates significant efficiency gains as well as highly competitive segmentation accuracy. (Our code is publicly available at https://github.com/joker-527/AAHN).
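The asymmetric two-stream design described in the abstract can be illustrated at a very high level. The sketch below is hypothetical and is not the authors' implementation: the encoder internals, feature dimensions, and the weighted-sum fusion are all stand-in assumptions, using plain NumPy only to show the shape of the idea, i.e. a leading single-modality stream and an auxiliary multi-modality stream whose features are aligned and fused with the leading stream favored.

```python
import numpy as np

def leading_encoder(x):
    # Hypothetical stand-in for the CNN stream over the leading
    # single modality: a fixed linear projection in place of
    # learned convolutional features.
    w = np.full((x.shape[-1], 8), 0.1)
    return x @ w

def auxiliary_encoder(modalities):
    # Hypothetical stand-in for the Transformer stream over the
    # auxiliary modalities: project each modality, then average
    # (a crude placeholder for adaptive cross-modality fusion).
    w = np.full((modalities[0].shape[-1], 8), 0.2)
    feats = [m @ w for m in modalities]
    return np.mean(feats, axis=0)

def align_and_fuse(lead_feat, aux_feat):
    # Placeholder for the T-CFAF / CMHGF modules: both streams
    # already share a feature shape here, so "fusion" is a fixed
    # weighted sum that favors the leading modality.
    return 0.7 * lead_feat + 0.3 * aux_feat

lead = np.ones((4, 16))            # leading-modality patch features
aux = [np.ones((4, 16)) * 0.5,     # two auxiliary modalities
       np.ones((4, 16)) * 1.5]
fused = align_and_fuse(leading_encoder(lead), auxiliary_encoder(aux))
print(fused.shape)  # (4, 8)
```

The asymmetry lies in treating one modality as the leading stream with its own pathway while the remaining modalities share an auxiliary pathway, rather than giving every modality an identical encoder.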