Keywords: Segmentation; Computer science; Artificial intelligence; Image segmentation; Segmentation-based object categorization; Scale-space segmentation; Pattern recognition; Discriminative model; Transformer; Computer vision
Authors
Wen Ji,Albert C. S. Chung
Identifier
DOI:10.1109/tmi.2023.3322581
Abstract
Image segmentation is essential to medical image analysis, as it provides the labeled regions of interest for subsequent diagnosis and treatment. However, fully-supervised segmentation methods require high-quality annotations produced by experts, a process that is laborious and expensive. In addition, when segmentation is performed on another, unlabeled image modality, performance degrades due to the domain shift. Unsupervised domain adaptation (UDA) is an effective way to tackle these problems, but the performance of existing methods still leaves room for improvement. Moreover, despite the effectiveness of recent Transformer-based methods in medical image segmentation, the adaptability of Transformers has rarely been investigated. In this paper, we present a novel UDA framework that uses a Transformer to build a cross-modality segmentation method, with the advantages of learning long-range dependencies and transferring attentive information. To fully utilize the attention learned by the Transformer in UDA, we propose Meta Attention (MA) and use it to perform a fully attention-based alignment scheme, which can learn the hierarchical consistencies of attention and transfer more discriminative information between the two modalities. We have conducted extensive experiments on cross-modality segmentation using three datasets: a whole heart segmentation dataset (MMWHS), an abdominal organ segmentation dataset, and a brain tumor segmentation dataset. The promising results show that our method significantly improves performance compared with state-of-the-art UDA methods.
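The abstract does not specify how the attention-based alignment is computed; the following is a minimal, hypothetical sketch of one way to penalize the discrepancy between attention maps produced for source- and target-modality inputs at a given Transformer layer. The function name, shapes, and the choice of an MSE consistency loss are illustrative assumptions, not the authors' Meta Attention method.

```python
import torch
import torch.nn.functional as F

def attention_consistency_loss(source_attn: torch.Tensor,
                               target_attn: torch.Tensor) -> torch.Tensor:
    """Illustrative cross-modality attention alignment (assumed form).

    Both inputs are attention maps of shape (batch, heads, tokens, tokens).
    Each map is flattened and L2-normalized so the loss compares attention
    patterns rather than raw magnitudes, then an MSE penalty is applied.
    """
    src = F.normalize(source_attn.flatten(1), dim=1)
    tgt = F.normalize(target_attn.flatten(1), dim=1)
    return F.mse_loss(src, tgt)

# Hypothetical usage with random stand-in attention maps.
src_attn = torch.rand(2, 4, 16, 16)  # (batch, heads, tokens, tokens)
tgt_attn = torch.rand(2, 4, 16, 16)
loss = attention_consistency_loss(src_attn, tgt_attn)
```

In a full UDA pipeline such a term would be added to the segmentation loss and applied at several layers, which is one plausible reading of "hierarchical consistencies of attention"; the actual scheme in the paper may differ.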