Keywords
Encoder, Discriminative, Transformer, Pixel, Computer science, Artificial intelligence, Pattern recognition (psychology), Image segmentation, Segmentation, Computer vision, Voltage, Physics, Quantum mechanics, Operating system
Authors
Bingzhi Chen, Y. Q. Liu, Zheng Zhang, Guangming Lu, Adams Wai-Kin Kong
Identifier
DOI: 10.1109/tetci.2023.3309626
Abstract
Accurate segmentation of organs or lesions from medical images is crucial for reliable disease diagnosis and organ morphometry. In recent years, convolutional encoder-decoder solutions have achieved substantial progress in automatic medical image segmentation. However, due to the inherent locality bias of convolution operations, prior models mainly capture local visual cues formed by neighboring pixels and fail to fully model long-range contextual dependencies. In this article, we propose a novel Transformer-based attention-guided network called TransAttUnet, in which multi-level guided attention and multi-scale skip connections are designed to jointly enhance the semantic segmentation architecture. Inspired by the Transformer, a self-aware attention (SAA) module combining Transformer Self Attention (TSA) and Global Spatial Attention (GSA) is incorporated into TransAttUnet to effectively learn non-local interactions among encoder features. Moreover, additional multi-scale skip connections between decoder blocks aggregate the upsampled features across different semantic scales, strengthening the representation of multi-scale context information and yielding more discriminative features. Benefiting from these complementary components, the proposed TransAttUnet effectively alleviates the loss of fine detail caused by stacked convolution layers and consecutive sampling operations, ultimately improving the segmentation quality of medical images. Extensive experiments on multiple medical image segmentation datasets spanning various imaging modalities demonstrate that the proposed method consistently outperforms existing state-of-the-art methods.
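To make the abstract's SAA design concrete, the following is a minimal PyTorch sketch of a self-aware attention block that combines a Transformer self-attention (TSA) branch with a global spatial attention (GSA) branch over encoder features. It is a speculative reconstruction from the abstract alone, not the authors' released TransAttUnet code: the class names, the 1x1-convolution fusion, and all hyperparameters (head count, channel reduction) are assumptions.

```python
# Sketch of a self-aware attention (SAA) block: TSA + GSA branches fused.
# NOTE: reconstructed from the abstract only; names and hyperparameters
# are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransformerSelfAttention(nn.Module):
    """TSA branch: multi-head self-attention over flattened feature maps."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)       # (B, H*W, C) token sequence
        out, _ = self.attn(seq, seq, seq)        # non-local token interactions
        seq = self.norm(seq + out)               # residual connection + norm
        return seq.transpose(1, 2).reshape(b, c, h, w)


class GlobalSpatialAttention(nn.Module):
    """GSA branch: position-wise affinity map in the non-local-attention style."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        attn = F.softmax(q @ k, dim=-1)                # (B, HW, HW) spatial affinity
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return x + self.gamma * out


class SelfAwareAttention(nn.Module):
    """SAA: run TSA and GSA on encoder features and fuse the two branches."""

    def __init__(self, channels: int):
        super().__init__()
        self.tsa = TransformerSelfAttention(channels)
        self.gsa = GlobalSpatialAttention(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # fusion choice is assumed

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.tsa(x), self.gsa(x)], dim=1))


if __name__ == "__main__":
    saa = SelfAwareAttention(channels=512)
    feats = torch.randn(2, 512, 16, 16)   # hypothetical bottleneck encoder features
    print(saa(feats).shape)               # torch.Size([2, 512, 16, 16])
```

Under the same reading of the abstract, the multi-scale skip connections would upsample the output of each decoder block and concatenate it with the features of later decoder stages before convolution, so that coarse and fine semantic scales are aggregated; that wiring is likewise an assumption rather than the paper's confirmed design.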