Computer science
Residual
Reinforcement learning
Artificial intelligence
Transformer
Convolution (computer science)
Pattern recognition (psychology)
Artificial neural network
Algorithm
Voltage
Engineering
Electrical engineering
Authors
Hao Tang, Ningfeng Que, Ye Tian, Mingzhe Li, Alessandro Perelli, Yueyang Teng
Identifier
DOI:10.1088/1361-6560/adb19a
Abstract
Objective. Computed tomography (CT) is a crucial medical imaging technique which uses X-ray radiation to identify cancer tissues. Since radiation poses a significant health risk, low-dose acquisition procedures need to be adopted. However, low-dose CT (LDCT) introduces stronger noise and artifacts, which severely degrade diagnostic quality.
Approach. To denoise LDCT images more effectively, this paper proposes a deep learning method based on U-Net with multiple lightweight attention-based modules and residual reinforcement (MLAR-UNet). We integrate a U-Net architecture with several advanced modules: the Convolutional Block Attention Module (CBAM), the Cross Residual Module (CR), the Attention Cross Reinforcement Module (ACRM), and the Convolution and Transformer Cross Attention Module (CTCAM). Among these, CBAM applies channel and spatial attention mechanisms to enhance local feature representation. However, this study verifies that naive embedding of CBAM for LDCT denoising causes serious detail loss. To mitigate this, we introduce CR to reduce information loss in deeper layers, preserving features more effectively. To address the excessively local attention of CBAM, we design ACRM, which incorporates a Transformer to adjust the attention weights. Furthermore, we design CTCAM, which combines Transformer and convolution operations to capture multi-scale information and compute more accurate attention weights.
Results. Experiments verify the rationality and validity of embedding each module and show that the proposed MLAR-UNet denoises LDCT images more effectively and preserves more detail than many state-of-the-art (SOTA) methods on clinical chest and abdominal CT datasets.
Significance. The proposed MLAR-UNet not only demonstrates superior LDCT image denoising capability but also highlights the strong detail comprehension and negligible overhead of our designed ACRM and CTCAM. These findings provide a novel approach to integrating Transformers more efficiently in image processing.
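The channel-then-spatial attention described for CBAM can be illustrated with a minimal numpy sketch. This is an assumption-laden simplification, not the paper's implementation: the shared MLP of channel attention and the 7x7 convolution of spatial attention are replaced by a plain sum of the pooled maps so the flow of the two attention stages stays visible.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: feature map of shape (C, H, W).
    # Global average- and max-pool over the spatial dims -> two (C,) vectors.
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    # CBAM passes both vectors through a shared MLP; a plain sum stands in
    # here (simplifying assumption).
    w = sigmoid(avg + mx)            # per-channel weights in (0, 1)
    return x * w[:, None, None]      # reweight each channel map

def spatial_attention(x):
    # Pool over the channel dim -> two (H, W) maps.
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    # CBAM fuses the two maps with a 7x7 convolution; again a sum stands in.
    w = sigmoid(avg + mx)            # per-pixel weights in (0, 1)
    return x * w[None, :, :]         # reweight each spatial location

def cbam(x):
    # CBAM applies channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x))

feat = np.random.rand(8, 16, 16).astype(np.float32)
out = cbam(feat)
print(out.shape)  # same (C, H, W) shape as the input
```

Because both attention stages only rescale the input by weights in (0, 1), the output preserves the feature map's shape while attenuating less-informative channels and locations; the paper's point is that this purely local reweighting, embedded naively, can discard fine CT detail, motivating the residual and Transformer-based corrections.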