Segmentation
Artificial intelligence
Feature (linguistics)
Computer science
Dual (grammatical number)
Computer vision
Image segmentation
Scale (ratio)
Pattern recognition (psychology)
Scale-space segmentation
Art
Philosophy
Linguistics
Physics
Literature
Quantum mechanics
Authors
Xiaosen Li, Linli Li, Xinlong Xing, Huixian Liao, Wenji Wang, Qingfeng Dong, Xiao Qin, Changan Yuan
Identifier
DOI: 10.1109/TMI.2025.3549011
Abstract
Melanoma is a malignant tumor originating from lesions of skin cells. Medical image segmentation of skin lesions plays a crucial role in quantitative analysis, yet achieving precise and efficient segmentation remains a significant challenge for medical practitioners. Hence, a skin lesion segmentation model named MSDUNet is proposed, which incorporates a multi-scale deformable block (MSD Block) and a dual-input dynamic enhancement module (D2M). Firstly, the model employs a hybrid-architecture encoder that better integrates global and local features. Secondly, to better exploit macroscopic and microscopic multi-scale information, the skip connections and decoder blocks are improved by introducing the D2M and the MSD Block. The D2M leverages large-kernel dilated convolution to derive an attention bias matrix from the decoder features, supplementing and enhancing the semantic features passed to the decoder's lower layers through the skip connections and thereby compensating for the semantic gap. The MSD Block uses channel-wise splitting and deformable convolutions with varying receptive fields to better extract and integrate multi-scale information while controlling the model's size, enabling the decoder to focus more on task-relevant regions and edge details. MSDUNet attains outstanding performance, with Dice scores of 93.08% and 91.68% on the ISIC-2016 and ISIC-2018 datasets, respectively. Furthermore, experiments on the HAM10000 dataset demonstrate its superior performance with a Dice score of 95.40%. External validation on the PH2 dataset using the ISIC-2016, ISIC-2018, and HAM10000 weights yields Dice scores of 92.67%, 92.31%, and 93.46%, respectively, showcasing the exceptional generalization capability of MSDUNet. Our code implementation is publicly available on GitHub.
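The channel-wise split idea behind the MSD Block can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: plain dilated mean kernels stand in for the paper's learned deformable convolutions, and the function names (`dilated_conv2d`, `msd_block_sketch`) and the dilation choices are illustrative assumptions. The sketch only shows how splitting channels into groups and processing each group with a different receptive field before re-concatenating keeps the per-branch cost (and hence model size) small.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Naive 'same'-padded 2D convolution of a single channel with a
    dilated kernel: taps are spaced `dilation` pixels apart."""
    kh, kw = kernel.shape
    ph, pw = dilation * (kh // 2), dilation * (kw // 2)
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    H, W = x.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(kh):
        for j in range(kw):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * xp[di:di + H, dj:dj + W]
    return out

def msd_block_sketch(feat, dilations=(1, 2, 3, 4)):
    """Split channels into len(dilations) groups, give each group a
    different receptive field (here a dilated 3x3 mean kernel standing in
    for a deformable convolution), then concatenate the branches back."""
    groups = np.array_split(feat, len(dilations), axis=0)  # split on channels
    kernel = np.full((3, 3), 1.0 / 9.0)  # fixed mean kernel, illustration only
    branches = [
        np.stack([dilated_conv2d(ch, kernel, d) for ch in g])
        for g, d in zip(groups, dilations)
    ]
    return np.concatenate(branches, axis=0)  # same (C, H, W) shape as input
```

Because each branch sees only C/4 of the channels, widening the receptive-field range adds branches rather than multiplying parameters, which matches the abstract's claim of gathering multi-scale context while controlling model size.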