Segmentation
Fusion
Artificial intelligence
Feature (linguistics)
Computer science
Computer vision
Feature extraction
Pattern recognition (psychology)
Linguistics
Philosophy
Authors
Somaiya Khan, Ali Khan, Yinglei Teng
Identifier
DOI: 10.1109/tim.2025.3565715
Abstract
The alarming increase in the incidence rate of skin cancer has necessitated the development of more effective diagnostic technologies to improve precision and efficacy. Although current methods that integrate multiple advanced modules achieve notable performance, their increased computational complexity makes deployment in real-time clinical settings challenging, particularly on resource-limited devices. To address this issue, this study proposes a novel lightweight model named Deep Feature Fusion-UNet (DFF-UNet), which significantly reduces computational complexity while ensuring high performance in skin lesion segmentation. The proposed DFF-UNet model introduces a newly designed Residual Feature Refinement Encoder (RFRE), which enhances feature extraction through a feature refinement mechanism built on adaptive convolutional layers and residual connections, thereby capturing both detailed and high-level contextual features with efficient gradient flow. We also propose a Parallel Dilated Contextual Pyramid (PDCP) module to link the encoder and decoder. This module employs parallel dilated convolutions at various dilation rates to provide efficient feature mapping by capturing contextual skin lesion feature information. Lastly, we propose the Bottleneck Skip Fusion Decoder (BSFD), which utilizes bottleneck blocks to capture spatial features and skip connections to carry high-level semantic features. Extensive experiments and comparative analysis with state-of-the-art (SOTA) models validate the performance of the DFF-UNet method. Compared to the traditional UNet, DFF-UNet performed exceptionally well on four public datasets (ISIC2018, ISIC2017, PH2, and HAM10000), reducing parameters by 97.55%, GFLOPs by 99.20%, and inference time by 30.24%, while improving mIoU by 1.05%, Dice score by 0.82%, and accuracy by 0.22%.
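The core idea behind the PDCP module, per the abstract, is to run several dilated convolutions in parallel at different dilation rates so that multiple receptive-field sizes are captured at once. Below is a minimal, hedged sketch of that idea in pure Python. The paper's actual module operates on 2-D feature maps with learned kernels and a fusion step; this 1-D toy with a fixed kernel only illustrates how parallel dilation rates enlarge the receptive field. All function names and the choice of rates (1, 2, 4) are illustrative assumptions, not the authors' implementation.

```python
def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1-D convolution with the given dilation rate.

    A kernel of length k with dilation d spans (k - 1) * d + 1 input
    samples, so larger rates see wider context with the same kernel.
    """
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    return [
        sum(signal[i + j * dilation] * w for j, w in enumerate(kernel))
        for i in range(len(signal) - span)
    ]


def parallel_dilated_pyramid(signal, kernel, rates=(1, 2, 4)):
    """Apply the same kernel at several dilation rates in parallel.

    Returns one branch output per rate; a real module would fuse the
    branches (e.g. by concatenation) into a single feature map.
    """
    return {r: dilated_conv1d(signal, kernel, r) for r in rates}


# Toy input: a simple ramp signal, summing kernel [1, 1, 1].
x = [float(v) for v in range(1, 11)]
branches = parallel_dilated_pyramid(x, kernel=[1.0, 1.0, 1.0])
# rate 1 sums 3 adjacent samples; rate 4 spans 9 samples of context
```

The design intuition is that each branch is cheap (same kernel size), yet together the branches cover small and large context, which is what lets a lightweight pyramid replace deeper, costlier stacks of layers.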