Computer science
Artificial intelligence
Modality (human–computer interaction)
Computer vision
Image segmentation
Segmentation
Image (mathematics)
Scale (ratio)
Frequency domain
Scale-space segmentation
Domain (mathematical analysis)
Pattern recognition (psychology)
Mathematics
Quantum mechanics
Physics
Mathematical analysis
Authors
Ju-Hyeon Nam, Nur Suriza Syazwany, Su Jung Kim, Sang-Cheol Lee
Identifier
DOI:10.1109/cvpr52733.2024.01091
Abstract
Generalizability in deep neural networks plays a pivotal role in medical image segmentation. However, deep learning-based medical image analyses tend to overlook the importance of frequency variance, which is a critical element for achieving a model that is both modality-agnostic and domain-generalizable. Additionally, various models fail to account for the potential information loss that can arise from multitask learning under deep supervision, a factor that can impair the model's representation ability. To address these challenges, we propose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical image segmentation, which comprises two key components: a Multi-Frequency in Multi-Scale Attention (MFMSA) block and an Ensemble Sub-Decoding Module (E-SDM). The MFMSA block refines the process of spatial feature extraction, particularly in capturing boundary features, by incorporating multi-frequency and multi-scale features, thereby offering informative cues for tissue outlines and anatomical structures. Moreover, we propose E-SDM to mitigate information loss in multitask learning with deep supervision, especially during substantial upsampling from low resolution. We evaluate the segmentation performance of MADGNet across six modalities and fifteen datasets. Through extensive experiments, we demonstrate that MADGNet consistently outperforms state-of-the-art models across various modalities, showcasing superior segmentation performance. This affirms MADGNet as a robust solution for medical image segmentation that excels in diverse imaging scenarios. Our MADGNet code is available on GitHub.
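The abstract does not spell out how the MFMSA block uses frequency information, so the following is only a hedged sketch of the general idea behind multi-frequency channel attention (as popularized by DCT-based attention methods): instead of pooling each channel with a plain spatial mean, project it onto several 2D DCT-II basis frequencies and gate the channel by the combined response. All function names and the choice of frequencies here are illustrative assumptions, not MADGNet's actual implementation.

```python
import math

def dct2_coeff(feat, u, v):
    """Project one H x W feature map (nested lists) onto the 2D DCT-II basis (u, v).

    Note: (u, v) = (0, 0) is proportional to global average pooling,
    so plain channel attention is the single-frequency special case.
    """
    h, w = len(feat), len(feat[0])
    total = 0.0
    for i in range(h):
        for j in range(w):
            total += (feat[i][j]
                      * math.cos(math.pi * (i + 0.5) * u / h)
                      * math.cos(math.pi * (j + 0.5) * v / w))
    return total

def multi_frequency_attention(x, freqs=((0, 0), (0, 1), (1, 0))):
    """Reweight C channels using responses at several DCT frequencies.

    x: list of C feature maps, each H x W (nested lists).
    Returns the gated feature maps (same shape as x).
    """
    out = []
    for feat in x:
        # Average the responses at the selected frequency components,
        # then squash to (0, 1) with a sigmoid to get a channel gate.
        s = sum(dct2_coeff(feat, u, v) for u, v in freqs) / len(freqs)
        gate = 1.0 / (1.0 + math.exp(-s))
        out.append([[v * gate for v in row] for row in feat])
    return out
```

A constant-zero channel projects to zero at every frequency, so its gate is sigmoid(0) = 0.5 and the channel stays zero, while channels with strong low-frequency content receive gates above 0.5. In a real network the per-frequency responses would typically feed a small learned MLP rather than a fixed average.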