Authors
María Baldeon-Calisto,Susana K. Lai-Yuen
Abstract
Deep learning models have obtained state-of-the-art results in medical image analysis. However, convolutional neural networks (CNNs) require massive amounts of labelled data to achieve high performance. Moreover, many supervised learning approaches assume that the training (source) and test (target) datasets follow the same probability distribution, an assumption that rarely holds in real-world data: when a model is tested on an unseen domain, its performance degrades significantly. In this work, we present an unsupervised Cross-Modality Adversarial Domain Adaptation (C-MADA) framework for medical image segmentation. C-MADA applies image-level and feature-level adaptation in a two-step sequential manner. First, images from the source domain are translated to the target domain through unpaired image-to-image adversarial translation with a cycle-consistency loss. Then, a U-Net is trained on the translated source images and the target-domain images in an adversarial manner to learn domain-invariant feature representations and produce segmentations for the target domain. Furthermore, to improve the network's segmentation performance, information about the shape, texture, and contour of the predicted segmentation is incorporated during adversarial training. C-MADA is evaluated on the brain MRI segmentation task of the crossMoDA Grand Challenge and is ranked within the top 15 submissions of the challenge.
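The two steps described above each hinge on a loss term: step one on a cycle-consistency loss for the unpaired translation, and step two on a segmentation loss combined with an adversarial term that pushes the discriminator toward mistaking target-domain predictions for source-domain ones. The following is a minimal numpy sketch of how those terms compose; the function names, the Dice-plus-cross-entropy choice for the supervised term, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cycle_consistency_loss(x, x_rec, y, y_rec):
    # Step 1: L1 reconstruction penalty for unpaired translation,
    # ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 (CycleGAN-style).
    return np.mean(np.abs(x - x_rec)) + np.mean(np.abs(y - y_rec))

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss on probability maps (0 when pred == target exactly).
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce(p, y, eps=1e-7):
    # Binary cross-entropy, clipped for numerical stability.
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def segmenter_loss(seg_prob, seg_gt, disc_on_target, lam=0.1):
    # Step 2: supervised loss on translated source images plus an
    # adversarial term. disc_on_target is the discriminator's estimate
    # P(domain = source) for target-domain predictions; the segmenter
    # is rewarded for fooling it (labels of 1 = "looks like source").
    l_seg = dice_loss(seg_prob, seg_gt) + bce(seg_prob, seg_gt)
    l_adv = bce(disc_on_target, np.ones_like(disc_on_target))
    return l_seg + lam * l_adv
```

In a full training loop these terms would be minimized alternately with a discriminator update, and the adversarial input could be enriched with the shape, texture, and contour cues the abstract mentions; this sketch only shows how the objectives fit together.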