Authors
Qingqiao Hu, Hongwei Li, Jianguo Zhang
Identifier
DOI: 10.1007/978-3-031-16446-0_47
Abstract
Medical image synthesis has attracted increasing attention because it can generate missing image data, improve diagnosis, and benefit many downstream tasks. However, existing synthesis models do not adapt to unseen data distributions that exhibit domain shift, which limits their applicability in clinical routine. This work explores domain adaptation (DA) of 3D image-to-image synthesis models. First, we highlight the technical differences in DA between classification, segmentation, and synthesis models. Second, we present a novel, efficient adaptation approach based on a 2D variational autoencoder that approximates 3D distributions. Third, we present empirical studies on the effect of the amount of adaptation data and of the key hyper-parameters. Our results show that the proposed approach can significantly improve synthesis accuracy on unseen domains in a 3D setting. The code is publicly available at https://github.com/WinstonHuTiger/2D_VAE_UDA_for_3D_sythesis .
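The abstract's core idea is approximating a 3D distribution with a 2D variational autoencoder applied slice-wise. The following is a minimal toy sketch of that idea only, not the authors' released implementation: a linear 2D VAE (standing in for convolutional encoder/decoder networks) scores each axial slice of a 3D volume with a reconstruction plus KL term, so a per-slice 2D latent model can summarize a 3D volume. All names, dimensions, and the linear architecture are illustrative assumptions.

```python
# Hypothetical sketch (NOT the paper's code): a toy 2D VAE evaluated
# slice-wise over a 3D volume, illustrating how per-slice 2D latents
# can approximate a 3D distribution.
import numpy as np

rng = np.random.default_rng(0)

D = 16 * 16   # flattened 2D slice size (toy resolution, assumed)
Z = 8         # latent dimensionality (assumed)

# Linear encoder/decoder weights stand in for conv networks.
W_mu = rng.normal(0.0, 0.01, (Z, D))
W_logvar = rng.normal(0.0, 0.01, (Z, D))
W_dec = rng.normal(0.0, 0.01, (D, Z))

def encode(x):
    """Map a flattened 2D slice to latent mean and log-variance."""
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z):
    """Reconstruct a flattened slice from the latent code."""
    return W_dec @ z

def slicewise_loss(volume):
    """Average per-slice (reconstruction MSE + KL to N(0, I)) over axis 2."""
    total = 0.0
    for k in range(volume.shape[2]):
        x = volume[:, :, k].ravel()
        mu, logvar = encode(x)
        recon = decode(reparameterize(mu, logvar))
        rec = np.mean((recon - x) ** 2)
        kl = -0.5 * np.mean(1.0 + logvar - mu**2 - np.exp(logvar))
        total += rec + kl
    return total / volume.shape[2]

vol = rng.standard_normal((16, 16, 4))  # tiny synthetic 3D volume
loss = slicewise_loss(vol)
print(loss)
```

In an actual adaptation setting, such a slice-wise objective would be minimized on data from the unseen domain; here the weights are fixed random values, so the sketch only demonstrates the decomposition of a 3D volume into independently scored 2D slices.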