Translation (biology)
Encoder
Contrast (vision)
Feature (linguistics)
Image (mathematics)
Pattern recognition (psychology)
Artificial intelligence
Computer science
Space (punctuation)
Computer vision
Chemistry
Biochemistry
Operating system
Philosophy
Messenger RNA
Gene
Linguistics
Authors
Heran Yang, Jian Sun, Liwei Yang, Zongben Xu
Identifier
DOI:10.1007/978-3-030-87199-4_12
Abstract
Cross-contrast image translation is an important task for completing missing contrasts in clinical diagnosis. However, most existing methods learn a separate translator for each pair of contrasts, which is inefficient given the many possible contrast pairs in real scenarios. In this work, we propose a unified Hyper-GAN model for effectively and efficiently translating between different contrast pairs. Hyper-GAN consists of a hyper-encoder and a hyper-decoder that first map the source contrast to a common feature space and then map it to the target contrast image. To facilitate translation between different contrast pairs, contrast modulators are designed to tune the hyper-encoder and hyper-decoder adaptively for different contrasts. We also design a common-space loss to enforce that multi-contrast images of a subject share a common feature space, implicitly modeling the shared underlying anatomical structures. Experiments on the IXI and BraTS 2019 datasets show that Hyper-GAN achieves state-of-the-art results in both accuracy and efficiency, e.g., improving PSNR by more than 1.47 dB and 1.09 dB on the two datasets with less than half the number of parameters.
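As a rough illustration of the idea described in the abstract, the sketch below pairs a hyper-encoder and hyper-decoder with contrast modulators conditioned on a contrast code, plus an L1 common-space loss. All module names (ContrastModulator, HyperEncoder, HyperDecoder, common_space_loss) and the FiLM-style scale/shift modulation are assumptions made for illustration; the abstract does not specify the authors' actual network or training details.

```python
# Minimal PyTorch sketch (illustrative, not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastModulator(nn.Module):
    """Maps a contrast code to per-channel scale/shift that tune a conv block (assumed FiLM-style)."""
    def __init__(self, num_contrasts, channels):
        super().__init__()
        self.to_scale = nn.Linear(num_contrasts, channels)
        self.to_shift = nn.Linear(num_contrasts, channels)

    def forward(self, feat, code):
        # feat: (B, C, H, W); code: (B, num_contrasts) one-hot contrast label
        scale = self.to_scale(code).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(code).unsqueeze(-1).unsqueeze(-1)
        return feat * (1 + scale) + shift

class HyperEncoder(nn.Module):
    """Maps a source-contrast image into the shared feature space."""
    def __init__(self, num_contrasts, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(1, channels, 3, padding=1)
        self.mod = ContrastModulator(num_contrasts, channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, code):
        h = F.relu(self.conv1(x))
        h = self.mod(h, code)          # adapt the encoder to the source contrast
        return F.relu(self.conv2(h))

class HyperDecoder(nn.Module):
    """Maps shared features to a target-contrast image."""
    def __init__(self, num_contrasts, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.mod = ContrastModulator(num_contrasts, channels)
        self.conv2 = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, h, code):
        h = F.relu(self.conv1(h))
        h = self.mod(h, code)          # adapt the decoder to the target contrast
        return torch.tanh(self.conv2(h))

def common_space_loss(feat_a, feat_b):
    """Encourages encoded features of two contrasts of one subject to coincide."""
    return F.l1_loss(feat_a, feat_b)

# Usage: translate contrast 0 -> contrast 2 for a batch of two single-slice images.
enc, dec = HyperEncoder(num_contrasts=4), HyperDecoder(num_contrasts=4)
x_src = torch.randn(2, 1, 128, 128)
src = F.one_hot(torch.tensor([0, 0]), 4).float()
tgt = F.one_hot(torch.tensor([2, 2]), 4).float()
fake_tgt = dec(enc(x_src, src), tgt)
```

One encoder/decoder pair handles all contrast pairs because only the small modulators depend on the contrast code, which is what makes the unified model more parameter-efficient than training a separate translator per pair.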