Consistency (knowledge base)
Computer science
Artificial intelligence
Image translation
Translation (biology)
Pattern recognition (psychology)
Normalization (sociology)
Deep learning
Image (mathematics)
Medical imaging
Machine learning
Biochemistry
Chemistry
Messenger RNA
Gene
Sociology
Anthropology
Authors
Weiwei Jiang,Yingyu Qin,Xiaoyan Wang,Qiuju Chen,Qiu Guan,Minhua Lu
Identifier
DOI:10.1088/1361-6560/adb2d7
Abstract
Unsupervised medical image translation tasks are challenging because perfectly paired medical images are difficult to obtain. CycleGAN-based methods have proven effective for unpaired medical image translation, but they can produce artifacts in the generated images. To address this issue, we propose an unsupervised network based on cycle consistency and hybrid contrastive unpaired translation (CycleH-CUT). CycleH-CUT consists of two hybrid contrastive unpaired translation (H-CUT) networks. In the H-CUT network, a query-selected attention (QS-Attn) mechanism is adopted to select queries with important features, and the boosted contrastive learning (BoNCE) loss reweights all negative patches via an optimal transport strategy. We further apply spectral normalization (SN) to improve training stability, allowing the generator to extract complex features. Building on the H-CUT network, the CycleH-CUT framework integrates contrastive learning with cycle consistency: two H-CUT networks reconstruct the generated images back to the source domain, enabling effective translation between unpaired medical images. We conduct extensive experiments on three public datasets (BraTS, OASIS3, and IXI) and a private Spinal Column dataset to demonstrate the effectiveness of CycleH-CUT and H-CUT. Specifically, CycleH-CUT achieves an average SSIM of 0.926 on the BraTS dataset, 0.796 on the OASIS3 dataset, 0.932 on the IXI dataset, and 0.890 on the private Spinal Column dataset.
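The abstract's QS-Attn and BoNCE components are specific to this paper, but the spectral normalization it applies for training stability is a standard technique: rescale each weight matrix by its largest singular value, typically estimated with power iteration. A minimal NumPy sketch (the function name `spectral_normalize` and the 20-iteration budget are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def spectral_normalize(W, n_iter=20):
    # Estimate the largest singular value of W via power iteration,
    # then rescale W so its spectral norm is approximately 1.
    # (Illustrative sketch; n_iter=20 is an assumed budget.)
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated largest singular value
    return W / sigma

W = np.random.default_rng(1).standard_normal((64, 32))
W_sn = spectral_normalize(W)
print(np.linalg.norm(W_sn, 2))  # spectral norm is now close to 1
```

In a GAN this rescaling is applied to the discriminator (and here also generator) layers at every forward pass, which bounds the Lipschitz constant of the network and damps the unstable gradients that otherwise cause artifacts.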