Keywords
Computer science
Medical imaging
Artificial intelligence
Image (mathematics)
Modality (human-computer interaction)
Classifier (machine learning)
Generative model
Image translation
Computer vision
Pattern recognition
Authors
Yimin Luo,Qinyu Yang,Ziyi Liu,Zenglin Shi,Weimin Huang,Guoyan Zheng,Jun Cheng
Identifier
DOI:10.1109/jbhi.2024.3393870
Abstract
In clinical settings, the acquisition of certain medical image modalities is often infeasible due to considerations such as cost and radiation exposure. Unpaired cross-modality translation techniques, which train on unpaired data and synthesize the target modality under the guidance of an acquired source modality, are therefore of great interest. Previous methods synthesize target medical images by establishing a one-shot mapping through generative adversarial networks (GANs). As promising alternatives to GANs, diffusion models have recently attracted wide interest in generative tasks. In this paper, we propose a target-guided diffusion model (TGDM) for unpaired cross-modality medical image translation. For training, we adopt a perception-prioritized weighting scheme (P2W) in the training objective to encourage the diffusion model to learn richer visual concepts. For sampling, a pre-trained classifier is employed in the reverse process to suppress modality-specific remnants inherited from the source data. Experiments on both brain MRI-CT and prostate MRI-US datasets demonstrate that the proposed method produces visually realistic results that mimic vivid anatomical sections of the target organ. In addition, we conducted a subjective assessment of the synthesized samples to further validate the clinical value of TGDM.
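To make the two ingredients mentioned in the abstract concrete, the sketches below illustrate them in PyTorch. Both are minimal illustrations under stated assumptions, not the authors' released TGDM implementation. The first shows an epsilon-prediction diffusion training step with perception-prioritized (P2) loss weighting in the style of Choi et al. (CVPR 2022); the hyperparameters `k` and `gamma` and the `model(x_t, t)` interface are assumptions.

```python
import torch
import torch.nn.functional as F

def p2_weighted_loss(model, x0, t, alphas_cumprod, k=1.0, gamma=1.0):
    # \bar{alpha}_t for each sample in the batch, broadcast over image dims
    alpha_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    # Forward diffusion: x_t = sqrt(abar) * x0 + sqrt(1 - abar) * eps
    x_t = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * noise
    # P2 weight 1 / (k + SNR(t))^gamma down-weights low-noise steps,
    # shifting model capacity toward perceptually important noise levels
    snr = alpha_bar / (1.0 - alpha_bar)
    weight = 1.0 / (k + snr) ** gamma
    eps_pred = model(x_t, t)  # assumed interface: predicts the added noise
    mse = F.mse_loss(eps_pred, noise, reduction="none").mean(dim=(1, 2, 3))
    return (weight.view(-1) * mse).mean()
```

The second sketch shows one reverse-diffusion step with classifier guidance in the style of Dhariwal and Nichol (2021), where the gradient of a pre-trained, noise-aware modality classifier nudges each denoised sample toward the target modality. The posterior-variance approximation sigma_t^2 = beta_t and the `guidance_scale` default are assumptions.

```python
import torch

def classifier_guided_step(eps_model, classifier, x_t, t, y_target,
                           alphas, alphas_cumprod, betas, guidance_scale=1.0):
    b = x_t.shape[0]
    t_batch = torch.full((b,), t, device=x_t.device, dtype=torch.long)
    alpha_t, alpha_bar_t, beta_t = alphas[t], alphas_cumprod[t], betas[t]

    with torch.no_grad():
        eps = eps_model(x_t, t_batch)
        # DDPM posterior mean of p(x_{t-1} | x_t) for an eps-prediction model
        mean = (x_t - beta_t / (1.0 - alpha_bar_t).sqrt() * eps) / alpha_t.sqrt()

    # grad_x log p(y_target | x_t) from the pre-trained classifier
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_probs = torch.log_softmax(classifier(x_in, t_batch), dim=-1)
        selected = log_probs[torch.arange(b, device=x_t.device), y_target].sum()
        grad = torch.autograd.grad(selected, x_in)[0]

    # Shift the mean along the classifier gradient, scaled by the step variance
    mean = mean + guidance_scale * beta_t * grad
    if t == 0:
        return mean
    return mean + beta_t.sqrt() * torch.randn_like(x_t)
```

Iterating this step from t = T-1 down to 0, starting from noise (optionally initialized from the source image), steers the sampling trajectory toward the target modality while the diffusion prior preserves the shared anatomical structure.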