Adversarial system
Generative adversarial network
Generative grammar
Computer science
Artificial intelligence
Radiology
Medicine
Deep learning
Authors
Yu Luo,Shaowei Zhang,Jie Ling,Zhiyi Lin,Zongming Wang,Shun Yao
Identifiers
DOI:10.1016/j.knosys.2024.111799
Abstract
Synthetic computed tomography (sCT) images generated from magnetic resonance imaging (MRI) data have broad applications in clinical medicine, including radiation oncology and surgical planning. With the development of deep learning in medical image analysis, convolution-based generative adversarial networks (GANs) have demonstrated promising performance in synthesizing CT from MRI. However, many GAN variants generate sCT images from MRI scans in an end-to-end manner, ignoring the distribution differences between tissues and potentially producing poor, unrealistic synthetic results. To address this problem, we propose MGDGAN, a mask-guided dual network based on the GAN architecture for CT synthesis from MRI. Specifically, a mask delineating the bone region is first learned to guide the subsequent synthesis; the bone part (sBone) and the soft-tissue part (sSoft-tissue) are then synthesized through two parallel branches. Finally, the sCT image is obtained by fusing sBone and sSoft-tissue. Experimental results indicate that MGDGAN can generate sCT images with high accuracy in fine bone structure, brain tissue, and cerebral lesions, and that these images are visually closer to the real CT (rCT) images. In quantitative evaluation, MGDGAN outperforms other state-of-the-art methods, including CycleGAN, Pix2Pix, ECNN, cGAN9, APS, and ResViT, on multiple datasets.
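The abstract describes a mask-guided dual-branch design: a learned bone mask steers two parallel generators whose outputs (sBone and sSoft-tissue) are fused into the final sCT. The sketch below is a minimal PyTorch illustration of that idea only; the class name `MaskGuidedDualGenerator`, the sub-network layouts, and the fusion as a masked sum are assumptions for illustration and are not the authors' implementation, which uses full GAN generators and adversarial training.

```python
import torch
import torch.nn as nn

class MaskGuidedDualGenerator(nn.Module):
    """Illustrative mask-guided dual-branch generator (hypothetical, not the paper's code).

    A mask network predicts a bone-region mask from the MRI input; two parallel
    branches then synthesize the bone part (sBone) and the soft-tissue part
    (sSoft-tissue), which are fused into the final sCT image.
    """

    def __init__(self, channels: int = 64):
        super().__init__()

        def conv_block(in_c: int, out_c: int) -> nn.Sequential:
            # Hypothetical lightweight block standing in for a full GAN generator.
            return nn.Sequential(
                nn.Conv2d(in_c, out_c, kernel_size=3, padding=1),
                nn.InstanceNorm2d(out_c),
                nn.ReLU(inplace=True),
            )

        self.mask_net = nn.Sequential(
            conv_block(1, channels), nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid()
        )
        self.bone_branch = nn.Sequential(conv_block(1, channels), nn.Conv2d(channels, 1, kernel_size=1))
        self.soft_branch = nn.Sequential(conv_block(1, channels), nn.Conv2d(channels, 1, kernel_size=1))

    def forward(self, mri: torch.Tensor):
        mask = self.mask_net(mri)                     # soft bone mask in [0, 1]
        s_bone = self.bone_branch(mri) * mask         # bone synthesis restricted to masked region
        s_soft = self.soft_branch(mri) * (1.0 - mask) # soft-tissue synthesis on the complement
        s_ct = s_bone + s_soft                        # fusion of the two parts into the sCT
        return s_ct, mask


if __name__ == "__main__":
    mri = torch.randn(1, 1, 256, 256)                 # one single-channel MRI slice
    s_ct, mask = MaskGuidedDualGenerator()(mri)
    print(s_ct.shape, mask.shape)                     # both torch.Size([1, 1, 256, 256])
```

In practice such a generator would be trained adversarially against a discriminator, with the mask supervised by bone segmentations, but those training details are beyond what the abstract states.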