Autoencoder
Artificial intelligence
Computer science
Pattern recognition (psychology)
Feature (linguistics)
Encoder
Artificial neural network
Pixel
Computer vision
Linguistics
Operating system
Philosophy
Authors
Haihua Wang, Wei Zou, Jiajun Wang, Jihui Li, Bin Zhang
Identifier
DOI:10.1088/1361-6560/ae0b28
Abstract
Objective. Integrated PET/CT imaging plays a vital role in tumor diagnosis by offering both anatomical and functional information. However, the high cost and limited accessibility of PET imaging, together with concerns about cumulative radiation exposure from repeated scans, may restrict its clinical use. This study aims to develop a cross-modal medical image synthesis method that generates PET images from CT scans, with a particular focus on accurately synthesizing lesion regions.
Approach. We propose a two-stage Generative Adversarial Network, termed MMF-PAE-GAN (Multi-modal Fusion Pre-trained AutoEncoder GAN), that couples a pre-GAN and a post-GAN through a Pre-trained AutoEncoder (PAE). The pre-GAN produces an initial pseudo-PET image and supplies the post-GAN with PET-related multi-scale features. Unlike a conventional Sample Adaptive Encoder (SAE), the PAE enhances sample-specific representation by extracting multi-scale contextual features. To capture both lesion-related and non-lesion-related anatomical information, two CT scans processed under different window settings are fed into the post-GAN. Furthermore, a Multi-modal Weighted Feature Fusion Module (MMWFFM) is introduced to dynamically highlight informative cross-modal features while suppressing redundancies. A Perceptual Loss (PL), computed with the PAE, imposes feature-space constraints and improves the fidelity of lesion synthesis.
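The abstract does not specify how the MMWFFM weights its inputs. A minimal sketch of one plausible reading, assuming the module forms a per-channel convex combination of branch feature maps (the two windowed CT branches and the pre-GAN PET features) via learned gate scores; `weighted_feature_fusion` and `gate_weights` are hypothetical names, not the paper's API:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def weighted_feature_fusion(features, gate_weights):
    """Fuse per-branch feature maps with dynamic per-channel weights.

    features     : list of M arrays, each (C, H, W), one per modality/branch
                   (e.g. two CT window settings plus pre-GAN PET features).
    gate_weights : (M, C) raw scores; softmax over the branch axis turns each
                   channel into a convex combination, so informative branches
                   are emphasized and redundant ones suppressed.
    """
    stacked = np.stack(features)                         # (M, C, H, W)
    w = softmax(gate_weights, axis=0)[..., None, None]   # (M, C, 1, 1)
    return (w * stacked).sum(axis=0)                     # (C, H, W)
```

In a real network the gate scores would themselves be predicted from the inputs (e.g. by pooled features through a small MLP); here they are passed in explicitly to keep the sketch self-contained.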
Main results. On the AutoPET dataset, our method achieved a PSNR of 29.1781 dB, an MAE of 0.0094, an SSIM of 0.9217, and an NMSE of 0.3651 at the pixel level, along with a sensitivity of 85.31%, a specificity of 97.02%, and an accuracy of 95.97% for slice-level classification. On the FAHSU dataset, the corresponding figures were a PSNR of 29.1506 dB, an MAE of 0.0095, an SSIM of 0.9193, an NMSE of 0.3663, a sensitivity of 84.51%, a specificity of 96.82%, and an accuracy of 95.71%.
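For reference, the pixel-level metrics above have standard definitions; a minimal sketch under the usual conventions (the paper's exact normalization and data range are not stated in the abstract, so `data_range=1.0` is an assumption):

```python
import numpy as np

def psnr(ref, syn, data_range=1.0):
    """Peak signal-to-noise ratio in dB (assumes intensities in [0, data_range])."""
    mse = np.mean((ref - syn) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mae(ref, syn):
    """Mean absolute error between reference and synthesized images."""
    return np.mean(np.abs(ref - syn))

def nmse(ref, syn):
    """Normalized mean squared error: squared error over reference energy."""
    return np.sum((ref - syn) ** 2) / np.sum(ref ** 2)
```

SSIM is omitted here because it involves local luminance/contrast/structure statistics with windowing; library implementations (e.g. scikit-image's `structural_similarity`) are the usual choice.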
Significance. The proposed MMF-PAE-GAN can generate high-quality PET images directly from CT scans without the need for radioactive tracers, potentially improving the accessibility of functional imaging and reducing costs in clinical scenarios where PET acquisition is limited or repeated scans are infeasible.