Authors
Mojtaba Safari, Shansong Wang, Zach Eidex, Richard Qiu, Chih‐Wei Chang, David S. Yu, Xiaofeng Yang
Abstract
Background
Magnetic resonance imaging (MRI) is an essential brain imaging tool, but its long acquisition times make it highly susceptible to motion artifacts that can degrade diagnostic quality.

Purpose
This work aims to develop and evaluate a novel physics‐informed motion correction network, termed PI‐MoCoNet, which leverages complementary information from both the spatial and k‐space domains. The primary goal is to robustly remove motion artifacts from high‐resolution brain MRI images without explicit motion parameter estimation, thereby preserving image fidelity and enhancing diagnostic reliability.

Materials and Methods
PI‐MoCoNet is a dual‐network framework consisting of a motion detection network and a motion correction network. The motion detection network employs a U‐net architecture to identify corrupted k‐space lines, using a spatial averaging module to reduce prediction uncertainty. The correction network, inspired by recent advances in U‐net architectures and incorporating Swin Transformer blocks, reconstructs motion‐corrected images by combining three loss components: a reconstruction loss, a learned perceptual image patch similarity (LPIPS) loss, and a data consistency loss that enforces fidelity in the k‐space domain. Realistic motion artifacts were simulated by perturbing phase encoding lines with random rigid transformations. The method was evaluated on two public datasets (IXI and MR‐ART) and compared against baseline models, including Pix2Pix GAN, CycleGAN, and a conventional U‐net, using peak signal‐to‐noise ratio (PSNR), structural similarity index measure (SSIM), and normalized mean square error (NMSE).

Results
PI‐MoCoNet demonstrated significant improvements over competing methods across all levels of motion artifacts. On the IXI dataset, for minor motion artifacts, PSNR improved from 34.15 dB in the motion‐corrupted images to 45.95 dB after correction, SSIM increased from 0.87 to 1.00, and NMSE was reduced from 0.55% to 0.04%. For moderate artifacts, PSNR increased from 30.23 to 42.16 dB, SSIM from 0.80 to 0.99, and NMSE from 1.32% to 0.09%. For heavy artifacts, PSNR improved from 27.99 to 36.01 dB, SSIM from 0.75 to 0.97, and NMSE decreased from 2.21% to 0.36%. On the MR‐ART dataset, PSNR increased from 23.15 to 33.01 dB for low artifact levels and from 21.23 to 31.72 dB for high artifact levels; SSIM improved from 0.72 to 0.87 and from 0.63 to 0.83, while NMSE decreased from 10.08% to 6.24% and from 14.77% to 8.32%, respectively. An ablation study further confirmed that incorporating both the data consistency and perceptual losses yielded an approximate 1 dB gain in PSNR and a 0.17% reduction in NMSE compared to using the reconstruction loss alone.

Conclusions
PI‐MoCoNet is a robust, physics‐informed framework for mitigating brain motion artifacts in MRI. By integrating spatial and k‐space information, it enhances image quality and reduces the likelihood of repeat imaging sessions due to motion‐induced degradation. Its superior performance compared to existing methods underscores its clinical applicability, especially in scenarios where patient motion is unavoidable, thereby improving patient comfort, diagnostic reliability, and overall treatment planning efficiency.
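As a concrete illustration of the artifact simulation strategy described in the abstract (perturbing phase encoding lines with random rigid transformations), the following Python sketch replaces a random subset of k‐space lines with lines taken from a rigidly transformed copy of the image. The function name, corruption fraction, and rotation/translation ranges are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of rigid-motion artifact simulation in k-space.
# corrupt_frac, max_rot, and max_shift are assumed values for illustration only.
import numpy as np
from scipy.ndimage import rotate, shift

def simulate_motion(image, corrupt_frac=0.3, max_rot=5.0, max_shift=3.0, rng=None):
    """Corrupt a random subset of phase-encoding lines with motion-affected k-space."""
    rng = np.random.default_rng(rng)
    k_clean = np.fft.fftshift(np.fft.fft2(image))

    # Rigid transform: random in-plane rotation (degrees) plus translation (pixels).
    moved = rotate(image, angle=rng.uniform(-max_rot, max_rot), reshape=False, order=1)
    moved = shift(moved, shift=rng.uniform(-max_shift, max_shift, size=2), order=1)
    k_moved = np.fft.fftshift(np.fft.fft2(moved))

    # Replace a random subset of phase-encoding (row) lines with the moved acquisition.
    n_pe = image.shape[0]
    corrupted_lines = rng.choice(n_pe, size=int(corrupt_frac * n_pe), replace=False)
    k_corrupt = k_clean.copy()
    k_corrupt[corrupted_lines, :] = k_moved[corrupted_lines, :]

    # Binary mask of corrupted lines (1 = corrupted), useful as a detection target.
    mask = np.zeros(n_pe, dtype=np.float32)
    mask[corrupted_lines] = 1.0

    artifact_image = np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupt)))
    return artifact_image, mask
```

The returned line mask can serve as a training target for a detection network that flags corrupted k‐space lines, analogous in spirit to the detection stage described above.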
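The three loss components named in the Materials and Methods section can be combined roughly as sketched below in PyTorch. The relative weights, the use of the open-source lpips package for the perceptual term, and the assumption that images are single-channel and normalized to [0, 1] are illustrative choices, not taken from the paper.

```python
# Sketch of a composite loss: L1 reconstruction + LPIPS + k-space data consistency.
# lambda_lpips and lambda_dc are assumed weights for illustration.
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

lpips_fn = lpips.LPIPS(net="alex")  # pretrained perceptual similarity network

def motion_correction_loss(pred, target, k_measured, acquired_mask,
                           lambda_lpips=0.1, lambda_dc=1.0):
    # Image-domain reconstruction loss (L1), pred/target shaped (N, 1, H, W) in [0, 1].
    l_rec = F.l1_loss(pred, target)

    # LPIPS expects 3-channel inputs scaled to [-1, 1]; replicate the single channel.
    pred_3 = pred.repeat(1, 3, 1, 1) * 2 - 1
    target_3 = target.repeat(1, 3, 1, 1) * 2 - 1
    l_lpips = lpips_fn(pred_3, target_3).mean()

    # Data-consistency loss: the predicted k-space should match the measured
    # k-space on the lines flagged as reliably acquired (acquired_mask = 1).
    k_pred = torch.fft.fft2(pred)
    l_dc = torch.mean(torch.abs(k_pred - k_measured) * acquired_mask)

    return l_rec + lambda_lpips * l_lpips + lambda_dc * l_dc
```

Enforcing consistency only on uncorrupted lines is what makes the correction "physics‐informed": the network is free to re-synthesize motion-affected lines while being anchored to the trustworthy measurements.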
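For reference, the three reported metrics can be computed for a single 2D slice as follows, using scikit-image for PSNR and SSIM. The NMSE definition here (squared error as a percentage of the reference energy) is a common convention and an assumption on our part.

```python
# Sketch: PSNR, SSIM, and NMSE for one corrected slice versus its motion-free reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(corrected, reference):
    data_range = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, corrected, data_range=data_range)
    ssim = structural_similarity(reference, corrected, data_range=data_range)
    nmse = 100.0 * np.sum((corrected - reference) ** 2) / np.sum(reference ** 2)
    return {"PSNR_dB": psnr, "SSIM": ssim, "NMSE_percent": nmse}
```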