Modality (human-computer interaction)
Robustness (evolution)
Computer vision
Artificial neural network
Iterative reconstruction
Computer science
Deep learning
Artificial intelligence
Pattern recognition (psychology)
Authors
Jinbao Wei, Gang Yang, Zhijie Wang, Yu Liu, Aiping Liu, Xun Chen
Identifier
DOI: 10.1016/j.knosys.2024.111866
Abstract
Multi-modal Magnetic Resonance Imaging (MRI) super-resolution (SR) and reconstruction aims to obtain a high-quality target image from sparsely sampled signals under the guidance of a reference image. However, existing techniques typically assume that the input multi-modal MR images are well aligned, which is difficult to achieve in clinical practice, and this naive assumption makes their algorithms vulnerable to misalignment. Moreover, they often neglect the many non-local characteristics shared within and across modalities. In this work, we propose a MisAlignment-Resistant Deep Unfolding Network (MAR-DUN), built around a tailored gradient descent module (GDM) and proximal mapping module (PMM), for multi-modal MRI SR and reconstruction. In the GDM, we employ an adaptive step-size sub-network (ASS-Net) to enhance the texture representation capacity of MAR-DUN. In the PMM, we propose a cross-modality non-local module (CNLM) featuring an inverse deformation layer (IDL). The IDL aligns features of the target and reference images by adaptively learning their spatial transformations, which improves the robustness of the network and allows the CNLM to further explore cross-modality non-local characteristics. The CNLM, in turn, establishes both intra-modality and inter-modality non-local dependencies to fully exploit the correlations between the target and reference images. Extensive experiments show that the proposed method consistently achieves state-of-the-art reconstruction performance in both aligned and misaligned scenarios, demonstrating its strong promise for real-world applications.
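To make the unrolled structure described in the abstract concrete, the sketch below shows one stage of a generic deep unfolding network for reference-guided MRI reconstruction: a gradient descent module applies a data-consistency step with a learnable step size (the paper's ASS-Net predicts this step size adaptively, whereas here it is a single scalar parameter), and a proximal mapping module refines the estimate with a small CNN conditioned on the reference modality. All class names, the CNN design, and the measurement operators are illustrative assumptions, not the authors' MAR-DUN implementation; in particular, the CNLM and IDL are omitted.

```python
# Minimal, illustrative deep-unfolding stage (hypothetical; not the authors' code).
import torch
import torch.nn as nn

class GradientDescentModule(nn.Module):
    """Data-consistency step x_k - alpha * A^T(A x_k - y) with a learnable step size."""
    def __init__(self):
        super().__init__()
        # Single learnable scalar step size; the paper's ASS-Net predicts it adaptively instead.
        self.step_size = nn.Parameter(torch.tensor(0.1))

    def forward(self, x, y, forward_op, adjoint_op):
        residual = forward_op(x) - y                      # A x_k - y (measurement residual)
        return x - self.step_size * adjoint_op(residual)  # gradient step in image space

class ProximalMappingModule(nn.Module):
    """Learned proximal operator: a small CNN conditioned on the reference image."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, ref):
        # Concatenate the target estimate with the reference-modality image and
        # predict a residual refinement (the paper uses the CNLM with the IDL here).
        return x + self.net(torch.cat([x, ref], dim=1))

class UnfoldingStage(nn.Module):
    """One unrolled iteration: gradient descent module followed by proximal mapping module."""
    def __init__(self):
        super().__init__()
        self.gdm = GradientDescentModule()
        self.pmm = ProximalMappingModule()

    def forward(self, x, y, ref, forward_op, adjoint_op):
        z = self.gdm(x, y, forward_op, adjoint_op)
        return self.pmm(z, ref)

if __name__ == "__main__":
    # Toy run with identity measurement operators (illustrative only).
    stage = UnfoldingStage()
    x0 = torch.zeros(1, 1, 64, 64)   # initial target estimate
    y = torch.randn(1, 1, 64, 64)    # observed measurements
    ref = torch.randn(1, 1, 64, 64)  # reference-modality image
    out = stage(x0, y, ref, forward_op=lambda v: v, adjoint_op=lambda v: v)
    print(out.shape)                 # torch.Size([1, 1, 64, 64])
```

A full unrolled network would stack several such stages, and the paper replaces the fixed step size and plain CNN prior with the ASS-Net and the alignment-aware CNLM, respectively.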