Artificial intelligence
Computer science
Contrast (vision)
Computer vision
Convolutional neural network
Super-resolution
Pattern recognition (psychology)
Resolution (logic)
Image resolution
Image (mathematics)
Authors
Pengcheng Lei,Miaomiao Zhang,Faming Fang,Guixu Zhang
Identifier
DOI:10.1109/tmi.2025.3563523
Abstract
Multi-contrast magnetic resonance imaging (MCMRI) super-resolution (SR) methods aim to leverage the complementary information present in multi-contrast images. However, existing methods encounter several limitations. First, most current networks fail to appropriately model the correlations among multi-contrast images and lack interpretability. Second, they often overlook the negative impact of spatial misalignment between modalities in clinical practice. Third, existing methods do not effectively constrain the complementary information learned between multi-contrast images, resulting in information redundancy and limiting model performance. In this paper, we propose a robust alignment-assisted multi-contrast convolutional dictionary (A2-CDic) model to address these challenges. Specifically, we develop an observation model based on convolutional sparse coding to explicitly represent multi-contrast images as common (e.g., consistent textures) and unique (e.g., inconsistent structures and contrasts) components. Considering that real-world multi-contrast images exhibit spatial misalignments, we incorporate a spatial alignment module to compensate for the misaligned structures. This approach enables the proposed model to fully exploit the valuable information in the reference image while mitigating interference from inconsistent information. We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a multi-scale convolutional dictionary network. Furthermore, we utilize mutual information losses to constrain the extracted common and unique components. This constraint reduces the redundancy between the decomposed components, allowing each sub-module to learn more representative features. We evaluate our model on four publicly available datasets comprising internal, external, spatially aligned, and misaligned MCMRI images.
The experimental results demonstrate that our model surpasses existing state-of-the-art MCMRI SR methods in terms of both generalization ability and overall performance. Code is available at https://github.com/lpcccc-cv/A2-CDic.