Keywords: computer science, shrinkage, feature, artificial intelligence, adaptability, fidelity, pattern recognition, representation, iterative reconstruction, algorithm, limit, artificial neural network, rigidity, high fidelity, computer vision, convergence, tomography, iterative method, quality, reconstruction algorithm, imaging phantom, fading, signal reconstruction, deep neural network
DOI: 10.1088/1361-6501/ae2b1f
Abstract
Sparse-view computed tomography (CT) reduces X-ray exposure while maintaining diagnostic accuracy, but conventional reconstruction methods suffer severe artifacts under highly undersampled projections. Deep unfolding networks (DUNs) improve reconstruction quality while retaining interpretability, yet existing DUN-based methods have three limitations: (1) each iterative stage uses only single-channel inputs and outputs, which limits the representation of complex anatomical structures and weakens feature transfer across stages, resulting in the loss of structural detail; (2) reliance on short-term connections between adjacent stages makes it difficult to capture long-range dependencies, so early high-frequency features such as edges and fine anatomical details fade; (3) fixed thresholds in ℓ1-regularized frameworks limit adaptability to variations in tissue density and anatomical complexity. To address these challenges, we propose the Memory-Augmented Adaptive Shrinkage Cascading Unfolding Network (MASCU-Net). MASCU-Net incorporates a Side-Information (SI) mechanism and a Cross-stage Memory-Enhancement Module (CMEM) to optimize inter-stage information flow, strengthen feature interaction between adjacent stages, and establish hierarchical long-range dependencies. Furthermore, a Content-Aware Adaptive Threshold Module (CATM) dynamically adjusts shrinkage thresholds according to the input features, overcoming the rigidity of fixed thresholds. Extensive experiments demonstrate that MASCU-Net achieves superior reconstruction fidelity and quantitative performance, consistently outperforming state-of-the-art methods under various sparse-view configurations.
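The abstract does not specify how CATM computes its thresholds, so the following is only a minimal PyTorch sketch of the general idea it names: content-aware soft-thresholding, where a small gating network predicts a non-negative threshold map from the input features and the standard soft-shrinkage operator uses that map in place of a fixed scalar. The module name `ContentAwareSoftThreshold`, the hypothetical `threshold_net` sub-network, and the channel sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed design, not the authors' code) of an input-dependent
# soft-thresholding step of the kind CATM is described as performing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentAwareSoftThreshold(nn.Module):
    """Soft-shrinkage with a threshold predicted from the input features."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Small gating network mapping features to non-negative thresholds.
        self.threshold_net = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Softplus(),  # keeps the predicted thresholds positive
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.threshold_net(x)                         # per-pixel, per-channel threshold map
        return torch.sign(x) * F.relu(torch.abs(x) - theta)   # soft-shrinkage operator

# Usage: shrink multi-channel features inside one unfolding stage.
stage_features = torch.randn(2, 32, 64, 64)
catm = ContentAwareSoftThreshold(channels=32)
shrunk = catm(stage_features)
print(shrunk.shape)  # torch.Size([2, 32, 64, 64])
```

Compared with a fixed scalar threshold, the predicted map lets smooth tissue regions be shrunk more aggressively while sharp edges receive smaller thresholds, which is the adaptability the abstract attributes to CATM.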