Computer science
Artificial intelligence
Machine learning
Transformer
Generalization
Distillation
Deep learning
Supervised learning
Semi-supervised learning
Block (permutation group theory)
Process (computing)
Pattern recognition (psychology)
Artificial neural network
Data mining
Mathematics
Mathematical analysis
Physics
Voltage
Operating system
Organic chemistry
Chemistry
Quantum mechanics
Geometry
Identifier
DOI:10.1007/978-3-031-18910-4_6
Abstract
Deep learning-based magnetic resonance imaging (MRI) reconstruction achieves very high reconstruction performance when trained on large amounts of data. However, acquiring fully sampled MRI data is expensive and time-consuming, making it difficult to obtain large fully sampled datasets. This not only poses a challenge for models that require large amounts of fully sampled data for training but also limits the generalization ability of network models. To address these problems, we analyze the characteristics of under-sampled MRI reconstruction and propose a semi-supervised MRI reconstruction algorithm based on transfer learning, targeting both data scarcity and the lack of generalization in the reconstruction process. The model fuses shallow features extracted by convolutional layers with deep features obtained from the Swin Transformer Block (STB), greatly improving the reconstruction capability of the network. However, the distillation process cannot assess the quality of the transferred knowledge, which limits the performance of the semi-supervised algorithm. We therefore further propose a privilege loss to improve the distillation of useful knowledge. Experimental results show that student networks trained with this algorithm rival supervised teacher networks and outperform unsupervised network models in brain data reconstruction.
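The abstract describes two ingredients: fusing shallow convolutional features with deep transformer features, and training a student network with a distillation objective on scarce fully sampled data. The sketch below illustrates both ideas in PyTorch under stated assumptions; it is not the authors' implementation. In particular, the true Swin Transformer Block is replaced by a plain patch-wise transformer encoder layer, the exact form of the privilege loss is unknown and is approximated here as a feature-level term, and all layer sizes and loss weights are hypothetical.

```python
# Minimal sketch, assuming: a generic transformer layer in place of the Swin
# Transformer Block, an L1 output-distillation term, and a guessed feature-level
# "privilege" term. Sizes and weights are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionReconstructor(nn.Module):
    """Shallow conv features + deep transformer features, fused for reconstruction."""

    def __init__(self, channels: int = 32, patch: int = 8):
        super().__init__()
        self.shallow = nn.Sequential(                      # shallow feature extractor
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Stand-in for the Swin Transformer Block (STB): a plain transformer
        # encoder layer over non-overlapping patches (an assumption, not true Swin).
        self.patch = patch
        self.deep = nn.TransformerEncoderLayer(
            d_model=channels * patch * patch, nhead=4, batch_first=True
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)   # fuse shallow + deep features
        self.head = nn.Conv2d(channels, 1, 3, padding=1)   # reconstruction head

    def forward(self, x):
        # x: (B, 1, H, W) under-sampled image; H and W divisible by self.patch
        s = self.shallow(x)                                # (B, C, H, W)
        b, c, h, w = s.shape
        p = self.patch
        tokens = s.unfold(2, p, p).unfold(3, p, p)         # split into p x p patches
        tokens = tokens.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        tokens = self.deep(tokens)                         # deep transformer features
        d = tokens.reshape(b, h // p, w // p, c, p, p)
        d = d.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)
        out = self.head(self.fuse(torch.cat([s, d], dim=1)))
        return out, (s, d)


def semi_supervised_loss(student, teacher, x_labeled, y_labeled, x_unlabeled):
    """Supervised loss on scarce fully sampled data plus distillation on
    unlabeled data; the feature-level term only approximates a privilege loss."""
    pred_l, _ = student(x_labeled)
    sup = F.l1_loss(pred_l, y_labeled)                     # supervised reconstruction loss
    with torch.no_grad():
        t_pred, (_, t_feat) = teacher(x_unlabeled)         # teacher provides targets
    s_pred, (_, s_feat) = student(x_unlabeled)
    distill = F.l1_loss(s_pred, t_pred)                    # output-level distillation
    privilege = F.mse_loss(s_feat, t_feat)                 # assumed feature-level privilege term
    return sup + distill + 0.1 * privilege                 # 0.1 is an illustrative weight
```

As a usage note, one would instantiate a teacher `FusionReconstructor` trained on the available fully sampled data, freeze it, and minimize `semi_supervised_loss` for the student over mixed labeled and unlabeled batches; the specific training schedule is not specified in the abstract.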