Artificial intelligence
Computer science
Aliasing
Deep learning
Iterative reconstruction
Computer vision
Image quality
Fuse (electrical)
Process (computing)
Noise (video)
Modality (human–computer interaction)
Image fusion
Sampling (signal processing)
Pattern recognition (psychology)
Image (mathematics)
Undersampling
Filter (signal processing)
Electrical engineering
Engineering
Operating system
Authors
Lei Xiang,Yong Chen,Wei‐Tang Chang,Yiqiang Zhan,Weili Lin,Qian Wang,Dinggang Shen
Identifier
DOI:10.1109/tbme.2018.2883958
Abstract
T1-weighted imaging (T1WI) and T2-weighted imaging (T2WI) are the two routinely acquired magnetic resonance (MR) modalities, and they provide complementary information for clinical and research use. However, the relatively long acquisition time makes the acquired images vulnerable to motion artifacts. To speed up the imaging process, various algorithms have been proposed to reconstruct high-quality images from under-sampled k-space data. However, most existing algorithms rely on a single modality for image reconstruction. In this paper, we propose to combine complementary MR acquisitions (specifically, a fully sampled T1WI and an under-sampled T2WI) to reconstruct the high-quality target image (i.e., the image corresponding to the fully sampled T2WI). To the best of our knowledge, this is the first work to fuse multi-modal MR acquisitions through deep learning to speed up the reconstruction of a target image. Specifically, we present a novel deep learning approach, namely Dense-Unet, to accomplish the reconstruction task. The proposed Dense-Unet requires fewer parameters and less computation, while achieving promising performance. Our results show that Dense-Unet can reconstruct a three-dimensional T2WI volume in less than 10 s at a k-space under-sampling rate of 8, with negligible aliasing artifacts or signal-to-noise-ratio loss. Experiments also demonstrate the excellent transfer capability of Dense-Unet when applied to datasets acquired by different MR scanners. These results imply great potential of our method in many clinical scenarios.
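The abstract's setting — reconstructing from k-space under-sampled by a factor of about 8 — can be illustrated with a retrospective under-sampling sketch. This is a minimal, hedged example using NumPy: the Cartesian line-skipping mask, the small fully sampled central band, and the synthetic image stand-in are all illustrative assumptions, not the authors' actual acquisition protocol; the zero-filled reconstruction it produces is the kind of aliased input a network such as Dense-Unet would map back to the fully sampled image.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))       # stand-in for one T2WI slice

# Full k-space of the slice (centered with fftshift).
kspace = np.fft.fftshift(np.fft.fft2(img))

# Keep every 8th phase-encoding line plus a small fully sampled
# low-frequency band at the center (a common Cartesian scheme).
mask = np.zeros(128, dtype=bool)
mask[::8] = True
mask[60:68] = True                           # central 8 lines (assumed)
kspace_under = kspace * mask[:, None]

# Zero-filled reconstruction: under-sampling in k-space shows up
# as aliasing artifacts in the image domain.
recon = np.fft.ifft2(np.fft.ifftshift(kspace_under)).real

sampling_rate = mask.mean()                  # fraction of lines kept
print(round(sampling_rate, 3))
```

With the extra central band the effective sampling fraction is slightly above 1/8; in practice the mask pattern (uniform, random, or variable-density) is itself a design choice that strongly affects the aliasing the network must undo.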