Modality (human-computer interaction)
Computer science
Transfer of learning
Artificial intelligence
Sensory system
Pattern recognition (psychology)
Machine learning
Computer vision
Neuroscience
Psychology
Authors
Lingting Zhu, Yizheng Chen, Lianli Liu, Lei Xing, Lequan Yu
Identifier
DOI: 10.1109/TPAMI.2024.3465649
Abstract
Multi-modality imaging is widely used in clinical practice and biomedical research to gain a comprehensive understanding of an imaging subject. Currently, multi-modality imaging is accomplished by post hoc fusion of independently reconstructed images under the guidance of mutual information or spatially registered hardware, which limits the accuracy and utility of multi-modality imaging. Here, we investigate a data-driven multi-modality imaging (DMI) strategy for synergetic imaging of CT and MRI. We reveal two distinct types of features in multi-modality imaging, namely intra- and inter-modality features, and present a multi-sensor learning (MSL) framework to utilize the crossover inter-modality features for augmented multi-modality imaging. The MSL imaging approach breaks down the boundaries of traditional imaging modalities and allows for optimal hybridization of CT and MRI, which maximizes the use of sensory data. We showcase the effectiveness of our DMI strategy through synergetic CT-MRI brain imaging. The principle of DMI is quite general and holds enormous potential for various DMI applications across disciplines.
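The abstract distinguishes intra-modality features (specific to CT or MRI alone) from crossover inter-modality features shared between the two sensors. The following toy sketch illustrates that decomposition-and-crossover idea only; it is not the authors' MSL implementation, and the linear "encoders," projection matrices, and fusion-by-concatenation step are all hypothetical stand-ins for the learned components described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def encode(x, w_intra, w_inter):
    """Toy linear encoder splitting one modality's signal into
    intra- and inter-modality feature vectors (hypothetical)."""
    return w_intra @ x, w_inter @ x


# Toy 1-D signals standing in for CT and MRI sensory data.
ct = rng.normal(size=8)
mri = rng.normal(size=8)

# Hypothetical projection matrices, not the paper's learned weights.
w_intra = rng.normal(size=(4, 8))
w_inter = rng.normal(size=(4, 8))

ct_intra, ct_inter = encode(ct, w_intra, w_inter)
mri_intra, mri_inter = encode(mri, w_intra, w_inter)

# Crossover fusion: each modality's augmented representation keeps its
# own intra-modality features but borrows the other modality's
# inter-modality features, mimicking the cross-sensor sharing idea.
ct_fused = np.concatenate([ct_intra, mri_inter])
mri_fused = np.concatenate([mri_intra, ct_inter])

print(ct_fused.shape, mri_fused.shape)
```

In this sketch the split into two feature types is imposed by hand; in the paper it is presumably learned end-to-end from the joint CT-MRI data.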