Keywords: Modality, Computer science, Artificial intelligence, Inference, Segmentation, Feature, Benchmark, Class, Pixel, Machine learning, Image segmentation, Pattern recognition, Representation, Modal, Feature extraction, Missing data, Medical imaging, Data mining
Authors
Shuai Wang, Zipei Yan, Daoan Zhang, Haining Wei, Zhongsen Li, Rui Li
Identifier
DOI:10.1109/icassp49357.2023.10095014
Abstract
Multi-modality medical imaging is crucial in clinical treatment as it can provide complementary information for medical image segmentation. However, collecting multi-modal data in clinical practice is difficult due to limitations on scan time and other clinical constraints. As such, it is clinically meaningful to develop an image segmentation paradigm that handles this missing-modality problem. In this paper, we propose a prototype knowledge distillation (ProtoKD) method to tackle this challenging problem, especially in the toughest scenario where only single-modal data can be accessed. Specifically, our ProtoKD not only distills the pixel-wise knowledge of multi-modality data to single-modality data but also transfers intra-class and inter-class feature variations, such that the student model can learn a more robust feature representation from the teacher model and perform inference with only a single modality. Our method achieves state-of-the-art performance on the BraTS benchmark.
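The prototype idea described above can be sketched in a few lines: compute per-class prototypes by averaging pixel features within each ground-truth class, build a pixel-to-prototype similarity map that encodes intra- and inter-class structure, and penalize the student for deviating from the teacher's similarity map. This is a minimal NumPy illustration under assumed shapes (features as `(C, H, W)`, labels as `(H, W)`); the function names and the mean-squared-error matching are our illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def class_prototypes(feats, labels, num_classes):
    """Mean feature vector per class. feats: (C, H, W); labels: (H, W) ints."""
    C = feats.shape[0]
    flat = feats.reshape(C, -1)            # (C, N) pixel features
    lab = labels.reshape(-1)               # (N,) pixel labels
    protos = np.zeros((num_classes, C))
    for k in range(num_classes):
        mask = lab == k
        if mask.any():
            protos[k] = flat[:, mask].mean(axis=1)
    return protos                          # (K, C)

def similarity_map(feats, protos):
    """Cosine similarity of every pixel feature to every class prototype."""
    C = feats.shape[0]
    flat = feats.reshape(C, -1)                                    # (C, N)
    fn = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-8)
    pn = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8)
    return pn @ fn                                                 # (K, N)

def protokd_loss(t_feats, s_feats, labels, num_classes):
    """Match the student's pixel-to-prototype similarity structure
    to the teacher's (illustrative MSE variant of the ProtoKD idea)."""
    t_sim = similarity_map(t_feats, class_prototypes(t_feats, labels, num_classes))
    s_sim = similarity_map(s_feats, class_prototypes(s_feats, labels, num_classes))
    return float(np.mean((t_sim - s_sim) ** 2))
```

In training, the teacher would see all modalities while the student sees one; the loss above would be added to the student's usual segmentation loss so that relative distances to class prototypes, not just per-pixel logits, are transferred.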