Pattern
Computer science
Artificial intelligence
Segmentation
Pattern recognition (psychology)
Medical imaging
Encoder
Computer vision
Deep learning
Image segmentation
Modal verb
Modality (human–computer interaction)
Convolutional neural network
Operating system
Sociology
Chemistry
Polymer chemistry
Social science
Authors
Vanya V. Valindria, Nick Pawlowski, Martin Rajchl, Ioannis Lavdas, Eric O. Aboagye, Andrea Rockall, Daniel Rueckert, Ben Glocker
Source
Venue: Workshop on Applications of Computer Vision (WACV)
Date: 2018-03-01
Citations: 70
Identifier
DOI:10.1109/wacv.2018.00066
Abstract
Convolutional neural networks have been widely used in medical image segmentation. The amount of training data strongly determines the overall performance. Most approaches are applied for a single imaging modality, e.g., brain MRI. In practice, it is often difficult to acquire sufficient training data of a certain imaging modality. The same anatomical structures, however, may be visible in different modalities such as major organs on abdominal CT and MRI. In this work, we investigate the effectiveness of learning from multiple modalities to improve the segmentation accuracy on each individual modality. We study the feasibility of using a dual-stream encoder-decoder architecture to learn modality-independent, and thus, generalisable and robust features. All of our MRI and CT data are unpaired, which means they are obtained from different subjects and not registered to each other. Experiments show that multi-modal learning can improve overall accuracy over modality-specific training. Results demonstrate that information across modalities can in particular improve performance on varying structures such as the spleen.
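The abstract's dual-stream idea can be illustrated with a toy sketch: modality-specific encoder and decoder streams wrapped around a shared layer, so the shared weights see unpaired CT and MRI samples and are pushed toward modality-independent features. This is a minimal NumPy illustration under assumed layer shapes, not the paper's actual architecture (which is a convolutional encoder-decoder).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class DualStreamSegmenter:
    """Toy dual-stream encoder-decoder (illustrative only; sizes invented)."""

    def __init__(self, in_dim=64, shared_dim=32, out_dim=64):
        # One encoder/decoder pair per modality: stream 0 = CT, stream 1 = MRI.
        self.enc = [rng.standard_normal((in_dim, shared_dim)) * 0.1 for _ in range(2)]
        # Shared projection, applied to both streams, intended to learn
        # modality-independent features from unpaired data.
        self.shared = rng.standard_normal((shared_dim, shared_dim)) * 0.1
        self.dec = [rng.standard_normal((shared_dim, out_dim)) * 0.1 for _ in range(2)]

    def forward(self, x, modality):
        # Route the input through its modality-specific stream; only the
        # middle layer's weights are common to CT and MRI.
        h = relu(x @ self.enc[modality])
        h = relu(h @ self.shared)
        logits = h @ self.dec[modality]
        return 1.0 / (1.0 + np.exp(-logits))  # per-pixel foreground probability

model = DualStreamSegmenter()
ct_patch = rng.standard_normal((1, 64))   # stand-in for a flattened CT patch
mri_patch = rng.standard_normal((1, 64))  # stand-in for a flattened MRI patch
p_ct = model.forward(ct_patch, modality=0)
p_mri = model.forward(mri_patch, modality=1)
```

Because the CT and MRI data are unpaired, each training batch updates one stream plus the shared layer; no cross-modality registration or paired loss is needed.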