Keywords
Computer science
Encoder
Transfer learning
Artificial intelligence
Segmentation
Deep learning
Feature learning
Transformer
Pattern recognition (psychology)
Medical imaging
Magnetic resonance imaging
Visualization
Machine learning
Voltage
Operating system
Medicine
Physics
Quantum mechanics
Radiology
Authors
Eunji Jun, Seung-Woo Jeong, Da-Woon Heo, Heung-Il Suk
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Pages: 1-11
Citations: 3
Identifier
DOI: 10.1109/tnnls.2023.3308712
Abstract
Transfer learning has attracted considerable attention in medical image analysis because the number of annotated 3-D medical datasets available for training data-driven deep learning models in the real world is limited. We propose Medical Transformer, a novel transfer learning framework that effectively models a 3-D volumetric image as a sequence of 2-D image slices. To improve the high-level representation with 3-D spatial relations, we use a multiview approach that leverages information from the three planes of the 3-D volume while providing parameter-efficient training. To build a source model generally applicable to various tasks, we pretrain the model with self-supervised learning (SSL), using masked encoding vector prediction as a proxy task on a large-scale dataset of normal, healthy brain magnetic resonance imaging (MRI) scans. The pretrained model is evaluated on three downstream tasks widely studied in brain MRI research: 1) brain disease diagnosis; 2) brain age prediction; and 3) brain tumor segmentation. Experimental results demonstrate that Medical Transformer outperforms state-of-the-art (SOTA) transfer learning methods, reducing the number of parameters by up to approximately 92% for the classification and regression tasks and 97% for the segmentation task, and it also performs well when only a fraction of the training samples is used.
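To make the abstract concrete, below is a minimal PyTorch sketch, not the authors' released code, of the two ideas it describes: treating a 3-D volume as sequences of 2-D slices along the axial, coronal, and sagittal planes, and a masked encoding-vector prediction proxy task for self-supervised pretraining. The `MultiViewSliceEncoder` class, the per-slice linear embedding, the embedding size, the mask ratio, and the choice of a plain Transformer encoder are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation) of:
# (1) turning a 3-D volume into 2-D slice sequences along three planes, and
# (2) predicting the encoding vectors of masked slices as an SSL proxy task.
import torch
import torch.nn as nn

class MultiViewSliceEncoder(nn.Module):
    def __init__(self, slice_hw=96, embed_dim=256, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Hypothetical per-slice encoder: flatten each 2-D slice to a vector.
        self.slice_embed = nn.Linear(slice_hw * slice_hw, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # Predicts the encoding vectors of the masked slice positions.
        self.predictor = nn.Linear(embed_dim, embed_dim)

    def slices_from_volume(self, vol):
        # vol: (B, D, H, W); this sketch assumes an isotropic cube D == H == W.
        views = [vol,                      # axial: slices along dim 1
                 vol.permute(0, 2, 1, 3),  # coronal
                 vol.permute(0, 3, 1, 2)]  # sagittal
        # Each view becomes a sequence of flattened 2-D slices: (B, n_slices, H*W).
        return [v.flatten(2) for v in views]

    def forward(self, vol):
        loss = 0.0
        for seq in self.slices_from_volume(vol):
            tokens = self.slice_embed(seq)            # (B, N, E) slice encodings
            target = tokens.detach()                  # vectors to be predicted
            # Randomly mask a fraction of slice positions before encoding.
            mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
            masked = tokens.masked_fill(mask.unsqueeze(-1), 0.0)
            pred = self.predictor(self.encoder(masked))
            # MSE between predicted and original vectors at masked positions.
            loss = loss + ((pred - target) ** 2)[mask].mean()
        return loss / 3  # average the proxy loss over the three views

model = MultiViewSliceEncoder()
volume = torch.randn(2, 96, 96, 96)  # toy batch of isotropic brain volumes
print(model(volume))                 # scalar SSL proxy loss
```

In the framework the abstract outlines, an encoder pretrained this way on healthy brain MRI would then be fine-tuned on the downstream diagnosis, age-prediction, and segmentation tasks; the slice-sequence view is what keeps the parameter count low relative to fully 3-D models.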