Artificial intelligence
Computer science
Segmentation
Pattern recognition (psychology)
Annotation
Supervised learning
Medical imaging
Semi-supervised learning
Machine learning
Pixel
Labeled data
Image segmentation
Computer vision
Artificial neural network
Authors
Banafshe Felfeliyan, Nils D. Forkert, Abhilash Rakkunedeth Hareendranathan, Daniel Cornel, Yuyue Zhou, Gregor Kuntze, Jacob L. Jaremko, Janet L. Ronsky
Identifier
DOI:10.1016/j.compmedimag.2023.102297
Abstract
Many successful machine learning methods for medical image analysis use supervised learning, which often requires large expert-annotated datasets to achieve high accuracy. However, medical data annotation is time-consuming and expensive, especially for segmentation tasks. To overcome the problem of learning with limited labeled medical image data, an alternative deep learning training strategy based on self-supervised pretraining on unlabeled imaging data is proposed in this work. For the pretraining, different distortions are applied to randomly selected areas of unlabeled images. Next, a Mask-RCNN architecture is trained to localize the distorted regions and recover the original image pixels. Through this self-supervised pretraining on unlabeled imaging data, the model is assumed to learn the relevant texture characteristics of the images. This provides a good basis for fine-tuning the model to segment the structure of interest using only a limited amount of labeled training data. The effectiveness of the proposed method in different pretraining and fine-tuning scenarios was evaluated on the Osteoarthritis Initiative dataset, with the aim of segmenting effusions in knee MRI datasets. Applying the proposed self-supervised pretraining improved the Dice score by up to 18% compared to training the models using only the limited annotated data. The proposed self-supervised learning approach can be applied to many other medical image analysis tasks, including anomaly detection, segmentation, and classification.
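The pretext task described in the abstract — distorting random regions of unlabeled images so a model can be trained to localize the distortion and restore the original pixels — can be sketched as a simple data-generation step. The following is a minimal, hypothetical illustration (not the authors' implementation): the function name, patch size, and choice of Gaussian-noise distortion are assumptions, and a real pipeline would feed the resulting triplets into a Mask-RCNN-style trainer.

```python
import numpy as np

def make_pretext_sample(image, rng, patch=16):
    """Build one self-supervised training triplet from an unlabeled 2D image:
    (distorted input, localization mask, original recovery target).
    Hypothetical sketch of a distortion-based pretext task."""
    h, w = image.shape
    # Pick a random square region to corrupt.
    y = rng.integers(0, h - patch + 1)
    x = rng.integers(0, w - patch + 1)
    distorted = image.copy()
    # One arbitrary distortion: replace the region with Gaussian noise
    # matched to the image's intensity statistics. Other distortions
    # (blurring, pixel shuffling, intensity shifts) could be swapped in.
    distorted[y:y + patch, x:x + patch] = rng.normal(
        image.mean(), image.std() + 1e-8, (patch, patch))
    # Binary mask marking where the distortion was applied: this is the
    # localization target; the original image is the recovery target.
    mask = np.zeros_like(image, dtype=bool)
    mask[y:y + patch, x:x + patch] = True
    return distorted, mask, image

# Usage on a synthetic "unlabeled" image:
rng = np.random.default_rng(0)
img = rng.random((64, 64))
distorted, mask, target = make_pretext_sample(img, rng)
```

Because both the distorted input and the targets are derived from the unlabeled image itself, no manual annotation is needed; the network pretrained on such triplets can then be fine-tuned for effusion segmentation with the small labeled set.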