Computer science
Forgetting
Artificial intelligence
Weighting
Projection (relational algebra)
Normalization (linguistics)
Consistency (knowledge bases)
Machine learning
Benchmark (surveying)
Visualization
Subspace topology
Margin (machine learning)
Algorithm
Linguistics
Medicine
Radiology
Philosophy
Geodesy
Geography
Authors
Kuang Shu, Heng Li, Jie Cheng, Qinghai Guo, Luziwei Leng, Jianxing Liao, Yan Hu, Jiang Liu
Identifier
DOI: 10.1109/bibm55620.2022.9995580
Abstract
Despite the tremendous progress recently achieved by deep learning (DL) in medical image analysis, most DL models concentrate on a single data distribution, following the independent and identically distributed (i.i.d.) assumption. In practice, however, image data distributions change with clinical conditions, such as different scanner manufacturers, imaging settings, and regional statistics. Although one can further train the model on new data samples, updating a model with data from an unknown distribution will degrade its performance on previously learned data, a notorious phenomenon called catastrophic forgetting. This degradation limits the applicability of DL algorithms in continuously changing clinical scenarios. In this study, we propose a new method to address the impact of changing distributions in continual learning scenarios and to alleviate catastrophic forgetting. A gradient regularization approach is used to suppress forgetting, and a replay-oriented consistency calculation method combined with a subspace weighting strategy is proposed to further improve model plasticity. The proposed replay-oriented gradient projection memory (RO-GPM) is evaluated on multiple fundus disease diagnosis datasets, including a real-world application and a continual learning benchmark. Quantitative and visualization results demonstrate that RO-GPM outperforms state-of-the-art algorithms by a large margin.
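The abstract's gradient regularization idea builds on gradient projection memory (GPM): gradients for a new task are projected onto the orthogonal complement of a subspace deemed important for earlier tasks, so updates for the new distribution do not overwrite learned knowledge. Below is a minimal, illustrative sketch of that projection step only, not the paper's RO-GPM method; the function names (`important_subspace`, `project_gradient`), the use of stored gradient vectors as the subspace source, and the 95% energy threshold are all assumptions for illustration.

```python
import numpy as np

def important_subspace(old_task_grads, energy=0.95):
    """Orthonormal basis capturing `energy` fraction of the stored
    old-task gradients' variance, found via SVD (illustrative choice;
    GPM-style methods typically build this basis from layer activations)."""
    U, S, _ = np.linalg.svd(old_task_grads, full_matrices=False)
    ratio = np.cumsum(S**2) / np.sum(S**2)
    k = int(np.searchsorted(ratio, energy)) + 1  # smallest rank reaching the threshold
    return U[:, :k]

def project_gradient(g, basis):
    """Remove the component of the new-task gradient g that lies in the
    old-task subspace, leaving only the orthogonal (non-interfering) part."""
    return g - basis @ (basis.T @ g)

rng = np.random.default_rng(0)
old = rng.normal(size=(10, 4))   # 4 stored gradient vectors of dimension 10 (toy data)
B = important_subspace(old)
g = rng.normal(size=10)          # gradient computed on the new task
g_safe = project_gradient(g, B)
# g_safe is numerically orthogonal to every retained basis direction
print(np.abs(B.T @ g_safe).max())
```

In a training loop, `g_safe` would replace `g` in the optimizer step for each parameter tensor; the paper's replay-oriented consistency and subspace weighting components would additionally reshape which directions are retained in `B`.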