Keywords
Forgetting
Computer science
Artificial intelligence
Task (project management)
Class (philosophy)
Machine learning
Feature (linguistics)
Classifier (UML)
Set (abstract data type)
Cognitive psychology
Psychology
Linguistics
Philosophy
Management
Economics
Programming language
Authors
Hong-Jun Choi, Dong-Wan Choi
Identifier
DOI: 10.1016/j.neucom.2022.05.079
Abstract
In continual learning over deep neural networks (DNNs), the rehearsal strategy, in which exemplars from previous tasks are jointly trained with new samples, is commonly employed to address catastrophic forgetting. Unfortunately, due to the memory limit, rehearsal-based techniques inevitably cause a class imbalance that biases the DNN toward new tasks, which have more samples. Existing works mostly focus on correcting this bias in the fully connected layer, i.e., the classifier. In this paper, we newly discover that class imbalance tends to make old classes even more highly correlated with similar new classes in the feature space, which turns out to be the major reason behind catastrophic forgetting, called inter-task forgetting. To alleviate inter-task forgetting, we propose a novel class incremental learning method, called attractive & repulsive training (ART), which effectively captures the previous feature space in a set of class-wise flags and thereby makes similar old and new classes less correlated in the new feature space. In our empirical study, ART substantially mitigates inter-task forgetting and thus improves the performance of state-of-the-art methods. Our implementation is available at: https://github.com/bigdata-inha/ART/ .
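The abstract does not spell out how the class-wise flags or the attractive/repulsive objective are computed; the sketch below is a minimal, hypothetical reading of the idea, not the authors' implementation (see the linked repository for that). It assumes each flag is a frozen, L2-normalized class-mean feature saved from the previous model, and uses a temperature-scaled softmax over flag similarities so that each sample is pulled toward its own class's flag (attractive) and pushed away from the flags of other, possibly similar, classes (repulsive). The names `compute_flags`, `art_style_loss`, and the parameter `tau` are illustrative assumptions, not names from the paper.

```python
import torch
import torch.nn.functional as F

def compute_flags(model, loader, num_classes, feat_dim, device="cpu"):
    """Hypothetical 'flags': per-class mean features from the previous model."""
    sums = torch.zeros(num_classes, feat_dim, device=device)
    counts = torch.zeros(num_classes, 1, device=device)
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            feats = model(x.to(device))  # assumes the model returns feature vectors
            sums.index_add_(0, y.to(device), feats)
            counts.index_add_(0, y.to(device),
                              torch.ones(len(y), 1, device=device))
    # L2-normalize each class mean so flags live on the unit sphere
    return F.normalize(sums / counts.clamp(min=1), dim=1)

def art_style_loss(features, labels, flags, tau=0.1):
    """Attractive/repulsive term over cosine similarities to frozen flags.

    The cross-entropy numerator pulls each sample toward its own class's
    flag (attractive); the softmax denominator pushes it away from all
    other flags (repulsive), decorrelating similar old and new classes.
    """
    features = F.normalize(features, dim=1)
    sims = features @ flags.t() / tau  # (batch, num_classes) cosine / temperature
    return F.cross_entropy(sims, labels)
```

In a rehearsal setting, a term like this would typically be added to the usual classification loss over the union of new samples and stored exemplars, with the flags recomputed or extended at each task boundary; whether ART freezes, updates, or regularizes its flags is a detail the abstract leaves open.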