Concept drift
Computer science
Forgetting
Embedding
Exploit
Offset (computer science)
Data stream
Artificial intelligence
Class (philosophy)
Machine learning
Data stream mining
Computer security
Linguistics
Telecommunications
Philosophy
Programming language
Authors
Huiwei Lin, Shanshan Feng, Xutao Li, Wentao Li, Yunming Ye
Identifier
DOI: 10.1109/tcsvt.2022.3219605
Abstract
Online class-incremental learning (OCIL) studies the problem of mitigating catastrophic forgetting while learning new classes from a continuously non-stationary data stream. Existing approaches mainly constrain parameter updates to prevent the drift of previous classes, which reflects the movement of their samples in the embedding space. Although existing approaches can relieve this kind of drift to some extent, it is usually inevitable; prevention alone is therefore not enough, and we also need to compensate for the drift. To this end, for each previous class we exploit the sample with the smallest loss value as its anchor, which representatively characterizes the corresponding class. With the assistance of anchors, we present a novel Anchor Assisted Experience Replay (AAER) method that not only prevents drift but also compensates for the inevitable drift to overcome catastrophic forgetting. Specifically, we design a Drift-Prevention with Anchor (DPA) operation, which plays a preventive role by implicitly reducing drift and encouraging samples with the same label to cluster tightly. Moreover, we propose a Drift-Compensation with Anchor (DCA) operation that contains two remedy mechanisms: Forward-offset keeps the embeddings of previous data fixed but estimates new classification centers; Backward-offset is the opposite, keeping the old classification centers unchanged while updating the embeddings of previous data. We conduct extensive experiments on three real-world datasets, and empirical results consistently demonstrate the superior performance of AAER over various state-of-the-art methods.
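The abstract specifies one concrete, reproducible step: each previous class is anchored by its lowest-loss buffered sample, and the anchors are then used to measure (and later compensate) embedding drift between the old and updated encoder. The sketch below illustrates that step in PyTorch; the linear encoder/classifier, the helper names (select_anchors, estimate_drift), and the closing Backward-offset comment are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def select_anchors(encoder, classifier, buf_x, buf_y):
    """Return {class_id: anchor sample}, picking the lowest-loss sample per class."""
    logits = classifier(encoder(buf_x))
    losses = F.cross_entropy(logits, buf_y, reduction="none")
    anchors = {}
    for c in buf_y.unique():
        mask = buf_y == c
        best = losses[mask].argmin()          # smallest-loss sample of class c
        anchors[int(c)] = buf_x[mask][best]
    return anchors

@torch.no_grad()
def estimate_drift(encoder_old, encoder_new, anchors):
    """Per-class drift vector: how each anchor's embedding moved after the update."""
    return {c: (encoder_new(x.unsqueeze(0)) - encoder_old(x.unsqueeze(0))).squeeze(0)
            for c, x in anchors.items()}

# Toy usage: 2 previous classes, 16-dim inputs, 8-dim embeddings.
torch.manual_seed(0)
encoder_old = nn.Linear(16, 8)
encoder_new = nn.Linear(16, 8)                # stands in for the updated encoder
classifier = nn.Linear(8, 2)
buf_x, buf_y = torch.randn(32, 16), torch.randint(0, 2, (32,))

anchors = select_anchors(encoder_old, classifier, buf_x, buf_y)
drift = estimate_drift(encoder_old, encoder_new, anchors)
# A Backward-offset-style compensation could then translate the stored
# embeddings of class c by drift[c], leaving the old class centers untouched.
```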