Keywords
Stability (learning theory)
Class (philosophy)
Feature (linguistics)
Computer science
Artificial intelligence
Machine learning
Representation (politics)
Incremental learning
Plasticity
Set (abstract data type)
Physics
Philosophy
Thermodynamics
Programming language
Law
Politics
Linguistics
Political science
Authors
Kim, Dongwan; Han, Bohyung
Source
Journal: Cornell University - arXiv
Date: 2023-04-04
Identifier
DOI: 10.48550/arxiv.2304.01663
Abstract
A primary goal of class-incremental learning is to strike a balance between stability and plasticity, where models should be both stable enough to retain knowledge learned from previously seen classes, and plastic enough to learn concepts from new classes. While previous works demonstrate strong performance on class-incremental benchmarks, it is not clear whether their success comes from the models being stable, plastic, or a mixture of both. This paper aims to shed light on how effectively recent class-incremental learning algorithms address the stability-plasticity trade-off. We establish analytical tools that measure the stability and plasticity of feature representations, and employ such tools to investigate models trained with various algorithms on large-scale class-incremental benchmarks. Surprisingly, we find that the majority of class-incremental learning algorithms heavily favor stability over plasticity, to the extent that the feature extractor of a model trained on the initial set of classes is no less effective than that of the final incremental model. Our observations not only inspire two simple algorithms that highlight the importance of feature representation analysis, but also suggest that class-incremental learning approaches, in general, should strive for better feature representation learning.
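The abstract refers to "analytical tools that measure the stability and plasticity of feature representations" across incremental training steps. A minimal sketch of one such tool is a representation-similarity measure such as linear CKA, which compares the feature matrices produced by two model checkpoints (e.g., the model after the initial task versus the final incremental model) on the same inputs. This is an illustrative assumption about the kind of analysis involved, not the paper's actual code:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X, Y: arrays of shape (n_samples, n_features), e.g. features of the
    same images extracted by two different model checkpoints. Returns a
    similarity in [0, 1]; values near 1 indicate the representations are
    nearly identical up to rotation and scaling (high "stability").
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)
```

Under this reading, a model whose initial-task features give CKA near 1 against the final model's features has changed its representation very little, which is the kind of evidence behind the claim that most class-incremental methods "heavily favor stability over plasticity".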