Computer science
Artificial intelligence
Machine learning
MNIST database
Classification
Deep learning
Benchmark (surveying)
Multi-task learning
Key (lock)
Instance-based learning
Task (project management)
Class (philosophy)
Domain (mathematics)
Set (abstract data type)
Incremental learning
Active learning (machine learning)
Engineering
Programming language
Systems engineering
Pure mathematics
Geography
Computer security
Mathematics
Geodesy
Authors
Gido M. van de Ven, Tinne Tuytelaars, Andreas S. Tolias
Identifiers
DOI: 10.1038/s42256-022-00568-3
Abstract
Incrementally learning new information from a non-stationary stream of data, referred to as 'continual learning', is a key feature of natural intelligence, but a challenging problem for deep neural networks. In recent years, numerous deep learning methods for continual learning have been proposed, but comparing their performances is difficult due to the lack of a common framework. To help address this, we describe three fundamental types, or 'scenarios', of continual learning: task-incremental, domain-incremental and class-incremental learning. Each of these scenarios has its own set of challenges. To illustrate this, we provide a comprehensive empirical comparison of currently used continual learning strategies, by performing the Split MNIST and Split CIFAR-100 protocols according to each scenario. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of the effectiveness of different strategies. The proposed categorization aims to structure the continual learning field, by forming a key foundation for clearly defining benchmark problems.
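The abstract's distinction between the three scenarios comes down to what the model must predict at test time under the same class split. The sketch below illustrates this for Split MNIST, where the 10 digit classes are divided into 5 two-class tasks; the `TASKS` split and the `target` helper are illustrative assumptions, not code from the paper.

```python
# Split MNIST: the 10 digit classes are partitioned into 5 binary tasks.
TASKS = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def target(label, scenario):
    """Map a global digit label (0-9) to the prediction target per scenario."""
    task_id = label // 2   # which of the 5 tasks this digit belongs to
    within = label % 2     # position of the digit inside its task (0 or 1)
    if scenario == "task":
        # Task-incremental: task identity is given at test time, so the
        # model only has to solve the two-way problem within that task.
        return (task_id, within)
    if scenario == "domain":
        # Domain-incremental: task identity is hidden; the model predicts
        # only the within-task label, shared across all tasks.
        return within
    if scenario == "class":
        # Class-incremental: task identity is hidden and the model must
        # name the digit among all 10 classes seen so far.
        return label
    raise ValueError(f"unknown scenario: {scenario}")
```

For example, the digit 7 belongs to the fourth task: `target(7, "task")` returns `(3, 1)`, `target(7, "domain")` returns `1`, and `target(7, "class")` returns `7`. This is why the abstract reports substantial differences in difficulty: class-incremental learning demands discrimination across tasks that were never trained together.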