Computer science
Artificial intelligence
Machine learning
Raw data
Deep learning
Training set
Process (computing)
Artificial neural network
Task (project management)
Deep neural networks
Distillation
Data science
Data mining
Management
Economics
Programming language
Operating system
Chemistry
Organic chemistry
Authors
Ruonan Yu, Songhua Liu, Xinchao Wang
Identifier
DOI:10.1109/tpami.2023.3323376
Abstract
Recent success of deep learning is largely attributed to the sheer amount of data used for training deep neural networks. Despite the unprecedented success, the massive data, unfortunately, significantly increases the burden on storage and transmission and further gives rise to a cumbersome model training process. Besides, relying on the raw data for training per se yields concerns about privacy and copyright. To alleviate these shortcomings, dataset distillation (DD), also known as dataset condensation (DC), was introduced and has recently attracted much research attention in the community. Given an original dataset, DD aims to derive a much smaller dataset containing synthetic samples, based on which the trained models yield performance comparable with those trained on the original dataset. In this paper, we give a comprehensive review and summary of recent advances in DD and its application. We first introduce the task formally and propose an overall algorithmic framework followed by all existing DD methods. Next, we provide a systematic taxonomy of current methodologies in this area, and discuss their theoretical interconnections. We also present current challenges in DD through extensive empirical studies and envision possible directions for future works.
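To make the DD objective concrete, below is a minimal, self-contained sketch of one common family of approaches covered by such surveys, gradient matching: the synthetic samples are treated as learnable tensors and optimized so that the training gradient they induce on a network aligns with the gradient computed from real data. This is an illustrative toy example with random data and a small MLP, not the authors' implementation; the sizes, learning rate, and helper name `flat_grad` are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for the original dataset: 1000 samples, 20 features, 10 classes.
X_real = torch.randn(1000, 20)
y_real = torch.randint(0, 10, (1000,))

# Synthetic dataset: 10 learnable samples per class, optimized directly.
X_syn = torch.randn(100, 20, requires_grad=True)
y_syn = torch.arange(10).repeat_interleave(10)

# A small network whose gradients on real vs. synthetic data are compared.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
params = list(model.parameters())
opt_syn = torch.optim.Adam([X_syn], lr=0.01)

def flat_grad(loss):
    # Flatten the gradient of `loss` w.r.t. the model parameters into one vector.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

for step in range(200):
    # Gradient of the classification loss on a real mini-batch (fixed target).
    idx = torch.randint(0, X_real.size(0), (128,))
    g_real = flat_grad(F.cross_entropy(model(X_real[idx]), y_real[idx])).detach()

    # Gradient of the same loss on the synthetic set, differentiable w.r.t. X_syn.
    g_syn = flat_grad(F.cross_entropy(model(X_syn), y_syn))

    # Update the synthetic samples so the two gradients point in the same direction.
    match_loss = 1.0 - F.cosine_similarity(g_real, g_syn, dim=0)
    opt_syn.zero_grad()
    match_loss.backward()
    opt_syn.step()
```

Practical DD methods typically re-initialize and train the network across outer iterations, match gradients per class, and operate on images, but the underlying idea is the same: learn a small synthetic set so that a model trained on it behaves comparably to one trained on the full dataset.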