Computer science
Representativeness heuristic
Robustness (evolution)
Overfitting
Distillation
Machine learning
Artificial intelligence
Artificial neural network
Scalability
Classification
Generalization
Homogeneous
Data mining
Deep learning
Heuristic
Simplicity (philosophy)
Benchmark (surveying)
Computational complexity theory
Coding (set theory)
Source code
Benchmarking
Contextual image classification
Pattern recognition (psychology)
Authors
Zhiheng Ma, Anjia Cao, Funing Yang, Yihong Gong, Xing Wei
Identifier
DOI:10.1109/tip.2025.3579228
Abstract
Most dataset distillation methods struggle to accommodate large-scale datasets because of their substantial computational and memory requirements. Recent research has begun to explore scalable disentanglement methods, but performance bottlenecks and room for optimization remain in this direction. In this paper, we present a curriculum-based dataset distillation framework that aims to harmonize performance and scalability. The framework strategically distills synthetic images following a curriculum that transitions from simple to complex. By incorporating curriculum evaluation, we address the tendency of previous methods to generate homogeneous and simplistic images, and do so at a manageable computational cost. Furthermore, we introduce adversarial optimization of the synthetic images to further improve their representativeness and to keep them from overfitting to the neural network used during distillation. This enhances the generalization of the distilled images across neural network architectures and increases their robustness to noise. Extensive experiments demonstrate that our framework sets new benchmarks in large-scale dataset distillation, achieving substantial improvements of 11.1% on Tiny-ImageNet, 9.0% on ImageNet-1K, and 7.3% on ImageNet-21K. Our distilled datasets and code are available at https://github.com/MIV-XJTU/CUDD.
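The abstract describes two mechanisms: a simple-to-complex curriculum over the synthetic images, and an adversarial optimization step that keeps the synthetic images from overfitting to the distilling network. The sketch below is only a minimal illustration of those two ideas under simplifying assumptions; it is not the authors' CUDD implementation. The TinyConvNet model, the distill and adversarial_step functions, and the temperature/epsilon schedules are hypothetical choices made here so the example is self-contained and runnable.

# Illustrative sketch (assumptions only, not the CUDD method) of curriculum-based
# dataset distillation with an inner adversarial step on the synthetic images.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class TinyConvNet(nn.Module):
    # Stand-in for the network used during distillation (hypothetical architecture).
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def adversarial_step(model, images, labels, eps):
    # One FGSM-style perturbation of the synthetic images: find the small
    # perturbation that most increases the loss of the distilling network.
    images = images.detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    return (images + eps * grad.sign()).detach()

def distill(model, num_classes=10, ipc=1, stages=3, steps_per_stage=50):
    # Curriculum loop: early stages fit soft (high-temperature) targets with no
    # adversarial perturbation; later stages sharpen the targets and grow the
    # adversarial budget, i.e. the schedule moves from simple to complex.
    model.requires_grad_(False)
    labels = torch.arange(num_classes).repeat_interleave(ipc)
    syn = torch.randn(num_classes * ipc, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([syn], lr=0.05)

    for stage in range(stages):
        temperature = 4.0 / (stage + 1)   # targets get sharper over stages
        eps = stage / 255                 # adversarial budget grows over stages
        for _ in range(steps_per_stage):
            images = syn
            if eps > 0:
                # Optimize against a worst-case view of the synthetic images so
                # they do not overfit the particular distilling network.
                images = adversarial_step(model, syn, labels, eps) + (syn - syn.detach())
            logits = model(images) / temperature
            loss = F.cross_entropy(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return syn.detach(), labels

if __name__ == "__main__":
    model = TinyConvNet()
    syn_images, syn_labels = distill(model)
    print(syn_images.shape, syn_labels.shape)

The (syn - syn.detach()) term is a straight-through trick: the adversarial perturbation is computed on a detached copy, and this term re-attaches the result so gradients from the loss still flow back into the learnable synthetic images.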