Redundancy (engineering)
Computer science
Computation
Pattern recognition (psychology)
Artificial intelligence
Artificial neural network
Image (mathematics)
Data mining
Algorithm
Machine learning
Operating system
Authors
Qingtang Ding, Zhengyu Liang, Longguang Wang, Yingqian Wang, Jungang Yang
Identifier
DOI:10.1109/lsp.2023.3329754
Abstract
Although the performance of single image super-resolution (SR) has been significantly improved with deep neural networks, existing methods commonly require millions of iterations for training, which not only limits their training efficiency but also causes considerable energy consumption. In this paper, we comprehensively study the redundancy of existing training datasets and reveal that not all patches are equal for SR network training. We observe that a large percentage of patches with low or similar textures incur high computation costs but contribute little to SR performance. We then propose a dataset condensation method to remove these redundant patches hierarchically. Extensive experiments demonstrate that our dataset condensation method can effectively reduce the redundancy of SR datasets, achieving a 90% condensation rate on DIV2K. With our condensed dataset, baseline networks achieve a significant improvement in training efficiency while maintaining competitive accuracy. Code is available at https://github.com/QingtangDing/DCSR.
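To make the idea of patch-level redundancy concrete, the sketch below filters SR training patches in two stages: it first drops low-texture (flat) patches, then greedily discards patches that are nearly identical to ones already kept. This is only a minimal illustration of the general principle described in the abstract, not the authors' actual hierarchical condensation method (see the linked repository); the texture measure (per-patch standard deviation), the patch size, and the similarity threshold are illustrative assumptions.

```python
# Minimal sketch of texture-based patch condensation (illustrative only).
# Assumptions: texture = per-patch standard deviation; similarity = normalized
# correlation between patches; thresholds are arbitrary. Not the DCSR method.
import numpy as np


def extract_patches(img: np.ndarray, size: int = 48, stride: int = 48) -> np.ndarray:
    """Cut a grayscale image of shape (H, W) into non-overlapping square patches."""
    h, w = img.shape
    patches = [
        img[y:y + size, x:x + size]
        for y in range(0, h - size + 1, stride)
        for x in range(0, w - size + 1, stride)
    ]
    return np.stack(patches) if patches else np.empty((0, size, size))


def condense_patches(patches: np.ndarray,
                     texture_thresh: float = 10.0,
                     sim_thresh: float = 0.98) -> np.ndarray:
    """Keep patches that are textured and not near-duplicates of kept patches."""
    # Stage 1: drop low-texture (flat) patches.
    textured = patches[patches.std(axis=(1, 2)) > texture_thresh]

    # Stage 2: greedily drop patches highly correlated with an already-kept patch.
    kept, kept_vecs = [], []
    for p in textured:
        v = (p - p.mean()).ravel()
        v = v / (np.linalg.norm(v) + 1e-8)
        if all(abs(float(v @ u)) < sim_thresh for u in kept_vecs):
            kept.append(p)
            kept_vecs.append(v)
    return np.stack(kept) if kept else textured[:0]


if __name__ == "__main__":
    # Synthetic demo image: left half flat, right half noisy texture.
    rng = np.random.default_rng(0)
    img = np.zeros((480, 480), dtype=np.float32)
    img[:, 240:] = rng.integers(0, 256, size=(480, 240))

    patches = extract_patches(img)
    condensed = condense_patches(patches)
    print(f"kept {len(condensed)} / {len(patches)} patches")
```

In this toy example the flat half of the image is removed entirely, mirroring the abstract's observation that low-texture patches add training cost without adding useful supervision.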