Authors
Namhoon Lee, Thalaiyasingam Ajanthan, Philip H. S. Torr
Source
Journal: Cornell University - arXiv
Date: 2018-10-04
Citations: 363
Identifier
DOI: 10.48550/arxiv.1810.02340
Abstract
Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.
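The connection-sensitivity criterion described above admits a compact implementation: if each connection carries a multiplicative mask c fixed at 1, the sensitivity |dL/dc_j| at initialization reduces to |w_j * dL/dw_j|, which a single backward pass on one mini-batch provides before any training. Below is a minimal PyTorch sketch along those lines; the function name snip_masks, the keep_ratio default, and the restriction to weight tensors (biases kept dense) are illustrative assumptions, not details taken from the paper.

```python
import torch

def snip_masks(model, loss_fn, inputs, targets, keep_ratio=0.05):
    """Single-shot pruning masks at initialization via connection sensitivity.

    With a multiplicative connection mask c fixed at 1, dL/dc_j equals
    w_j * dL/dw_j, so saliencies come from one backward pass on one
    mini-batch, prior to training.
    """
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()

    with torch.no_grad():
        # Prune weight tensors only (an assumption; biases are retained).
        weights = [p for p in model.parameters() if p.dim() > 1]

        # Normalized connection sensitivity: |w * dL/dw| over all connections.
        scores = torch.cat([(p * p.grad).abs().flatten() for p in weights])
        scores = scores / scores.sum()

        # Keep the top keep_ratio fraction of connections, prune the rest.
        k = max(1, int(keep_ratio * scores.numel()))
        threshold = torch.topk(scores, k).values[-1]  # k-th largest saliency

        masks, offset = [], 0
        for p in weights:
            n = p.numel()
            mask = (scores[offset:offset + n] >= threshold).reshape(p.shape)
            masks.append(mask.float())
            offset += n
    return masks  # multiply into the weights, then train the sparse network as usual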