Distillation
Initialization
Ensemble learning
Computer science
Artificial neural network
Artificial intelligence
Machine learning
Boosting (machine learning)
Ensemble forecasting
Experimental apparatus
Deep learning
Test data
Chemistry
Organic chemistry
Programming language
Authors
Zeyuan Allen-Zhu, Yuanzhi Li
Source
Journal: Cornell University - arXiv
Date: 2020-12
Citations: 36
Identifier
DOI: 10.48550/arXiv.2012.09816
Abstract
We formally study how ensemble of deep learning models can improve test accuracy, and how the superior performance of ensemble can be distilled into a single model using knowledge distillation. We consider the challenging case where the ensemble is simply an average of the outputs of a few independently trained neural networks with the SAME architecture, trained using the SAME algorithm on the SAME data set, and they only differ by the random seeds used in the initialization. We show that ensemble/knowledge distillation in Deep Learning works very differently from traditional learning theory (such as boosting or NTKs, neural tangent kernels). To properly understand them, we develop a theory showing that when data has a structure we refer to as "multi-view", then ensemble of independently trained neural networks can provably improve test accuracy, and such superior test accuracy can also be provably distilled into a single model by training a single model to match the output of the ensemble instead of the true label. Our result sheds light on how ensemble works in deep learning in a way that is completely different from traditional theorems, and how the "dark knowledge" is hidden in the outputs of the ensemble and can be used in distillation. In the end, we prove that self-distillation can also be viewed as implicitly combining ensemble and knowledge distillation to improve test accuracy.
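The setup described in the abstract can be illustrated with a minimal sketch: several networks with the SAME architecture are trained on the SAME data with the SAME algorithm, differing only in the random seed; the ensemble is the average of their outputs; and a single student is then distilled by matching the ensemble's soft outputs instead of the true labels. This is not the authors' code, and all names (SmallNet, train_one_model, distill_student), the PyTorch framework, and the hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallNet(nn.Module):
    """Toy classifier; stands in for the identical architecture shared by every ensemble member."""
    def __init__(self, in_dim=20, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)


def train_one_model(seed, x, y, epochs=50):
    """Train one ensemble member; only the random seed differs between members."""
    torch.manual_seed(seed)  # the sole source of diversity, per the abstract
    model = SmallNet(x.shape[1], int(y.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model


def ensemble_logits(models, x):
    """The ensemble is simply the average of the members' outputs."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)


def distill_student(teacher_logits, x, epochs=50, T=2.0):
    """Knowledge distillation: train a single model to match the ensemble's soft outputs."""
    torch.manual_seed(0)
    student = SmallNet(x.shape[1], teacher_logits.shape[1])
    opt = torch.optim.Adam(student.parameters(), lr=1e-2)
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.kl_div(F.log_softmax(student(x) / T, dim=1), soft_targets, reduction="batchmean")
        loss.backward()
        opt.step()
    return student


if __name__ == "__main__":
    torch.manual_seed(123)
    x = torch.randn(256, 20)                 # synthetic data for illustration only
    y = torch.randint(0, 5, (256,))
    members = [train_one_model(seed, x, y) for seed in range(3)]  # same data, same algorithm
    teacher = ensemble_logits(members, x)
    student = distill_student(teacher, x)
```

Self-distillation, mentioned at the end of the abstract, corresponds to the special case where the "ensemble" is a single trained copy of the same architecture and the student matches its outputs rather than the true labels.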