Topics
Computer science, Distillation, Knowledge transfer, Artificial intelligence, Machine learning, Diversity (cybernetics), Representation (politics), Estimator, Artificial neural network, Probabilistic logic, Transfer of learning, Coding (set theory), Matching (statistics), Mathematics, Knowledge management, Chemistry, Statistics, Organic chemistry, Set (abstract data type), Politics, Political science, Law, Programming language
Authors
Yonglong Tian,Dilip Krishnan,Phillip Isola
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 549
Identifiers
DOI: 10.48550/arxiv.1910.10699
Abstract
Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation. Code: http://github.com/HobbitLong/RepDistiller.
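For readers unfamiliar with the two objectives the abstract contrasts, the PyTorch sketch below illustrates them: the standard knowledge-distillation loss (KL divergence between temperature-softened teacher and student outputs) and a simplified in-batch InfoNCE-style contrastive loss between student and teacher embeddings. This is a minimal illustrative sketch, not the authors' implementation; the actual CRD code in the linked repository uses a memory buffer of negatives and projection heads for mismatched feature dimensions, and the temperature values and equal embedding sizes assumed here are illustrative only.

```python
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard KD objective: KL divergence between temperature-softened
    teacher and student class distributions (scaled by T^2)."""
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T ** 2)


def contrastive_repr_loss(f_s, f_t, temperature=0.1):
    """Simplified in-batch contrastive objective: for each input, the
    teacher embedding of the same input is the positive and the other
    samples in the batch serve as negatives (InfoNCE-style)."""
    f_s = F.normalize(f_s, dim=1)
    f_t = F.normalize(f_t, dim=1)
    logits = f_s @ f_t.t() / temperature              # [B, B] similarities
    labels = torch.arange(f_s.size(0), device=f_s.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)


# Toy usage with random tensors standing in for network outputs.
B, C, D = 8, 100, 128
s_logits, t_logits = torch.randn(B, C), torch.randn(B, C)
s_feat, t_feat = torch.randn(B, D), torch.randn(B, D)
loss = kd_loss(s_logits, t_logits) + contrastive_repr_loss(s_feat, t_feat)
```

The contrastive term operates on intermediate representations rather than on class probabilities, which is the structural knowledge the abstract argues plain KL-based distillation ignores.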