Computer science
Autoencoder
Artificial intelligence
Dropout (neural networks)
Inference
Information bottleneck method
Feature learning
Information theory
Machine learning
Entropy (arrow of time)
Artificial neural network
Generalization
Algorithm
Theoretical computer science
Mutual information
Mathematics
Physics
Mathematical analysis
Quantum mechanics
Statistics
Authors
Alessandro Achille, Stefano Soatto
Identifier
DOI:10.1109/tpami.2017.2784440
Abstract
The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.
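The abstract describes injecting multiplicative noise into a network's activations, with a learned per-unit noise scale, as a generalization of dropout. A minimal sketch of that idea (not the authors' implementation; the function names, the log-normal parameterization, and the simplified penalty term are assumptions for illustration):

```python
import numpy as np

def information_dropout(z, log_alpha, rng, train=True):
    # z: layer activations; log_alpha: learned per-unit log noise scale
    # (assumed here to have the same shape as z). During training, each
    # activation is multiplied by log-normal noise eps = exp(alpha * n),
    # n ~ N(0, 1), so the noise level adapts per unit; at test time the
    # layer passes activations through unchanged (a simplification).
    if not train:
        return z
    alpha = np.exp(log_alpha)
    eps = np.exp(alpha * rng.standard_normal(z.shape))
    return z * eps

def noise_penalty(log_alpha):
    # Sketch of the regularization term the abstract refers to: a penalty
    # that rewards larger noise (i.e., transmitting less information about
    # the input). Here it is simplified to -sum(log alpha); the paper's
    # actual KL term depends on the chosen prior and activation function.
    return -np.sum(log_alpha)
```

With `log_alpha` very negative the noise scale vanishes and the layer becomes nearly deterministic; as `log_alpha` grows, activations are increasingly randomized, which is the capacity-limiting effect the regularizer trades off against the cross-entropy loss.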