Understanding the difficulty of training deep feedforward neural networks

Initialization, Computer Science, Artificial Neural Network, Artificial Intelligence, Deep Neural Network, Deep Learning, Gradient Descent, Jacobian Matrix and Determinant, Sigmoid Function, Machine Learning, Mathematics, Applied Mathematics, Programming Language
Authors
Xavier Glorot, Yoshua Bengio
Source
Venue: International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 249-256. Citations: 11976
Abstract

Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to train them successfully, with experimental results showing the superiority of deeper versus less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to understand better why standard gradient descent from random initialization does so poorly with deep neural networks, to better understand these recent relative successes, and to help design better algorithms in the future. We first observe the influence of the non-linear activation functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, which explains the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence.

1 Deep Neural Networks

Deep learning methods aim at learning feature hierarchies, with features from higher levels of the hierarchy formed by the composition of lower-level features. They include learning methods for a wide array of deep architectures (see, e.g., Weston et al., 2008). Much attention has recently been devoted to them (see Bengio (2009) for a review) because of their theoretical appeal, their inspiration from biology and human cognition, and their empirical success in vision (Ranzato et al., 2007; Larochelle et al., 2007; Vincent et al., 2008) and natural language processing (NLP) (Collobert & Weston, 2008; Mnih & Hinton, 2009). Theoretical results reviewed and discussed by Bengio (2009) suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures.

Most of the recent experimental results with deep architectures were obtained with models that can be turned into deep supervised neural networks, but with initialization or training schemes different from the classical feedforward neural networks (Rumelhart et al., 1986). Why are these new algorithms working so much better than standard random initialization and gradient-based optimization of a supervised training criterion? Part of the answer may be found in recent analyses of the effect of unsupervised pre-training (Erhan et al., 2009), which show that it acts as a regularizer that initializes the parameters in a "better" basin of attraction of the optimization procedure, corresponding to an apparent local minimum associated with better generalization. But earlier work (Bengio et al., 2007) had shown that even a purely supervised but greedy layer-wise procedure would give better results.
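The initialization scheme mentioned at the end of the abstract is the paper's "normalized initialization," now widely known as Xavier (Glorot) initialization: each weight matrix is drawn uniformly from [-sqrt(6/(n_in+n_out)), +sqrt(6/(n_in+n_out))], chosen so that activation and gradient variances stay roughly constant across layers. Below is a minimal NumPy sketch; the function name and the layer sizes in the usage example are illustrative, not taken from the paper.

```python
import numpy as np

def normalized_init(n_in, n_out, rng=None):
    """Draw a weight matrix W ~ U[-limit, +limit] with
    limit = sqrt(6 / (n_in + n_out)).

    This keeps the variance of forward activations and of
    back-propagated gradients roughly constant across layers,
    so the singular values of each layer's Jacobian stay close
    to 1 at initialization.
    """
    rng = rng or np.random.default_rng(0)
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

# Illustrative usage: weight matrices for a network with four
# hidden layers of 1000 units (sizes chosen for illustration).
layer_sizes = [784, 1000, 1000, 1000, 1000, 10]
weights = [normalized_init(m, n)
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
```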
So here, instead of focusing on what unsupervised pre-training or semi-supervised criteria bring to deep architectures, we focus on analyzing what may be going wrong with good old (but deep) multilayer neural networks. Our analysis is driven by investigative experiments to monitor activations (watching for saturation of hidden units) and gradients, across layers and across training iterations. We also evaluate the effects on these of choices of activation function (with the idea that it might affect saturation) and initialization procedure (since unsupervised pre-training is a particular form of initialization and it has a drastic impact).
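As a concrete illustration of this kind of monitoring, here is a small NumPy sketch that records per-layer activation statistics during a forward pass through a sigmoid network. It is only a sketch of the general idea: the function name and the 0.05/0.95 saturation thresholds are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_with_stats(x, weights):
    """Forward pass through a sigmoid MLP, recording per-layer
    activation statistics. A mean drifting toward 0 or 1 together
    with a high saturated fraction signals the saturation behaviour
    discussed in the text."""
    stats = []
    h = x
    for W in weights:
        h = sigmoid(h @ W)
        stats.append({
            "mean": float(h.mean()),
            "std": float(h.std()),
            # fraction of units in the flat regions of the sigmoid
            "saturated_frac": float(np.mean((h < 0.05) | (h > 0.95))),
        })
    return h, stats

# Illustrative usage with the weight matrices from the previous sketch:
# x = np.random.default_rng(1).standard_normal((128, 784))
# _, stats = forward_with_stats(x, weights)
# for i, s in enumerate(stats, 1):
#     print(f"layer {i}: mean={s['mean']:.3f} "
#           f"saturated={s['saturated_frac']:.2%}")
```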