Keywords: Backpropagation · Computer Science · Artificial Intelligence · Artificial Neural Networks · Machine Learning · Probabilistic Logic · Scalability · Overfitting · Bayesian Networks · Bayesian Probability · Databases
Authors
José Miguel Hernández-Lobato, Ryan P. Adams
Source
Venue: International Conference on Machine Learning
Date: 2015-07-06
Pages: 1861-1869
Citations: 468
Abstract
Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large datasets and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights.
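The abstract's phrase "forward propagation of probabilities" refers to pushing distributions, rather than point values, through the network: with Gaussian weight posteriors, the mean and variance of each layer's pre-activations can be computed by moment matching. The sketch below illustrates this idea for a single linear layer only; it is a simplified illustration under an independence assumption, not the paper's full PBP algorithm (which also covers the ReLU nonlinearity and the assumed-density filtering updates). All function and variable names here are hypothetical.

```python
import numpy as np

def forward_linear_moments(mu_x, var_x, M, V):
    """Moment-matched forward pass through one linear layer.

    Weights are modeled as independent Gaussians with means M and
    variances V (same shape); inputs have elementwise means mu_x and
    variances var_x. Assuming independence between weights and inputs,
    the pre-activation mean and variance are:
        E[a]   = M @ mu_x
        Var[a] = V @ (mu_x**2 + var_x) + (M**2) @ var_x
    This is a sketch of the "propagate probabilities forward" idea,
    not the complete PBP update from the paper.
    """
    mean_out = M @ mu_x
    var_out = V @ (mu_x ** 2 + var_x) + (M ** 2) @ var_x
    return mean_out, var_out
```

When all variances are zero this reduces to an ordinary deterministic forward pass, which is a useful sanity check; PBP then backpropagates gradients of the log marginal likelihood through these moment computations to update the weight means and variances.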