Keywords
Prior probability
Bayesian probability
Computer science
Bayesian linear regression
Variable-order Bayesian network
Artificial intelligence
Generalization
Bayesian averaging
Machine learning
Artificial neural network
Bayesian experimental design
Bayesian inference
Mathematics
Mathematical analysis
Authors
Jouko Lampinen, Aki Vehtari
Identifier
DOI:10.1016/s0893-6080(00)00098-8
Abstract
We give a short review of the Bayesian approach to neural network learning and demonstrate the advantages of the approach in three real applications. We discuss the Bayesian approach with emphasis on the role of prior knowledge in Bayesian models and in classical error-minimization approaches. The generalization capability of a statistical model, classical or Bayesian, is ultimately based on the prior assumptions. The Bayesian approach permits propagation of uncertainty from quantities which are unknown to other assumptions in the model, which may be more generally valid or easier to guess in the problem. The case problems studied in this paper include a regression, a classification, and an inverse problem. In the most thoroughly analyzed regression problem, the best models were those with less restrictive priors. This emphasizes a major advantage of the Bayesian approach: we are not forced to guess attributes that are unknown, such as the number of degrees of freedom in the model, the non-linearity of the model with respect to each input variable, or the exact form of the distribution of the model residuals.
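The abstract's central point, that Bayesian models encode prior assumptions explicitly and propagate uncertainty into predictions, can be illustrated with the simplest case among the page's keywords: conjugate Bayesian linear regression. The sketch below is a minimal, hypothetical example (toy data, a Gaussian prior with precision `alpha`, and a noise precision `beta` assumed known), not the paper's own method; the closed-form posterior and predictive variance follow the standard conjugate formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: y = 0.5 + 2*x + Gaussian noise (hypothetical example)
x = rng.uniform(-1, 1, 20)
X = np.column_stack([np.ones(20), x])          # design matrix: bias + feature
true_w = np.array([0.5, 2.0])
y = X @ true_w + rng.normal(0.0, 0.1, 20)

alpha = 1.0    # prior precision on the weights (the "prior assumption")
beta = 100.0   # noise precision, assumed known here

# Posterior over weights is Gaussian N(m_N, S_N):
#   S_N = (alpha*I + beta*X^T X)^{-1},  m_N = beta * S_N X^T y
S_N = np.linalg.inv(alpha * np.eye(2) + beta * X.T @ X)
m_N = beta * S_N @ X.T @ y

# Predictive distribution at a new input: uncertainty in the weights
# propagates into the predictive variance on top of the noise floor 1/beta.
x_star = np.array([1.0, 0.5])
pred_mean = x_star @ m_N
pred_var = 1.0 / beta + x_star @ S_N @ x_star
```

With enough data the posterior mean `m_N` approaches the least-squares fit, while `pred_var` always exceeds the noise variance `1/beta` by the contribution of remaining weight uncertainty, which is exactly the uncertainty propagation the abstract refers to.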