Author
George Bebis, Michael Georgiopoulos
Source
Journal: IEEE Potentials
[Institute of Electrical and Electronics Engineers]
Date: 1994-10-01
Volume/Issue: 13 (4): 27-31
Cited by: 367
Abstract
One critical aspect neural network designers face today is choosing an appropriate network size for a given application. For layered neural network architectures, network size comprises the number of layers, the number of nodes per layer, and the number of connections. Roughly speaking, a neural network implements a nonlinear mapping u = G(x). The mapping G is established during a training phase in which the network learns to correctly associate input patterns x with output patterns u. Given a set of training examples (x, u), there is probably an infinite number of networks of different sizes that can learn to map input patterns x into output patterns u. The question is: which network size is more appropriate for a given problem? Unfortunately, the answer is not always obvious. Many researchers agree that the quality of a solution found by a neural network depends strongly on the network size used. In general, network size affects network complexity and learning time. It also affects the generalization capability of the network, that is, its ability to produce accurate results on patterns outside its training set.
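To make the abstract's notion of "network size" concrete, the sketch below (an illustration assumed here, not code from the paper) builds a layered feed-forward network as a list of weight matrices: the layer widths determine the number of nodes and connections, and the forward pass realizes the nonlinear mapping u = G(x). The function names and the [3, 4, 2] / [3, 16, 16, 2] architectures are hypothetical examples.

```python
import numpy as np

def init_network(layer_sizes, seed=0):
    # Hypothetical helper: random weights and zero biases for each
    # consecutive pair of layers in a layered feed-forward network.
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
            for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    # Compute u = G(x): tanh hidden layers, linear output layer.
    for i, (W, b) in enumerate(params):
        x = W @ x + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

def num_parameters(layer_sizes):
    # Weights plus biases: one simple measure of network size.
    return sum(m * n + m for n, m in zip(layer_sizes[:-1], layer_sizes[1:]))

# Two networks of very different sizes can both map x in R^3 to u in R^2,
# illustrating the abstract's point that many sizes fit the same task.
small = init_network([3, 4, 2])
large = init_network([3, 16, 16, 2])
x = np.ones(3)
print(forward(small, x).shape)          # (2,)
print(forward(large, x).shape)          # (2,)
print(num_parameters([3, 4, 2]))        # 26
print(num_parameters([3, 16, 16, 2]))   # 370
```

Both networks implement some mapping G of the same input/output dimensions, yet differ by more than an order of magnitude in parameter count, which is exactly the size trade-off (complexity, learning time, generalization) the abstract discusses.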