Physics
Combinatorics
Probability distribution
Succession
Order (exchange)
Distribution (mathematics)
Statistical physics
Mathematics
Mathematical analysis
Statistics
Finance
Economics
Authors
Minping Qian, Guanglu Gong, J. W. Clark
Source
Journal: Physical Review A
[American Physical Society]
Date: 1991-01-01
Volume/Issue: 43 (2): 1061-1070
Citations: 19
Identifier
DOI: 10.1103/physreva.43.1061
Abstract
The dynamics of a probabilistic neural network is characterized by the distribution $\nu(x' \| x)$ of successor states $x'$ of an arbitrary state $x$ of the network. A prescribed memory or behavior pattern is represented in terms of an ordered sequence of network states $x^{(1)}, x^{(2)}, \ldots, x^{(l)}$. A successful procedure for learning this pattern must modify the neuronal interactions in such a way that the dynamical successor of $x^{(s)}$ is likely to be $x^{(s+1)}$, with $x^{(l+1)} = x^{(1)}$. The relative entropy $G$ of the probability distribution $\delta_{x^{(s+1)},x'}$ concentrated at the desired successor state, evaluated with respect to the dynamical distribution $\nu(x' \| x^{(s)})$, is used to quantify this criterion, by providing a measure of the distance between actual and ideal probability distributions. Minimization of $G$ subject to appropriate resource constraints leads to "optimal" learning rules for pairwise and higher-order neuronal interactions. The degree to which optimality is approached by simple learning rules in current use is considered, and it is found, in particular, that the algorithm adopted in the Hopfield model is more effective in minimizing $G$ than the original Hebb law.
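The learning criterion in the abstract can be made concrete with a small numerical sketch. Because the target distribution is a point mass $\delta_{x^{(s+1)},x'}$, the relative entropy reduces to $G = -\log \nu(x^{(s+1)} \| x^{(s)})$, so minimizing $G$ means maximizing the probability that the dynamics produces the desired successor. The function name and the toy probabilities below are illustrative, not taken from the paper:

```python
import numpy as np

def relative_entropy_to_target(nu, target):
    """G = D(delta_target || nu) for a point-mass target distribution.

    For a delta distribution concentrated at `target`, the KL divergence
    sum_x' delta(x') * log(delta(x') / nu(x')) collapses to -log nu(target).
    """
    return -np.log(nu[target])

# Hypothetical dynamical distribution nu(x' || x^(s)) over 4 network states
nu = np.array([0.1, 0.6, 0.2, 0.1])
desired = 1  # index of the desired successor state x^(s+1)

G = relative_entropy_to_target(nu, desired)
# G shrinks as learning shifts probability mass onto the desired successor
```

In this picture, an "optimal" learning rule adjusts the neuronal interactions so that, for each step $s$ of the stored sequence, the dynamical distribution concentrates on $x^{(s+1)}$, driving each $G$ toward zero subject to the resource constraints.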