Internal Sigmoid Dynamics in Feedforward Neural Networks
Author:
KARL GUSTAFSON
Journal:
Connection Science
(Taylor & Francis, available online 1998)
Volume/Issue:
Volume 10,
Issue 1
Pages: 43-73
ISSN: 0954-0091
Year: 1998
DOI: 10.1080/095400998116576
Publisher: Taylor & Francis Group
Keywords: Feedforward Neural Networks; Sigmoid Thresholding; Quadratic Dynamics; Chaos; Ergodicity; Backpropagation Learning
Source: Taylor
Abstract:
Departing from the customary view of the sigmoid thresholding function as a smooth transition non-linearity introduced into multi-layer perceptron (MLP) networks to permit a continuously differentiable, albeit slow, gradient descent toward an optimal solution minimizing some error norm, here a different, more fundamental viewpoint is proposed: the intrinsic dynamics throughout the network become those of the quadratic map of chaos theory. This new viewpoint enables valuable insights into the initial, intermediate and final dynamics of supervised learning algorithms such as the widely used backpropagation scheme. More specifically, although approximately, the weight changes in these three learning stages correspond to the three regimes of the quadratic map: fluctuation, periodicity and fixed points. The purpose of this paper is to examine this basic idea, to support it theoretically, by example and through the literature, and to suggest the next steps in its further investigation.
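The three quadratic-map regimes the abstract refers to can be observed directly in the logistic map x_{n+1} = r·x_n·(1 − x_n), the standard form of the quadratic map. The following sketch (not from the paper; parameter values and the `regime` classifier are illustrative assumptions) shows parameter values producing fluctuating, periodic and fixed-point behaviour, the regimes the paper associates with the initial, intermediate and final stages of learning:

```python
def regime(r, x0=0.2, burn_in=1000, sample=64, tol=1e-6):
    """Classify the long-run behaviour of the quadratic (logistic) map
    x -> r*x*(1-x) at parameter r, starting from x0."""
    x = x0
    # Discard transients so only the attractor remains.
    for _ in range(burn_in):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(sample):
        x = r * x * (1 - x)
        orbit.append(x)
    # Fixed point: all sampled iterates coincide.
    if max(orbit) - min(orbit) < tol:
        return "fixed point"
    # Period-2 cycle: iterates alternate between two values.
    evens, odds = orbit[0::2], orbit[1::2]
    if max(evens) - min(evens) < tol and max(odds) - min(odds) < tol:
        return "periodic"
    # Otherwise the orbit keeps wandering (fluctuating/chaotic regime).
    return "fluctuating"

print(regime(2.8))  # fixed point  (final, converged stage)
print(regime(3.2))  # periodic     (intermediate stage)
print(regime(3.9))  # fluctuating  (initial stage)
```

For r = 2.8 the orbit settles on the stable fixed point x* = 1 − 1/r; at r = 3.2 it locks onto a period-2 cycle; at r = 3.9 it fluctuates chaotically, matching the loose correspondence with learning stages that the paper proposes.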