Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods. Part 1

 

Author: Alexei A. Gaivoronski

 

Journal: Optimization Methods and Software (available online 1994)
Volume/Issue: Volume 4, Issue 2

Pages: 117-134

 

ISSN: 1055-6788

 

Year: 1994

 

DOI: 10.1080/10556789408805582

 

Publisher: Gordon and Breach Science Publishers

 

Keywords: Neural network, Backpropagation, Stochastic gradient

 

Data source: Taylor

 

Abstract:

We study the convergence properties of the serial and parallel backpropagation algorithms for training neural nets, as well as their modifications with a momentum term. It is shown that these algorithms can be placed into the general framework of stochastic gradient methods. This permits stochastic and deterministic rules for selecting the components (training examples) of the error function minimized at each iteration to be treated from the same standpoint. We obtain weaker conditions on the stepsize for the deterministic case and provide a quite general synchronization rule for the parallel version.
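The abstract gives no pseudocode; as a minimal sketch of the idea it describes (backpropagation viewed as a stochastic gradient method with a momentum term, where each iteration uses the gradient of one component of the error function, selected either stochastically or by a deterministic cyclic rule), one might write the update loop as below. All names (`grad_error_component`, the `rule` flag) and the diminishing stepsize schedule are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def train(w, examples, grad_error_component, steps=10_000,
          momentum=0.9, rule="stochastic", seed=0):
    """Sketch: backpropagation as a stochastic gradient method.

    Each iteration picks one component (training example) of the error
    function E(w) = sum_i E_i(w) -- stochastically or by a deterministic
    cyclic rule -- and moves the weights along the negative gradient of
    that component, with a momentum (heavy-ball) term.  Details such as
    the stepsize schedule are assumptions for illustration only.
    """
    rng = np.random.default_rng(seed)
    v = np.zeros_like(w)                # momentum ("velocity") term
    n = len(examples)
    for t in range(steps):
        if rule == "stochastic":        # random component selection
            i = rng.integers(n)
        else:                           # deterministic cyclic selection
            i = t % n
        g = grad_error_component(w, examples[i])  # backprop gradient of E_i
        step = 1.0 / (t + 1)            # diminishing stepsize
        v = momentum * v - step * g     # momentum update
        w = w + v
    return w
```

Here `grad_error_component(w, example)` stands for a user-supplied backpropagation routine returning the gradient of the single-example error; swapping `rule` between "stochastic" and the cyclic variant corresponds to the two selection regimes the abstract compares.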

 



