B-RAAM: A Connectionist Model which Develops Holistic Internal Representations of Symbolic Structures

 

Author: MARTIN J ADAMSON

 

Journal: Connection Science (available online 1999)
Volume/Issue: Volume 11, Issue 1

Pages: 41-71

 

ISSN: 0954-0091

 

Year: 1999

 

DOI: 10.1080/095400999116359

 

Publisher: Taylor & Francis Group

 

Keywords: RAAM; B-RAAM; Recurrent Neural Networks; Auto-associator; Symbolic Processing

 


 

Abstract:

Connectionist models have been criticized as seemingly unable to represent the data structures thought necessary to support symbolic processing. However, one class of model, recursive auto-associative memory (RAAM), has been demonstrated to be capable of compositionally encoding/decoding such symbolic structures as trees, lists and stacks. Despite RAAM's appeal, a number of shortcomings are apparent. These include: the large number of epochs often required to train RAAM models; the size of encoded representation (and, therefore, of hidden layer) needed; a bias in the (fed-back) representation towards more recently presented information; and a cumulative error effect that results from recursively processing the encoded pattern during decoding. In this paper, the RAAM model is modified to form a new encoder/decoder, called bi-coded RAAM (B-RAAM). In bi-coding, there are two mechanisms for holding contextual information: the first is hidden-to-input layer feedback as in RAAM, but extended with a delay line; the second is an output layer which expands dynamically to hold the concatenation of past input symbols. A comprehensive series of experiments is described which demonstrates the superiority of B-RAAM over RAAM in terms of fewer training epochs, a smaller hidden layer, improved ability to represent long-term time dependencies and a reduction of the cumulative error effect during decoding.
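To make the baseline concrete, the following is a minimal sketch of the standard RAAM idea that B-RAAM extends: an auto-associator compresses the concatenation of two child vectors into a single hidden vector the same width as one child, so trees can be encoded recursively into a fixed-width holistic representation. The weights here are random and untrained, and all dimensions and names are illustrative assumptions; the sketch shows only the encode/decode structure, not the paper's B-RAAM architecture or its learned performance.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # width of one symbol / one encoded subtree (assumed for illustration)

# Encoder maps 2*DIM -> DIM; decoder maps DIM -> 2*DIM (untrained weights).
W_enc = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
W_dec = rng.normal(scale=0.1, size=(2 * DIM, DIM))

def encode(left, right):
    """Compress two DIM-wide child vectors into one DIM-wide code."""
    return np.tanh(W_enc @ np.concatenate([left, right]))

def decode(code):
    """Expand a DIM-wide code back into (left, right) reconstructions."""
    out = np.tanh(W_dec @ code)
    return out[:DIM], out[DIM:]

# Recursively encode the tree ((A B) C); leaves are random symbol vectors.
A, B, C = (rng.normal(size=DIM) for _ in range(3))
AB = encode(A, B)           # code for the subtree (A B)
root = encode(AB, C)        # code for the whole tree ((A B) C)

left, right = decode(root)  # one decoding step yields the two child codes
print(root.shape)           # the whole tree fits in one DIM-wide vector
```

Decoding a deep tree applies `decode` recursively to its own approximate outputs, which is the source of the cumulative error effect the abstract says B-RAAM reduces.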

 
