1. A Recurrent Neural Network that Learns to Count
Connection Science,
Volume 11,
Issue 1,
1999,
Pages 5-40
PAUL RODRIGUEZ,
JANET WILES,
JEFFREY L ELMAN
Abstract:
Parallel distributed processing (PDP) architectures demonstrate a potentially radical alternative to the traditional theories of language processing that are based on serial computational models. However, learning complex structural relationships in temporal data presents a serious challenge to PDP systems. For example, automata theory dictates that processing strings from a context-free language (CFL) requires a stack or counter memory device. While some PDP models have been hand-crafted to emulate such a device, it is not clear how a neural network might develop such a device when learning a CFL. This research employs standard backpropagation training techniques for a recurrent neural network (RNN) in the task of learning to predict the next character in a simple deterministic CFL (DCFL). We show that an RNN can learn to recognize the structure of a simple DCFL. We use dynamical systems theory to identify how network states reflect that structure by building counters in phase space. The work is an empirical investigation which is complementary to theoretical analyses of network capabilities, yet original in its specific configuration of dynamics involved. The application of dynamical systems theory helps us relate the simulation results to theoretical results, and the learning task enables us to highlight some issues for understanding dynamical systems that process language with counters.
ISSN:0954-0091
DOI:10.1080/095400999116340
Publisher: Taylor & Francis Group
Year: 1999
Data source: Taylor
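The counter-in-phase-space idea from this abstract can be sketched by hand: a single recurrent state that moves one way on 'a' and back on 'b', signalling acceptance when it returns to the origin. This is an illustrative toy (the function name and step size are invented here), not the paper's trained network.

```python
def predict_anbn(string):
    """Return True if the string is in { a^n b^n : n >= 1 }.

    Hand-wired counter sketch: the recurrent state `count` drifts in one
    direction on 'a' and back on 'b'; landing exactly on the origin after
    at least one 'b' signals a well-formed string.
    """
    count = 0.0       # one-dimensional "phase space" state
    seen_b = False
    for ch in string:
        if ch == 'a':
            if seen_b:          # an 'a' after any 'b' breaks a^n b^n order
                return False
            count += 0.5        # 'a' pushes the state outward
        elif ch == 'b':
            seen_b = True
            count -= 0.5        # 'b' pulls it back toward the origin
        else:
            return False        # alphabet is {a, b} only
    return seen_b and abs(count) < 1e-9
```

The trained networks in the paper discover an analogous contraction/expansion dynamic rather than an explicit integer counter.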
2. B-RAAM: A Connectionist Model which Develops Holistic Internal Representations of Symbolic Structures
Connection Science,
Volume 11,
Issue 1,
1999,
Pages 41-71
MARTIN J ADAMSON
Abstract:
Connectionist models have been criticized as seemingly unable to represent the data structures thought necessary to support symbolic processing. However, one class of model, the recursive auto-associative memory (RAAM), has been demonstrated to be capable of compositionally encoding and decoding such symbolic structures as trees, lists and stacks. Despite RAAM's appeal, a number of shortcomings are apparent. These include: the large number of epochs often required to train RAAM models; the size of encoded representation (and, therefore, of hidden layer) needed; a bias in the (fed-back) representation towards more recently presented information; and a cumulative error effect that results from recursively processing the encoded pattern during decoding. In this paper, the RAAM model is modified to form a new encoder/decoder, called bi-coded RAAM (B-RAAM). In bi-coding, there are two mechanisms for holding contextual information: the first is hidden-to-input layer feedback as in RAAM, but extended with a delay line; the second is an output layer which expands dynamically to hold the concatenation of past input symbols. A comprehensive series of experiments is described which demonstrates the superiority of B-RAAM over RAAM in terms of fewer training epochs, a smaller hidden layer, improved ability to represent long-term time dependencies and a reduced cumulative error effect during decoding.
ISSN:0954-0091
DOI:10.1080/095400999116359
Publisher: Taylor & Francis Group
Year: 1999
Data source: Taylor
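B-RAAM's two context mechanisms, as described in the abstract, can be sketched as plain data structures with no learning involved: a bounded delay line of fed-back hidden states, and a dynamically expanding record of past input symbols. The class name, delay length and step interface below are illustrative assumptions, not the paper's design.

```python
from collections import deque

class BiCodingContext:
    """Toy sketch of B-RAAM's two context-holding mechanisms (no learning)."""

    def __init__(self, delay=3):
        # Mechanism 1: hidden-to-input feedback, extended with a delay line
        # (only the most recent `delay` hidden states are retained).
        self.delay_line = deque(maxlen=delay)
        # Mechanism 2: an output record that expands dynamically to hold
        # the concatenation of all past input symbols.
        self.symbol_trace = []

    def step(self, hidden_state, input_symbol):
        self.delay_line.append(hidden_state)
        self.symbol_trace.append(input_symbol)
        # A real network would see the current input alongside the delayed
        # hidden states, and be trained to reproduce the full symbol trace.
        return list(self.delay_line), list(self.symbol_trace)
```

With a delay of 2, stepping through three symbols keeps only the last two hidden states while the symbol trace grows without bound, which is the asymmetry the bi-coding scheme exploits.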
3. Single Neuron Rational Model of Arithmetic and Logic Operations
Connection Science,
Volume 11,
Issue 1,
1999,
Pages 73-90
CHANG N ZHANG
Abstract:
This paper focuses on a phase analysis to explore the potential of single-neuron local arithmetic and logic operations on input conductances. The analysis is based on a rational-function model of local spatial summation with equivalent circuits for steady-state membrane potentials. Prototypes of arithmetic and logic operations are then constructed, with their input and output ranges, by analyzing the conditions for performing these operations. A mapping from a partition of the input conductance space into functionally distinct phases is depicted, and multiple-mode models for arithmetic and logic are then established. This indicates that single-neuron local rational arithmetic and logic is programmable, and that the selection of these functional phases can be effectively instructed by presynaptic activities. This programmability gives the single neuron greater freedom in processing its input information.
ISSN:0954-0091
DOI:10.1080/095400999116368
Publisher: Taylor & Francis Group
Year: 1999
Data source: Taylor
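The steady-state membrane potential behind this kind of model is the standard conductance-weighted average of reversal potentials, which is a rational function of the input conductances. A minimal sketch, with illustrative conductance and reversal values rather than the paper's parameters:

```python
def steady_state_potential(inputs):
    """Steady-state membrane potential of a conductance-based point neuron.

    inputs: list of (conductance, reversal_potential) pairs.
    V = sum(g_k * E_k) / sum(g_k)  --  rational in the conductances.
    """
    numerator = sum(g * e for g, e in inputs)
    denominator = sum(g for g, _ in inputs)
    return numerator / denominator

# One excitatory input against a unit leak at 0 mV:
#   V = (1*0 + 1*60) / (1 + 1) = 30 mV
# Adding a shunting conductance (reversal at rest, E = 0) divides the
# depolarization rather than subtracting from it:
#   V = (1*0 + 1*60 + 2*0) / (1 + 1 + 2) = 15 mV
```

The shunting case is the division-like local operation: the inhibitory input scales the response multiplicatively through the denominator, which is what makes arithmetic on conductances possible in the first place.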
4. A Topological Semantics for Rule Extraction with Neural Networks
Connection Science,
Volume 11,
Issue 1,
1999,
Pages 91-113
MICHAEL J HEALY
Abstract:
Rule extraction with neural networks is a subject of increasing interest. Research in this area could benefit from the availability of a formal model of the semantics of the rules. A model of this kind would express the relationship between the application data, the neural network learning model and the extracted rules with mathematical rigor, allowing systematic analysis and modification of rule extraction approaches and the neural network architectures used. However, formal models of this kind are not in common use. This paper proposes a formal semantic model and includes an analysis of an example rule extraction architecture and some issues raised by it and other architectures. In the formal model, the semantics of a neural network is expressed through a form of model theory based upon concepts from topology, including limit points and continuous functions. A state of adaptation of the neural network in which it has learned a set of rules from training data corresponds to a continuous function between topological systems. Topological systems, the domains of inputs to the network, are a generalization of the concept of a topological space. The results of an example analysis with this model suggest a direction for improvements to the example architecture and the desirability of applying the model to other rule extraction approaches.
ISSN:0954-0091
DOI:10.1080/095400999116377
Publisher: Taylor & Francis Group
Year: 1999
Data source: Taylor