1. The Emergence of a 'Language' in an Evolving Population of Neural Networks
Connection Science, Volume 10, Issue 2, 1998, Pages 83-97
ANGELO CANGELOSI, DOMENICO PARISI
Abstract:
The evolution of language implies the parallel evolution of an ability to respond appropriately to signals (language understanding) and an ability to produce the appropriate signals in the appropriate circumstances (language production). When linguistic signals are produced to inform other individuals, individuals that respond appropriately to these signals may increase their reproductive chances, but it is less clear what the reproductive advantage is for the language producers. We present simulations in which populations of neural networks living in an environment evolve a simple language with an informative function. Signals are produced to help other individuals categorize edible and poisonous mushrooms, in order to decide whether to approach or avoid encountered mushrooms. Language production, while not under direct evolutionary pressure, evolves as a byproduct of the independently evolving perceptual ability to categorize mushrooms.
ISSN: 0954-0091
DOI: 10.1080/095400998116512
Publisher: Taylor & Francis Group
Year: 1998
Data source: Taylor
2. Recovery of Unrehearsed Items in Connectionist Models
Connection Science, Volume 10, Issue 2, 1998, Pages 99-119
PAUL W. B. ATKINS, JAAP M. J. MURRE
Abstract:
When gradient-descent models with hidden units are retrained on a portion of a previously learned set of items, performance on both the relearned and unrelearned items improves. Previous explanations of this phenomenon have not adequately distinguished recovery, which is dependent on original learning, from generalization, which is independent of original learning. Using a measure of vector similarity to track global changes in the weight state of three-layer networks, we show that (a) unlike in networks without hidden units, recovery occurs in the absence of generalization in networks with hidden units, and (b) when the conditions of learning are varied, changes in the extent of recovery are reflected in changes in the extent to which the weights move back towards their values held after original learning. The implications of this work for rehabilitation studies, human relearning and models of human long-term memory are also considered.
ISSN: 0954-0091
DOI: 10.1080/095400998116521
Publisher: Taylor & Francis Group
Year: 1998
Data source: Taylor
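The abstract's core manipulation (original learning, interfering learning, then relearning a subset while tracking a vector-similarity measure over the weight state) can be sketched in a minimal way. The three-layer network, toy item sets, and every name below are illustrative assumptions in plain NumPy, not the authors' actual simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(W1, W2, X):
    h = sigmoid(X @ W1)          # hidden-layer activations
    return h, sigmoid(h @ W2)    # output activations

def train(W1, W2, X, Y, epochs=3000, lr=0.5):
    """Full-batch gradient descent on squared error (backprop)."""
    for _ in range(epochs):
        h, y = forward(W1, W2, X)
        d_out = (y - Y) * y * (1 - y)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_hid
    return W1, W2

def flat(W1, W2):
    """All weights as one vector, so the whole state can be compared."""
    return np.concatenate([W1.ravel(), W2.ravel()])

def cos(a, b):
    """Cosine similarity: one possible vector-similarity measure."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mse(W1, W2, X, Y):
    return float(np.mean((forward(W1, W2, X)[1] - Y) ** 2))

# Toy item set: random binary input -> output associations.
X = rng.integers(0, 2, (8, 10)).astype(float)
Y = rng.integers(0, 2, (8, 6)).astype(float)
W1 = rng.normal(0, 0.3, (10, 12))
W2 = rng.normal(0, 0.3, (12, 6))

# Original learning; remember the weight state it reaches.
W1, W2 = train(W1, W2, X, Y)
w_orig = flat(W1, W2)

# Interfering learning on a different item set moves the weights away.
Xi = rng.integers(0, 2, (8, 10)).astype(float)
Yi = rng.integers(0, 2, (8, 6)).astype(float)
W1, W2 = train(W1, W2, Xi, Yi)
cos_after_interference = cos(flat(W1, W2), w_orig)

# Relearn only half of the original items ...
err_before = mse(W1, W2, X[4:], Y[4:])  # error on the unrehearsed half
W1, W2 = train(W1, W2, X[:4], Y[:4])
# ... then ask: did the weights move back toward the original state,
# and did performance on the unrehearsed items change?
cos_after_relearning = cos(flat(W1, W2), w_orig)
err_after = mse(W1, W2, X[4:], Y[4:])
```

Comparing `cos_after_relearning` against `cos_after_interference`, and `err_after` against `err_before`, mirrors the paper's question of whether recovery of unrehearsed items tracks movement of the weights back toward their post-original-learning values.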
3. Catastrophic Forgetting and the Pseudorehearsal Solution in Hopfield-type Networks
Connection Science, Volume 10, Issue 2, 1998, Pages 121-135
ANTHONY ROBINS, SIMON McCALLUM
Abstract:
Pseudorehearsal is a mechanism proposed by Robins which alleviates catastrophic forgetting in multi-layer perceptron networks. In this paper, we extend the exploration of pseudorehearsal to a Hopfield-type net. The same general principles apply: old information can be rehearsed if it is available, and if it is not available, then generating and rehearsing approximations of old information that 'map' the behaviour of the network can also be effective at preserving the actual old information itself. The details of the pseudorehearsal mechanism, however, benefit from being adapted to the dynamics of Hopfield nets so as to exploit the extra attractors created in state space during learning. These attractors are usually described as 'spurious' or 'cross-talk', and regarded as undesirable, interfering with the retention of the trained population items. Our simulations have shown that, in another sense, such attractors can in fact be useful in preserving the learned population. In general terms, a solution to the catastrophic forgetting problem enables the on-going or sequential learning of information in artificial neural networks, and consequently also provides a framework for the modelling of lifelong learning/developmental effects in cognition.
ISSN: 0954-0091
DOI: 10.1080/095400998116530
Publisher: Taylor & Francis Group
Year: 1998
Data source: Taylor
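The mechanism the abstract describes — probing a trained Hopfield net with random states, harvesting the attractors it settles into (including 'spurious' ones), and rehearsing those alongside new items — can be sketched as follows. This is a simplified illustration (one-shot Hebbian relearning from scratch rather than the paper's actual adapted procedure), with all sizes and names chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of units

def hebbian_weights(patterns):
    """Standard Hopfield (Hebbian) weight matrix for +/-1 patterns."""
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def settle(W, state, sweeps=20):
    """Asynchronous threshold updates until the state sits in an attractor."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Original population of items (assumed unavailable at relearning time).
old_items = [rng.choice([-1, 1], size=N) for _ in range(3)]
W = hebbian_weights(old_items)

# Pseudorehearsal: probe with random states and harvest whatever
# attractors the trained net settles into -- trained items and
# 'spurious'/'cross-talk' attractors alike -- as a behavioural 'map'.
pseudo_items = [settle(W, rng.choice([-1, 1], size=N)) for _ in range(8)]

# Learn new items together with the pseudo-items, never touching the
# actual old items.
new_items = [rng.choice([-1, 1], size=N) for _ in range(2)]
W_new = hebbian_weights(new_items + pseudo_items)

# How well does an old item survive as an attractor of the new net?
recovered = settle(W_new, old_items[0])
overlap = abs(recovered @ old_items[0]) / N  # 1.0 = perfect retention
```

The point of the sketch is that the pseudo-items sample the attractor structure the old items carved into state space, so rehearsing them pulls the relearned weights toward a configuration that still accommodates the old population — which is why attractors usually dismissed as spurious can be useful here.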
4. Are Feedforward and Recurrent Networks Systematic? Analysis and Implications for a Connectionist Cognitive Architecture
Connection Science, Volume 10, Issue 2, 1998, Pages 137-160
Steven Phillips
Abstract:
Human cognition is said to be systematic: cognitive ability generalizes to structurally related behaviours. The connectionist approach to cognitive theorizing has been strongly criticized for its failure to explain systematicity. Demonstrations of generalization notwithstanding, I show that two widely used networks (feedforward and recurrent) do not support systematicity under the condition of local input/output representations. For a connectionist explanation of systematicity, these results leave two choices: either (1) develop models capable of systematicity under local input/output representations or (2) justify the choice of similarity-based (non-local) component representations sufficient for systematicity.
ISSN: 0954-0091
DOI: 10.1080/095400998116549
Publisher: Taylor & Francis Group
Year: 1998
Data source: Taylor