1. The Connectionist Construction of Psychological Spaces
Connection Science, Volume 9, Issue 4, 1997, Pages 323-352
Michael D. Lee
Abstract:
The application of connectionist learning procedures to the development of psychological internal representations requires a constraining theory of mental structure. The psychological space construct is advanced for this role and, consequently, a connectionist network which learns the multi-dimensionally scaled representations of a set of stimuli is developed. The model assumes that the function relating similarity to distance in psychological space is an exponential decay function, operates under the family of Minkowskian metrics and is able to determine the appropriate dimensionality of the psychological spaces it derives. The model is demonstrated on both separable and integral stimuli, and the validity of its application of gradient descent optimization principles over the city-block metric is examined. Several modelling extensions are discussed, including means by which the model might learn more general psychophysical mappings, and be able to derive internally the measures of psychological similarity currently provided through a similarity matrix.
ISSN:0954-0091
DOI:10.1080/095400997116586
Publisher: Taylor & Francis Group
Year: 1997
Data source: Taylor
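The similarity-distance assumption described in the abstract above (similarity decays exponentially with distance in psychological space, with distance measured under a Minkowski metric) can be sketched as follows. This is a minimal illustration of the stated assumptions, not the paper's model; the function names and stimulus coordinates are ours.

```python
import math

def minkowski_distance(x, y, r):
    """Minkowski (power) metric: r=1 gives the city-block metric,
    r=2 the Euclidean metric."""
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)

def similarity(x, y, r=1):
    """Exponential decay of similarity with psychological distance."""
    return math.exp(-minkowski_distance(x, y, r))

# Identical stimuli have similarity 1; similarity decays with distance.
p, q = (0.0, 0.0), (1.0, 1.0)
city_block = similarity(p, q, r=1)  # exp(-2)
euclidean = similarity(p, q, r=2)   # exp(-sqrt(2))
```

The choice of r matters psychologically: the city-block metric (r=1) is conventionally associated with separable stimulus dimensions and the Euclidean metric (r=2) with integral ones, which is why the abstract stresses both stimulus types.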
2. Pseudo-recurrent Connectionist Networks: An Approach to the 'Sensitivity-Stability' Dilemma
Connection Science, Volume 9, Issue 4, 1997, Pages 353-380
Robert M. French
Abstract:
In order to solve the 'sensitivity-stability' problem, and its immediate correlate, the problem of sequential learning, it is crucial to develop connectionist architectures that are simultaneously sensitive to, but not excessively disrupted by, new input. French (1992) suggested that to alleviate a particularly severe form of this disruption, catastrophic forgetting, it was necessary for networks to dynamically separate their internal representations during learning. McClelland et al. (1995) went even further. They suggested that nature's way of implementing this obligatory separation was the evolution of two separate areas of the brain, the hippocampus and the neocortex. In keeping with this idea of radical separation, a 'pseudo-recurrent' memory model is presented here that partitions a connectionist network into two functionally distinct, but continually interacting, areas. One area serves as a final-storage area for representations; the other is an early-processing area where new representations are first learned by the system. The final-storage area continually supplies internally generated patterns (pseudopatterns; Robins, 1995), which are approximations of its content, to the early-processing area, where they are interleaved with the new patterns to be learned. Transfer of the new learning is done either by weight-copying from the early-processing area to the final-storage area or by pseudopattern transfer. A number of experiments are presented that demonstrate the effectiveness of this approach, allowing, in particular, effective sequential learning with gradual forgetting in the presence of new input. Finally, it is shown that the two interacting areas automatically produce representational compaction, and it is suggested that similar representational streamlining may exist in the brain.
ISSN:0954-0091
DOI:10.1080/095400997116595
Publisher: Taylor & Francis Group
Year: 1997
Data source: Taylor
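The pseudopattern-rehearsal idea described in the abstract above can be sketched in a few lines: probe the final-storage network with random inputs to approximate its stored content, then interleave those pseudopatterns with the new items before training the early-processing area. The stand-in "network" and helper names below are illustrative assumptions, not the paper's implementation.

```python
import random

def make_pseudopatterns(network, n_items, input_dim, rng):
    """Probe the final-storage network with random inputs; the resulting
    input -> output pairs (pseudopatterns) approximate its content."""
    pseudo = []
    for _ in range(n_items):
        x = [rng.random() for _ in range(input_dim)]
        pseudo.append((x, network(x)))
    return pseudo

def interleave(new_patterns, pseudopatterns, rng):
    """Mix the new items with pseudopatterns so that learning the new
    items does not catastrophically overwrite old knowledge."""
    batch = list(new_patterns) + list(pseudopatterns)
    rng.shuffle(batch)
    return batch

# Stand-in for a trained final-storage network (here, just a function).
stored = lambda x: [sum(x)]
rng = random.Random(0)
pseudo = make_pseudopatterns(stored, 3, 2, rng)
batch = interleave([([1.0, 0.0], [1.0])], pseudo, rng)
```

The resulting mixed batch would then be used to train the early-processing network, after which the new learning is transferred back to final storage (by weight-copying or a further round of pseudopattern transfer, per the abstract).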
3. Dynamics of Noisy Neural Nets with Chemical Markers and Gaussian-distributed Connectivities
Connection Science, Volume 9, Issue 4, 1997, Pages 381-404
A. Kotini, P. A. Anninos
Abstract:
We have previously investigated the dynamics of probabilistic neural nets with chemical markers and a Gaussian distribution of connectivities of the constituent neurons. These investigations have shown that the change from a Poisson to a Gaussian distribution may cause a net to change class. We have now generalized these studies by considering the intrinsic noise of the systems, caused by the spontaneous release of synaptic transmitter substance. A simple mathematical model is developed, the dynamics of which are compared with the Poisson model.
ISSN:0954-0091
DOI:10.1080/095400997116603
Publisher: Taylor & Francis Group
Year: 1997
Data source: Taylor
4. Combining Boolean Neural Architectures for Image Recognition
Connection Science, Volume 9, Issue 4, 1997, Pages 405-418
A. De Carvalho, M. C. Fairhurst, D. L. Bisset
Abstract:
This paper presents a completely integrated Boolean neural architecture, in which a self-organizing Boolean neural network (SOFT) is used as a front-end processor to a feedforward Boolean network based on goal-seeking principles (GSNf). The paper evaluates the advantages of the integrated SOFT-GSNf architecture over GSNf alone by demonstrating its increased effectiveness in an optical character recognition task.
ISSN:0954-0091
DOI:10.1080/095400997116612
Publisher: Taylor & Francis Group
Year: 1997
Data source: Taylor