1. Using Relevance to Reduce Network Size Automatically
Connection Science, Volume 1, Issue 1, 1989, Pages 3-16
MICHAEL C. MOZER, PAUL SMOLENSKY
Abstract:
This paper proposes a means of using the knowledge in a network to determine the functionality or relevance of individual units, both for the purpose of understanding the network's behavior and improving its performance. The basic idea is to iteratively train the network to a certain performance criterion, compute a measure of relevance that identifies which input or hidden units are most critical to performance, and automatically remove the least relevant units. This skeletonization technique can be used to simplify networks by eliminating units that convey redundant information; to improve learning performance by first learning with spare hidden units and then removing the unnecessary ones, thereby constraining generalization; and to understand the behavior of networks in terms of minimal ‘rules’.
ISSN:0954-0091
DOI:10.1080/09540098908915626
Publisher: Taylor & Francis Group
Year: 1989
Source: Taylor
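The pruning loop this abstract describes (train to criterion, score each unit's relevance, remove the least relevant) can be illustrated in miniature. This is a hedged sketch with hypothetical names: relevance here is measured by brute-force ablation, whereas the paper's actual measure is a cheaper derivative-based approximation.

```python
def relevance(error_fn, mask):
    """Relevance of each active unit: how much the error rises when it is ablated."""
    base = error_fn(mask)
    rel = {}
    for i, active in enumerate(mask):
        if active:
            trial = list(mask)
            trial[i] = False
            rel[i] = error_fn(trial) - base
    return rel

def skeletonize(error_fn, n_units, criterion):
    """Repeatedly remove the least relevant unit while error stays within criterion."""
    mask = [True] * n_units
    while sum(mask) > 1:
        rel = relevance(error_fn, mask)
        least = min(rel, key=rel.get)
        trial = list(mask)
        trial[least] = False
        if error_fn(trial) > criterion:
            break  # removing any further unit would hurt performance
        mask = trial
    return mask
```

On a toy error function where only one unit matters, the loop strips the redundant units and keeps the critical one.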

2. The Design and Implementation of Marker-passing Systems
Connection Science, Volume 1, Issue 1, 1989, Pages 17-40
JAMES HENDLER
Abstract:
Activation-spreading algorithms have recently been regaining attention in the AI and Cognitive Science communities. One such class of algorithms, those which use marker-passing (the spread of symbolic information over an associative network), has recently been shown to be useful in several different areas of AI. In this paper we review a number of the current approaches to symbolic information spread, examine a set of common issues arising from the implementation of these algorithms, and describe some of the programming techniques and data structures that can be used for such tasks. The details of the implementation of one particular marker-passer are provided both to clarify some of the trade-offs inherent in the design of these algorithms and to serve as an example to those wishing to implement such systems. In addition, we discuss the relationship of symbolic marker-passing to connectionist modeling.
ISSN:0954-0091
DOI:10.1080/09540098908915627
Publisher: Taylor & Francis Group
Year: 1989
Source: Taylor
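A minimal marker-passer of the kind this abstract surveys can be sketched as a depth-limited breadth-first spread, reporting "collisions" where markers from two origin concepts meet. All names are hypothetical, and the sketch omits the attenuation, typed links, and path-recording machinery a real implementation needs.

```python
from collections import deque

def spread(graph, origin, depth_limit):
    """Breadth-first spread of a symbolic marker from origin, up to depth_limit links."""
    marked = {origin: 0}
    frontier = deque([origin])
    while frontier:
        node = frontier.popleft()
        if marked[node] >= depth_limit:
            continue
        for nbr in graph.get(node, ()):
            if nbr not in marked:
                marked[nbr] = marked[node] + 1
                frontier.append(nbr)
    return set(marked)

def collisions(graph, a, b, depth_limit=3):
    """Nodes reached by markers from both origins: a 'marker collision'."""
    return spread(graph, a, depth_limit) & spread(graph, b, depth_limit)
```

For example, markers spread from two concepts over a small associative network collide on their shared neighbour.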

3. Implementations of the C-calculus
Connection Science, Volume 1, Issue 1, 1989, Pages 41-51
EDUARDO R. CAIANIELLO, PATRIK E. EKLUND, ALDO G. S. VENTRE
Abstract:
C-calculus is a method for combining less descriptive information about larger sets to obtain detailed descriptions of smaller sets. This is achieved by iteratively taking products of information in the form of C-sets. A sequential model for implementing the C-calculus for applications in digitized imaging is presented. The sequential model is the basis for a parallel implementation, which is evaluated for efficiency and applicability.
ISSN:0954-0091
DOI:10.1080/09540098908915628
Publisher: Taylor & Francis Group
Year: 1989
Source: Taylor

4. Tensor Product Production System: a Modular Architecture and Representation
Connection Science, Volume 1, Issue 1, 1989, Pages 53-68
CHARLES P. DOLAN, PAUL SMOLENSKY
Abstract:
Can connectionist networks effectively represent and process structure? A technique called ‘tensor product representations’, which formalizes and generalizes the approaches of several previous connectionist models, was developed by Smolensky and shown to possess a number of desirable general properties. This paper shows how the technique can be effectively used to design a system for a specific symbol-processing task: the serial execution of simple production rules requiring pattern matching, variable binding and structure manipulation. This ‘Tensor Product Production System’ is applied to one of the classes of production rules in Touretzky and Hinton's Distributed Connectionist Production System, and a number of comparisons are made between the two approaches. The mathematical simplicity and analyzability of the tensor product scheme allows the straightforward design of a simpler, more principled, and in some ways more efficient system.
ISSN:0954-0091
DOI:10.1080/09540098908915629
Publisher: Taylor & Francis Group
Year: 1989
Source: Taylor
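The core binding operation the paper builds on can be shown in miniature: bind a filler vector to a role vector with an outer product, superimpose bindings by addition, and unbind with the role vector (exact when roles are orthonormal). This is a sketch of the general tensor product technique with hypothetical names, not the paper's production system.

```python
def bind(filler, role):
    """Bind a filler vector to a role vector via the outer product."""
    return [[f * r for r in role] for f in filler]

def superpose(t1, t2):
    """Superimpose two bindings by elementwise addition."""
    return [[a + b for a, b in zip(row1, row2)] for row1, row2 in zip(t1, t2)]

def unbind(tensor, role):
    """Recover a filler by inner product with its role vector.
    Exact only when the role vectors are orthonormal."""
    return [sum(row[j] * role[j] for j in range(len(role))) for row in tensor]
```

With orthonormal roles, each filler is recovered exactly from the superposition, which is what makes the scheme analyzable.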

5. Learning Mechanisms which Construct Neighbourhood Representations
Connection Science, Volume 1, Issue 1, 1989, Pages 69-85
CHRISTOPHER J. THORNTON
Abstract:
Learning is currently the focus of much research activity in cognitive science. But, typically, this research is oriented towards either the symbol-processing paradigm or the connectionist paradigm and therefore tends to generate models of two quite different types. A satisfactory theory of learning will, presumably, deal with the phenomenon in general terms rather than in terms of two special cases, so there would appear to be a need to try to identify abstractions which generalize the two types of model typically produced. The aim of the present paper is to identify one such abstraction. It puts forward a framework in which the behaviours of two symbol-processing learning mechanisms are directly commensurable with the behaviours of three connectionist learning mechanisms and shows that there is at least one theoretical constraint affecting all mechanisms covered by the framework.
ISSN:0954-0091
DOI:10.1080/09540098908915630
Publisher: Taylor & Francis Group
Year: 1989
Source: Taylor

6. Experimental Analysis of the Real-time Recurrent Learning Algorithm
Connection Science, Volume 1, Issue 1, 1989, Pages 87-111
RONALD J. WILLIAMS, DAVID ZIPSER
Abstract:
The real-time recurrent learning algorithm is a gradient-following learning algorithm for completely recurrent networks running in continually sampled time. Here we use a series of simulation experiments to investigate the power and properties of this algorithm. In the recurrent networks studied here, any unit can be connected to any other, and any unit can receive external input. These networks run continually in the sense that they sample their inputs on every update cycle, and any unit can have a training target on any cycle. The storage required and computation time on each step are independent of time and are completely determined by the size of the network, so no prior knowledge of the temporal structure of the task being learned is required. The algorithm is nonlocal in the sense that each unit must have knowledge of the complete recurrent weight matrix and error vector. The algorithm is computationally intensive in sequential computers, requiring a storage capacity of the order of the third power of the number of units and a computation time on each cycle of the order of the fourth power of the number of units. The simulations include examples in which networks are taught tasks not possible with tapped delay lines—that is, tasks that require the preservation of state over potentially unbounded periods of time. The most complex example of this kind is learning to emulate a Turing machine that does a parenthesis balancing problem. Examples are also given of networks that do feedforward computations with unknown delays, requiring them to organize into networks with the correct number of layers. Finally, examples are given in which networks are trained to oscillate in various ways, including sinusoidal oscillation.
ISSN:0954-0091
DOI:10.1080/09540098908915631
Publisher: Taylor & Francis Group
Year: 1989
Source: Taylor
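The update this abstract analyzes, with its cubic storage and quartic per-step cost in the number of units, can be sketched as follows. The sketch assumes logistic units and a squared-error teacher; function and variable names are hypothetical, and this is an illustration of the technique rather than the authors' code.

```python
import math

def rtrl_step(w, y, x, p, target, lr):
    """One update of real-time recurrent learning (RTRL).

    w: n x (n+m) weights; y: current unit outputs (n); x: external inputs (m);
    p: sensitivity tensor p[k][i][j] = dy_k/dw_ij; target: per-unit targets
    (None where a unit has no target on this cycle). Storage is O(n^3) and
    work per step is O(n^4), matching the abstract's complexity figures.
    """
    n, m = len(y), len(x)
    z = y + x  # unit outputs followed by external inputs
    s = [sum(w[k][l] * z[l] for l in range(n + m)) for k in range(n)]
    y_new = [1.0 / (1.0 + math.exp(-sk)) for sk in s]
    fprime = [yk * (1.0 - yk) for yk in y_new]
    # propagate sensitivities through the recurrent dynamics
    p_new = [[[fprime[k] * (sum(w[k][l] * p[l][i][j] for l in range(n))
                            + (z[j] if k == i else 0.0))
               for j in range(n + m)]
              for i in range(n)]
             for k in range(n)]
    # gradient-following weight change from the current error
    e = [(target[k] - y_new[k]) if target[k] is not None else 0.0
         for k in range(n)]
    for i in range(n):
        for j in range(n + m):
            w[i][j] += lr * sum(e[k] * p_new[k][i][j] for k in range(n))
    return y_new, p_new
```

Because both weights and sensitivities are updated on every cycle, no prior knowledge of the task's temporal structure is needed: a single self-recurrent unit, for instance, can be driven toward a constant target online.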