1. Evaluation of a retrieval system using content addressable memory
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 1-9
Masaki Hashizume,
Hirosuke Yamamoto,
Takeomi Tamesada,
Toshiaki Hanibuti
Abstract: Because of improvements in LSI technology, larger-scale CAMs can now be developed at low cost. However, as the amount of data to be handled grows, storing the entire search data set in CAMs ceases to be cost-effective. A practical solution is to store the data in other storage devices and partition the full data set into blocks for transfer into the CAM. In such a hierarchical memory system, data transfer bottlenecks and overhead for operations other than searching arise; furthermore, it is not known what search speed is achievable. In this paper, we therefore model such a CAM-based memory system in order to determine the data transfer overhead, evaluate system performance, and investigate ways of reducing search time. It is concluded that: (1) data transfer time can be reduced more effectively by using a large-capacity bus or rapid data access than by using a large, high-speed CAM; (2) there exists an optimum CAM size and configuration for rapid search; (3) the speed of the control mechanism that transfers the data blocks into the CAM is not critical to system performance.
ISSN:0882-1666
DOI:10.1002/scj.4690200701
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY
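Conclusion (1) can be made concrete with a minimal timing model; this is a sketch under our own assumptions (the parameter names and the double-buffering scheme are ours, not the paper's actual model):

```python
def total_search_time(num_blocks, t_transfer, t_search, overlapped=True):
    """Time to search data partitioned into num_blocks blocks that must be
    loaded into the CAM before being searched (illustrative model only).

    t_transfer: time to load one block into the CAM
    t_search:   time to search one block once it is loaded
    """
    if not overlapped:
        # transfers and searches strictly alternate
        return num_blocks * (t_transfer + t_search)
    # double-buffered: the next block transfers while the current block is
    # searched, so the slower of the two per-block times dominates
    return t_transfer + (num_blocks - 1) * max(t_transfer, t_search) + t_search
```

In this toy model, once t_transfer exceeds t_search the transfer path is the bottleneck: a larger or faster CAM no longer helps, while a wider bus or faster data access still does, which is consistent with conclusion (1).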

2. On‐line handwritten character recognition using local affine transformation
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 10-19
Toru Wakahara
Abstract: This paper reports a method for on-line handwritten character recognition utilizing a local affine transformation. Deformation vector fields are formed between the input patterns and the template patterns; through the components of a low-order local affine transformation for the deformation fields, the template pattern is deformed a priori and adjusted to the input pattern. The method consists of three steps based on feature representations of the input and template patterns: (1) Using dynamic programming, the deformation vector field is generated by extracting the deformation vector from each feature point of the template pattern to the corresponding point of the input pattern. (2) A Gaussian-type function is superposed onto the deformation vector field, and each deformation vector is expanded asymptotically by iterative application of the local affine transformation. (3) A low-order local affine transformation is superposed for each feature point, and the superposed template pattern is adjusted to the input pattern. Recognition experiments targeted 1,945 Chinese characters in ordinary use. The template patterns were average character patterns derived from standard handwritten characters; 300 deformed characters from each of 20 sample sets, 6,000 characters in total, were treated as unknown data. The method achieved a recognition rate of 95.6 percent, higher than the 91.9 percent obtained when the template patterns were not deformed.
ISSN:0882-1666
DOI:10.1002/scj.4690200702
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY
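Step (2), superposing a Gaussian-type function on the deformation vector field, can be sketched as Gaussian-weighted averaging of sparse deformation vectors (the names and parameters here are assumptions for illustration; the paper's iterative expansion is not reproduced):

```python
import math

def smooth_field(points, vectors, sigma=1.0):
    """Return a smoothed deformation vector at every feature point as the
    Gaussian-weighted average of all deformation vectors (illustrative)."""
    out = []
    for px, py in points:
        wsum = vx = vy = 0.0
        for (qx, qy), (dx, dy) in zip(points, vectors):
            # weight falls off with squared distance between feature points
            w = math.exp(-((px - qx) ** 2 + (py - qy) ** 2) / (2 * sigma ** 2))
            wsum += w
            vx += w * dx
            vy += w * dy
        out.append((vx / wsum, vy / wsum))
    return out
```

Each feature point thus pulls the deformation of its neighbors toward its own vector, which is the smoothing effect the abstract describes.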

3. Speaker‐independent isolated word recognition using n‐segment label histogram method
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 20-28
Osaaki Watanuki,
Toyohisa Kaneko
Abstract: A simplified version of a hidden Markov model (HMM), referred to as the N-segment label histogram (NLH) method, is proposed for speaker-independent isolated word recognition. The NLH method can be considered an HMM with a uniform duration for each state. It can also be treated as a statistical pattern recognition approach using a linear compander and probabilistic measures. During training, label histograms are computed for N equal segments of the input, and the probability associated with each label is computed after normalization. During recognition, the input is partitioned into N equal segments, and the word model that maximizes the label probability of the input determines the recognition result. Since the NLH method requires only about one-tenth the computation of the HMM method, it is more suitable for implementation on small computers. Furthermore, it does not require alignment along the time axis, as the HMM and DP matching techniques do. Utterance fluctuation along the time axis is handled statistically, and the method is applicable to large data sets. Experimental results indicated that the NLH method yields almost the same recognition rate as the other two methods while requiring much less computation. The linear approximation of the likelihood function enables implementation of the proposed algorithm on the IBM PC/AT for real-time speech recognition.
ISSN:0882-1666
DOI:10.1002/scj.4690200703
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY
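The training and recognition steps can be sketched as follows (a minimal illustration; the segment count, smoothing floor, and function names are our assumptions, and the linear compander is omitted):

```python
import math
from collections import Counter

def segments(labels, n):
    """Split a label sequence into n roughly equal segments."""
    L = len(labels)
    return [labels[i * L // n:(i + 1) * L // n] for i in range(n)]

def train(samples, alphabet, n=4, floor=1e-3):
    """Build per-segment label probabilities from training label sequences."""
    counts = [Counter() for _ in range(n)]
    for seq in samples:
        for i, seg in enumerate(segments(seq, n)):
            counts[i].update(seg)
    model = []
    for c in counts:
        total = sum(c.values())
        # normalized probability with a small floor so log() stays finite
        model.append({a: max(c[a] / total, floor) for a in alphabet})
    return model

def score(seq, model):
    """Log-likelihood of an input label sequence under an NLH word model."""
    s = 0.0
    for seg, probs in zip(segments(seq, len(model)), model):
        for lab in seg:
            s += math.log(probs[lab])
    return s
```

The word whose model yields the highest score is the recognition result; no time alignment is needed, because each segment is simply a fixed fraction of the utterance.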

4. Intelligent retrieval of chest X‐ray image database using sketches
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 29-42
Jun‐Ichi Hasegawa,
Noritake Okada,
Jun‐Ichiro Toriwaki
Abstract: This paper discusses improvements to the chest X-ray image database system utilizing sketches described in a previous report. An experimental study was made of intelligent retrieval of the image database using sketches. More precisely, the following points are investigated: (1) The procedure for constructing the sketch is improved, primarily to increase the accuracy of the extracted rib image and lung border. (2) The earlier sketch was a binary line figure, while the new system also supports a gray-level sketch, aimed mainly at assisting the diagnosis record as well as retrieval by figure description and patterns. (3) In the earlier system, only feature parameters could be objects of retrieval, while the new system can also retrieve figure descriptions and some patterns. These are especially desirable aspects as a basic study toward realizing intelligent retrieval of the image database.
ISSN:0882-1666
DOI:10.1002/scj.4690200704
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY

5. Generalized context‐free grammars and multiple context‐free grammars
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 43-52
Tadao Kasami,
Hiroyuki Seki,
Mamoru Fujii
Abstract: It is shown that the class of languages generated by the generalized context-free grammars (gcfg's) introduced by Pollard is exactly the class of recursively enumerable sets. Next, a subclass of gcfg's called multiple context-free grammars (mcfg's) is introduced, and it is shown that the class of languages generated by mcfg's properly contains the class of context-free languages and is properly contained in the class of context-sensitive languages. In mcfg's, structures involving discontinuous constituents can be accounted for in a particularly simple manner. Concepts such as phrase structure and derivation tree in context-free grammars (cfg's) extend naturally to mcfg's. Furthermore, the class of languages generated by mcfg's enjoys the same formal language-theoretic closure properties as the class of context-free languages.
ISSN:0882-1666
DOI:10.1002/scj.4690200705
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY
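As an illustration of how mcfg's capture discontinuous constituents, the language {a^n b^n c^n | n >= 0}, which is not context-free, is generated by a small mcfg whose nonterminal A derives triples of strings (the notation below is one common way of writing mcfg rules, chosen by us, not necessarily the paper's):

```
S(x1 x2 x3)          <- A(x1, x2, x3)
A(a x1, b x2, c x3)  <- A(x1, x2, x3)
A(e, e, e)                            -- e denotes the empty string
```

Because A carries the blocks of a's, b's, and c's as separate components that grow in lockstep, the three counts stay synchronized without context-sensitive rewriting, while derivations retain a tree structure just as in a cfg.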

6. Rotated checker‐pattern projection and its cross‐point extraction and tracking for 3‐D object profilometry
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 53-61
Yoshiharu Yuba,
Hiroshi Hirai,
Kiyoshi Tsutsumi,
Kazuo Watari
Abstract: In the measurement of 3-D shape by projecting a regular pattern on the object surface, feature points must be extracted with high accuracy and their mutual relations determined. In this paper, 45-deg rotated checker-pattern projection is proposed, and a method for extracting and tracking the crossing points is presented that exploits the geometrical properties of the rotated checker pattern. If the pattern projector and the camera are on the same horizontal plane, a large brightness difference is produced between the horizontal and nonhorizontal (vertical and slant) neighbors of a crossing-point pixel; this property is used to extract the crossing points. The projection profile of the brightness pattern along the vertical direction has a triangular form, and the location of its extremum gives the vertical coordinate of the crossing point; this profile is used to classify the crossing points (segmentation). On the basis of the segmentation result, the crossing points are successively tracked and their mutual relations determined. To demonstrate the usefulness of the method, a 3-D shape measurement experiment was made. The measurement error was less than 2 mm, attributable to the error in extracting the crossing point (at most 1 pixel). For a cylinder, the measurement was made along the circumference. It is shown that measuring the position of the crossing point on the reference plane is not required if the rotation of the object is known.
ISSN:0882-1666
DOI:10.1002/scj.4690200706
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY

7. Pseudo line‐image coding and drawing method based on fractal theory
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 62-71
Masakazu Sato,
Hideyoshi Tominaga
Abstract: The fractal theory proposed by Mandelbrot has been applied primarily to computer graphics and related problems as a means of generating pseudo-natural images. Conversely, it can also be applied to approximate actual natural images. This paper considers natural line-images such as a coastline or the contour of a mountain, and presents a method, based on fractal theory, to generate an image of apparently high resolution from a small amount of information. A feature of this method is the use of an algorithm called squig, also proposed by Mandelbrot, to generate the fractal curve. The method recursively partitions the original image and generates the fractal curve by determining the internal path. It has several advantages, such as the property that self-intersection is not produced in principle. The properties of the squig are described first; the proposed algorithm is then applied to a line-image of a coastline, and the result is presented. Experiments show that the chain-code data is compressed to about one-tenth while yielding a visually natural image.
ISSN:0882-1666
DOI:10.1002/scj.4690200707
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY
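For intuition only, the following sketch uses plain midpoint displacement, not Mandelbrot's squig algorithm, to show how recursive subdivision regenerates an apparently high-resolution curve from a few stored control points; note that, unlike squig, midpoint displacement does not rule out self-intersection:

```python
import random

def refine(points, roughness, rng):
    """Insert one randomly displaced midpoint between each adjacent pair."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        out.append((x0, y0))
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        # displacement scales with segment length, so detail shrinks per level
        d = roughness * ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        out.append((mx + rng.uniform(-d, d), my + rng.uniform(-d, d)))
    out.append(points[-1])
    return out

def fractal_curve(control_points, levels, roughness=0.3, seed=0):
    """Expand a coarse polyline into a fractal-looking curve."""
    rng = random.Random(seed)
    pts = list(control_points)
    for _ in range(levels):
        pts = refine(pts, roughness, rng)
    return pts
```

Three levels turn 2 control points into 9 drawn points, so the stored data can be a small fraction of the rendered detail, which is the general compression effect the abstract reports for its own method.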

8. A method of extracting a kinematically characteristic point from the 3‐dimensional motion of a rigid body
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 72-82
Toyohiko Hayashi,
Hidemitsu Ogawa,
Taizo Iijima
Abstract: This paper addresses the problem of estimating a kinematically characteristic point of a rigid body moving in three-dimensional (3-D) space. We call this point the 1-D characteristic point and denote it by P1. The point P1 is characterized by a motion range that forms a curved line; hence, if the rigid body moves periodically, P1 can be specified as a point that moves reciprocally. This paper proposes a method of estimating P1. By projecting the orbit of a point onto three mutually orthogonal planes, the estimation problem is reduced to solving a set of quadratic equations, and one of its solutions is provided. In addition, we clarify how measurement errors of the motion propagate to the estimate of P1.
ISSN:0882-1666
DOI:10.1002/scj.4690200708
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY

9. The structure of experts: A parallel processor system for 3‐dimensional graphics
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 83-91
Haruo Niimi,
Hiroshi Hagiwara,
Shinji Tomita
Abstract: A parallel processor system, called EXPERTS, has been developed to generate realistic images of three-dimensional (3-D) scenes at high speed. EXPERTS takes as input a list of 3-D objects defined by a polyhedron model, performs hidden surface elimination using a scan-line algorithm, and computes a frame of pixel values. To perform these processes efficiently, EXPERTS is constructed as a two-level hierarchical bus-connected multiprocessor system. This architecture derives from the processing scheme employed, in which the scan-line algorithm is divided into two successive stages and parallelism is introduced into each stage. Two types of special-purpose processor elements were designed to speed up these stages: the Scan-Line Processor (SLP) for the former and the PiXel Processor (PXP) for the latter. This paper describes the parallel processing method of the scan-line algorithm, the interconnection scheme of the processor elements, and the details of their hardware architecture. A performance estimation of the system is also presented.
ISSN:0882-1666
DOI:10.1002/scj.4690200709
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY

10. A fast digital search algorithm using a double‐array structure
Systems and Computers in Japan, Volume 20, Issue 7, 1989, Pages 92-103
Jun‐Ichi Aoe
Abstract: This paper presents an efficient digital search algorithm based on a new internal array structure called a double-array, which combines the fast access of a matrix form with the compactness of a list form. Each arc of a digital search tree (DS-tree) can be computed from the double-array in O(1) time; that is, the worst-case time complexity for retrieving a key becomes O(k) for a key of length k. The double-array is modified for compactness while maintaining fast access, and algorithms for retrieval, insertion, and deletion are presented. Suppose that the size of the double-array is n + cm, where n is the number of nodes of the DS-tree, m is the number of input symbols, and c is a constant depending on each double-array. Then it is proved theoretically that the worst-case times of deletion and insertion are proportional to cm and cm², respectively, independent of n. Experimental results from building the double-array incrementally for various key sets show that the constant c has an extremely small value.
ISSN:0882-1666
DOI:10.1002/scj.4690200710
Publisher: Wiley Subscription Services, Inc., A Wiley Company
Year: 1989
Data source: WILEY
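The O(1) arc computation of the double-array can be sketched with a hand-built array pair for the toy key set {"ab", "b"} (the state numbering, symbol codes, and acceptance set below are our own illustrative assumptions; a real implementation also needs a terminator symbol and the insertion/deletion machinery the abstract mentions):

```python
# Hand-built double-array for keys {"ab", "b"}; symbol codes: a=1, b=2.
CODE = {"a": 1, "b": 2}
base  = [None, 1, 2, 0, 0]  # base[s]: offset of state s (slot 0 unused)
check = [None, 0, 1, 1, 2]  # check[t]: the parent state owning slot t
accept = {3, 4}             # state 3 spells "b", state 4 spells "ab"

def lookup(key):
    """Follow one arc per symbol: t = base[s] + code(c), valid iff check[t] == s."""
    s = 1  # root state
    for ch in key:
        t = base[s] + CODE[ch]            # O(1) arc computation
        if t >= len(check) or check[t] != s:
            return False                  # no such arc in the DS-tree
        s = t
    return s in accept
```

Each symbol costs one addition and one comparison, giving the O(k) worst-case retrieval bound for a key of length k stated in the abstract.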