1. A Research Agenda for the Nineties in Human-Computer Interaction
Human–Computer Interaction, Volume 5, Issue 2-3, 1990, Pages 125-143
Clayton H. Lewis
Abstract:
Although the practical importance of user interface technology is now well established, the proper role of research in the development of the technology and the kind of research that is appropriate remain in question. This article takes stock of some of the competing positions and proposes an agenda, identifying areas of work that might command some consensus despite the widely varying viewpoints represented in the research community. The major initiatives proposed are understanding goals and preferences, broadening applied cognitive theory, supporting innovation, and credit assignment.
ISSN: 0737-0024
DOI: 10.1080/07370024.1990.9667152
Publisher: Lawrence Erlbaum Associates, Inc.
Year: 1990
Source: Taylor

2. A Semantic Analysis of the Design Space of Input Devices
Human–Computer Interaction, Volume 5, Issue 2-3, 1990, Pages 145-190
Jock Mackinlay, Stuart K. Card, George G. Robertson
Abstract:
A bewildering variety of devices for communication from humans to computers now exists on the market. In this article, we propose a descriptive framework for analyzing the design space of these input devices. We begin with Buxton's (1983) idea that input devices are transducers of physical properties in one, two, or three dimensions. Following Mackinlay's semantic analysis of the design space for graphical presentations, we extend this idea to more comprehensive descriptions of physical properties, space, and transducer mappings. In our reformulation, input devices are transducers of any combination of linear and rotary, absolute and relative, position and force, in any of the six spatial degrees of freedom. Simple input devices are described in terms of semantic mappings from the transducers of physical properties into the parameters of the applications. One of these mappings, the resolution function, allows us to describe the range of possibilities from continuous devices to discrete devices, including possibilities in between. Complex input controls are described in terms of hierarchical families of generic devices and in terms of composition operators on simpler devices. The description that emerges is used to produce a new taxonomy of input devices. The taxonomy is compared with previous taxonomies of Foley, Wallace, and Chan (1984) and of Buxton (1983) by reclassifying the devices previously analyzed by these authors. The descriptive techniques are further applied to the design of complex mouse-based virtual input controls for simulated three-dimensional (3D) egocentric motion. One result is the design of a new virtual egocentric motion control.
ISSN: 0737-0024
DOI: 10.1080/07370024.1990.9667153
Publisher: Lawrence Erlbaum Associates, Inc.
Year: 1990
Source: Taylor
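As a concrete reading of the transducer framework summarized in this abstract, the primitive design space can be enumerated as combinations of movement kind, reference frame, sensed property, and spatial degree of freedom. The sketch below is an illustration only; the class, constant names, and example device are hypothetical, not taken from the article:

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical sketch: a primitive input device transduces one physical
# property along one spatial degree of freedom.
MOTION = ("linear", "rotary")              # kind of movement sensed
FRAME = ("absolute", "relative")           # fixed origin vs. change since last sample
PROPERTY = ("position", "force")           # what the transducer actually senses
AXES = ("x", "y", "z", "rx", "ry", "rz")   # the six spatial degrees of freedom

@dataclass(frozen=True)
class Transducer:
    motion: str   # "linear" or "rotary"
    frame: str    # "absolute" or "relative"
    prop: str     # "position" or "force"
    axis: str     # one of the six degrees of freedom

# A composite device is a set of primitives: a mouse, for instance, senses
# relative linear position on two axes.
mouse = {Transducer("linear", "relative", "position", a) for a in ("x", "y")}

# Enumerating every combination gives the 2 * 2 * 2 * 6 = 48 primitive cells.
space = [Transducer(m, f, p, a)
         for m, f, p, a in product(MOTION, FRAME, PROPERTY, AXES)]
print(len(space))  # 48
```

The article's composition operators build complex controls out of such primitives; here the mouse is modeled simply as a set of two relative linear position transducers.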

3. Theory-Based Design for Easily Learned Interfaces
Human–Computer Interaction, Volume 5, Issue 2-3, 1990, Pages 191-220
Peter G. Polson, Clayton H. Lewis
Abstract:
Many important computer applications require that users be able to use them effectively with little or no formal training. Current examples include bank teller machines and airport information kiosks. Today successful systems of this kind can only be developed by iteration using costly empirical testing. This article aims to provide a theoretical foundation for the design of such systems, a model of learning by exploration called CE+. The theory incorporates assumptions from (a) the GOMS model and cognitive complexity theory (CCT) on the representation of procedural knowledge as productions, (b) the EXPL model on learning from examples, and (c) research on problem-solving processes for simple puzzlelike problems. Design guidelines for systems that can be learned by exploration, "design for successful guessing," are derived from the theory. These principles are compared with those developed by Norman (1988).
ISSN: 0737-0024
DOI: 10.1080/07370024.1990.9667154
Publisher: Lawrence Erlbaum Associates, Inc.
Year: 1990
Source: Taylor

4. The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS
Human–Computer Interaction, Volume 5, Issue 2-3, 1990, Pages 221-265
Judith Reitman Olson, Gary M. Olson
|
PDF (2338KB)
|
|
Abstract:
The purpose of this article is to review where we stand with regard to modeling the kind of cognition involved in human-computer interaction. Card, Moran, and Newell's pioneering work on cognitive engineering models and explicit analyses of the knowledge people need to perform a procedure was a significant advance from the kind of modeling cognitive psychology offered at the time. Since then, coordinated bodies of research have both confirmed the basic set of parameters and advanced the number of parameters that account for the time of certain component activities. Formal modeling in grammars and production systems has provided an account of error production in some cases, as well as a basis for calculating how long a system will take to learn and how much savings there is from previous learning. Recently, we were given a new tool for modeling nonsequential component processes, adapting the "critical path analysis" from engineering to the specification of interacting processes and their consequent durations. Though these advances have helped, there are still significant gaps in our understanding of the whole process of interacting with computers. The cumulative nature of this empirical body and its associated modeling framework has further highlighted important issues central to research in cognitive psychology: how people move smoothly between skilled performance and problem solving, how people learn, how to design for consistent user interfaces, how people produce and manage errors, how we interpret visual displays for meaning, and which processes run concurrently and which depend on the completion of prior processes. In the bigger picture, cognitive modeling is a method that is useful in initial design (it can narrow the design space and provide early analyses of design alternatives), in evaluation, and in training. But it does not extend to broader aspects of the context in which people use computers, partly because there are significant gaps in contemporary cognitive theory to inform the modeling and partly because it is the wrong form of model for certain kinds of more global questions in human-computer interaction. Notably, it fails to capture the user's fatigue, individual differences, or mental workload. And it is not the type of model that will aid the designer in designating the set of functions the software ought to contain, in assessing the user's judgment of the acceptability of the software, or in anticipating the change that could be expected in the work life and organization into which this work and person fit. Clearly, these kinds of considerations require modeling and tools of a different granularity and form.
ISSN: 0737-0024
DOI: 10.1080/07370024.1990.9667155
Publisher: Lawrence Erlbaum Associates, Inc.
Year: 1990
Source: Taylor
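The "critical path analysis" the abstract mentions can be illustrated with a small scheduling sketch: component activities have durations and dependencies, and the predicted task time is the length of the longest dependency-weighted path, since independent activities may overlap. The activity names and durations below are invented for illustration, not taken from the article:

```python
from functools import lru_cache

# Invented component activities (durations in ms) and their prerequisites.
durations = {"perceive": 100, "decide": 50, "move-hand": 70, "press-key": 30}
depends_on = {
    "perceive": [],
    "decide": ["perceive"],
    "move-hand": ["perceive"],          # motor preparation overlaps with deciding
    "press-key": ["decide", "move-hand"],
}

@lru_cache(maxsize=None)
def finish_time(act):
    """Earliest finish of an activity: its duration plus the latest
    finish among its prerequisites (0 if it has none)."""
    start = max((finish_time(d) for d in depends_on[act]), default=0)
    return start + durations[act]

# Predicted task time is the latest finish over all activities.
total = max(finish_time(a) for a in durations)
print(total)  # 200: perceive -> move-hand -> press-key is the critical path
```

Note that the serial sum of durations would be 250 ms; the overlap of "decide" and "move-hand" is exactly what a sequential GOMS-style sum cannot express.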

5. Expertise in a Computer Operating System: Conceptualization and Performance
Human–Computer Interaction, Volume 5, Issue 2-3, 1990, Pages 267-304
Stephanie M. Doane, James W. Pellegrino, Roberta L. Klatzky
Abstract:
This article describes a three-part empirical approach to understanding the development of expertise within the UNIX operating system. We studied UNIX users with varying levels of expertise. The first part of our research attempted to ascertain the nature of their conceptualizations of the UNIX system. The second part measured users' performance in tasks requiring them to comprehend and produce UNIX commands. The third part was a longitudinal rather than cross-sectional analysis of the emergence of expertise. The conceptualization data suggest important differences in the models of UNIX structure formed by each group. Experts best represent the higher levels of the UNIX system; novices more fully represent the lower, more concrete levels of the system, including specific commands. UNIX users also differ markedly in performance, according to their history of use with the operating system. Only experts could successfully produce composite commands that required use of the distinctive features of UNIX (e.g., pipes and other redirection symbols), even though the intermediates and novices evidenced the component knowledge required for composite commands. This finding is somewhat surprising, inasmuch as these are fundamental design features of UNIX, and these features are taught in elementary classes. These data suggest, however, that these features can be used reliably only after extensive experience. The longitudinal data suggest that most subjects increased in expertise. However, expertise can decrease as a function of time, depending on system use. Those subjects who increased in expertise acquired the ability to produce the simple commands and represent the basic modules before they acquired knowledge of complex commands and advanced utilities. The nature of expertise is considered with respect to both system design and user characteristics, including users' conceptual models of system structure.
ISSN: 0737-0024
DOI: 10.1080/07370024.1990.9667156
Publisher: Lawrence Erlbaum Associates, Inc.
Year: 1990
Source: Taylor
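The "composite commands" this abstract refers to are pipelines that chain simple commands through UNIX redirection, e.g. `printf 'a\nb\nc\n' | wc -l`. A hypothetical illustration of such composition, driven from Python (assumes a POSIX system with `printf` and `wc` on the PATH):

```python
import subprocess

# Composite UNIX command of the kind the study probed: the output of one
# simple command is piped into another. Equivalent shell pipeline:
#   printf 'a\nb\nc\n' | wc -l
producer = subprocess.Popen(["printf", r"a\nb\nc\n"],
                            stdout=subprocess.PIPE)
consumer = subprocess.run(["wc", "-l"], stdin=producer.stdout,
                          capture_output=True, text=True)
producer.stdout.close()
producer.wait()
print(consumer.stdout.strip())  # 3 -- three lines were produced and counted
```

Each piece (`printf`, `wc`, the pipe) is elementary on its own; the study's finding was that reliably composing them is what separates experts from intermediates and novices.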

6. Designing the Design Process: Exploiting Opportunistic Thoughts
Human–Computer Interaction, Volume 5, Issue 2-3, 1990, Pages 305-344
Raymonde Guindon
Abstract:
This study shows that top-down decomposition is problematic in the early stages of design. Instead, an opportunistic decomposition is better suited to handle the ill-structuredness of design problems. Designers are observed interleaving decisions at various levels of abstraction in the solution decomposition. The verbal protocols of three professionals designing a software system of realistic complexity are analyzed to determine the frequency and causes of opportunistic decompositions. The sudden discovery of new requirements and partial solutions triggered by data-driven rules and associations, the immediate development of solutions for newly discovered requirements, and drifting through partial solutions are shown to be important causes of opportunistic design. A top-down decomposition appears to be a special case for well-structured problems when the designer already knows the correct decomposition. Two cognitive models are briefly discussed in relation to opportunistic design. Finally, implications for training, methods, and computational environments to support the early stages of design are outlined.
ISSN: 0737-0024
DOI: 10.1080/07370024.1990.9667157
Publisher: Lawrence Erlbaum Associates, Inc.
Year: 1990
Source: Taylor