1. Earcons and Icons: Their Structure and Common Design Principles
Human–Computer Interaction, Volume 4, Issue 1, 1989, Pages 11-44
Meera M. Blattner, Denise A. Sumikawa, Robert M. Greenberg
Abstract:
In this article we examine earcons, which are audio messages used in the user-computer interface to provide information and feedback to the user about computer entities. (Earcons include messages and functions, as well as states and labels.) We identify some design principles that are common to both visual symbols and auditory messages, and discuss the use of representational and abstract icons and earcons. We give some examples of audio patterns that may be used to design modules for earcons, which then may be assembled into larger groupings called families. The modules are single pitches or rhythmicized sequences of pitches called motives. The families are constructed about related motives that serve to identify a family of related messages. Issues concerned with learning and remembering earcons are discussed.
ISSN:0737-0024
DOI:10.1207/s15327051hci0401_1
Publisher: Lawrence Erlbaum Associates, Inc.
Year: 1989
Data source: Taylor
2. Soundtrack: An Auditory Interface for Blind Users
Human–Computer Interaction, Volume 4, Issue 1, 1989, Pages 45-66
Alistair D.N. Edwards
Abstract:
Throughout the history of human-computer interface development, one aspect has remained constant: output from computers has been almost entirely visual. A continued and increasing reliance on visual communication has had a disadvantageous effect on users who have visual disabilities. A visual interface is of no use to a user who is completely blind; communication must use one of the other senses, and hearing is an obvious candidate. A number of human-computer interfaces have been developed and adapted into an auditory form based on the use of synthetic speech. However, for modern interfaces that use more complex displays, synthetic speech is not sufficient. This article describes one attempt to adapt such a mouse-based interface into an auditory form based on musical tones and synthetic speech. The project involved the development of a word processor, called Soundtrack, with an auditory interface. Evaluations of this application suggest that the approach is viable, but that it is difficult to use and there are significant research questions still to be addressed.
ISSN:0737-0024
DOI:10.1207/s15327051hci0401_2
Publisher: Lawrence Erlbaum Associates, Inc.
Year: 1989
Data source: Taylor
3. The SonicFinder: An Interface That Uses Auditory Icons
|
Human–Computer Interaction,
Volume 4,
Issue 1,
1989,
Page 67-94
William W. Gaver,
Preview
Abstract:
The appropriate use of nonspeech sounds has the potential to add a great deal to the functionality of computer interfaces. Sound is a largely unexploited medium of output, even though it plays an integral role in our everyday encounters with the world, a role that is complementary to vision. Sound should be used in computers as it is in the world, where it conveys information about the nature of sound-producing events. Such a strategy leads to auditory icons, which are everyday sounds meant to convey information about computer events by analogy with everyday events. Auditory icons are an intuitively accessible way to use sound to provide multidimensional, organized information to users. These ideas are instantiated in the SonicFinder, which is an auditory interface I developed at Apple Computer, Inc. In this interface, information is conveyed using auditory icons as well as standard graphical feedback. I discuss how events are mapped to auditory icons in the SonicFinder, and illustrate how sound is used by describing a typical interaction with this interface. Two major gains are associated with using sound in this interface: an increase in direct engagement with the model world of the computer and an added flexibility for users in getting information about that world. These advantages seem to be due to the iconic nature of the mappings used between sound and the information it is to convey. I discuss sound effects and source metaphors as methods of extending auditory icons beyond the limitations implied by literal mappings, and I speculate on future directions for such interfaces.
ISSN:0737-0024
DOI:10.1207/s15327051hci0401_3
Publisher: Lawrence Erlbaum Associates, Inc.
Year: 1989
Data source: Taylor