41.
Click-evoked otoacoustic emissions and the influence of high-frequency hearing losses in humans
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2771–2777
Paul Avan,
Michel Elbez,
Pierre Bonfils
Abstract:
Click-evoked otoacoustic emissions (cEOAEs) are thought to reflect the presence of highly tuned mechanisms involved in sound processing inside the cochlea. When the sensitivity and tuning of the inner ear are impaired in some frequency range, the spectral components of cEOAEs in the same frequency range are expected to be altered if the previous premise is correct. Although clinical experience does not contradict such an interpretation, fundamental aspects of cEOAE generation and propagation in the cochlea are not clear enough to preclude possible additional influences of remote cochlear places on cEOAEs. In order to analyze this possibility, ultra-high-frequency hearing thresholds between 8 and 16 kHz were assessed in 43 human subjects who had clinically normal hearing thresholds in the frequency range of cEOAEs. The magnitude of their cEOAEs was found to be correlated with their average ultra-high-frequency hearing threshold, especially when ears presenting spontaneous otoacoustic emissions were not taken into account (p = 0.002, r² = 0.29). Age and ultra-high-frequency hearing thresholds were correlated (p < 0.01, r² = 0.40); thus it cannot be excluded that aging was the primary cause of the observed trend. The contribution of ultra-high-frequency hearing status to cEOAE magnitude, perhaps in relation to age, seems to explain a significant part of the variance of “normative” emission data and may be of interest for the early detection of high-frequency hearing impairments.
ISSN:0001-4966
DOI:10.1121/1.418564
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP

42.
Relations between notched-noise suppressed TEOAE and the psychoacoustical critical bandwidth
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2778–2788
Joachim Neumann,
Stefan Uppenkamp,
Birger Kollmeier
Abstract:
Narrow-band transient-evoked otoacoustic emissions (TEOAEs) were recorded from nine normal-hearing subjects in the presence of a broadband tone-complex suppressor. Introducing a spectral notch at the frequency of the narrow-band stimulus causes the suppression effect to decrease, the more so the wider the notch. This decrease in suppression permits an estimate of the size of one critical band. One advantage of this approach is that no active participation of the subjects is required. The estimated critical bandwidth is then compared with independent estimates based on a simultaneous masking experiment using the same stimuli. The two measures of the critical bandwidth coincide well for the six subjects with spontaneous otoacoustic emissions; however, the bandwidth estimate based on the OAE measurements is too large for the other three subjects without spontaneous emissions. Simulations of the suppression effect with a driven van der Pol oscillator with moderate undamping produce critical bandwidth estimates consistent with those observed in the psychoacoustical experiments. This allows an estimate of the “effective” amount of undamping on the basilar membrane that is required to produce the critical bandwidth observable in psychoacoustic experiments.
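The driven van der Pol oscillator invoked in this abstract can be caricatured in a few lines. Everything below (semi-implicit Euler integration, parameter values, forcing levels) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def driven_van_der_pol(eps=0.5, w0=2 * np.pi, F=0.05, w=2 * np.pi,
                       dt=1e-3, n=40000):
    """Integrate x'' - eps*(1 - x**2)*x' + w0**2*x = F*sin(w*t).

    Positive eps makes the oscillator undamped ("active") at small
    amplitude and self-limiting at large amplitude, a standard cartoon
    of cochlear-amplifier nonlinearity.  Parameters are illustrative.
    """
    x, v = 0.01, 0.0                       # small seed displacement
    out = np.empty(n)
    for i in range(n):
        a = eps * (1.0 - x * x) * v - w0 * w0 * x + F * np.sin(w * i * dt)
        v += a * dt                        # update velocity first ...
        x += v * dt                        # ... then position (semi-implicit Euler)
        out[i] = x
    return out

weak = driven_van_der_pol(F=0.05)    # settles near the free limit cycle
strong = driven_van_der_pol(F=2.0)   # strongly driven; the x**2 term saturates gain
```

In this cartoon, suppression arises because a strong component drives the oscillator into its saturating regime, reducing the active gain available to a nearby probe.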
ISSN:0001-4966
DOI:10.1121/1.419302
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP

43.
A psychoacoustic model for the noise masking of plosive bursts
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2789–2802
James J. Hant,
Brian P. Strope,
Abeer A. Alwan
Abstract:
A model for predicting the masked thresholds of the voiceless plosive bursts /k,t,p/ in background noise is proposed. Because plosive bursts are brief, are generated by a noise source, and have different spectral characteristics, the modeling approach accounts for duration, center frequency, and signal bandwidth. Noise-in-noise masking experiments are conducted using a broadband masker and bandpass noise signals of varying bandwidth (100–5483 Hz), duration (10–300 ms), and center frequency (0.4–4 kHz). Data from these experiments are used to parametrize an auditory filter model in which the effective bandwidth and the signal-to-noise ratio at threshold for each filter are duration dependent. The duration-dependent filter model is then used to predict the thresholds of synthetic and naturally spoken plosive bursts in background noise. Finally, results from pilot notched-noise experiments are presented which support duration-dependent frequency selectivity.
ISSN:0001-4966
DOI:10.1121/1.418565
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP

44.
Spectral weights in level discrimination by preschool children: Synthetic listening conditions
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2803–2810
Melodie S. Willihnganz,
Mark A. Stellmack,
Robert A. Lutfi,
Frederic L. Wightman
Abstract:
On most auditory discrimination and detection tasks, young children perform more poorly than adults. The current experiment applies a technique which can potentially reveal the extent to which the adult–child performance difference results from suboptimal attentional strategies or simply greater internal noise in the children. In this experiment preschool children and adults were asked to discriminate between complex tones composed of three random-amplitude sinusoidal components. A trial-by-trial correlational analysis [R. A. Lutfi, J. Acoust. Soc. Am. 97, 1333–1334 (1995)] provided an estimate of the weight listeners placed on the level information from individual spectral components in making the discrimination. The patterns of weights were interpreted as measures of “attentional strategy.” Both children and adults produced reliable patterns of weights. This is an especially important result, since measuring a single weighting pattern requires large numbers of trials and hence multiple sessions with the children. While individual weighting patterns were reliable, weighting patterns differed both within and across groups. Moreover, neither the children nor the adults produced weighting patterns that would maximize percent correct in the task. A substantial proportion of the responses from both children and adults could be predicted from their weighting patterns even when performance was near chance. However, differences in overall performance between children and adults could not be accounted for by differences in their weighting functions.
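The trial-by-trial correlational analysis described in this abstract can be sketched on simulated data: each spectral component's weight is estimated as the correlation between its per-trial level change and the listener's binary response. The listener model, trial count, and "true" weights below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated listener: three components whose levels are perturbed randomly
# on each interval; the decision variable is a weighted sum of across-
# interval level differences plus internal noise (all values illustrative).
true_w = np.array([0.6, 0.3, 0.1])
n_trials = 5000
dL = rng.normal(0.0, 2.0, size=(n_trials, 3))    # level difference (dB), interval 2 minus 1
noise = rng.normal(0.0, 1.0, size=n_trials)      # internal noise
resp = (dL @ true_w + noise > 0).astype(float)   # 1 = "interval 2 louder"

# Estimated weight per component: correlation between its level difference
# and the response, normalized so the weights sum to one.
w_est = np.array([np.corrcoef(dL[:, k], resp)[0, 1] for k in range(3)])
w_est /= w_est.sum()
```

With enough trials the estimated weights recover the ordering of the true weights, which is why the method needs the large numbers of trials (and multiple child sessions) the abstract mentions.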
ISSN:0001-4966
DOI:10.1121/1.419478
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP

45.
Spectral weights in level discrimination by preschool children: Analytic listening conditions
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2811–2821
Mark A. Stellmack,
Melodie S. Willihnganz,
Frederic L. Wightman,
Robert A. Lutfi
Abstract:
In this series of experiments, adult and child listeners were required to attend to a target tone in the presence of two distracters and to indicate in which of two intervals the target tone had the higher level. The attentional weight listeners placed on each component was estimated by computing the correlation between the level change of each component across intervals and the listener’s response. In the first experiment, weights were obtained as a function of the mean level of the distracters (250 and 4000 Hz) for a 1000-Hz target. No consistent differences between the weighting functions of children and adults were observed. In a second experiment, weights were obtained as a function of the harmonic relationship between the distracters (250 and 4000 Hz, or 270 and 4320 Hz) and the 1000-Hz target. No difference was observed between the weighting functions computed with harmonic and inharmonic complexes. In the final experiment, each component of the complex (250, 1000, and 4000 Hz) was identified as the target in separate blocks of trials. In general, adults were able to weight the target component appropriately regardless of its frequency, while children tended to weight all components equally. The results suggest that preschool listeners may exhibit poorer attentional selectivity than adults.
ISSN:0001-4966
DOI:10.1121/1.419479
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP

46.
Formant transition duration and speech recognition in normal and hearing-impaired listeners
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2822–2825
Christopher W. Turner,
Sarah J. Smith,
Patricia L. Aldridge,
Suzanne L. Stewart
Abstract:
Listeners with sensorineural hearing loss often have difficulty discriminating stop consonants even when the speech signals are presented at high levels. One possible explanation for this deficit is that hearing-impaired listeners cannot use the information contained in the rapid formant transitions as well as normal-hearing listeners can. If this is the case, then perhaps slowing the rate of frequency change in formant transitions might assist their ability to perceive these speech sounds. In the present study, sets of consonant-plus-vowel (CV) syllables were synthesized corresponding to /ba, da, ga/, with formant transitions for each set ranging from 5 to 160 ms in duration. The listener’s task was to identify the consonant in a three-alternative, closed-set response task. The results for normal-hearing listeners showed nearly perfect performance for transitions of 20 ms and longer, whereas the shortest transitions yielded poorer performance. A group of eight hearing-impaired listeners (pure-tone averages (PTAs) ranging from 30 to 62 dB HL) was also tested. The hearing-impaired listeners tended to show poorer performance than the normal-hearing listeners for transitions of all durations; however, the performance of a few hearing-impaired subjects was equal to that of the normal-hearing listeners for the shortest-duration transitions. A strong inverse relation was observed between degree of hearing loss and improvement in score as a function of transition duration. These results suggest that increasing the duration of formant transitions for listeners with more severe hearing losses may not provide a helpful solution to their speech recognition difficulties.
ISSN:0001-4966
DOI:10.1121/1.418566
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP

47.
An investigation of stop place of articulation as a function of syllable position: A locus equation perspective
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2826–2838
Harvey M. Sussman,
Nicola Bessell,
Eileen Dalston,
Tivoli Majors
Abstract:
Locus equations were employed to phonetically describe stop place categories as a function of syllable-initial, -medial, and -final position. Ten speakers, five male and five female, produced a total of 2700 CVC and 4500 VCV utterances that were acoustically analyzed to obtain F2 onset, F2 vowel, and F2 offset frequencies for locus equation regression analyses. In general, degree of coarticulation, as indexed by locus equation slope, was reduced for post-vocalic (VC) stops relative to pre-vocalic stops (pooled data from initial and medial positions), but significant differences were observed as a function of stop consonant. All stops showed significantly reduced R² values and increased standard errors of estimate for VC relative to CV productions. Separability of stop place categories in a higher-order slope × y-intercept acoustic space also diminished for VC vs CV stop productions. The degradation of classic locus equation form (high correlation and linearity) for VC relative to CV productions was attributed to greater articulatory precision in the production of pre-vocalic compared to post-vocalic stops. This greater articulatory precision was interpreted as reflecting a greater need to normalize vowel context-induced variability of the F2 transition for syllable-onset relative to final stops. The decline in acoustic lawfulness of syllable-final stops is discussed in terms of coarticulatory interactions and expected perceptual correlates.
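A locus equation is simply a linear regression of F2 onset on F2 vowel; the slope indexes coarticulation and the R² indexes "lawfulness." The sketch below uses made-up frequencies, not the study's measurements:

```python
import numpy as np

# Hypothetical F2 measurements (Hz) for one stop category across vowel
# contexts.  Slope and intercept are illustrative values only.
f2_vowel = np.array([800., 1100., 1400., 1700., 2000., 2300.])
slope_true, intercept_true = 0.7, 500.0
rng = np.random.default_rng(1)
f2_onset = slope_true * f2_vowel + intercept_true + rng.normal(0, 20, f2_vowel.size)

# Least-squares locus equation: F2onset = slope * F2vowel + intercept.
# Slope near 1 = maximal coarticulation; near 0 = a fixed locus.
slope, intercept = np.polyfit(f2_vowel, f2_onset, 1)
r2 = np.corrcoef(f2_vowel, f2_onset)[0, 1] ** 2
```

On this reading, the reduced R² and larger standard errors for VC productions correspond to points scattering farther from the fitted line, and category separation is assessed by plotting each stop's (slope, intercept) pair.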
ISSN:0001-4966
DOI:10.1121/1.418567
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP

48.
Concurrent vowel identification. I. Effects of relative amplitude and F0 difference
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2839–2847
Alain de Cheveigné,
Hideki Kawahara,
Minoru Tsuzaki,
Kiyoaki Aikawa
Abstract:
Subjects identified concurrent synthetic vowel pairs that differed in relative amplitude and fundamental frequency (F0). Subjects were allowed to report one or two vowels for each stimulus, rather than forced to report two vowels as was the case in previously reported experiments of the same type. At all relative amplitudes, identification was better at a fundamental frequency difference (ΔF0) of 6% than at 0%, but the effect was larger when the target vowel amplitude was below that of the competing vowel (−10 or −20 dB). The existence of a ΔF0 effect when the target is weak relative to the competing vowel is interpreted as evidence that segregation occurs according to a mechanism of cancellation based on the harmonic structure of the competing vowel. Enhancement of the target based on its own harmonic structure is unlikely, given the difficulty of estimating the fundamental frequency of a weak target. Details of the pattern of identification as a function of amplitude and vowel pair were found to be incompatible with a current model of vowel segregation.
ISSN:0001-4966
DOI:10.1121/1.418517
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP

49.
Concurrent vowel identification. II. Effects of phase, harmonicity, and task
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2848–2856
Alain de Cheveigné,
Stephen McAdams,
Cécile M. H. Marin
Abstract:
Subjects identified concurrent synthetic vowel pairs in four experiments. The first experiment found that improvements in vowel identification with a difference in fundamental frequency (ΔF0) do not depend on component phase. The second investigated more precisely whether phase patterns resulting from ongoing phase shifts in inharmonic stimuli can by themselves produce effects similar to those attributed to differences in harmonic state of component vowels. No such effects were found. The third experiment found that identification was better for harmonic than for inharmonic backgrounds, and that it did not depend on target harmonicity. The first three experiments employed a task in which subjects were free to report one or two vowels for each stimulus. The fourth experiment reproduced several conditions with a more classic task in which subjects had to report two vowels. Compared to the classic task, the new task gave larger effects and provided an additional measure of segregation: the number of vowels reported per stimulus. Overall, results were consistent with the hypothesis that the auditory system segregates targets by a mechanism of harmonic cancellation of competing vowels. They did not support the hypothesis of harmonic enhancement of targets. The lack of a phase effect places strong constraints on models that exploit pitch period asynchrony (PPA) or beats.
ISSN:0001-4966
DOI:10.1121/1.419476
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP

50.
Concurrent vowel identification. III. A neural model of harmonic interference cancellation
The Journal of the Acoustical Society of America,
Volume 101,
Issue 5,
1997,
Pages 2857–2865
Alain de Cheveigné
Abstract:
This paper presents a “neural cancellation filter” capable of segregating weak targets from competing harmonic backgrounds, and a model of concurrent vowel segregation based upon it. The elementary cancellation filter comprises a delay line and an inhibitory synapse. Filters within each peripheral channel are tuned to the period of the competing sound to suppress its correlates within the neural discharge pattern. In combination with a pattern matching model based on autocorrelation functions summed over channels, the cancellation filter forms a model of concurrent vowel identification. The model predicts the number of vowels reported for each stimulus (when subjects are allowed to report one or two) and identification rates. It belongs to the class of “harmonic cancellation” models that are supported by experimental evidence that vowel identification is better when competing sounds are harmonic than inharmonic. Two alternative schemes using the same filter are also considered. One derives a “place” representation from the magnitude of the filter output. The other uses the ratio of filter input/output to select channels.
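At the signal level, the elementary cancellation filter (delay line plus inhibitory synapse) acts like time-domain comb subtraction tuned to the competitor's period. The following caricature operates on waveforms rather than neural discharge patterns, and its frequencies and sampling rate are assumed for illustration:

```python
import numpy as np

def cancellation_filter(x, period_samples):
    """Comb cancellation y[t] = x[t] - x[t - T].

    A crude waveform-level stand-in for the delay-line-plus-inhibition
    idea: any component periodic at T (the competing sound's period) is
    suppressed, while components at other periods pass through.
    """
    y = np.copy(x)
    y[period_samples:] -= x[:-period_samples]
    return y

fs = 16000
t = np.arange(fs) / fs
background = np.sin(2 * np.pi * 100 * t)     # harmonic competitor, F0 = 100 Hz
target = 0.1 * np.sin(2 * np.pi * 440 * t)   # weak target, not a harmonic of 100 Hz
mix = background + target

out = cancellation_filter(mix, fs // 100)    # delay = one competitor period
```

With the delay matched to the competitor's period, the 100-Hz component cancels exactly (after the first period) while the weak 440-Hz target survives, which is the sense in which cancellation lets a weak target be segregated from a strong harmonic background.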
ISSN:0001-4966
DOI:10.1121/1.419480
Publisher: Acoustical Society of America
Year: 1997
Data source: AIP