Gap detection as a function of stimulus loudness for listeners with and without hearing loss

Nelson, Peggy B

For some time, psychoacousticians have been investigating the nature of temporal resolution in listeners with hearing loss, looking for relationships between poor temporal resolution and speech perception. Previous studies of temporal resolution using gap detection tasks have yielded results that vary across individuals and depend on stimulus bandwidth and stimulus level. Numerous investigators have found that some listeners with hearing loss had poorer temporal resolution than listeners with normal hearing, but that this could be explained simply by the elevation of the listeners’ thresholds or by configurations of hearing loss that rendered high-frequency portions of the stimuli inaudible (e.g., Buus & Florentine, 1985; Florentine & Buus, 1984; Glasberg, Moore, & Bacon, 1987). Other listeners with hearing loss apparently have poorer temporal resolution even when poorer thresholds and low sensation levels are accounted for (e.g., Buus & Florentine, 1985; Fitzgibbons & Wightman, 1982; Grose, Eddins, & Hall, 1989; Irwin, Hinchcliff, & Kemp, 1981; Irwin & Purdy, 1982). The cause of these individual differences in temporal resolution is unclear.

The role of loudness recruitment and the influence of stimulus level must be considered when evaluating temporal resolution in listeners with hearing loss. Fitzgibbons and Gordon-Salant (1987) concluded that all listeners with hearing loss required signal sensation levels of 25-30 dB for maximum gap detection ability, regardless of degree of hearing loss and presumed resulting recruitment. Although increasing overall stimulus level to ensure stimulus audibility may cause the temporal resolution abilities of some listeners with hearing loss to approach those of listeners with normal hearing, listeners with loudness recruitment may require stimulus levels that approach loudness discomfort in order to attain maximum performance. Such listeners are unlikely to select loudness levels approaching discomfort in typical daily environments. Laboratory studies showing near-normal performance at these high levels, then, may not reflect performance in more typical listening situations.

No studies to date have equated listeners with and without hearing loss in terms of stimulus loudness to determine differences between groups’ temporal resolution abilities at both comfortable and loud stimulus levels. Shannon (1989) evaluated gap detection abilities as a function of loudness perception in users of cochlear implants. He found that gap detection abilities improved as stimulus loudness increased from soft to comfortable, but no further improvement was noted in these listeners at loudness levels rated higher than comfortable.

The intent of the current study was to further describe gap detection performance in listeners with hearing loss as stimulus audibility and loudness are increased. Loudness judgments of the stimuli allow comparison of listeners with and without hearing loss at similar sound pressure levels, sensation levels, and loudness categories. A high-frequency narrow-band noise stimulus, presented at a variety of levels above listeners’ thresholds, was used. Results were analyzed in terms of the overall sound pressure level (SPL), the sensation level (SL), and the perceived loudness of the signal to determine the relationship between gap detection performance and intensity and/or loudness.



Method

Eight listeners with normal hearing and 8 listeners with hearing loss of presumed cochlear origin detected silent gaps in a high-frequency narrow-band noise over a wide range of fixed intensities above their thresholds, and they made perceptual judgments of the loudness of the signal. The listeners with normal hearing ranged in age from 24 to 50 years (mean 31 years); the listeners with hearing loss ranged from 18 to 56 years (mean 45 years). Listeners with normal hearing had thresholds of 0 to 10 dB HL; listeners with sensorineural hearing loss had thresholds in the mild to moderate (25-60 dB HL) range. Full audiological evaluations and case histories were obtained for all listeners with hearing loss. No signs of conductive loss, retrocochlear pathology, or fluctuating hearing loss were noted. All listeners had hearing loss of unknown origin, except for listener #7, who had a known noise-induced hearing loss. Thresholds and word recognition scores for the listeners with hearing loss are given in Table 1.


Stimuli

A digitally generated pseudorandom filtered noise of 120 ms duration was used to mark the gaps. For each interval, a 120-ms section was randomly selected from 650 ms of stored, flat-spectrum noise. Ithaco filters with a low-frequency cutoff of 2500 Hz and a high-frequency cutoff of 3150 Hz filtered the noises at attenuation rates of 24 dB/octave. For each interval, the 120-ms section of noise was played and then played again in reverse order, to minimize spectral discontinuities in the region of the gap. For the signal interval, a brief silent section was added between the forward and reverse sections. All signal and standard intervals began and ended with 5-ms cos² ramps, and the temporal center was bounded by 1-ms cos² ramps. All three noise stimuli in each trial were ramped in this way at temporal onset, center, and offset; the difference between the standard and the test stimuli was that the test stimuli contained zero-voltage points between the center ramps. The duration of the silent interval was defined by the zero-voltage points on the envelope. Standard ramped stimuli with no zero-voltage points were not perceptually different from noise stimuli containing no ramping.
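The construction of the standard and test intervals described above can be sketched as follows. This is a minimal illustration, not the authors' code: the 44.1-kHz sampling rate is an assumption (none is stated in the article), the 2500-3150 Hz bandpass filtering and level calibration stages are omitted, and all function names are hypothetical.

```python
import math
import random

FS = 44100  # assumed sampling rate; not specified in the original

def cos2_ramp(n):
    """Raised-cosine (cos^2) gain ramp rising from 0 to 1 over n samples."""
    return [math.sin(math.pi / 2 * i / (n - 1)) ** 2 for i in range(n)]

def make_interval(section, gap_ms, fs=FS):
    """Build one observation interval: a 120-ms noise section played
    forward, then the same section reversed, with an optional silent gap
    (zero-voltage samples) inserted between the halves.  5-ms cos^2 ramps
    bound onset and offset; 1-ms cos^2 ramps bound the temporal center."""
    edge, mid = int(0.005 * fs), int(0.001 * fs)
    fwd, rev = list(section), list(reversed(section))
    up_edge, up_mid = cos2_ramp(edge), cos2_ramp(mid)
    for i in range(edge):            # outer (onset/offset) ramps
        fwd[i] *= up_edge[i]
        rev[-1 - i] *= up_edge[i]
    for i in range(mid):             # center ramps around the (possible) gap
        fwd[-1 - i] *= up_mid[i]
        rev[i] *= up_mid[i]
    gap = [0.0] * int(gap_ms / 1000.0 * fs)
    return fwd + gap + rev

random.seed(0)
section = [random.gauss(0.0, 1.0) for _ in range(int(0.120 * FS))]
signal = make_interval(section, gap_ms=4.0)    # test interval with a 4-ms gap
standard = make_interval(section, gap_ms=0.0)  # standard: ramped but gapless
```

Note that the standard interval receives exactly the same ramping as the test interval, so the only acoustic difference is the zero-voltage gap itself.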

Experiments and Procedures

An adaptive three-interval forced-choice (3IFC) tracking procedure, tracking the 79.4% correct point, was used with feedback to determine each listener’s absolute threshold for the noise marker and minimum detectable gap (MDG). A run was terminated once the standard deviation of the last six reversals obtained at the smallest step size fell below that step size; the threshold for the run was then computed as the mean of those six reversals. If 100 trials were reached before this stopping rule was satisfied, the run was discarded.
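A track converging on 79.4% correct corresponds to a three-down/one-up transformed up-down rule (Levitt, 1971). The article does not spell out the bookkeeping, so the sketch below is one plausible reading of the procedure, with hypothetical names throughout; `pstdev` (population SD) is an assumption about how the stopping criterion was computed.

```python
from statistics import mean, pstdev

def run_staircase(respond, start, steps, max_trials=100):
    """Three-down/one-up adaptive track (converges on 79.4% correct).
    `respond(level)` returns True for a correct 3IFC trial.  `steps` is
    the step-size schedule, advanced one entry per reversal.  The run
    stops when the SD of the last six reversals collected at the smallest
    step size falls below that step size; threshold is the mean of those
    six reversals.  A run reaching `max_trials` is discarded (None)."""
    level, n_correct, direction, step_i = start, 0, 0, 0
    small_revs = []  # reversal levels obtained at the smallest step size
    for _ in range(max_trials):
        if respond(level):
            n_correct += 1
            if n_correct < 3:
                continue              # level changes only after 3 correct
            n_correct, new_dir = 0, -1
        else:
            n_correct, new_dir = 0, +1
        if direction == -new_dir and direction != 0:  # direction reversed
            step_i = min(step_i + 1, len(steps) - 1)
            if step_i == len(steps) - 1:
                small_revs.append(level)
        direction = new_dir
        level += direction * steps[step_i]
        if len(small_revs) >= 6 and pstdev(small_revs[-6:]) < steps[-1]:
            return mean(small_revs[-6:])
    return None  # stopping rule never met within 100 trials: run discarded
```

With a deterministic simulated listener (correct whenever the level exceeds some true threshold), the track settles into a tight oscillation around that threshold before the stopping rule fires.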

Absolute Threshold Measures

Absolute threshold was measured at the beginning of every session, and the obtained threshold served as the reference for presentation levels in that day’s gap detection and loudness perception tasks. The thresholds given in the Results section are each listener’s average across all sessions. The initial step size was 8 dB, changing to 4 dB after one reversal and to 2 dB after two reversals, where it remained for the duration of the run. If the first two threshold estimates were within 2 dB of each other, they were averaged to give the threshold for the day; if not, a third run was performed and the two lowest estimates were averaged.

Gap Detection Threshold Measures

To determine a listener’s MDG, the same tracking procedure was used. The initial gap duration was 50 ms; it was reduced to 20 ms after the first reversal, to 10 ms after the second, to 4 ms after the third, and to 2 ms after the fourth. Three training runs were completed, followed by two experimental runs on two separate days. If the two gap detection thresholds were within 2 ms of each other, they were averaged; if not, a third run was performed and the best two were averaged. Listeners with normal hearing were tested at four sensation levels: 16, 32, 48, and 64 dB. Listeners with hearing loss were tested at six sensation levels: 8, 16, 24, 32, 40, and 48 dB.
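The run-combination rule (average the first two estimates if they agree within tolerance, otherwise take a third run and average the best two) can be written as one small helper; reading "best" as smallest is an assumption, and the function name is hypothetical.

```python
def combine_runs(runs, tolerance):
    """Combine adaptive-run estimates per the rule above: if the first two
    runs agree within `tolerance` (2 ms for gap thresholds, 2 dB for
    absolute thresholds), average them; otherwise a third run is taken and
    the best (here: lowest) two of the three are averaged."""
    first, second = runs[0], runs[1]
    if abs(first - second) <= tolerance:
        return (first + second) / 2.0
    best_two = sorted(runs[:3])[:2]   # assumes the third run is in `runs`
    return sum(best_two) / 2.0
```

For example, runs of 4.0 and 5.0 ms agree within 2 ms and average to 4.5 ms, whereas runs of 4.0 and 9.0 ms would trigger a third run.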

Loudness Perception

Each listener was asked to rate the loudness of the signal at each presentation level using a method of categorical magnitude estimation, in which listeners provide labels to match sound intensities. A six-step rating scale (“too soft,” “soft,” “OK,” “loud,” “very loud,” and “too loud”), devised by Allen, Hall, and Jeng (1990), was used to identify each sound’s loudness. A rating was considered reliable if the listener identified the noise as having the same loudness in at least two of three presentations per run. Listeners performed at least two runs on separate days to ensure consistency of the rating procedure.
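The two-of-three reliability rule for the categorical ratings can be sketched as a short helper (hypothetical names; a sketch of the rule as stated, not the authors' scoring code):

```python
from collections import Counter

# six-step categorical loudness scale (Allen, Hall, & Jeng, 1990)
CATEGORIES = ["too soft", "soft", "OK", "loud", "very loud", "too loud"]

def reliable_rating(ratings_per_run):
    """Apply the reliability rule above: a level's rating is accepted only
    if the same category is chosen in at least two of the three
    presentations within a run; returns that category, or None."""
    label, count = Counter(ratings_per_run).most_common(1)[0]
    return label if count >= 2 else None
```

A run in which a level is rated "soft", "soft", "OK" thus yields "soft", while three different labels yield no reliable rating.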


Results

Two-way repeated-measures analyses of covariance (MANCOVA) were computed on log-transformed data points to determine the effects of hearing loss, intensity/loudness, and age on MDG. The age of the listener was not a significant covariate in any of the analyses described below [t(14) = 1.49, p = 0.16].

Results in dB SL

MDGs for listeners with normal hearing were similar to previous reports in the literature, showing that gap thresholds improved up to approximately 30 dB SL, above which no additional improvement in threshold was noted. The mean MDG at the lowest SL (16 dB) was 5.6 ms and at the highest SL (64 dB) was 3.0 ms. For all SLs tested, the standard deviation was less than 1 ms. Figure 1 shows the area bounded by 1 standard deviation for the listeners with normal hearing at selected SLs (16, 32, and 48 dB). These selected SLs were those at which all listeners, with and without hearing loss, were tested. The current results for listeners with normal hearing are consistent with the performance of listeners with normal hearing in other gap detection studies (e.g., Buus & Florentine, 1985; Irwin et al., 1981).

Listeners with hearing loss demonstrated MDGs that generally improved with increased SL. The mean MDG at the lowest SL (8 dB) was 12.3 ms and at the highest SL (48 dB) was 3.9 ms. At lower SLs the standard deviation was 3.5 ms, and at higher SLs was less than 1.0 ms. Individual MDG functions for the listeners with hearing loss are shown in Figure 1 for 16, 32, and 48 dB SL. High-frequency hearing thresholds did not correlate significantly with MDGs at the highest SLs (r = -0.11, p = 0.79). Although some listeners’ hearing thresholds were 10-15 dB poorer at 4000 Hz than at 2000 Hz, possibly rendering parts of the noise stimulus inaudible at low SLs, these listeners were not the poorest gap detectors at those levels.

Comparisons were made between the two listener groups at 16, 32, and 48 dB SL, the presentation levels common to both groups. A significant effect of hearing loss was found [F(1, 14) = 20.70].

Results in dB SPL

The same data were analyzed in terms of SPL. Because individual thresholds varied by a few decibels and presentation levels were set at fixed SLs, not all listeners were tested at exactly the same SPLs. Ranges of 5 dB were therefore used to plot and analyze the data versus SPL (e.g., 50-54 dB SPL, 55-59 dB SPL, etc.). These data are shown in Figure 2 for SPL ranges common to both listener groups: 65-69, 80-85, and 93-99 dB SPL. Listeners with normal hearing showed MDGs that decreased as intensity increased, with the MDG ranging from 3.0 to 3.3 ms at 80 dB SPL and above. The standard deviation was less than 1 ms at 60 dB SPL and above. These results are consistent with those of Irwin et al. (1981), whose listeners had MDGs of 3.2 ms at 80 dB SPL with a standard deviation of 0.4 ms. MDG performance for listeners with hearing loss improved as intensity increased. At levels of 100-105 dB SPL, the mean MDG for listeners with hearing loss was 3.6 ms, with a standard deviation of less than 1 ms.
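The 5-dB grouping used for the SPL analysis amounts to a simple binning function; the sketch below assumes uniform 5-dB bins aligned as in the examples above (50-54, 55-59, …), and the function name is hypothetical.

```python
def spl_bin(spl, width=5):
    """Assign a presentation level (dB SPL) to its 5-dB analysis range,
    e.g. 50-54 dB SPL or 55-59 dB SPL, as in the grouping above.
    Returns the (low, high) edges of the inclusive range."""
    lo = (int(spl) // width) * width
    return (lo, lo + width - 1)
```

So a listener tested at 67 dB SPL falls in the 65-69 dB SPL range, and one tested at 82 dB SPL falls in the 80-84 dB SPL range.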

The MANCOVA revealed a highly significant effect of hearing loss [F(1, 10) = 51.97].

To describe the effects of SPL and the resulting interaction, three ranges of levels were compared between the groups: 65-69, 80-85, and 93-99 dB SPL. These ranges were chosen in order to include intensities at which at least six listeners from both groups were tested. The significant interaction between hearing loss and SPL suggested that as the SPL increased the difference between the groups narrowed. The MDGs of both groups were more similar at higher SPLs than at lower SPLs, indicating that SPL did not affect both groups equally (see Figure 2). The performance of listeners with hearing loss improved with each increase in intensity, whereas the performance of listeners with normal hearing changed very little.

Results in Loudness Judgment Units

The best MDGs for listeners with normal hearing were obtained at loudness levels described by the listeners as “soft” or louder. Further increases in level resulted in no further improvement in gap detection thresholds.

For listeners with hearing loss, MDG performance improved markedly from levels described as “too soft” to “soft.” The best MDG occurred at levels described as “loud” or higher for listeners with hearing loss. Comparison of SPL values for the assigned loudness categories indicated that loudness growth functions were very similar for all listeners with hearing loss. An approximate 8-dB increase in intensity resulted in a change in loudness rating category for the listeners with hearing loss. In contrast, listeners with normal hearing indicated a change in loudness rating after an increase of approximately 10 dB throughout most of the range of levels tested. Thus, loudness recruitment was noted for all listeners with hearing loss.

Figure 3 illustrates MDG performance of individual listeners with hearing loss for the different loudness categories. The range of performance (+1 standard deviation from the mean) for the listeners with normal hearing is shown for comparison. Both groups had relatively large MDGs at levels judged as “too soft,” with MDGs decreasing as loudness increased. A simple one-way analysis of variance was performed on the log-transformed data points judged as “comfortable” by both groups. A significant between-groups effect was noted [F(1, 11) = 18.65, p = 0.0012]. At the “comfortable” loudness rating the mean MDG for listeners with hearing loss was 5.04 ms; for listeners with normal hearing it was 3.3 ms. The same analysis performed at levels judged as “very/too loud” indicated no significant between-group effect [F(1, 11) = 1.34, p = 0.31]. It is notable that listeners with normal hearing reached their maximum performance at levels judged as “soft,” whereas listeners with hearing loss showed performance that was best at levels judged as “loud” or higher.
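The between-groups comparison at a single loudness category reduces to a one-way ANOVA on log-transformed MDGs. The sketch below computes the two-group F statistic from first principles (hypothetical names; a minimal illustration of the analysis, not the authors' statistical software, and it omits the repeated-measures and covariate structure of the full MANCOVA).

```python
from math import log

def oneway_f_on_logs(group_a, group_b):
    """One-way ANOVA F statistic computed on log-transformed values, as in
    the between-group comparisons at a fixed loudness category above."""
    a = [log(x) for x in group_a]
    b = [log(x) for x in group_b]
    grand = sum(a + b) / (len(a) + len(b))
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    # between-groups and within-groups sums of squares
    ss_between = len(a) * (mean_a - grand) ** 2 + len(b) * (mean_b - grand) ** 2
    ss_within = sum((x - mean_a) ** 2 for x in a) + sum((x - mean_b) ** 2 for x in b)
    df_between, df_within = 1, len(a) + len(b) - 2
    return (ss_between / df_between) / (ss_within / df_within)
```

The log transform is the step that matters here: gap thresholds are positively skewed, and the analysis above was run on log-transformed data points for that reason.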

Individual Differences

As in previous investigations, individual differences in performance were noted among listeners with hearing loss. For example, the performance of most of the listeners with hearing loss (#2, #3, #4, #5, #8) reached that of listeners with normal hearing at the higher SLs (Figure 1) and loudness categories (Figure 3). One of these listeners (#3) demonstrated MDGs at near-normal values at all levels; one additional listener (#6) showed normal MDGs at comfortable loudness (32 dB SL) only; others (#1, #7) had MDGs greater than normal at all levels. No significant correlates (i.e., age of listener or high-frequency hearing threshold) were found that explained these individual differences.


Discussion

The purpose of this study was to determine the effects of loudness and intensity on the gap detection abilities of listeners with hearing loss. The results indicate that listeners with and without hearing loss differ in their ability to detect gaps, but that for 5 of the 8 listeners with hearing loss, MDGs at high intensities were within 1 ms of those obtained from listeners with normal hearing. These findings, taken together with those of previous studies such as Buus and Florentine (1985) and Turner, Souza, and Forget (1995), suggest that most listeners with hearing loss have temporal processing abilities that approach normal at high intensities. As in previous investigations, there were individual differences among these 8 listeners that were not predicted by age or specific hearing thresholds. These individual differences remain unexplained.

Of particular interest here were the comparisons between listeners with and without hearing loss in terms of loudness categories. Group results were obviously different at comfortable loudness levels, but appeared quite similar at louder-than-comfortable levels. The performance of listeners with hearing loss continued to improve at levels perceived as louder than “comfortable,” whereas the performance of listeners with normal hearing reached an asymptote at levels labeled “soft.” This finding has important implications for comparisons of listeners with and without hearing loss at selected stimulus levels only.

The finding that these listeners with hearing loss do not reach best performance below a level they judge as “loud” may have relevance for understanding their potential for using temporal cues in typical listening situations. When listeners with hearing loss attend to signals at self-selected comfortable levels, their functional temporal resolution abilities are not at their maximum; they are poorer than normal. This finding could have a bearing on the findings of Turner et al. (1995), who suggested that sensorineural hearing loss up to 70 dB HL does not impair temporal resolution abilities in listeners, except when conditions result in reduced stimulus audibility. They investigated listeners’ ability to identify speech stimuli based on the temporal waveform envelope of the speech alone. Differences in identification ability between listeners with and without hearing loss at high stimulus levels were minimal. They concluded that the temporal information in the speech waveform is equally accessible to listeners with hearing loss and those with normal hearing. It is notable that the default stimulus intensity (100 dB SPL) was increased for some listeners with hearing loss if their performance was poorer than normal at the default level. For some of those listeners the stimulus level was increased until it approached loudness discomfort levels (see p. 2570). It is inferred, then, that for at least some listeners in that study (as for some listeners in the current study) temporal resolution ability at comfortable loudness levels was poorer than normal. Thus their functional temporal resolution abilities may be less than optimal when listening to signals amplified to comfortable loudness levels.


Conclusions

The results from this study confirm that temporal resolution ability in listeners with hearing loss is strongly affected by the intensity of the noise stimulus.

As in previous investigations, a few listeners with hearing loss did not demonstrate normal gap detection performance at any intensity. At the highest intensities tested, however, most listeners’ performance was within 1 ms of that of listeners with normal hearing. The current findings extend those of previous studies by comparing listeners with and without hearing loss at equal loudness. In particular, for listeners with hearing loss to reach their best performance, the signal loudness had to be judged “loud” or higher, whereas listeners with normal hearing reached their best performance at a loudness judged “soft.” At comfortable loudness levels, those targeted by many current hearing aid fitting schemes (e.g., Pascoe, 1994), listeners with hearing loss had gap detection thresholds significantly poorer than normal. Therefore, they would be expected to have functionally poorer-than-normal temporal resolution when listening to signals at comfortable levels in typical listening situations.


Acknowledgments

The authors appreciate the helpful suggestions of Pete Fitzgibbons on an earlier version of this manuscript. In addition, the constructive comments of Mary Florentine, Robert Shannon, Sid Bacon, and two anonymous reviewers contributed to this article’s final form and are appreciated.

Copyright American Speech-Language-Hearing Association Dec 1997