Languages: English
Types: Article
Recent research suggests that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This idea was explored using a method that ensures interference occurs only through informational masking. Three-formant analogues of sentences were synthesized using a monotonous periodic source (F0 = 140 Hz). Target formants were presented monaurally; the target ear was assigned randomly on each trial. A competitor for F2 (F2C) was presented contralaterally, so listeners had to reject F2C to optimize recognition. In experiment 1, F2Cs with various frequency and amplitude contours were used. F2Cs with time-varying frequency contours were effective competitors, whereas constant-frequency F2Cs had far less impact. Amplitude contour also influenced competitor impact; this effect was additive with that of frequency contour. In experiment 2, F2Cs were created by inverting the F2 frequency contour about its geometric mean and varying its depth of variation over a range from constant to twice the original (0–200%). The impact on intelligibility was least for constant F2Cs and increased with depth up to ~100%, but changed little thereafter. The effect of an extraneous formant depends primarily on its frequency contour; interference increases as the depth of variation is increased until the range exceeds that typical of F2 in natural speech.
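The competitor manipulation in experiment 2 can be illustrated in code. The sketch below, a hypothetical reconstruction rather than the authors' actual synthesis procedure, inverts an F2 frequency track about its geometric mean and scales the depth of variation (0% = constant at the geometric mean, 100% = mirror image of the original, 200% = twice the original excursion). Working on a log-frequency axis is an assumption made here so that the geometric mean is preserved exactly; the function name `make_f2c` is likewise illustrative.

```python
import numpy as np

def make_f2c(f2_hz, depth=1.0):
    """Invert an F2 frequency contour about its geometric mean and
    scale its depth of variation.

    f2_hz : sequence of frame-by-frame F2 frequencies in Hz
    depth : 0.0 -> constant at the geometric mean (0%),
            1.0 -> full mirror image of the original contour (100%),
            2.0 -> twice the original depth of variation (200%)
    """
    log_f2 = np.log(np.asarray(f2_hz, dtype=float))
    log_gm = log_f2.mean()                    # log of the geometric mean
    inverted = log_gm - (log_f2 - log_gm)     # reflect about the geometric mean
    scaled = log_gm + depth * (inverted - log_gm)
    return np.exp(scaled)
```

For example, a two-frame track [1000, 2000] Hz has a geometric mean of ~1414 Hz; at depth 1.0 the contour is reflected to [2000, 1000] Hz, and at depth 0.0 it collapses to a constant at the geometric mean.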