
Scanning for Digital Content: How Blind and Sighted People Perceive Concurrent Speech

Abstract

The widespread availability of digital media has changed the way people consume information and has driven a marked growth in the consumption of auditory content. Despite this recent popularity among sighted people, the use of auditory feedback to access digital information is not new to visually impaired users. However, its sequential nature undermines both blind and sighted people’s ability to efficiently find relevant information among several potentially useful items. We propose taking advantage of the Cocktail Party Effect: people are able to focus on a single speech source among several conversations, yet still identify relevant content in the background. In contrast to a single sequential speech channel, we therefore hypothesize that people can leverage concurrent speech channels to quickly get the gist of digital information. In this article, we present an experiment with 46 participants (23 blind, 23 sighted) that aims to understand people’s ability to search for relevant content while listening to two, three, or four concurrent speech channels. Our results suggest that both blind and sighted people are able to process concurrent speech in scanning scenarios. In particular, two concurrent sources may be used both to identify and to understand the content of the relevant sentence. Moreover, three sources may be suitable for most people, depending on the intelligibility demands of the task and on user characteristics. In contrast to related work, the use of different voices did not affect the perception of concurrent speech, but was highly preferred by participants. To complement the analysis, we propose a set of scenarios, for both blind and sighted people, that may benefit from the use of concurrent speech sources, moving toward a Design for All paradigm.
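To make the idea of concurrent speech channels concrete, the following is a minimal sketch, not the authors’ implementation: it mixes two mono speech recordings into one stereo stream using constant-power panning, so a listener can attend to one voice while monitoring the other. It assumes Python with numpy and soundfile (neither mentioned in the article), and the file names are hypothetical; the study itself presented the channels through spatialized audio rather than a simple stereo mix.

    # Sketch: render two speech recordings as concurrent channels,
    # one panned fully left and one fully right.
    import numpy as np
    import soundfile as sf

    def pan(mono, azimuth):
        """Constant-power pan: azimuth -1.0 (full left) .. +1.0 (full right)."""
        theta = (azimuth + 1.0) * np.pi / 4.0   # map azimuth to 0..pi/2
        return np.column_stack((mono * np.cos(theta), mono * np.sin(theta)))

    # Hypothetical mono recordings, assumed to share one sample rate.
    a, rate = sf.read("sentence_a.wav")
    b, _ = sf.read("sentence_b.wav")

    # Pad the shorter clip with silence so both play for the same duration.
    n = max(len(a), len(b))
    a = np.pad(a, (0, n - len(a)))
    b = np.pad(b, (0, n - len(b)))

    mix = pan(a, -1.0) + pan(b, +1.0)     # one voice per ear, played together
    mix /= max(1.0, np.abs(mix).max())    # normalize to avoid clipping
    sf.write("concurrent.wav", mix, rate)

Extending the sketch to three or four channels would place additional voices at intermediate azimuths; in practice, head-related transfer functions or a 3D audio engine give stronger spatial separation than plain stereo panning.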

