Research article, ASSETS Conference Proceedings. DOI: 10.1145/3308561.3353786

The Design Space of Nonvisual Word Completion

ABSTRACT

Word completion interfaces are ubiquitous in mobile virtual keyboards; however, there is no prior research on how to design these interfaces for screen reader users. To address this gap, we propose a design space for the nonvisual representation of word completions. The design space covers seven categories, aiming to identify challenges and opportunities for interaction design in an unexplored research topic. It is intended to guide the design of novel interaction techniques, serving as a framework for researchers and practitioners working on nonvisual word completion. To demonstrate its potential, we engaged blind users in an exploration of the design space to create their own bespoke word completion solutions. Through this study we found that users created alternative interfaces that extend current screen readers' capabilities. The resulting interfaces were less conservative than mainstream solutions in notification frequency and cardinality. Customization decisions were based on perceived benefits and costs, and varied depending on multiple factors such as users' perceived prediction accuracy, potential keystroke gains, and situational restrictions.
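To make the abstract's core ideas concrete, the sketch below illustrates (it is not from the paper itself) what a minimal word completion component looks like: ranking candidate completions for a typed prefix, and estimating the keystroke savings that the abstract cites as a factor in users' customization decisions. The frequency dictionary, the ranking rule, and the one-keystroke accept cost are all simplifying assumptions for illustration.

```python
from typing import List

# Hypothetical frequency dictionary (word -> relative frequency);
# a real keyboard would use a language model instead.
FREQ = {"the": 100, "their": 40, "there": 55, "them": 30, "design": 12, "desk": 5}

def complete(prefix: str, k: int = 3) -> List[str]:
    """Return up to k candidate completions for `prefix`, most frequent first.

    The prefix itself is excluded: it is already typed, so it is not a
    completion. `k` corresponds to the "cardinality" of suggestions that
    a nonvisual interface must then present (e.g., via speech or audio cues).
    """
    matches = [w for w in FREQ if w.startswith(prefix) and w != prefix]
    return sorted(matches, key=lambda w: -FREQ[w])[:k]

def keystroke_savings(word: str, prefix: str) -> int:
    """Keystrokes saved by accepting `word` after typing `prefix`,
    assuming the accept gesture itself costs one keystroke."""
    return len(word) - len(prefix) - 1
```

For example, after typing "the", the top candidates are "there", "their", and "them", and accepting "there" saves one keystroke under these assumptions; a nonvisual interface must additionally decide when and how to announce these candidates, which is what the proposed design space characterizes.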

