DOI: 10.1145/3236112.3236141 (MobileHCI Conference Proceedings)
Research Article

Aidme: interactive non-visual smartphone tutorials

ABSTRACT

The constant barrage of updates and novel applications creates a ceaseless cycle of new layouts and interaction methods that users must adapt to. One way to address these challenges is through in-context interactive tutorials. Most applications provide onboarding tutorials that rely on visual metaphors to guide users through their core features. However, these tutorials are limited in scope and are often inaccessible to blind people. In this paper, we present AidMe, a system-wide tool for authoring and playing through non-visual interactive tutorials. Tutorials are created via user demonstration and narration. In a user study with 11 blind participants using AidMe, we identified issues with instruction delivery and user guidance, providing insights into the development of accessible, interactive, non-visual tutorials.
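The authoring-by-demonstration approach described above can be illustrated with a minimal sketch. All names here (`Step`, `Tutorial`, `play`, the element identifiers) are hypothetical illustrations, not AidMe's actual API: a tutorial is modeled as a recorded sequence of (target element, narration) steps, and playback speaks each narration and only advances once the user activates the correct element.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    target: str     # identifier of the UI element the user must activate
    narration: str  # author's recorded spoken instruction for this step

@dataclass
class Tutorial:
    title: str
    steps: list = field(default_factory=list)

    def record(self, target: str, narration: str) -> None:
        """Authoring: append one demonstrated step with its narration."""
        self.steps.append(Step(target, narration))

def play(tutorial: Tutorial, perform) -> int:
    """Playthrough: deliver each narration (e.g. via the screen reader),
    wait for the user's action, and advance only on the correct element."""
    for step in tutorial.steps:
        while True:
            activated = perform(step.narration)  # element the user activated
            if activated == step.target:
                break  # correct action, move to the next step
    return len(tutorial.steps)

# Usage: a two-step tutorial replayed against a scripted user who makes
# one mistake before completing the second step.
tut = Tutorial("Send a message")
tut.record("compose_button", "Double-tap the compose button.")
tut.record("send_button", "Double-tap send.")

script = iter(["compose_button", "wrong_item", "send_button"])
completed = play(tut, lambda narration: next(script))
print(completed)  # 2
```

In a real system-wide implementation, `perform` would be backed by the platform's accessibility service rather than a scripted callback; the sketch only shows the record-and-verify loop.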

