DOI: 10.1145/3517428.3550372
ASSETS Conference Proceedings · Poster

Understanding How People with Visual Impairments Take Selfies: Experiences and Challenges

Published: 22 October 2022

ABSTRACT

Selfies are a pervasive form of communication on social media. While there has been some work on systems that guide people with visual impairments (PVI) in taking photos, nearly all of it has focused on using the camera on the back of the device, and we do not know whether or how PVI take selfies. The aim of our work is to understand (1) PVI's selfie-taking experiences and challenges, (2) what information PVI need when taking selfies, and (3) which modalities (e.g., tactile, verbal, or non-verbal audio) PVI prefer for selfie-taking support. To address this gap, we conducted interviews with 10 PVI. Our findings show that current selfie-taking applications do not provide enough assistance to meet the needs of PVI. We contribute design guidelines that researchers and designers can apply to create accessible selfie-taking applications.
