ABSTRACT
In the midst of the coronavirus disease 2019 pandemic, the use of online meetings is rapidly increasing. Deaf or hard of hearing (DHH) people participating in an online meeting often face difficulties in capturing the affective states of other speakers. Recent studies have shown the effectiveness of emoji-based representations of spoken text for conveying such affective states. Nevertheless, in voice-only online meetings, it remains unclear how emoji-labeled captions can help DHH people understand speakers' feelings when facial expressions cannot be perceived. We therefore conducted a preliminary experiment to understand the effect of emoji-based text representation during voice-only online meetings, leveraging an emoji-based captioning system. Our preliminary results demonstrate the need for a more advanced system that helps DHH people participate in voice-only online meetings more meaningfully.
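As a rough illustration of the idea (a minimal sketch, not the authors' implementation), an emoji-based captioning pipeline might combine an ASR transcript with an affect label inferred from the speaker's voice, appending a matching emoji to each caption line. The emotion categories and mapping below are hypothetical placeholders; in practice the label would come from a voice-emotion service.

```python
# Hypothetical sketch: annotate an ASR caption with an emoji that
# reflects the speaker's detected affect. The emotion label is assumed
# to be produced elsewhere (e.g. by a voice-emotion API) and is passed
# in directly here.

EMOJI_BY_EMOTION = {
    "joy": "\U0001F604",     # grinning face with smiling eyes
    "anger": "\U0001F620",   # angry face
    "sorrow": "\U0001F622",  # crying face
    "calm": "\U0001F610",    # neutral face
}


def emoji_caption(text: str, emotion: str) -> str:
    """Return the caption text with an affect emoji appended.

    Unknown emotion labels leave the caption unchanged.
    """
    emoji = EMOJI_BY_EMOTION.get(emotion, "")
    return f"{text} {emoji}".strip()
```

For example, `emoji_caption("That sounds great", "joy")` would yield the caption followed by a smiling-face emoji, while an unrecognized label leaves the text as-is.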


Yoichi Ochiai