
A Novel Social Interaction Assistive Device for Arab Deaf People

Published: 25 August 2022

Abstract

Many deaf people worldwide face problems integrating into society and interacting with people who do not understand sign language. This can lead to isolation and difficulty in expressing feelings. The primary goal of this research is to help deaf people communicate, express their feelings, and socialize with others. Toward that end, 40 Arabic words commonly used in social interactions were selected, and a dataset of the hand movements deaf people use to express these words was built. The movements were recorded with a Leap Motion Controller (LMC). The resulting dataset consists of 1,579 instances and 112 features, collected with the help of five deaf people. Feature reduction and oversampling techniques were applied to the dataset, and machine learning algorithms were then used to build a model that classifies any given hand posture or gesture as one of the 40 words. This work compared the performance of nine classification algorithms: Random Forest, Decision Table, Classification via Regression, K-Nearest Neighbor (KNN), Simple Logistic, Input Mapped Classifier, Random Tree, J48, and Bayes Network. The Random Forest model achieved the highest accuracy, over 90%, outperforming the other eight models. Subsequently, a usability study with 10 deaf people was conducted to test the effectiveness of the proposed assistive device. The results suggest that the device facilitates social communication with deaf people and that participants preferred it over other relevant devices.
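To make the classification step concrete, the following is a minimal pure-Python sketch of K-Nearest Neighbor, one of the nine algorithms compared in the paper. The feature vectors and word labels below are hypothetical toy values (real LMC recordings in the paper's dataset have 112 features per instance); this illustrates the technique only, not the paper's implementation.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a query feature vector by majority vote among its
    k nearest training vectors (Euclidean distance)."""
    # Sort all training examples by distance to the query.
    dists = sorted(
        (math.dist(features, query), label) for features, label in train
    )
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy stand-ins for LMC feature vectors (hypothetical values;
# the actual dataset uses 112 features per hand posture).
train = [
    ((0.1, 0.2, 0.9), "hello"),
    ((0.2, 0.1, 0.8), "hello"),
    ((0.9, 0.8, 0.1), "thanks"),
    ((0.8, 0.9, 0.2), "thanks"),
]
print(knn_classify(train, (0.15, 0.15, 0.85)))  # → hello
```

In practice the paper found Random Forest to outperform KNN and the other seven classifiers; the same feature-vector-in, word-label-out interface applies to each of them.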


• Published in

  ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 21, Issue 5 (September 2022), 486 pages
  ISSN: 2375-4699
  EISSN: 2375-4702
  DOI: 10.1145/3533669
  Publisher

  Association for Computing Machinery, New York, NY, United States

  Publication History

  • Received: 1 November 2021
  • Accepted: 1 December 2021
  • Online AM: 22 March 2022
  • Published: 25 August 2022
