research-article

A Novel Assistive Glove to Convert Arabic Sign Language into Speech

Published: 21 February 2023

Abstract

People with speech disorders often communicate through special gestures and sign language. However, people around them may not understand the meaning of those gestures. The research described in this article aims to provide an assistive device that helps such people communicate with others by translating their gestures into spoken words that others can understand. The proposed device is an electronic glove worn on the hand. It employs an MPU6050 accelerometer/gyroscope with 6 degrees of freedom to continuously monitor hand orientation and movement, plus one potentiometer per finger to monitor changes in finger posture. The signals from the MPU6050 and the potentiometers are routed to an Arduino board, where they are processed to determine the meaning of each gesture, which is then voiced using audio streams stored on an SD memory card. The audio output drives a speaker, allowing the listener to understand the meaning of each gesture. We built a database with the help of 10 deaf people who cannot speak: we asked them to wear the glove while performing a set of 40 Arabic sign language words and recorded the resulting data stream from the glove. That data was then used to train seven different learning algorithms. The results showed that the Decision Tree algorithm achieved the highest accuracy, at 98%. A usability study was then conducted to determine the usefulness of the assistive device in real time.
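The classification step the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature layout (5 potentiometer readings, one per finger, plus 6 MPU6050 channels: 3-axis accelerometer and 3-axis gyroscope), the synthetic stand-in data, and all variable names are assumptions; the real system was trained on glove recordings from 10 signers performing 40 Arabic sign language words.

```python
# Hypothetical sketch: decision-tree classification of glove sensor
# vectors, in the spirit of the pipeline described in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
N_GESTURES = 40          # the 40 Arabic sign language words
SAMPLES_PER_GESTURE = 25
N_FEATURES = 5 + 6       # 5 finger potentiometers + 6 IMU channels (assumed layout)

# Synthetic stand-in for the recorded glove data: each gesture is a
# tight cluster of sensor readings around its own mean hand posture.
X = np.vstack([
    rng.normal(loc=rng.uniform(-1, 1, N_FEATURES), scale=0.05,
               size=(SAMPLES_PER_GESTURE, N_FEATURES))
    for _ in range(N_GESTURES)
])
y = np.repeat(np.arange(N_GESTURES), SAMPLES_PER_GESTURE)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

On data this cleanly clustered a decision tree classifies nearly perfectly; the 98% figure reported in the article comes from the authors' real recordings, where sensor noise and signer variability make the task harder.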



• Published in

  ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 22, Issue 2 (February 2023), 624 pages
  ISSN: 2375-4699
  EISSN: 2375-4702
  DOI: 10.1145/3572719

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

Publication History

• Received: 30 March 2022
• Accepted: 15 June 2022
• Online AM: 24 June 2022
• Published: 21 February 2023

Published in TALLIP Volume 22, Issue 2
