G-Gene: A Gene Alignment Method for Online Partial Stroke Gestures Recognition

Published: 19 June 2018

Abstract

The wide availability of touch-sensitive screens has fostered research in gesture recognition. The Machine Learning community has focused mainly on accuracy and robustness to noise, creating classifiers that precisely recognize gestures after they have been completed. The User Interface Engineering community, instead, has developed compositional gesture descriptions that model gestures and their sub-parts; these are suitable for building guidance systems, but they lack robust and accurate recognition support. In this paper, we establish a compromise between accuracy and the information provided by introducing G-Gene, a method for transforming compositional stroke-gesture definitions into profile Hidden Markov Models (HMMs) that provide both good accuracy and information on gesture sub-parts. It supports online recognition without using any global feature, updating this information while receiving the input stream, with an accuracy sufficient for prototyping the interaction. We evaluated the approach in a user interface development task, showing that it requires less time and effort for creating guidance systems than common gesture classification approaches.
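The core idea, recognizing which sub-part of a stroke gesture is currently being performed while the input stream arrives, can be illustrated with a rough sketch. The code below is not the G-Gene implementation: it is a minimal, hypothetical example that quantizes a stroke into eight direction tokens and scores it online against a small left-to-right HMM whose states stand in for gesture sub-parts. All names, probabilities, and parameters are assumptions chosen for illustration.

```python
import math

DIRS = 8  # number of quantized stroke directions

def quantize(points):
    """Turn a list of (x, y) points into direction tokens in [0, DIRS)."""
    tokens = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        tokens.append(int(round(angle / (2 * math.pi / DIRS))) % DIRS)
    return tokens

class LeftToRightHMM:
    """Minimal left-to-right HMM: state i emits the expected direction
    dirs[i] with high probability; transitions either stay in the current
    state or advance to the next one, as in profile-HMM match states."""
    def __init__(self, dirs, p_emit=0.7, p_stay=0.5):
        self.dirs = dirs
        self.p_emit = p_emit                    # prob. of the expected direction
        self.p_other = (1 - p_emit) / (DIRS - 1)
        self.p_stay = p_stay                    # self-loop probability

    def emit(self, state, token):
        return self.p_emit if token == self.dirs[state] else self.p_other

    def forward_online(self, tokens):
        """After each token, yield (log-likelihood, most likely state).
        The state index tells which gesture sub-part is being performed."""
        n = len(self.dirs)
        alpha = [self.emit(0, tokens[0])] + [0.0] * (n - 1)
        yield math.log(alpha[0]), 0
        for tok in tokens[1:]:
            new = [0.0] * n
            for s in range(n):
                stay = alpha[s] * self.p_stay
                move = alpha[s - 1] * (1 - self.p_stay) if s > 0 else 0.0
                new[s] = (stay + move) * self.emit(s, tok)
            alpha = new
            best = max(range(n), key=lambda s: alpha[s])
            yield math.log(sum(alpha)), best

# Usage: an "L"-shaped gesture, a downward segment then a rightward one.
# With screen coordinates (y grows downward), "down" maps to direction 2
# and "right" to direction 0 in this encoding.
points = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
model = LeftToRightHMM(dirs=[2, 0])
for ll, state in model.forward_online(quantize(points)):
    print(f"log-lik={ll:.2f}, sub-part={state}")
```

Reporting the current state after every sample is what distinguishes this online setup from classifiers that answer only once the gesture is complete: a guidance system can use the state index to highlight the sub-part in progress.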

