DG3: Exploiting Gesture Declarative Models for Sample Generation and Online Recognition

Published: 18 June 2020

Abstract

In this paper, we introduce DG3, an end-to-end method for exploiting gesture interaction in user interfaces. The method allows designers to declaratively model stroke gestures and their sub-parts, and to generate the training samples for the recognition algorithm. In addition, we extend the algorithms of the $-family to support online (i.e., real-time) recognition of strokes and their parts, as declared in the models. Finally, we show that the method outperforms existing approaches for online recognition and reaches accuracy comparable to offline methods after only a few gesture segments.
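The $-family recognizers that the abstract builds on share a common template-matching core: resample each stroke to a fixed number of equidistant points, then classify a candidate by its nearest template under a point-to-point distance. The sketch below is a minimal illustration of that core idea only, not the DG3 method itself; it omits the rotation, scale, and translation normalization steps of the full $1 recognizer, and all function names are illustrative.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points ($1-style preprocessing)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    pts = list(points)
    path_len = sum(dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    interval = path_len / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            # Interpolate a new point at the exact interval boundary
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # floating-point rounding may leave us one point short
        out.append(pts[-1])
    return out[:n]

def path_distance(a, b):
    """Mean point-to-point distance between two equally sampled strokes."""
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b)) / len(a)

def recognize(candidate, templates):
    """Classify a candidate stroke by its nearest template (dict: name -> points)."""
    cand = resample(candidate)
    return min(templates,
               key=lambda name: path_distance(cand, resample(templates[name])))
```

Online (real-time) variants, such as the one the paper proposes, must instead score the stroke prefix seen so far against partial templates, which is what makes recognizing declared sub-parts possible before the gesture completes.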

References

  1. Lisa Anthony and Jacob O. Wobbrock. 2010. A Lightweight Multistroke Recognizer for User Interface Prototypes. In Proceedings of Graphics Interface 2010 (GI '10). Canadian Information Processing Society, CAN, 245--252.
  2. Lisa Anthony and Jacob O. Wobbrock. 2012. $N-protractor: a fast and accurate multistroke recognizer. In Proceedings of Graphics Interface 2012. 117--120.
  3. Daniel Ashbrook and Thad Starner. 2010. MAGIC: A Motion Gesture Design Tool. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). Association for Computing Machinery, New York, NY, USA, 2159--2168. https://doi.org/10.1145/1753326.1753653
  4. Olivier Bau and Wendy E. Mackay. 2008. OctoPocus: A Dynamic Guide for Learning Gesture-Based Command Sets. In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology (UIST '08). Association for Computing Machinery, New York, NY, USA, 37--46. https://doi.org/10.1145/1449715.1449724
  5. Baptiste Caramiaux, Nicola Montecchio, Atau Tanaka, and Frédéric Bevilacqua. 2014. Adaptive Gesture Recognition with Variation Estimation for Interactive Systems. ACM Trans. Interact. Intell. Syst., Vol. 4, 4, Article 18 (Dec. 2014), 34 pages. https://doi.org/10.1145/2643204
  6. Alessandro Carcangiu and Lucio Davide Spano. 2018. G-gene: A gene alignment method for online partial stroke gestures recognition. Proceedings of the ACM on Human-Computer Interaction, Vol. 2, EICS (2018), 1--17.
  7. Alessandro Carcangiu, Lucio Davide Spano, Giorgio Fumera, and Fabio Roli. 2017. Gesture modelling and recognition by integrating declarative models and pattern recognition algorithms. In International Conference on Image Analysis and Processing. Springer, 84--95.
  8. Alessandro Carcangiu, Lucio Davide Spano, Giorgio Fumera, and Fabio Roli. 2019. DEICTIC: A compositional and declarative gesture description based on hidden Markov models. International Journal of Human-Computer Studies, Vol. 122 (2019), 113--132.
  9. Lode Hoste, Bruno Dumas, and Beat Signer. 2011. Mudra: a unified multimodal interaction framework. In Proceedings of ICMI 2011. ACM, New York, NY, USA, 97--104. https://doi.org/10.1145/2070481.2070500
  10. Dietrich Kammer, Jan Wojdziak, Mandy Keck, Rainer Groh, and Severin Taranko. 2010. Towards a formalization of multi-touch gestures. In Proceedings of ITS 2010. ACM, New York, NY, USA, 49--58. https://doi.org/10.1145/1936652.1936662
  11. Shahedul Huq Khandkar and Frank Maurer. 2010. A domain specific language to define gestures for multi-touch applications. In Proceedings of DSM 2010. ACM, New York, NY, USA, 2:1--2:6. https://doi.org/10.1145/2060329.2060339
  12. Kenrick Kin, Björn Hartmann, Tony DeRose, and Maneesh Agrawala. 2012a. Proton+: A Customizable Declarative Multitouch Framework. In Proceedings of UIST 2012. ACM Press, Berkeley, California, USA, 477--486.
  13. Kenrick Kin, Björn Hartmann, Tony DeRose, and Maneesh Agrawala. 2012b. Proton: multitouch gestures as regular expressions. In Proceedings of CHI 2012. ACM Press, Austin, Texas, USA, 2885--2894.
  14. Luis A. Leiva, Daniel Martín-Albo, and Réjean Plamondon. 2015. Gestures à Go Go: Authoring Synthetic Human-Like Stroke Gestures Using the Kinematic Theory of Rapid Movements. ACM Trans. Intell. Syst. Technol., Vol. 7, 2, Article 15 (Nov. 2015), 29 pages. https://doi.org/10.1145/2799648
  15. Luis A. Leiva, Daniel Martín-Albo, Réjean Plamondon, and Radu-Daniel Vatavu. 2018b. KeyTime: Super-Accurate Prediction of Stroke Gesture Production Times. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). Association for Computing Machinery, New York, NY, USA, Paper 239, 12 pages. https://doi.org/10.1145/3173574.3173813
  16. Luis A. Leiva, Daniel Martín-Albo, and Radu-Daniel Vatavu. 2018a. GATO: Predicting Human Performance with Multistroke and Multitouch Gesture Input. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '18). Association for Computing Machinery, New York, NY, USA, Article 32, 11 pages. https://doi.org/10.1145/3229434.3229478
  17. Jiajun Li, Jianguo Tao, Liang Ding, Haibo Gao, Zongquan Deng, Yang Luo, and Zhandong Li. 2018. A new iterative synthetic data generation method for CNN based stroke gesture recognition. Multimedia Tools and Applications, Vol. 77, 13 (2018), 17181--17205.
  18. Yang Li. 2010. Protractor: A Fast and Accurate Gesture Recognizer. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). Association for Computing Machinery, New York, NY, USA, 2169--2172. https://doi.org/10.1145/1753326.1753654
  19. Hao Lü, James A. Fogarty, and Yang Li. 2014. Gesture Script: Recognizing Gestures and Their Structure Using Rendering Scripts and Interactively Trained Parts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). Association for Computing Machinery, New York, NY, USA, 1685--1694. https://doi.org/10.1145/2556288.2557263
  20. Nathan Magrofuoco, Paolo Roselli, Jean Vanderdonckt, Jorge Luis Pérez-Medina, and Radu-Daniel Vatavu. 2019. GestMan: A Cloud-Based Tool for Stroke-Gesture Datasets. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS '19). Association for Computing Machinery, New York, NY, USA, Article 7, 6 pages. https://doi.org/10.1145/3319499.3328227
  21. S. Mitra and T. Acharya. 2007. Gesture Recognition: A Survey. IEEE Trans. Systems, Man, and Cybernetics, Part C, Vol. 37, 3 (2007), 311--324. https://doi.org/10.1109/TSMCC.2007.893280
  22. Fabio Paternò. 1999. Model-based design and evaluation of interactive applications. Springer Science & Business Media.
  23. Fabio Paternò, Cristiano Mancini, and Silvia Meniconi. 1997. ConcurTaskTrees: A diagrammatic notation for specifying task models. In Human-computer interaction INTERACT'97. Springer, 362--369.
  24. Corey Pittman, Eugene M. Taranta II, and Joseph J. LaViola. 2016. A $-Family Friendly Approach to Prototype Selection. In Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI '16). Association for Computing Machinery, New York, NY, USA, 370--374. https://doi.org/10.1145/2856767.2856808
  25. Réjean Plamondon. 1995. A kinematic theory of rapid human movements. Biological Cybernetics, Vol. 72, 4 (01 Mar 1995), 295--307. https://doi.org/10.1007/BF00202785
  26. Siddharth S. Rautaray and Anupam Agrawal. 2015. Vision based hand gesture recognition for human computer interaction: a survey. Artif. Intell. Rev., Vol. 43, 1 (2015), 1--54. https://doi.org/10.1007/s10462-012-9356-9
  27. Christophe Scholliers, Lode Hoste, Beat Signer, and Wolfgang De Meuter. 2011. Midas: a declarative multi-touch interaction framework. In Proceedings of TEI 2011. ACM, New York, NY, USA, 49--56. https://doi.org/10.1145/1935701.1935712
  28. Lucio Davide Spano, Antonio Cisternino, and Fabio Paternò. 2012. A Compositional Model for Gesture Definition. In Human-Centered Software Engineering, Marco Winckler, Peter Forbrig, and Regina Bernhaupt (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 34--52.
  29. Lucio Davide Spano, Antonio Cisternino, Fabio Paternò, and Gianni Fenu. 2013. GestIT: A Declarative and Compositional Framework for Multiplatform Gesture Definition. In Proceedings of the 5th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS '13). Association for Computing Machinery, New York, NY, USA, 187--196. https://doi.org/10.1145/2494603.2480307
  30. Jean Vanderdonckt, Bruno Dumas, and Mauro Cherubini. 2018. Comparing Some Distances in Template-Based 2D Gesture Recognition. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). Association for Computing Machinery, New York, NY, USA, Paper LBW121, 6 pages. https://doi.org/10.1145/3170427.3188452
  31. Radu-Daniel Vatavu. 2017. Improving Gesture Recognition Accuracy on Touch Screens for Users with Low Vision. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). Association for Computing Machinery, New York, NY, USA, 4667--4679. https://doi.org/10.1145/3025453.3025941
  32. Radu-Daniel Vatavu, Lisa Anthony, and Jacob O. Wobbrock. 2012. Gestures as Point Clouds: A $P Recognizer for User Interface Prototypes. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (ICMI '12). Association for Computing Machinery, New York, NY, USA, 273--280. https://doi.org/10.1145/2388676.2388732
  33. Radu-Daniel Vatavu, Lisa Anthony, and Jacob O. Wobbrock. 2018. $Q: A Super-Quick, Articulation-Invariant Stroke-Gesture Recognizer for Low-Resource Devices. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '18). Association for Computing Machinery, New York, NY, USA, Article 23, 12 pages. https://doi.org/10.1145/3229434.3229465
  34. Jo Vermeulen, Kris Luyten, Elise van den Hoven, and Karin Coninx. 2013. Crossing the bridge over Norman's Gulf of Execution: revealing feedforward's true identity. In Proceedings of CHI 2013. ACM, 1931--1940.
  35. Jacob O. Wobbrock, Andrew D. Wilson, and Yang Li. 2007. Gestures Without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes. In Proceedings of UIST 2007 (UIST '07). ACM, New York, NY, USA, 159--168. https://doi.org/10.1145/1294211.1294238
