Research Article
Emotion Recognition Using Multiple Kernel Learning toward E-learning Applications

Published: 04 January 2018

Abstract

Adaptive Educational Hypermedia (AEH) e-learning models aim to personalize educational content and learning resources to the needs of an individual learner. The Adaptive Hypermedia Architecture (AHA) is a specific implementation of the AEH model that exploits the cognitive characteristics of learner feedback to adapt resources accordingly. However, besides cognitive feedback, the learning realm generally includes affective and emotional feedback from the learner, which is often neglected in the design of e-learning models. This article explores the potential of applying affect or emotion recognition research to AEH models. The proposed framework is referred to as Multiple Kernel Learning Decision Tree Weighted Kernel Alignment (MKLDT-WFA). MKLDT-WFA has two merits over classical MKL. First, the WFA component preserves only the relevant kernel weights, which reduces redundancy and improves discrimination among emotion classes. Second, training via the decision tree reduces the misclassification issues associated with SimpleMKL. The proposed framework has been evaluated on several emotion datasets, and the results confirm its good performance. Finally, a conceptual Emotion-based E-learning Model (EEM) incorporating the proposed emotion recognition framework is outlined for future work.
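The weighting idea sketched in the abstract builds on kernel-target alignment (reference 15): each base kernel is scored by how well its Gram matrix agrees with the ideal label kernel yy^T, and poorly aligned kernels can be pruned before combination. The following is a minimal illustrative sketch of that scoring-and-pruning step, not the authors' MKLDT-WFA implementation; the function names, the threshold parameter, and the toy data are all assumptions for illustration.

```python
import numpy as np

def kernel_target_alignment(K, y):
    # A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F), per Cristianini et al.
    Y = np.outer(y, y)                          # ideal target kernel yy^T
    num = np.sum(K * Y)                         # Frobenius inner product <K, Y>
    den = np.linalg.norm(K) * np.linalg.norm(Y)
    return num / den

def alignment_weights(kernels, y, threshold=0.0):
    # Score each base kernel by alignment with the labels and zero out
    # weights below `threshold` (illustrative stand-in for the pruning step).
    a = np.array([kernel_target_alignment(K, y) for K in kernels])
    a[a < threshold] = 0.0
    s = a.sum()
    return a / s if s > 0 else a

def rbf(X, gamma):
    # RBF Gram matrix for 1-D inputs (hypothetical helper for the toy data).
    d2 = (X - X.T) ** 2
    return np.exp(-gamma * d2)

# Toy example: two well-separated 1-D clusters with binary labels.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, 20), rng.normal(2, 1, 20)])[:, None]
y = np.array([-1] * 20 + [1] * 20)

# A broad kernel (captures cluster structure) vs. a very narrow one
# (nearly an identity matrix, hence weakly aligned with the labels).
kernels = [rbf(X, 0.5), rbf(X, 50.0)]
w = alignment_weights(kernels, y)
K_combined = sum(wi * Ki for wi, Ki in zip(w, kernels))
```

On this toy data the broad kernel receives the larger weight, since the narrow kernel's near-identity Gram matrix carries little label information; a full MKL solver such as SimpleMKL (reference 19) would instead optimize these weights jointly with the classifier.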

References

  1. Oryina K. Akputu and Abiodun O. Adedolapo. 2016. Emotion recognition for user centred e-learning. In Proc. 2016 IEEE 40th Annu. Comput. Softw. Appl. Conf. IEEE, 509--514. DOI:http://dx.doi.org/10.1109/COMPSAC.2016.106
  2. Kiavash Bahreini, Rob Nadolski, and Wim Westera. 2015. Improved multimodal emotion recognition for better game-based learning. In Games Learn. Alliance. Springer, 107--120. DOI:http://dx.doi.org/10.1007/978-3-319-22960-7_11
  3. Kiavash Bahreini, Rob Nadolski, and Wim Westera. 2016. Towards multimodal emotion recognition in e-learning environments. Interact. Learn. Environ. 24, 3 (Apr 2016), 590--605. DOI:http://dx.doi.org/10.1080/10494820.2014.908927
  4. Paul De Bra and Licia Calvi. 1998. AHA! An open adaptive hypermedia architecture. New Rev. Hypermedia Multimed. 4, 1 (Jan 1998), 115--139. DOI:http://dx.doi.org/10.1080/13614569808914698
  5. Peter Brusilovsky. 2003. Developing adaptive educational hypermedia systems: From design models to authoring tools. In Authoring Tools Adv. Technol. Learn. Environ., Tom Murray, Stephen B. Blessing, and Shaaron Ainsworth (Eds.). Springer Netherlands, Dordrecht, 377--409. DOI:http://dx.doi.org/10.1007/978-94-017-0819-7
  6. Serhat S. Bucak, Rong Jin, and Anil K. Jain. 2014. Multiple kernel learning for visual object recognition: A review. IEEE Trans. Pattern Anal. Mach. Intell. 36, 7 (Jul 2014), 1354--1369. DOI:http://dx.doi.org/10.1109/TPAMI.2013.212
  7. J. G. Daugman. 1988. Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Trans. Acoust. 36, 7 (Jul 1988), 1169--1179. DOI:http://dx.doi.org/10.1109/29.1644
  8. Abhinav Dhall, Roland Goecke, Jyoti Joshi, Karan Sikka, and Tom Gedeon. 2014. Emotion recognition in the wild challenge 2014. In Proc. 16th Int. Conf. Multimodal Interact. (ICMI’14). ACM Press, New York, NY, 461--466. DOI:http://dx.doi.org/10.1145/2663204.2666275
  9. Sidney D’Mello, Andrew Olney, Claire Williams, and Patrick Hays. 2012. Gaze tutor: A gaze-reactive intelligent tutoring system. Int. J. Hum. Comput. Stud. 70, 5 (May 2012), 377--398. DOI:http://dx.doi.org/10.1016/j.ijhcs.2012.01.004
  10. P. Ekman and W. Friesen. 1978. Facial Action Coding System: A Technique for the Measurement of Facial Movement: Investigator’s Guide 2 Parts. Consulting Psychologists Press.
  11. Xiaohua Huang, Qiuhai He, Xiaopeng Hong, Guoying Zhao, and Matti Pietikainen. 2014. Improved spatiotemporal local monogenic binary pattern for emotion recognition in the wild. In Proc. 16th Int. Conf. Multimodal Interact. (ICMI’14). ACM Press, New York, NY, 514--520. DOI:http://dx.doi.org/10.1145/2663204.2666278
  12. Leonard Kaufman and Peter J. Rousseeuw. 2009. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons.
  13. Diane J. Litman and Kate Forbes-Riley. 2006. Recognizing student emotions and attitudes on the basis of utterances in spoken tutoring dialogues with both human and computer tutors. Speech Commun. 48, 5 (2006), 559--590. DOI:http://dx.doi.org/10.1016/j.specom.2005.09.008
  14. Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, and Iain Matthews. 2010. The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Proc. 2010 IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. (CVPRW 2010). IEEE, 94--101. DOI:http://dx.doi.org/10.1109/CVPRW.2010.5543262
  15. Nello Cristianini, Jaz Kandola, Andre Elisseeff, and John Shawe-Taylor. 2006. On kernel target alignment. In Innov. Mach. Learn., Dawn E. Holmes and Lakhmi C. Jain (Eds.). Studies in Fuzziness and Soft Computing, Vol. 194. Springer-Verlag, Berlin, 205--256. DOI:http://dx.doi.org/10.1007/3-540-33486-6
  16. M. Pantic, M. Valstar, R. Rademaker, and L. Maat. 2005. Web-based database for facial expression analysis. In Proc. 2005 IEEE Int. Conf. Multimed. Expo. IEEE, 317--321. DOI:http://dx.doi.org/10.1109/ICME.2005.1521424
  17. M. Puerto Paule-Ruiz, Víctor Álvarez-García, J. R. Pérez-Pérez, and M. Riestra-González. 2013. Voice interactive learning. In Proc. 18th ACM Conf. Innov. Technol. Comput. Sci. Educ. (ITiCSE’13). 34. DOI:http://dx.doi.org/10.1145/2462476.2462489
  18. Alain Rakotomamonjy. 2008. Softwares and Toolboxes. Retrieved from http://asi.insa-rouen.fr/enseignants/.
  19. Alain Rakotomamonjy, Francis Bach, Stephane Canu, and Yves Grandvalet. 2008. SimpleMKL. J. Mach. Learn. Res. 9 (2008), 2491--2521.
  20. Abdolhossein Sarrafzadeh, Samuel Alexander, Farhad Dadgostar, Chao Fan, and Abbas Bigdeli. 2008. How do you know that I don’t understand? A look at the future of intelligent tutoring systems. Comput. Human Behav. 24, 4 (Jul 2008), 1342--1363. DOI:http://dx.doi.org/10.1016/j.chb.2007.07.008
  21. Nicu Sebe. 2009. Multimodal interfaces: Challenges and perspectives. J. Ambient Intell. Smart Environ. 1, 1 (2009), 23--30. DOI:http://dx.doi.org/10.3233/AIS-2009-0003
  22. N. Sebe, I. Cohen, T. Gevers, and T. S. Huang. 2006. Emotion recognition based on joint visual and audio cues. In Proc. 18th Int. Conf. Pattern Recognit., Vol. 1. IEEE, 1136--1139. DOI:http://dx.doi.org/10.1109/ICPR.2006.489
  23. Liping Shen, Victor Callaghan, and Ruimin Shen. 2008. Affective e-Learning in residential and pervasive computing environments. Inf. Syst. Front. 10, 4 (2008), 461--472. DOI:http://dx.doi.org/10.1007/s10796-008-9104-5
  24. Teng Song, Lin Qi, Enqing Chen, Lei Gao, and Ning Zheng. 2012. Recognizing human emotional state via SRC in fractional Fourier domain. In Proc. 2012 IEEE 11th Int. Conf. Signal Process. 1 (2012), 1583--1586. DOI:http://dx.doi.org/10.1109/ICoSP.2012.6491882
  25. Sima Taheri, Vishal M. Patel, and Rama Chellappa. 2013. Component-based recognition of faces and facial expressions. IEEE Trans. Affect. Comput. 4, 4 (Oct 2013), 360--371. DOI:http://dx.doi.org/10.1109/T-AFFC.2013.28
  26. Paul Viola and Michael J. Jones. 2004. Robust real-time face detection. Int. J. Comput. Vis. 57, 2 (May 2004), 137--154. DOI:http://dx.doi.org/10.1023/B:VISI.0000013087.49260.fb
  27. Yongjin Wang and Ling Guan. 2008. Recognizing human emotional state from audiovisual signals. IEEE Trans. Multimed. 10, 4 (Jun 2008), 659--668. DOI:http://dx.doi.org/10.1109/TMM.2008.921734
  28. Jia-Jun Wong and Siu-Yeung Cho. 2009. A local experts organization model with application to face emotion recognition. Expert Syst. Appl. 36, 1 (Jan 2009), 804--819. DOI:http://dx.doi.org/10.1016/j.eswa.2007.10.030
