Abstract
Adaptive Educational Hypermedia (AEH) e-learning models aim to personalize educational content and learning resources to the needs of the individual learner. The Adaptive Hypermedia Architecture (AHA) is a specific implementation of the AEH model that exploits cognitive characteristics of learner feedback to adapt resources accordingly. However, besides cognitive feedback, the learning realm generally includes the affective and emotional feedback of the learner, which is often neglected in the design of e-learning models. This article explores the potential of applying affect and emotion recognition research to AEH models. To this end, an emotion recognition framework is developed, referred to as Multiple Kernel Learning Decision Tree Weighted Kernel Alignment (MKLDT-WFA). MKLDT-WFA has two merits over classical MKL. First, the WFA component preserves only the relevant kernel weights, which reduces redundancy and improves discrimination between emotion classes. Second, training via the decision tree reduces the misclassification issues associated with SimpleMKL. The proposed framework has been evaluated on different emotion datasets, and the results confirm its good performance. Finally, the conceptual Emotion-based E-learning Model (EEM), which incorporates the proposed emotion recognition framework, is outlined as future work.
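The weighted-alignment idea in the abstract, scoring each candidate kernel by how well it aligns with the label structure and discarding weakly aligned kernels before combining them, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the alignment score is the standard kernel-target alignment of Cristianini et al., while the `keep` parameter and the pruning-then-normalize rule are assumptions made here for the sketch.

```python
import numpy as np

def kernel_alignment(K, y):
    """Kernel-target alignment: cosine similarity (in the Frobenius
    inner product) between kernel matrix K and the ideal target
    kernel y y^T built from the labels."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def weighted_alignment_weights(kernels, y, keep=2):
    """Score each candidate kernel by its alignment with the labels,
    zero out all but the top-`keep` kernels (pruning redundant ones),
    and normalize the surviving weights to sum to 1."""
    scores = np.array([kernel_alignment(K, y) for K in kernels])
    w = np.clip(scores, 0.0, None)          # negative alignment -> weight 0
    if keep < len(w):
        w[np.argsort(w)[:-keep]] = 0.0      # prune weakly aligned kernels
    return w / w.sum() if w.sum() > 0 else w
```

The resulting sparse weight vector would then feed a standard MKL combination, so that only the kernels that discriminate the emotion classes contribute to the combined kernel.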
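The decision-tree training stage can likewise be illustrated with a toy hierarchical classifier: the set of emotion classes is recursively split into two groups, and a binary decision is made at each internal node, so each test sample faces a short chain of easier two-way decisions instead of one flat multi-class decision. In the paper a binary MKL classifier would sit at each node; the nearest-group-centroid rule and the centroid-projection split below are simplifications introduced here purely for illustration.

```python
import numpy as np

def build_tree(X, y, classes):
    """Recursively split `classes` into two groups and record a binary
    decision rule (here: nearest group centroid) at each node."""
    if len(classes) == 1:
        return classes[0]                        # leaf: a single class
    cents = np.array([X[y == c].mean(axis=0) for c in classes])
    d = cents - cents.mean(axis=0)
    u = np.linalg.svd(d, full_matrices=False)[2][0]
    proj = d @ u                                 # 1-D layout of class centroids
    left = [c for c, p in zip(classes, proj) if p <= np.median(proj)]
    right = [c for c in classes if c not in left]
    if not right:                                # guard against a degenerate split
        left, right = classes[:1], classes[1:]
    lc = X[np.isin(y, left)].mean(axis=0)        # left-group centroid
    rc = X[np.isin(y, right)].mean(axis=0)       # right-group centroid
    return (lc, rc, build_tree(X, y, left), build_tree(X, y, right))

def predict(tree, x):
    """Walk the tree, descending toward the nearer group centroid."""
    while isinstance(tree, tuple):
        lc, rc, lt, rt = tree
        tree = lt if np.linalg.norm(x - lc) <= np.linalg.norm(x - rc) else rt
    return tree
```

Each root-to-leaf path isolates one class through a sequence of binary splits, which is the structural property the abstract credits with reducing the misclassification of a single flat SimpleMKL classifier.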
Emotion Recognition Using Multiple Kernel Learning toward E-learning Applications