
Personalized Emotion Recognition by Personality-Aware High-Order Learning of Physiological Signals

Published: 24 January 2019

Abstract

Due to the subjective responses of different subjects to physical stimuli, emotion recognition from physiological signals is becoming increasingly personalized. Existing works have mainly focused on modeling the physiological data of each subject, without considering psychological factors such as interest and personality; the latent correlation among different subjects has also rarely been examined. In this article, we investigate the influence of personality on emotional behavior in a hypergraph learning framework. Treating each vertex as a compound tuple (subject, stimulus), multi-modal hypergraphs can be constructed based on the personality correlation among different subjects and on the physiological correlation among the corresponding stimuli. To reveal the differing importance of vertices, hyperedges, and modalities, we learn a weight for each of them. Because the hypergraphs connect different subjects through the compound vertices, the emotions of multiple subjects can be recognized simultaneously; the constructed hypergraphs are thus vertex-weighted, multi-modal, and multi-task. The estimated factors, referred to as emotion relevance, are employed for emotion recognition. Extensive experiments on the ASCERTAIN dataset demonstrate the superiority of the proposed method over state-of-the-art emotion recognition approaches.
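To make the hypergraph learning step more concrete, below is a minimal, illustrative sketch (not the authors' implementation) of transductive relevance estimation over compound (subject, stimulus) vertices, using the standard normalized hypergraph Laplacian of Zhou et al. It deliberately simplifies the paper's method: a single modality, uniform vertex and hyperedge weights, and no multi-task coupling. The k-nearest-neighbor hyperedge construction, the function names, and the toy data are all assumptions made for illustration.

```python
# Minimal sketch of transductive hypergraph learning over compound
# (subject, stimulus) vertices. Single modality, uniform weights -- an
# illustrative simplification, not the paper's full vertex-weighted
# multi-modal multi-task formulation.
import numpy as np


def build_incidence(features, k=3):
    """One hyperedge per vertex, connecting it to its k nearest neighbors."""
    n = features.shape[0]
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    H = np.zeros((n, n))  # rows: vertices, columns: hyperedges
    for e in range(n):
        neighbors = np.argsort(dists[e])[: k + 1]  # includes the centroid vertex
        H[neighbors, e] = 1.0
    return H


def hypergraph_laplacian(H, edge_weights=None):
    """Normalized Laplacian: Delta = I - Dv^-1/2 H W De^-1 H^T Dv^-1/2."""
    n, m = H.shape
    w = np.ones(m) if edge_weights is None else edge_weights
    Dv = H @ w                # vertex degrees
    De = H.sum(axis=0)        # hyperedge degrees
    Dv_isqrt = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_isqrt @ H @ np.diag(w) @ np.diag(1.0 / De) @ H.T @ Dv_isqrt
    return np.eye(n) - Theta


def hypergraph_relevance(features, labels, lam=1.0, k=3):
    """Solve min_f f^T Delta f + lam * ||f - y||^2, i.e. f = lam (Delta + lam I)^-1 y.

    `labels` holds +1/-1 for labeled (subject, stimulus) vertices and 0 for the
    unlabeled ones whose emotion relevance is to be estimated.
    """
    H = build_incidence(features, k=k)
    Delta = hypergraph_laplacian(H)
    n = features.shape[0]
    return np.linalg.solve(Delta + lam * np.eye(n), lam * labels)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy compound vertices: each row describes one (subject, stimulus) pair by
    # concatenated personality and physiological features (hypothetical data).
    X = rng.normal(size=(20, 8))
    y = np.zeros(20)
    y[:5], y[5:10] = 1.0, -1.0    # a few labeled high/low examples
    print("estimated emotion relevance:", np.round(hypergraph_relevance(X, y), 2))
```

In this simplified form, hyperedges built from personality similarity and from physiological similarity would simply be appended as extra columns of the incidence matrix; the paper instead learns separate weights for each vertex, hyperedge, and modality and solves the resulting multi-task objective jointly.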
