Personalized Emotion Recognition by Personality-Aware High-Order Learning of Physiological Signals

Abstract
Because different subjects respond subjectively to the same physical stimuli, emotion recognition methodologies based on physiological signals are increasingly becoming personalized. Existing works have mainly focused on modeling each subject's physiological responses, without considering psychological factors such as interest and personality; the latent correlations among different subjects have also rarely been examined. In this article, we investigate the influence of personality on emotional behavior in a hypergraph learning framework. Treating each vertex as a compound tuple (subject, stimulus), we construct multi-modal hypergraphs based on the personality correlations among subjects and on the physiological correlations among the corresponding stimuli. To capture the differing importance of vertices, hyperedges, and modalities, we learn a weight for each of them. Because the hypergraphs connect different subjects through the compound vertices, the emotions of multiple subjects can be recognized simultaneously; the constructed hypergraphs are thus vertex-weighted, multi-modal, and multi-task. The estimated factors, referred to as emotion relevance, are employed for emotion recognition. Extensive experiments on the ASCERTAIN dataset demonstrate the superiority of the proposed method over state-of-the-art emotion recognition approaches.
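The hypergraph framework above builds on transductive hypergraph learning (Zhou et al., listed in the references below), in which relevance scores diffuse from labeled to unlabeled vertices through shared hyperedges. The following is a minimal illustrative sketch of that underlying technique, not the authors' implementation: the function name `hypergraph_relevance`, the toy incidence matrix `H`, hyperedge weights `w`, and labels `y` are all invented for illustration, and vertex and modality weights are omitted.

```python
import numpy as np

def hypergraph_relevance(H, w, y, alpha=0.9):
    """Transductive relevance learning on a hypergraph (Zhou et al., NIPS 2006).

    H     : (n_vertices, n_edges) incidence matrix; H[v, e] = 1 if vertex v
            (here, a (subject, stimulus) tuple) belongs to hyperedge e.
    w     : (n_edges,) hyperedge weights.
    y     : (n_vertices,) initial labels (+1/-1 for labeled vertices, 0 otherwise).
    alpha : trade-off between smoothness on the hypergraph and fitting y.
    """
    d_e = H.sum(axis=0)                   # hyperedge degrees
    d_v = H @ w                           # weighted vertex degrees
    Dv_is = np.diag(d_v ** -0.5)          # D_v^{-1/2}
    # Normalized operator: Theta = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}
    Theta = Dv_is @ H @ np.diag(w / d_e) @ H.T @ Dv_is
    # Closed-form minimizer of the regularized smoothness objective
    n = H.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * Theta, (1 - alpha) * y)

# Toy example: 4 (subject, stimulus) vertices joined by 2 hyperedges
H = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [1, 1]], dtype=float)
w = np.array([1.0, 1.0])                  # uniform hyperedge weights
y = np.array([1.0, 0.0, -1.0, 0.0])       # vertex 0 positive, vertex 2 negative
scores = hypergraph_relevance(H, w, y)
# Unlabeled vertex 1 shares a hyperedge with the positive vertex and
# ends up scoring higher than the negatively labeled vertex 2.
```

The article's method extends this basic machinery by additionally learning per-vertex and per-modality weights and by sharing the same vertex set across subjects, which is what makes the recognition multi-task.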
References
- Mojtaba Khomami Abadi, Juan Abdón Miranda Correa, Julia Wache, Heng Yang, Ioannis Patras, and Nicu Sebe. 2015. Inference of personality traits and affect schedule by analysis of spontaneous reactions to affective videos. In IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, Vol. 1. 1--8.
- Mojtaba Khomami Abadi, Ramanathan Subramanian, Seyed Mostafa Kia, Paolo Avesani, Ioannis Patras, and Nicu Sebe. 2015. DECAF: MEG-based multimodal database for decoding affective physiological responses. IEEE Transactions on Affective Computing 6, 3 (2015), 209--222.
- Hussein Al Osman and Tiago H. Falk. 2017. Multimodal affect recognition: Current approaches and challenges. In Emotion and Attention Recognition Based on Biological Signals and Images. InTech.
- Xavier Alameda-Pineda, Elisa Ricci, Yan Yan, and Nicu Sebe. 2016. Recognizing emotions from abstract paintings using non-linear matrix completion. In IEEE Conference on Computer Vision and Pattern Recognition. 5240--5248.
- Pradeep K. Atrey, M. Anwar Hossain, Abdulmotaleb El Saddik, and Mohan S. Kankanhalli. 2010. Multimodal fusion for multimedia analysis: A survey. Multimedia Systems 16, 6 (2010), 345--379.
- Yoann Baveye, Emmanuel Dellandréa, Christel Chamaret, and Liming Chen. 2015. LIRIS-ACCEDE: A video database for affective content analysis. IEEE Transactions on Affective Computing 6, 1 (2015), 43--55.
- Jiajun Bu, Shulong Tan, Chun Chen, Can Wang, Hao Wu, Lijun Zhang, and Xiaofei He. 2010. Music recommendation by unified hypergraph: Combining social media information and music content. In ACM International Conference on Multimedia. 391--400.
- Elizabeth Camilleri, Georgios N. Yannakakis, and Antonios Liapis. 2017. Towards general models of player affect. In International Conference on Affective Computing and Intelligent Interaction. 333--339.
- Paul T. Costa and Robert R. McCrae. 1992. Revised NEO Personality Inventory (NEO PI-R) and NEO Five-Factor Inventory (NEO-FFI): Professional Manual. Psychological Assessment Resources.
- Sidney K. D'Mello and Jacqueline Kory. 2015. A review and meta-analysis of multimodal affect detection systems. Comput. Surveys 47, 3 (2015), 43.
- Ellen Douglas-Cowie, Roddy Cowie, Ian Sneddon, Cate Cox, Orla Lowry, Margaret McRorie, Jean-Claude Martin, Laurence Devillers, Sarkis Abrilian, Anton Batliner, and others. 2007. The HUMAINE database: Addressing the collection and annotation of naturalistic and induced emotional data. In International Conference on Affective Computing and Intelligent Interaction. 488--500.
- Nico H. Frijda. 1986. The Emotions. Cambridge University Press.
- Yue Gao, Meng Wang, Dacheng Tao, Rongrong Ji, and Qionghai Dai. 2012. 3-D object retrieval and recognition with hypergraph analysis. IEEE Transactions on Image Processing 21, 9 (2012), 4290--4303.
- Yue Gao, Meng Wang, Zheng-Jun Zha, Jialie Shen, Xuelong Li, and Xindong Wu. 2013. Visual-textual joint relevance learning for tag-based social image search. IEEE Transactions on Image Processing 22, 1 (2013), 363--376.
- Anastasia Giachanou and Fabio Crestani. 2016. Like it or not: A survey of Twitter sentiment analysis methods. Comput. Surveys 49, 2 (2016), 28.
- Hatice Gunes and Massimo Piccardi. 2005. Affect recognition from face and body: Early fusion vs. late fusion. In IEEE International Conference on Systems, Man and Cybernetics, Vol. 4. 3437--3443.
- Hamed R. Tavakoli, Adham Atyabi, Antti Rantanen, Seppo J. Laukka, Samia Nefti-Meziani, Janne Heikkilä, and others. 2015. Predicting the valence of a scene from observers' eye movements. PLoS One 10, 9 (2015), e0138198.
- Rui Henriques and Ana Paiva. 2014. Seven principles to mine flexible behavior from physiological signals for effective emotion recognition and description in affective interactions. In International Conference on Physiological Computing Systems. 75--82.
- Rui Henriques, Ana Paiva, and Claudia Antunes. 2013. Accessing emotion patterns from affective interactions using electrodermal activity. In Humaine Association Conference on Affective Computing and Intelligent Interaction. 43--48.
- Yuchi Huang, Qingshan Liu, Shaoting Zhang, and Dimitris Metaxas. 2010. Image retrieval via probabilistic hypergraph ranking. In IEEE Conference on Computer Vision and Pattern Recognition. 3376--3383.
- Hideo Joho, Jacopo Staiano, Nicu Sebe, and Joemon M. Jose. 2011. Looking at the viewer: Analysing facial activity to detect personal highlights of multimedia contents. Multimedia Tools and Applications 51, 2 (2011), 505--523.
- Dhiraj Joshi, Ritendra Datta, Elena Fedorovskaya, Quang-Tuan Luong, James Z. Wang, Jia Li, and Jiebo Luo. 2011. Aesthetics and emotions in images. IEEE Signal Processing Magazine 28, 5 (2011), 94--115.
- Patrik N. Juslin and Petri Laukka. 2004. Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research 33, 3 (2004), 217--238.
- Elizabeth G. Kehoe, John M. Toomey, Joshua H. Balsters, and Arun L. W. Bokde. 2012. Personality modulates the effects of emotional arousal and valence on brain activation. Social Cognitive and Affective Neuroscience 7, 7 (2012), 858--870.
- Jonghwa Kim and Elisabeth André. 2008. Emotion recognition based on physiological changes in music listening. IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 12 (2008), 2067--2083.
- Yelin Kim and Emily Mower Provost. 2015. Emotion recognition during speech using dynamics of multiple regions of the face. ACM Transactions on Multimedia Computing, Communications, and Applications 12, 1s (2015), 25.
- Sander Koelstra, Christian Mühl, Mohammad Soleymani, Jong-Seok Lee, Ashkan Yazdani, Touradj Ebrahimi, Thierry Pun, Anton Nijholt, and Ioannis Patras. 2012. DEAP: A database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing 3, 1 (2012), 18--31.
- Ting Li, Yoann Baveye, Christel Chamaret, Emmanuel Dellandréa, and Liming Chen. 2015. Continuous arousal self-assessments validation using real-time physiological responses. In ACM International Workshop on Affect & Sentiment in Multimedia. 39--44.
- Christine Lætitia Lisetti and Fatma Nasoz. 2004. Using noninvasive wearable computers to recognize human emotions from physiological signals. EURASIP Journal on Advances in Signal Processing 2004, 11 (2004), 929414.
- Hector P. Martinez, Yoshua Bengio, and Georgios N. Yannakakis. 2013. Learning deep physiological models of affect. IEEE Computational Intelligence Magazine 8, 2 (2013), 20--33.
- Juan Abdón Miranda-Correa, Mojtaba Khomami Abadi, Nicu Sebe, and Ioannis Patras. 2017. AMIGOS: A dataset for affect, personality and mood research on individuals and groups. arXiv preprint arXiv:1702.02510 (2017).
- Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y. Ng. 2011. Multimodal deep learning. In International Conference on Machine Learning. 689--696.
- Marco Perugini and Lisa Di Blas. 2002. Analyzing personality-related adjectives from an etic-emic perspective: The Big Five Marker Scales (BFMS) and the Italian AB5C taxonomy. Big Five Assessment (2002), 281--304.
- Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. 2017. A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion 37 (2017), 98--125.
- Pulak Purkait, Tat-Jun Chin, Alireza Sadri, and David Suter. 2017. Clustering with hypergraphs: The case for large hyperedges. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 9 (2017), 1697--1711.
- Yangyang Shu and Shangfei Wang. 2017. Emotion recognition through integrating EEG and peripheral signals. In IEEE International Conference on Acoustics, Speech and Signal Processing. 2871--2875.
- Cees G. M. Snoek, Marcel Worring, and Arnold W. M. Smeulders. 2005. Early versus late fusion in semantic video analysis. In ACM International Conference on Multimedia. 399--402.
- Mohammad Soleymani, Jeroen Lichtenauer, Thierry Pun, and Maja Pantic. 2012. A multimodal database for affect recognition and implicit tagging. IEEE Transactions on Affective Computing 3, 1 (2012), 42--55.
- Robert C. Solomon. 1993. The Passions: Emotions and the Meaning of Life. Hackett Publishing.
- Lifan Su, Yue Gao, Xibin Zhao, Hai Wan, Ming Gu, and Jiaguang Sun. 2017. Vertex-weighted hypergraph learning for multi-view object classification. In International Joint Conference on Artificial Intelligence. 2779--2785.
- Ramanathan Subramanian, Divya Shankar, Nicu Sebe, and David Melcher. 2014. Emotion modulates eye movement patterns and subsequent memory for the gist and details of movie scenes. Journal of Vision 14, 3 (2014), 31:1--31:18.
- Ramanathan Subramanian, Julia Wache, Mojtaba Abadi, Radu Vieriu, Stefan Winkler, and Nicu Sebe. 2018. ASCERTAIN: Emotion and personality recognition using commercial sensors. IEEE Transactions on Affective Computing 9, 2 (2018), 147--160.
- Simone Tognetti, Maurizio Garbarino, Andrea Bonarini, and Matteo Matteucci. 2010. Modeling enjoyment preference from physiological responses in a car racing game. In IEEE Conference on Computational Intelligence and Games. 321--328.
- Giel Van Lankveld, Pieter Spronck, Jaap Van den Herik, and Arnoud Arntz. 2011. Games as personality profiling tools. In IEEE Conference on Computational Intelligence and Games. 197--202.
- Alessandro Vinciarelli and Gelareh Mohammadi. 2014. A survey of personality computing. IEEE Transactions on Affective Computing 5, 3 (2014), 273--291.
- Johannes Wagner, Elisabeth André, Florian Lingenfelser, and Jonghwa Kim. 2011. Exploring fusion methods for multimodal emotion recognition with missing data. IEEE Transactions on Affective Computing 2, 4 (2011), 206--218.
- Meng Wang, Xian-Sheng Hua, Richang Hong, Jinhui Tang, Guo-Jun Qi, and Yan Song. 2009. Unified video annotation via multigraph learning. IEEE Transactions on Circuits and Systems for Video Technology 19, 5 (2009), 733--746.
- Shangfei Wang and Qiang Ji. 2015. Video affective content analysis: A survey of state-of-the-art methods. IEEE Transactions on Affective Computing 6, 4 (2015), 410--430.
- Longyin Wen, Wenbo Li, Junjie Yan, Zhen Lei, Dong Yi, and Stan Z. Li. 2014. Multiple target tracking based on undirected hierarchical relation hypergraph. In IEEE Conference on Computer Vision and Pattern Recognition. 1282--1289.
- Kathy A. Winter and Nicholas A. Kuiper. 1997. Individual differences in the experience of emotions. Clinical Psychology Review 17, 7 (1997), 791--821.
- Yang Yang, Jia Jia, Shumei Zhang, Boya Wu, Qicong Chen, Juanzi Li, Chunxiao Xing, and Jie Tang. 2014. How do your friends on social media disclose your emotions? In AAAI Conference on Artificial Intelligence. 306--312.
- Yi-Hsuan Yang and Homer H. Chen. 2012. Machine recognition of music emotion: A review. ACM Transactions on Intelligent Systems and Technology 3, 3 (2012), 40.
- Georgios N. Yannakakis, Roddy Cowie, and Carlos Busso. 2017. The ordinal nature of emotions. In International Conference on Affective Computing and Intelligent Interaction. 248--255.
- Chao Yao, Jimin Xiao, Tammam Tillo, Yao Zhao, Chunyu Lin, and Huihui Bai. 2016. Depth map down-sampling and coding based on synthesized view distortion. IEEE Transactions on Multimedia 18, 10 (2016), 2015--2022.
- Quanzeng You, Liangliang Cao, Hailin Jin, and Jiebo Luo. 2016. Robust visual-textual sentiment analysis: When attention meets tree-structured recursive neural networks. In ACM International Conference on Multimedia. 1008--1017.
- Sicheng Zhao, Guiguang Ding, Yue Gao, and Jungong Han. 2017. Approximating discrete probability distribution of image emotions by multi-modal features fusion. In International Joint Conference on Artificial Intelligence. 4669--4675.
- Sicheng Zhao, Guiguang Ding, Yue Gao, and Jungong Han. 2017. Learning visual emotion distributions via multi-modal features fusion. In ACM International Conference on Multimedia. 369--377.
- Sicheng Zhao, Guiguang Ding, Yue Gao, Xin Zhao, Youbao Tang, Jungong Han, Hongxun Yao, and Qingming Huang. 2018. Discrete probability distribution prediction of image emotions with shared sparse learning. IEEE Transactions on Affective Computing (2018).
- Sicheng Zhao, Guiguang Ding, Jungong Han, and Yue Gao. 2018. Personality-aware personalized emotion recognition from physiological signals. In International Joint Conference on Artificial Intelligence. 1660--1667.
- Sicheng Zhao, Yue Gao, Guiguang Ding, and Tat-Seng Chua. 2018. Real-time multimedia social event detection in microblog. IEEE Transactions on Cybernetics 48, 11 (2018), 3218--3231.
- Sicheng Zhao, Yue Gao, Xiaolei Jiang, Hongxun Yao, Tat-Seng Chua, and Xiaoshuai Sun. 2014. Exploring principles-of-art features for image emotion recognition. In ACM International Conference on Multimedia. 47--56.
- Sicheng Zhao, Hongxun Yao, Yue Gao, Guiguang Ding, and Tat-Seng Chua. 2018. Predicting personalized image emotion perceptions in social networks. IEEE Transactions on Affective Computing 9, 4 (2018), 526--540.
- Sicheng Zhao, Hongxun Yao, Yue Gao, Rongrong Ji, and Guiguang Ding. 2017. Continuous probability distribution prediction of image emotions via multitask shared sparse regression. IEEE Transactions on Multimedia 19, 3 (2017), 632--645.
- Sicheng Zhao, Hongxun Yao, Yue Gao, Rongrong Ji, Wenlong Xie, Xiaolei Jiang, and Tat-Seng Chua. 2016. Predicting personalized emotion perceptions of social images. In ACM International Conference on Multimedia. 1385--1394.
- Sicheng Zhao, Hongxun Yao, You Yang, and Yanhao Zhang. 2014. Affective image retrieval via multi-graph learning. In ACM International Conference on Multimedia. 1025--1028.
- Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf. 2006. Learning with hypergraphs: Clustering, classification, and embedding. In Advances in Neural Information Processing Systems. 1601--1608.