Research Article

Facial-expression-aware Emotional Color Transfer Based on Convolutional Neural Network

Published: 27 January 2022

Abstract

Emotional color transfer aims to change the emotion evoked by a source image to that evoked by a target image by adjusting the color distribution. Most existing emotional color transfer methods consider only the low-level visual features of an image and ignore facial expression features when the image contains a human face, which can lead to incorrect emotion evaluation for the given image. In addition, previous emotional color transfer methods can easily produce ambiguity between the emotion of the resulting image and that of the target image. For example, if the background of the target image is dark while the facial expression is happy, previous methods would directly transfer the dark colors to the source image, neglecting the facial emotion in the image. To solve this problem, we propose a new facial-expression-aware emotional color transfer framework. Given a target image with facial expression features, we first predict its facial emotion label with an emotion classification network. The predicted label is then matched against pre-trained emotional color transfer models. Finally, we use the matched emotion model to transfer the color of the target image to the source image. Because no existing emotion image database focuses on images that contain both a face and a background, we built a new database for our framework, called the "Face-Emotion database." Experiments demonstrate that our method can successfully capture and transfer facial emotions, outperforming state-of-the-art methods.
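The three-stage framework described in the abstract (classify the target's facial emotion, match the label to a pre-trained transfer model, apply that model to the source) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names `EmotionClassifier`, `ColorTransferModel`, and `emotional_color_transfer` are hypothetical placeholders for the paper's networks.

```python
# Hypothetical sketch of the facial-expression-aware pipeline.
# All class and function names are illustrative, not from the paper.

EMOTION_LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

class EmotionClassifier:
    """Stand-in for the facial-emotion classification CNN."""
    def predict(self, target_image):
        # A real model would run a forward pass over the face region;
        # here we only illustrate that the output is an emotion label.
        return "happiness"

class ColorTransferModel:
    """Stand-in for one pre-trained emotional color transfer model."""
    def __init__(self, emotion):
        self.emotion = emotion
    def transfer(self, source_image, target_image):
        # A real model would re-map the source's color distribution
        # toward the palette associated with self.emotion.
        return {"image": source_image, "emotion": self.emotion}

def emotional_color_transfer(source_image, target_image, classifier, models):
    # Stage 1: predict the facial emotion label of the target image.
    label = classifier.predict(target_image)
    # Stage 2: match the label to a pre-trained transfer model.
    model = models[label]
    # Stage 3: transfer the target's color style to the source image.
    return model.transfer(source_image, target_image)

models = {label: ColorTransferModel(label) for label in EMOTION_LABELS}
result = emotional_color_transfer("src.png", "tgt.png", EmotionClassifier(), models)
print(result["emotion"])  # happiness
```

The key design point is that the transfer model is selected by the *facial* emotion label rather than by global image statistics, so a dark background with a happy face still routes to the "happiness" transfer model.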



Published in ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 1 (January 2022), 517 pages.
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3505205


Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 October 2020
• Revised: 1 March 2021
• Accepted: 1 April 2021
• Published: 27 January 2022

      Qualifiers

      • research-article
      • Refereed
