EuroITV Conference Proceedings · Research article · DOI: 10.1145/2000119.2000141

iFelt: accessing movies through our emotions

ABSTRACT

Films are, par excellence, the art form that engages our affective, perceptual and intellectual activity. Technological developments and the trend toward media convergence are turning video into a dominant and pervasive medium, and online video is becoming a major entertainment activity on the web and on iTV. New techniques for gathering emotional information about videos, whether through content analysis or through users' implicit feedback captured in their physiological signals, are opening new ways to explore the emotional dimension of videos, films and TV series, and bring new perspectives on personalizing users' access to this information. We present iFelt, an interactive web video application for classifying, accessing, exploring and visualizing movies based on their emotional characteristics. In this work, we design and evaluate different ways to access, browse and visualize movies and their contents.
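To make the idea of emotion-based classification concrete, the following is a minimal sketch, not iFelt's actual implementation, of how per-scene emotion labels (however obtained, e.g. by content analysis or from viewers' physiological signals) could be aggregated into a movie-level emotional profile and a dominant-emotion category for browsing. The emotion set (Ekman's six basic emotions, a common choice in affective computing), the types and the function names are all illustrative assumptions; TypeScript is used here only because iFelt is described as a web application.

```typescript
// A hypothetical emotion set; Ekman's six basic emotions are one common choice.
type Emotion = "anger" | "disgust" | "fear" | "happiness" | "sadness" | "surprise";

// A scene annotated with its dominant emotion, however that label was obtained
// (content analysis or viewers' physiological signals).
interface Scene {
  startSec: number;
  endSec: number;
  emotion: Emotion;
}

// Aggregate per-scene labels into a movie-level emotional profile:
// the fraction of the movie's running time spent in each emotion.
function emotionalProfile(scenes: Scene[]): Map<Emotion, number> {
  const total = scenes.reduce((sum, s) => sum + (s.endSec - s.startSec), 0);
  const profile = new Map<Emotion, number>();
  for (const s of scenes) {
    const share = (s.endSec - s.startSec) / total;
    profile.set(s.emotion, (profile.get(s.emotion) ?? 0) + share);
  }
  return profile;
}

// Classify the movie by its dominant emotion, e.g. for browsing
// a catalogue by emotional category.
function dominantEmotion(scenes: Scene[]): Emotion {
  let best: [Emotion, number] | undefined;
  for (const [emotion, share] of emotionalProfile(scenes)) {
    if (!best || share > best[1]) best = [emotion, share];
  }
  if (!best) throw new Error("no scenes to classify");
  return best[0];
}

// Example: a short film dominated by sadness.
const scenes: Scene[] = [
  { startSec: 0, endSec: 300, emotion: "sadness" },
  { startSec: 300, endSec: 420, emotion: "fear" },
  { startSec: 420, endSec: 900, emotion: "sadness" },
  { startSec: 900, endSec: 1020, emotion: "happiness" },
];
console.log(emotionalProfile(scenes)); // time share per emotion
console.log(dominantEmotion(scenes)); // "sadness"
```

A catalogue could then be grouped or color-coded by the dominant emotion, while the full profile supports richer views such as per-movie emotion timelines.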

