ABSTRACT
Films are, par excellence, the art form that engages our affective, perceptual, and intellectual activity. Technological developments and the trend toward media convergence are turning video into a dominant and pervasive medium, and online video is becoming a growing entertainment activity on the web and on iTV. New techniques for gathering emotional information about videos, whether through content analysis or through users' implicit feedback captured in physiological signals, are opening up new ways to explore emotional information in videos, films, and TV series, and bring new perspectives for personalizing user information. We present iFelt, an interactive web video application to classify, access, explore, and visualize movies based on their emotional characteristics. In this work, we describe its design and evaluate different ways to access, browse, and visualize movies and their contents.