DOI: 10.1145/2181037.2181059 · MindTrek Conference Proceedings · Research article · Free access

MovieClouds: content-based overviews and exploratory browsing of movies

ABSTRACT

Movies are one of the biggest sources of entertainment, in both individual and social contexts. By combining diverse symbol systems, such as images, text, music and narration, to tell stories, they often engage viewers perceptually, cognitively and emotionally. Advances in digitization and networking are enabling access to enormous collections of videos and movies over the Internet, in social media, and through video-on-demand services on iTV. The development of video content-based analysis and classification techniques is also allowing access to more information about, or contained in, the movies, demanding new ways to search, browse and view videos and movies in this scenario. In this paper, we present and evaluate MovieClouds, an interactive web application designed to access, explore and visualize movies based on the information conveyed in the different tracks or perspectives of their content, especially audio and subtitles, where most of the semantics is expressed, and with a special focus on the emotional dimensions expressed in the movies or felt by the viewers. For the overview, analysis, and exploratory browsing of the movie collection and of individual movies, it adopts a unifying tag-cloud paradigm, which gained popularity in Web 2.0, with the aim of extending to movies the power, flexibility, engagement and fun usually associated with clouds.
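The tag-cloud overviews described above are, at their core, a mapping from term frequency in a movie's subtitle track to visual weight. The paper does not publish its weighting scheme; the sketch below is a hypothetical minimal version of the idea: count non-stopword terms in subtitle text and linearly scale the most frequent ones to a font-size range. The stopword list, function name, and size bounds are illustrative assumptions, not the authors' implementation.

```python
import re
from collections import Counter

# Illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "that"}

def tag_cloud_weights(subtitle_text, top_n=10, min_size=12, max_size=48):
    """Map the most frequent non-stopword terms to font sizes (linear scaling)."""
    words = re.findall(r"[a-z']+", subtitle_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    top = counts.most_common(top_n)
    if not top:
        return {}
    hi, lo = top[0][1], top[-1][1]
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {word: min_size + (count - lo) * (max_size - min_size) / span
            for word, count in top}

# Example: the most frequent term gets the largest font, the rarest the smallest.
weights = tag_cloud_weights("danger danger danger escape escape run", top_n=3)
# weights == {'danger': 48.0, 'escape': 30.0, 'run': 12.0}
```

A richer variant along the lines the paper suggests would compute separate clouds per track (subtitles, audio events, music mood, felt emotions) and let each cloud link back into the movie timeline.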


Index Terms

  1. MovieClouds
