Fixation Prediction through Multimodal Analysis

Published: 25 October 2016

Abstract

In this article, we propose to predict human eye fixations by incorporating both audio and visual cues. Traditional visual attention models generally make the most of a stimulus's visual features yet bypass all audio information. In the real world, however, we direct our gaze not only according to visual saliency but are also attracted by salient audio cues. Psychological experiments show that audio influences visual attention and that subjects tend to be attracted by sound sources. Therefore, we propose fusing audio and visual information to predict eye fixations. In our framework, we first localize moving sound-generating objects through multimodal analysis and generate an audio attention map. We then compute spatial and temporal attention maps from the visual modality. Finally, the audio, spatial, and temporal attention maps are fused to produce the final audiovisual saliency map. The proposed method is applicable to scenes containing moving sound-generating objects. We gathered a set of video sequences and collected eye-tracking data under an audiovisual test condition. Experimental results show that eye fixation prediction improves when both audio and visual cues are taken into account, especially in typical scenes in which object motion and audio are highly correlated.
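The abstract describes a three-map pipeline: an audio attention map from multimodal localization, plus spatial and temporal attention maps from the visual modality, fused into one audiovisual saliency map. The sketch below illustrates only the final fusion stage; the per-map normalization and the equal-weight linear combination are assumptions for illustration, not the paper's actual fusion rule, and all function names here are hypothetical.

```python
# Hypothetical sketch of the audiovisual fusion stage described in the
# abstract. Normalization and equal weights are illustrative assumptions;
# the paper's full text specifies the actual localization and fusion.
import numpy as np


def normalize(attention_map: np.ndarray) -> np.ndarray:
    """Rescale an attention map to [0, 1] (assumed preprocessing step)."""
    lo, hi = attention_map.min(), attention_map.max()
    if hi - lo < 1e-8:  # flat map: no salient structure to preserve
        return np.zeros_like(attention_map)
    return (attention_map - lo) / (hi - lo)


def fuse_audiovisual(audio_map: np.ndarray,
                     spatial_map: np.ndarray,
                     temporal_map: np.ndarray,
                     weights=(1 / 3, 1 / 3, 1 / 3)) -> np.ndarray:
    """Fuse audio, spatial, and temporal attention maps into one saliency map.

    `weights` defaults to a hypothetical equal weighting; in practice such
    weights could be tuned or learned against eye-tracking data.
    """
    maps = [normalize(m) for m in (audio_map, spatial_map, temporal_map)]
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalize(fused)


if __name__ == "__main__":
    # Random stand-ins for the three per-frame attention maps.
    h, w = 120, 160
    rng = np.random.default_rng(0)
    audio = rng.random((h, w))
    spatial = rng.random((h, w))
    temporal = rng.random((h, w))
    saliency = fuse_audiovisual(audio, spatial, temporal)
    print(saliency.shape, float(saliency.min()), float(saliency.max()))
```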
