research-article

Using Eye Tracking and Heart-Rate Activity to Examine Crossmodal Correspondences QoE in Mulsemedia

Published: 05 June 2019

Abstract

Different senses provide us with information at various levels of precision and enable us to construct a more accurate representation of the world. Rich multisensory stimulation is thus beneficial for comprehension, memory reinforcement, and retention of information. Crossmodal mappings refer to the systematic associations often made between different sensory modalities (e.g., high pitch is matched with angular shapes) and govern multisensory processing. A great deal of research effort has been devoted to exploring crossmodal correspondences in the field of cognitive science. However, the possibilities they open in the digital world remain relatively unexplored. Multiple sensorial media (mulsemedia) provides users with a highly immersive experience and enhances their Quality of Experience (QoE) in the digital world. We therefore consider that studying the plasticity and the effects of crossmodal correspondences in a mulsemedia setup can yield interesting insights into improving the human-computer dialogue and experience. In our experiments, we exposed users to videos with certain visual dimensions (brightness, color, and shape), and we investigated whether pairing them with a crossmodally matching sound (high or low pitch) and the correspondingly auto-generated vibrotactile effects (produced by a haptic vest) leads to an enhanced QoE. To this end, we captured users' eye gaze and heart rate while they experienced mulsemedia, and at the end of the experiment we asked them to answer a set of questions targeting their enjoyment and perception. Results showed differences in eye-gaze patterns and heart rate between the experimental and control groups, indicating changes in participants' engagement when videos were accompanied by matching crossmodal sounds (this effect was strongest for the video displaying angular shapes paired with high-pitch audio) and transitively generated crossmodal vibrotactile effects.
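The abstract mentions vibrotactile effects that are auto-generated from the audio track. The paper does not spell out the mapping here, but the kind of pitch-to-haptics correspondence it describes can be sketched as follows. This is a minimal illustrative example, not the authors' actual pipeline; the frequency bounds and the linear mapping are assumptions chosen for clarity.

```python
def pitch_to_vibration(pitch_hz: float,
                       low_hz: float = 200.0,
                       high_hz: float = 2000.0) -> float:
    """Map an audio pitch (Hz) to a normalized vibration intensity in [0, 1].

    Illustrative assumption: higher pitch drives stronger vibration,
    mirroring the crossmodal association of high pitch with 'sharp',
    more intense sensations. The 200-2000 Hz range is hypothetical.
    """
    # Clamp the pitch into the assumed working range of the haptic vest.
    clamped = max(low_hz, min(high_hz, pitch_hz))
    # Linearly rescale to a normalized actuator intensity.
    return (clamped - low_hz) / (high_hz - low_hz)
```

For instance, a low-pitch tone near the lower bound would yield a weak vibration and a high-pitch tone near the upper bound a strong one, so the haptic channel tracks the auditory channel in the crossmodally congruent direction.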

