
Do Users Behave Similarly in VR? Investigation of the User Influence on the System Design

Published: 22 May 2020

Abstract

With the overarching goal of developing user-centric Virtual Reality (VR) systems, a new wave of studies has recently emerged that focuses on understanding how users interact with VR environments. Despite these intense efforts, however, the current literature still lacks a framework for fully interpreting and predicting users' trajectories while they navigate VR scenes. This work advances the state of the art in both the study of user behaviour in VR and user-centric system design. Specifically, we complement existing datasets by presenting a publicly available dataset of navigation trajectories acquired for heterogeneous omnidirectional videos and different viewing platforms: head-mounted display, tablet, and laptop. We then present an extensive analysis of the collected data to better understand navigation in VR across users, content, and, for the first time, viewing platforms. The novelty lies in the user-affinity metric, proposed in this work to investigate how similarly users navigate within the content. The analysis reveals useful insights into the effects of device and content on navigation, which are valuable considerations from a system-design perspective. As a case study of the importance of studying user behaviour when designing VR systems, we finally propose a user-centric server optimisation. We formulate an integer linear program that selects the stored set of omnidirectional content that minimises encoding and storage cost while maximising the user's experience, taking into account network dynamics, the type of video content, and the interactivity of the user population. Experimental results show that our solution outperforms common industry recommendations not only in experienced quality but also in encoding and storage cost, achieving savings of up to 70%. More importantly, we highlight a strong correlation between the storage cost and the user-affinity metric, demonstrating the latter's impact on system-architecture design.

