
Luminance-contrast-aware foveated rendering

Published: 12 July 2019

Abstract

Current rendering techniques struggle to fulfill the quality and power-efficiency requirements imposed by new display devices such as virtual reality headsets. A promising solution to these problems is foveated rendering, which exploits gaze information to reduce rendering quality in the peripheral vision, where the requirements of the human visual system are significantly lower. Most current solutions model sensitivity as a function of eccentricity alone, neglecting that it is also strongly influenced by the displayed content. In this work, we propose a new luminance-contrast-aware foveated rendering technique which demonstrates that the computational savings of foveated rendering can be significantly improved by analyzing the local luminance contrast of the image. To this end, we first study the resolution requirements at different eccentricities as a function of luminance patterns. We then use this information to derive a low-cost predictor of the foveated rendering parameters. Its main feature is the ability to predict the parameters from only a low-resolution version of the current frame, even though the prediction holds for high-resolution rendering. This property is essential for estimating the required quality before the full-resolution image is rendered. We demonstrate that our predictor can efficiently drive the foveated rendering technique, and we analyze its benefits in a series of user experiments.
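The abstract describes a predictor that maps the local luminance contrast of a low-resolution frame, together with retinal eccentricity, to foveated-rendering parameters. The paper's actual predictor and its calibration are not reproduced here; the following is a minimal Python sketch of that general idea only. The functions `local_rms_contrast` and `shading_rate`, and the constants `ecc_falloff` and `c_ref`, are illustrative assumptions, not the authors' model.

```python
import numpy as np

def local_rms_contrast(lum, tile=8):
    """Per-tile RMS luminance contrast of a low-resolution luminance image."""
    h, w = lum.shape
    h2, w2 = h - h % tile, w - w % tile
    # View the image as a grid of (tile x tile) blocks.
    tiles = lum[:h2, :w2].reshape(h2 // tile, tile, w2 // tile, tile).swapaxes(1, 2)
    mean = tiles.mean(axis=(2, 3))
    std = tiles.std(axis=(2, 3))
    return std / np.maximum(mean, 1e-6)  # Weber-like normalization

def shading_rate(contrast, ecc_deg, ecc_falloff=0.05, c_ref=0.2):
    """Map eccentricity (degrees) and local contrast to a relative
    shading rate in (0, 1].

    Rate decays with eccentricity, and low-contrast regions tolerate a
    further reduction. All constants are illustrative placeholders.
    """
    ecc_term = 1.0 / (1.0 + ecc_falloff * ecc_deg)
    contrast_term = np.clip(contrast / c_ref, 0.2, 1.0)
    return ecc_term * contrast_term
```

Because the contrast map is computed on a low-resolution frame, a renderer could evaluate it before full-resolution shading and use the per-tile rate to drive, for example, variable-rate shading.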


Supplemental Material

a98-tursun.mp4



Published in: ACM Transactions on Graphics, Volume 38, Issue 4 (August 2019), 1480 pages.
ISSN: 0730-0301 | EISSN: 1557-7368
DOI: 10.1145/3306346
Copyright © 2019 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

