research-article
Open Access

Training and Predicting Visual Error for Real-Time Applications

Published: 04 May 2022

Abstract

Visual error metrics play a fundamental role in quantifying perceived image similarity. Recently, use cases for them have emerged in real-time applications, such as content-adaptive shading and shading reuse, to increase performance and efficiency. A wide range of metrics has been established, the most sophisticated of which capture the perceptual characteristics of the human visual system. However, their complexity, computational expense, and reliance on reference images to compare against prevent their general use in real time, restricting such applications to the simplest available metrics. In this work, we explore the ability of convolutional neural networks to predict a variety of visual metrics without requiring either reference or rendered images. Specifically, we train and deploy a neural network to estimate the visual error that results from reusing shading or from reduced shading rates. The resulting models account for 70%–90% of the variance while achieving up to an order of magnitude faster computation times. Our solution combines image-space information that is readily available in most state-of-the-art deferred shading pipelines with reprojection from previous frames to produce an adequate estimate of visual error, even in previously unseen regions. We describe a suitable convolutional network architecture and considerations for preparing the training data. We demonstrate the capability of our network to predict complex error metrics at interactive rates in a real-time application that implements content-adaptive shading in a deferred pipeline. Depending on the portion of unseen image regions, our approach can deliver up to 2× the performance of state-of-the-art methods.
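To make the content-adaptive shading use case concrete, the following is a minimal sketch (not the paper's actual model or thresholds) of how a map of predicted per-tile visual errors could drive variable-rate shading decisions: for each tile, the coarsest shading rate whose predicted error stays below a just-noticeable-difference threshold is selected. In the paper these error predictions come from a CNN fed with G-buffer channels and a reprojected previous frame; here, `predicted_error`, `RATES`, and the `jnd` threshold are all hypothetical placeholders.

```python
import numpy as np

# Candidate shading rates, from full rate to coarsest (names are illustrative).
RATES = ["1x1", "2x2", "4x4"]

def select_shading_rates(predicted_error: np.ndarray, jnd: float = 0.05) -> np.ndarray:
    """Pick the coarsest acceptable shading rate per tile.

    predicted_error[i, j, r] holds the estimated visual error for tile (i, j)
    if it were shaded at RATES[r]. A tile is coarsened only if the predicted
    error of doing so does not exceed the just-noticeable threshold `jnd`.
    Returns an array of indices into RATES (0 = full rate).
    """
    h, w, _ = predicted_error.shape
    choice = np.zeros((h, w), dtype=int)  # default: full shading rate
    # Try the coarsest rate first; keep the first one that is acceptable.
    for r in range(len(RATES) - 1, 0, -1):
        mask = (choice == 0) & (predicted_error[..., r] <= jnd)
        choice[mask] = r
    return choice
```

In a real deferred pipeline, the resulting per-tile indices would be written to a shading-rate image consumed by the hardware VRS stage; the sketch only shows the thresholding logic.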




• Published in

  Proceedings of the ACM on Computer Graphics and Interactive Techniques, Volume 5, Issue 1
  May 2022, 252 pages
  EISSN: 2577-6193
  DOI: 10.1145/3535313

  Copyright © 2022 ACM
  Publisher: Association for Computing Machinery, New York, NY, United States


        Qualifiers

        • research-article
        • Research
        • Refereed
