
ADOP: approximate differentiable one-pixel point rendering

Darius Rückert, Linus Franke, Marc Stamminger

Published: 22 July 2022

Abstract

In this paper, we present ADOP, a novel point-based, differentiable neural rendering pipeline. Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud. To generate a novel view, the point cloud is rasterized with learned feature vectors as colors, and a deep neural network fills the remaining holes and shades each output pixel. The rasterizer renders points as one-pixel splats, which makes it very fast and allows us to compute gradients with respect to all relevant input parameters efficiently. Furthermore, our pipeline contains a fully differentiable, physically-based photometric camera model, including exposure, white balance, and a camera response function. Following the idea of inverse rendering, we use our renderer to refine its input in order to reduce inconsistencies and optimize the quality of its output. In particular, we can optimize structural parameters such as camera poses, lens distortion, point positions and features, and a neural environment map, as well as photometric parameters such as the camera response function, vignetting, and per-image exposure and white balance. Because the pipeline includes these photometric parameters, our system can smoothly handle input images with varying exposure and white balance, and it generates high-dynamic-range output. We show that, thanks to the refined input, we can achieve high render quality even for difficult input, e.g., imperfect camera calibration, inaccurate proxy geometry, or varying exposure. As a result, a simpler and thus faster deep neural network is sufficient for reconstruction. In combination with the fast point rasterization, ADOP achieves real-time rendering rates even for models with well over 100M points.
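To make the one-pixel rasterization step concrete, here is a minimal PyTorch sketch of point splatting with a per-pixel depth test. The function name, camera convention, and shapes are illustrative assumptions, not ADOP's actual API, and the paper's approximate gradients for point positions under the discrete depth test are not reproduced: in this simplified version, gradients flow only into the learned per-point features.

```python
# Minimal one-pixel point splatting sketch (assumed interface, not ADOP's API).
import torch

def rasterize_points(points, features, R, t, K, height, width):
    """points: (N, 3) world positions; features: (N, C) learned descriptors;
    R, t: world-to-camera rotation/translation; K: (3, 3) pinhole intrinsics."""
    cam = points @ R.T + t                      # world -> camera space
    z = cam[:, 2]
    front = z > 1e-6                            # discard points behind the camera
    cam, z, feats = cam[front], z[front], features[front]

    proj = cam @ K.T                            # pinhole projection
    px = (proj[:, 0] / z).round().long()        # one-pixel splat: nearest pixel
    py = (proj[:, 1] / z).round().long()
    ok = (px >= 0) & (px < width) & (py >= 0) & (py < height)
    px, py, z, feats = px[ok], py[ok], z[ok], feats[ok]
    idx = py * width + px                       # flattened pixel index

    with torch.no_grad():                       # the depth test itself carries no gradient
        depth = torch.full((height * width,), float("inf"), device=z.device)
        depth.scatter_reduce_(0, idx, z, reduce="amin")   # closest depth per pixel
        closest = z <= depth[idx]               # winners of the per-pixel z-test

    image = torch.zeros(height * width, feats.shape[1], device=z.device)
    image[idx[closest]] = feats[closest]        # write the winning feature vectors
    return image.view(height, width, -1), depth.view(height, width)
```

Calling, e.g., rasterize_points(pts, feat, R, t, K, 1080, 1920) yields an (H, W, C) feature image with holes wherever no point projected, which is exactly what the subsequent neural network is trained to fill and shade.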

https://github.com/darglein/ADOP
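The differentiable photometric camera model mentioned in the abstract can likewise be pictured as a small module of per-image parameters mapping the pipeline's linear HDR output to the observed LDR images. The sketch below is an assumed simplification rather than the exact parameterization of the paper or the repository above: per-image exposure and white balance, an even-polynomial vignette, and a monotone piecewise-linear response curve.

```python
# Hypothetical photometric camera model sketch; parameter shapes, the vignette
# polynomial, and the response parameterization are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhotometricModel(nn.Module):
    def __init__(self, num_images, num_knots=16):
        super().__init__()
        self.log_exposure = nn.Parameter(torch.zeros(num_images))     # per-image exposure (log space)
        self.white_balance = nn.Parameter(torch.ones(num_images, 3))  # per-image RGB gains
        self.vignette = nn.Parameter(torch.zeros(3))                  # radial polynomial coefficients
        self.response_raw = nn.Parameter(torch.zeros(num_knots))      # response-curve increments

    def response(self, x):
        # Monotone-by-construction response: cumulative positive increments from 0 to 1.
        inc = F.softplus(self.response_raw)
        knots = torch.cat([inc.new_zeros(1), torch.cumsum(inc, dim=0)])
        knots = knots / knots[-1]
        pos = x.clamp(0.0, 1.0) * (knots.numel() - 1)     # continuous knot position
        lo = pos.floor().long().clamp(max=knots.numel() - 2)
        w = pos - lo.to(pos.dtype)
        return (1.0 - w) * knots[lo] + w * knots[lo + 1]  # piecewise-linear lookup

    def forward(self, hdr, image_id, radius2):
        """hdr: (H, W, 3) linear radiance; radius2: (H, W) squared distance
        from the principal point, normalized to [0, 1]."""
        x = hdr * torch.exp(self.log_exposure[image_id])  # per-image exposure
        x = x * self.white_balance[image_id]              # per-image white balance
        falloff = (1 + self.vignette[0] * radius2
                     + self.vignette[1] * radius2**2
                     + self.vignette[2] * radius2**3)     # even-polynomial vignetting
        x = x * falloff.unsqueeze(-1)
        return self.response(x)                           # learned response -> LDR in [0, 1]
```

Because every operation here is differentiable, such parameters can be optimized per training image alongside the point cloud, poses, and network weights; this is what allows input photos with varying exposure and white balance while the scene representation itself stays in linear HDR.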


Supplemental Material

• 3528223.3530122.mp4 (presentation)
• 099-439-supp-video.mp4 (supplemental material)




Published in

ACM Transactions on Graphics, Volume 41, Issue 4 (July 2022), 1978 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3528223

Copyright © 2022 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers

• research-article
