ADOP: Approximate Differentiable One-Pixel Point Rendering

Abstract
In this paper we present ADOP, a novel point-based, differentiable neural rendering pipeline. Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud. To generate a novel view, the point cloud is rasterized with learned feature vectors as colors, and a deep neural network fills the remaining holes and shades each output pixel. The rasterizer renders points as one-pixel splats, which makes it very fast and allows us to compute gradients with respect to all relevant input parameters efficiently. Furthermore, our pipeline contains a fully differentiable, physically based photometric camera model, including exposure, white balance, and a camera response function. Following the idea of inverse rendering, we use our renderer to refine its own input in order to reduce inconsistencies and optimize the quality of its output. In particular, we can optimize structural parameters such as camera poses, lens distortions, point positions and features, and a neural environment map, as well as photometric parameters such as the camera response function, vignetting, and per-image exposure and white balance. Because the pipeline includes these photometric parameters, our system smoothly handles input images with varying exposure and white balance and generates high dynamic range output. We show that, thanks to the refined input, we achieve high render quality even for difficult input, e.g. imperfect camera calibration, inaccurate proxy geometry, or varying exposure. As a result, a simpler and thus faster deep neural network is sufficient for reconstruction. In combination with the fast point rasterization, ADOP achieves real-time rendering rates even for models with well over 100M points.
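The two stages the abstract describes, one-pixel point splatting with a z-buffer followed by a photometric camera model, can be sketched in a few lines. This is a simplified NumPy illustration under our own assumptions, not the paper's GPU implementation: the function names are hypothetical, a fixed gamma curve stands in for the learned camera response, and lens distortion, vignetting, the hole-filling network, and all learned per-image parameters are omitted.

```python
import numpy as np

def rasterize_one_pixel_splats(points, features, K, R, t, width, height):
    """Splat each 3D point into a single pixel, keeping the nearest point
    per pixel via a z-buffer. Pixels that no point lands in stay zero
    ("holes"); in a pipeline like ADOP's a network would fill them."""
    cam = points @ R.T + t                     # world -> camera space, (N, 3)
    z = cam[:, 2]
    valid = z > 1e-6                           # drop points behind the camera
    cam, z, feat = cam[valid], z[valid], features[valid]
    uv = cam @ K.T                             # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    px = np.round(uv).astype(int)              # one-pixel splat: nearest pixel
    inside = (px[:, 0] >= 0) & (px[:, 0] < width) & \
             (px[:, 1] >= 0) & (px[:, 1] < height)
    px, z, feat = px[inside], z[inside], feat[inside]
    # Write far-to-near so the nearest point per pixel wins: NumPy applies
    # duplicate-index assignments in order, so the last write survives.
    order = np.argsort(-z)
    image = np.zeros((height, width, features.shape[1]))
    depth = np.full((height, width), np.inf)
    image[px[order, 1], px[order, 0]] = feat[order]
    depth[px[order, 1], px[order, 0]] = z[order]
    return image, depth

def apply_photometric(hdr, exposure_ev=0.0, white_balance=1.0, gamma=2.2):
    """Map HDR radiance to an LDR image: exposure and white balance act as
    gains, and a gamma curve stands in for the learned camera response."""
    ldr = np.clip(hdr * 2.0 ** exposure_ev * white_balance, 0.0, 1.0)
    return ldr ** (1.0 / gamma)

# Usage: two points along the optical axis land in the same pixel; the
# nearer one (z = 1) overwrites the farther one in image and depth buffer.
K = np.array([[100.0, 0.0, 16.0], [0.0, 100.0, 16.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 1.0]])
feats = np.array([[1.0], [2.0]])
img, depth = rasterize_one_pixel_splats(pts, feats, K, np.eye(3),
                                        np.zeros(3), 32, 32)
```

Because every splat touches exactly one pixel, the forward pass is a scatter and the backward pass with respect to point features is the matching gather, which is what makes gradients cheap to compute compared to multi-pixel splatting.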
https://github.com/darglein/ADOP