ABSTRACT
Loss Functions for Neural Rendering (Jun-Yan Zhu)
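As context for this reading list: the NeRF-style methods below are trained by volume-rendering each camera ray and minimizing a photometric loss against the ground-truth pixel color. The following is a minimal illustrative NumPy sketch of that standard quadrature and MSE loss, not the implementation of any specific paper; all function and variable names are my own.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Volume-render one ray: alpha-composite per-sample densities and colors.

    sigmas: (N,) densities; colors: (N, 3) RGB; deltas: (N,) segment lengths.
    Returns the rendered RGB for the ray (standard NeRF-style quadrature).
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)        # opacity of each segment
    trans = np.cumprod(1.0 - alphas + 1e-10)       # transmittance after each segment
    trans = np.concatenate([[1.0], trans[:-1]])    # shift so T_i = prod_{j<i}(1 - a_j)
    weights = trans * alphas                       # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)

def photometric_loss(rendered, target):
    """Mean-squared error between rendered and ground-truth pixel colors."""
    return float(np.mean((rendered - target) ** 2))

# Toy example: two samples along one ray.
sigmas = np.array([0.5, 2.0])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
deltas = np.array([0.1, 0.1])
rgb = composite_ray(sigmas, colors, deltas)
loss = photometric_loss(rgb, np.array([0.1, 0.2, 0.0]))
```

In a full pipeline this loss is averaged over a batch of rays and backpropagated through the network that predicts `sigmas` and `colors`; the dynamic-scene papers below add further terms (e.g. deformation regularizers or scene-flow losses) on top of this reconstruction loss.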
- [D-NeRF] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. 2021. D-NeRF: Neural Radiance Fields for Dynamic Scenes. Computer Vision and Pattern Recognition (CVPR).
- [DyNeRF] Tianye Li, Mira Slavcheva, Michael Zollhoefer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, and Zhaoyang Lv. 2021. Neural 3D Video Synthesis. arXiv.
- [NeRF] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV).
- [Nerfies] Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. 2020. Deformable Neural Radiance Fields. arXiv.
- [NeRFlow] Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B. Tenenbaum, and Jiajun Wu. 2020. Neural Radiance Flow for 4D View Synthesis and Video Processing. arXiv.
- [NR-NeRF] Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhoefer, Christoph Lassner, and Christian Theobalt. 2020. Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video. arXiv.
- [NSFF] Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. 2021. Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes. Computer Vision and Pattern Recognition (CVPR).
- [Video-NeRF] Wenqi Xian, Jia-Bin Huang, Johannes Kopf, and Changil Kim. 2020. Space-time Neural Irradiance Fields for Free-Viewpoint Video. arXiv.
- Richard A. Newcombe, Dieter Fox, and Steven M. Seitz. 2015. DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time. Computer Vision and Pattern Recognition (CVPR).
- Rui Yu, Chris Russell, Neill D. F. Campbell, and Lourdes Agapito. 2015. Direct, Dense, and Deformable: Template-Based Non-Rigid 3D Reconstruction from RGB Video. International Conference on Computer Vision (ICCV).
- Jae Shin Yoon, Kihwan Kim, Orazio Gallo, Hyun Soo Park, and Jan Kautz. 2020. Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera. Computer Vision and Pattern Recognition (CVPR).
- Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. 2019. Neural Volumes: Learning Dynamic Renderable Volumes from Images. SIGGRAPH.
- Aayush Bansal, Minh Vo, Yaser Sheikh, Deva Ramanan, and Srinivasa Narasimhan. 2020. 4D Visualization of Dynamic Events from Unconstrained Multi-View Videos. Computer Vision and Pattern Recognition (CVPR).
- Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. 2020. Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision. Computer Vision and Pattern Recognition (CVPR).
- Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. 2019. Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics. International Conference on Computer Vision (ICCV).
- Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, and Tobias Ritschel. 2020. X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation. SIGGRAPH Asia.
- Patrick Esser, Johannes Haux, Timo Milbich, and Björn Ommer. 2018. Towards Learning a Realistic Rendering of Human Behavior. CVPR.
- Albert Pumarola, Antonio Agudo, Alberto Sanfeliu, and Francesc Moreno-Noguer. 2018. Unsupervised Person Image Synthesis in Arbitrary Poses. CVPR.
- Chenyang Si, Wei Wang, Liang Wang, and Tieniu Tan. 2018. Multistage Adversarial Losses for Pose-Based Human Image Synthesis. CVPR.
- Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A. Efros. 2019. Everybody Dance Now. ICCV.
- Kfir Aberman, Mingyi Shi, Jing Liao, Dani Lischinski, Baoquan Chen, and Daniel Cohen-Or. 2019. Deep Video-Based Performance Cloning. Eurographics.
- Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. 2019. First Order Motion Model for Image Animation. NeurIPS.
- Liqian Ma, Qianru Sun, Stamatios Georgoulis, Luc Van Gool, Bernt Schiele, and Mario Fritz. 2018. Disentangled Person Image Generation. CVPR.
- Ricardo Martin-Brualla, Rohit Pandey, Shuoran Yang, Pavel Pidlypenskyi, Jonathan Taylor, Julien Valentin, Sameh Khamis, Philip Davidson, Anastasia Tkach, Peter Lincoln, Adarsh Kowdle, Christoph Rhemann, Dan B Goldman, Cem Keskin, Steve Seitz, Shahram Izadi, and Sean Fanello. 2018. LookinGood: Enhancing Performance Capture with Real-Time Neural Re-Rendering. ToG.
- Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. 2018. Video-to-Video Synthesis. NeurIPS.
- Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, and Christian Theobalt. 2019. Neural Rendering and Reenactment of Human Actor Videos. ToG.
- Yining Li, Chen Huang, and Chen Change Loy. 2019. Dense Intrinsic Appearance Flow for Human Pose Transfer. CVPR.
- Kripasindhu Sarkar, Dushyant Mehta, Weipeng Xu, Vladislav Golyanik, and Christian Theobalt. 2020. Neural Re-Rendering of Humans from a Single Image. ECCV.
- Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, and Christian Theobalt. 2020. Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation. TVCG.
- C. Lawrence Zitnick, Sing Bing Kang, Matthew Uyttendaele, Simon Winder, and Richard Szeliski. 2004. High-Quality Video View Interpolation Using a Layered Representation. ToG.
- Carsten Stoll, Jürgen Gall, Edilson de Aguiar, Sebastian Thrun, and Christian Theobalt. 2010. Video-Based Reconstruction of Animatable Human Characters. ToG.
- Feng Xu, Yebin Liu, Carsten Stoll, James Tompkin, Gaurav Bharaj, Qionghai Dai, Hans-Peter Seidel, Jan Kautz, and Christian Theobalt. 2011. Video-Based Characters: Creating New Human Performances from a Multi-view Video Database. SIGGRAPH.
- Guannan Li, Yebin Liu, and Qionghai Dai. 2014. Free-Viewpoint Video Relighting from Multi-View Sequence Under General Illumination. MVA.
- Dan Casas, Marco Volino, John Collomosse, and Adrian Hilton. 2014. 4D Video Textures for Interactive Character Appearance. CGF.
- Marco Volino, Dan Casas, John Collomosse, and Adrian Hilton. 2014. Optimal Representation of Multiple View Video. BMVC.
- Alvaro Collet, Ming Chuang, Pat Sweeney, Don Gillett, Dennis Evseev, David Calabrese, Hugues Hoppe, Adam Kirk, and Steve Sullivan. 2015. High-Quality Streamable Free-Viewpoint Video. ToG.
- Aliaksandra Shysheya, Egor Zakharov, Kara-Ali Aliev, Renat Bashirov, Egor Burkov, Karim Iskakov, Aleksei Ivakhnenko, Yury Malkov, Igor Pasechnik, Dmitry Ulyanov, Alexander Vakhitov, and Victor Lempitsky. 2019. Textured Neural Avatars. CVPR.
- Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. 2019. Neural Volumes: Learning Dynamic Renderable Volumes from Images. ToG.
- Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Niessner, Gordon Wetzstein, and Michael Zollhöfer. 2019. DeepVoxels: Learning Persistent 3D Feature Embeddings. CVPR.
- Abhimitra Meka, Rohit Pandey, Christian Haene, Sergio Orts-Escolano, Peter Barnum, Philip Davidson, Daniel Erickson, Yinda Zhang, Jonathan Taylor, Sofien Bouaziz, Chloe Legendre, Wan-Chun Ma, Ryan Overbeck, Thabo Beeler, Paul Debevec, Shahram Izadi, Christian Theobalt, Christoph Rhemann, and Sean Fanello. 2020. Deep Relightable Textures. SIGGRAPH Asia.
- Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. 2021. D-NeRF: Neural Radiance Fields for Dynamic Scenes. CVPR.
- Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV.
- Marc Habermann, Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, and Christian Theobalt. 2021. Real-Time Deep Dynamic Characters. SIGGRAPH.
- Jae Shin Yoon, Lingjie Liu, Vladislav Golyanik, Kripasindhu Sarkar, Hyun Soo Park, and Christian Theobalt. 2021. Pose-Guided Human Animation from a Single Image in the Wild. CVPR.
- Kripasindhu Sarkar, Vladislav Golyanik, Lingjie Liu, and Christian Theobalt. 2021. Style and Pose Control for Image Synthesis of Humans from a Single Monocular View. arXiv.
- Kripasindhu Sarkar, Lingjie Liu, Vladislav Golyanik, and Christian Theobalt. 2021. HumanGAN: A Generative Model of Human Images. arXiv.