DOI: 10.1145/3450508.3464573
Course

Advances in neural rendering

Published: 21 July 2021

ABSTRACT

Loss functions for Neural Rendering (Jun-Yan Zhu)

        Published in

          SIGGRAPH '21: ACM SIGGRAPH 2021 Courses
          August 2021
          2220 pages
          ISBN: 9781450383615
          DOI: 10.1145/3450508

          Copyright © 2021 Owner/Author

          Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Acceptance Rates

          Overall Acceptance Rate: 1,822 of 8,601 submissions, 21%
