Research Article · Open Access

Physics informed neural fields for smoke reconstruction with sparse data

Published: 22 July 2022

Abstract

High-fidelity reconstruction of dynamic fluids from sparse multiview RGB videos remains a formidable challenge, due to the complexity of the underlying physics as well as the severe occlusion and complex lighting in the captured data. Existing solutions either assume knowledge of obstacles and lighting, or focus only on simple fluid scenes without obstacles or complex lighting, and are thus unsuitable for real-world scenes with unknown lighting conditions or arbitrary obstacles. We present the first method to reconstruct dynamic fluid phenomena by leveraging the governing physics (i.e., the Navier-Stokes equations) in an end-to-end optimization from a mere set of sparse video frames, without taking lighting conditions, geometry information, or boundary conditions as input. Our method provides a continuous spatio-temporal scene representation, using neural networks as the ansatz of the density and velocity solution functions for fluids as well as the radiance field for static objects. With a hybrid architecture that separates static and dynamic content, fluid interactions with static obstacles are reconstructed for the first time without additional geometry input or human labeling. By augmenting time-varying neural radiance fields with physics-informed deep learning, our method benefits from the supervision of both images and physical priors. Our progressively growing model with regularization further disentangles the density-color ambiguity in the radiance field, which allows for a more robust optimization from the given sparse-view input. A pretrained density-to-velocity fluid model additionally serves as a data prior to avoid suboptimal velocity solutions that underestimate vorticity but trivially fulfill the physical equations. Our method exhibits high-quality results, with relaxed constraints and strong flexibility, on a representative set of synthetic and real flow captures. Code and sample tests are available at https://people.mpi-inf.mpg.de/~mchu/projects/PI-NeRF/.
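To make the physics prior concrete: for smoke, the reconstructed density should be passively transported by the reconstructed velocity, so a physics-informed optimization penalizes the residual of the transport equation, d_t + u · ∇d = 0, at sampled space-time points. The sketch below is a minimal, illustrative numerical check of that residual, not the paper's implementation: it uses finite differences on an analytic advected field in place of the neural ansatz and automatic differentiation, and all names are hypothetical.

```python
import math

# Constant velocity field U = (ux, uy); in the paper this would be a
# neural network evaluated at (x, y, z, t).
U = (1.0, 0.5)

def density(x, y, t):
    # A Gaussian blob whose center moves with the flow: an exact
    # solution of the transport equation, standing in for the learned
    # density field.
    cx, cy = U[0] * t, U[1] * t
    return math.exp(-((x - cx) ** 2 + (y - cy) ** 2))

def transport_residual(x, y, t, h=1e-4):
    # Central finite differences stand in for autodiff of the ansatz.
    d_t = (density(x, y, t + h) - density(x, y, t - h)) / (2 * h)
    d_x = (density(x + h, y, t) - density(x - h, y, t)) / (2 * h)
    d_y = (density(x, y + h, t) - density(x, y - h, t)) / (2 * h)
    return d_t + U[0] * d_x + U[1] * d_y

# Because the field is exactly advected, the residual vanishes up to
# O(h^2) discretization error; a physics-informed loss would penalize
# residual**2 over randomly sampled space-time points.
print(abs(transport_residual(0.3, -0.2, 0.7)))
```

In the actual method this residual (together with the full Navier-Stokes terms) acts as a soft constraint alongside the image reconstruction loss, so the velocity is supervised only indirectly, through the physics.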


Supplemental Material

3528223.3530169.mp4 (presentation)



Published in

ACM Transactions on Graphics, Volume 41, Issue 4 (July 2022), 1978 pages
ISSN: 0730-0301 · EISSN: 1557-7368 · DOI: 10.1145/3528223

Copyright © 2022 Owner/Author. This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher: Association for Computing Machinery, New York, NY, United States
