A Multi-Pass GAN for Fluid Flow Super-Resolution

Published: 26 July 2019

Abstract

We propose a novel method to up-sample volumetric functions with generative neural networks using several orthogonal passes. Our method decomposes generative problems on Cartesian field functions into multiple smaller sub-problems that can be learned more efficiently. Specifically, we utilize two separate generative adversarial networks: the first one up-scales slices which are parallel to the XY-plane, whereas the second one refines the whole volume along the Z-axis working on slices in the YZ-plane. In this way, we obtain full coverage for the 3D target function and can leverage spatio-temporal supervision with a set of discriminators. Additionally, we demonstrate that our method can be combined with curriculum learning and progressive growing approaches. We arrive at a first method that can up-sample volumes by a factor of eight along each dimension, i.e., increasing the number of degrees of freedom by 512. Large volumetric up-scaling factors such as this one have previously not been attainable, as the required number of weights in the neural networks renders adversarial training runs prohibitively difficult. We demonstrate the generality of our trained networks with a series of comparisons to previous work, a variety of complex 3D results, and an analysis of the resulting performance.
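To illustrate the decomposition described above, the following is a minimal sketch of the two orthogonal passes over a (Z, Y, X) volume. The function names and the nearest-neighbour up-sampling are placeholders of our own choosing: in the actual method each pass is a trained GAN generator supervised by spatio-temporal discriminators, not a fixed interpolation.

```python
import numpy as np

def upscale_slice(slice_2d, factor):
    # Stand-in for a trained 2D GAN generator: nearest-neighbour up-sampling.
    return np.repeat(np.repeat(slice_2d, factor, axis=0), factor, axis=1)

def multi_pass_upscale(volume, factor):
    """Two orthogonal passes over a (Z, Y, X) volume.

    Pass 1: up-scale each slice parallel to the XY-plane (resolves X and Y).
    Pass 2: refine each slice in the YZ-plane (resolves Z).
    """
    z, y, x = volume.shape
    # Pass 1: process Z slices independently -> shape (Z, Y*f, X*f)
    pass1 = np.stack([upscale_slice(volume[k], factor) for k in range(z)])
    # Pass 2: fix each X index, up-scale the remaining (Z, Y*f) slice along Z
    out = np.zeros((z * factor, y * factor, x * factor), dtype=volume.dtype)
    for i in range(x * factor):
        yz_slice = pass1[:, :, i]                         # shape (Z, Y*f)
        out[:, :, i] = np.repeat(yz_slice, factor, axis=0)  # refine Z only
    return out
```

With a factor of 8 per axis, as in the paper, the output holds 8³ = 512 times as many degrees of freedom as the input; the benefit of the decomposition is that each network only ever sees 2D slices.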

