
Non-stationary texture synthesis by adversarial expansion

Published: 30 July 2018

Abstract

The real world exhibits an abundance of non-stationary textures. Examples include textures with large scale structures, as well as spatially variant and inhomogeneous textures. While existing example-based texture synthesis methods can cope well with stationary textures, non-stationary textures still pose a considerable challenge, which remains unresolved. In this paper, we propose a new approach for example-based non-stationary texture synthesis. Our approach uses a generative adversarial network (GAN), trained to double the spatial extent of texture blocks extracted from a specific texture exemplar. Once trained, the fully convolutional generator is able to expand the size of the entire exemplar, as well as of any of its sub-blocks. We demonstrate that this conceptually simple approach is highly effective for capturing large scale structures, as well as other non-stationary attributes of the input exemplar. As a result, it can cope with challenging textures, which, to our knowledge, no other existing method can handle.
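To make the expansion idea concrete, the sketch below shows one possible way to set up a fully convolutional generator that doubles the spatial extent of its input. This is only an illustrative PyTorch sketch under assumed design choices (the layer widths, residual-block count, instance normalization, and single transposed-convolution upsampling stage are not taken from the paper, and the adversarial discriminator and training losses are omitted). The point it illustrates is the one stated in the abstract: a network with no fully connected layers, trained on fixed-size source blocks, can be applied unchanged to the entire exemplar or to any of its sub-blocks at test time.

    # Minimal sketch (not the authors' exact architecture): a fully convolutional
    # generator that doubles the spatial extent of its input.
    import torch
    import torch.nn as nn

    class ResBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.InstanceNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.InstanceNorm2d(channels),
            )

        def forward(self, x):
            return x + self.body(x)  # residual connection

    class ExpansionGenerator(nn.Module):
        """Maps a 3 x H x W texture block to a 3 x 2H x 2W output."""
        def __init__(self, features=64, num_res_blocks=6):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv2d(3, features, 7, padding=3),
                nn.ReLU(inplace=True),
            )
            self.res = nn.Sequential(*[ResBlock(features) for _ in range(num_res_blocks)])
            # A single x2 upsampling stage produces the doubled spatial extent.
            self.decode = nn.Sequential(
                nn.ConvTranspose2d(features, features, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(features, 3, 7, padding=3),
                nn.Tanh(),
            )

        def forward(self, x):
            return self.decode(self.res(self.encode(x)))

    if __name__ == "__main__":
        g = ExpansionGenerator()
        block = torch.randn(1, 3, 128, 128)    # a fixed-size training block
        print(g(block).shape)                  # torch.Size([1, 3, 256, 256])
        exemplar = torch.randn(1, 3, 600, 400) # the full exemplar also works
        print(g(exemplar).shape)               # torch.Size([1, 3, 1200, 800])

In this reading, "adversarial expansion" refers to training such a generator against a discriminator that compares its doubled output to real double-size crops of the same exemplar, so that the learned expansion reproduces the exemplar's large-scale, non-stationary structure.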


Supplemental Material

a49-zhou.mp4



Published in

ACM Transactions on Graphics, Volume 37, Issue 4 (August 2018), 1670 pages
ISSN: 0730-0301, EISSN: 1557-7368, DOI: 10.1145/3197517

        Copyright © 2018 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
