Multi-granularity Brushstrokes Network for Universal Style Transfer

Published: 04 March 2022

Abstract

Neural style transfer has developed rapidly in recent years, with great improvements in both performance and efficiency. However, most existing methods do not transfer the brushstroke information of style images well. In this article, we address this issue by training a multi-granularity brushstrokes network based on a parallel coding structure. Specifically, we first adopt a content parsing module to obtain the spatial distribution of the content image and the smoothness of its different regions. Then, brushstroke features of different granularities are transformed by a multi-granularity style-swap module guided by the region content map. Finally, the stylized features of the two branches are fused to enhance the stylized results. The multi-granularity brushstrokes network is jointly supervised by a new multi-layer brushstroke loss together with pre-existing losses. The proposed method closely mirrors the artistic drawing process. In addition, we can control whether the color of the stylized result tends toward the style image or the content image. Experimental results demonstrate the advantages of our proposed method compared with existing schemes.
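To make the pipeline concrete, the core style-swap operation (patch-based nearest-neighbor replacement, in the spirit of the style-swap approach this work extends to multiple granularities) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function names, the two fixed patch sizes, and the binary region map standing in for the content parsing module's output are all hypothetical.

```python
import numpy as np

def style_swap(content_feat, style_feat, patch=3, stride=1):
    """Patch-based style swap: replace each content-feature patch with
    its best-matching style-feature patch (dot product normalized by
    the style patch's L2 norm; the content patch's norm is constant
    per query, so it does not affect the argmax).
    Feature maps are (C, H, W) arrays."""
    C, H, W = content_feat.shape
    # Extract every style patch once, flattened for matching.
    sp = []
    for i in range(0, style_feat.shape[1] - patch + 1, stride):
        for j in range(0, style_feat.shape[2] - patch + 1, stride):
            sp.append(style_feat[:, i:i + patch, j:j + patch])
    sp = np.stack(sp)                              # (N, C, p, p)
    flat = sp.reshape(len(sp), -1)                 # (N, C*p*p)
    norms = np.linalg.norm(flat, axis=1) + 1e-8

    out = np.zeros_like(content_feat)
    count = np.zeros((H, W))
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            cp = content_feat[:, i:i + patch, j:j + patch].reshape(-1)
            scores = flat @ cp / norms             # similarity to each style patch
            out[:, i:i + patch, j:j + patch] += sp[np.argmax(scores)]
            count[i:i + patch, j:j + patch] += 1   # overlap averaging
    return out / np.maximum(count, 1)[None]

def multi_granularity_swap(content_feat, style_feat, region_map,
                           fine=3, coarse=5):
    """Hypothetical fusion step: swap at two granularities and blend
    per pixel by `region_map` (1 = smooth region -> coarser strokes),
    loosely mirroring the region-guided two-branch fusion."""
    f = style_swap(content_feat, style_feat, patch=fine)
    c = style_swap(content_feat, style_feat, patch=coarse)
    return region_map[None] * c + (1 - region_map[None]) * f
```

In the actual network the swap operates on encoder features and the blended result is decoded back to an image; the sketch only shows why a smoothness-aware region map lets flat areas receive coarse brushstrokes while detailed areas keep fine ones.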



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 4
  November 2022, 497 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3514185
  • Editor: Abdulmotaleb El Saddik


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 4 March 2022
      • Accepted: 1 December 2021
      • Revised: 1 November 2021
      • Received: 1 July 2021
Published in TOMM, Volume 18, Issue 4


      Qualifiers

      • research-article
      • Refereed
