
Fast, Accurate and Automatic Brushstroke Extraction

Published: 11 May 2021

Abstract

Brushstrokes are regarded as the artist’s “handwriting” in a painting. In many applications, such as style learning and transfer, painting mimicry, and painting authentication, it is highly desirable to quantitatively and accurately identify brushstroke characteristics in old masters’ works using computer programs. However, because a painting typically contains hundreds or thousands of intermingling brushstrokes, this task remains challenging. This article proposes an efficient algorithm for brushstroke extraction based on a deep neural network, named DStroke. Compared with state-of-the-art methods, the main merit of DStroke is that it automatically and rapidly extracts brushstrokes from a painting without manual annotation, while accurately approximating the real brushstrokes with high reliability. In particular, it recovers the faithful soft transitions between brushstrokes, which other methods often ignore. The details of brushstrokes in a masterpiece (e.g., shapes, colors, texture, overlaps) are highly valued by artists, since they hold promise to enhance and extend artists’ powers, just as microscopes extend biologists’ powers. To demonstrate its efficiency, we apply DStroke to a set of real scans of paintings and a set of synthetic paintings. Experiments show that DStroke is noticeably faster and more accurate at identifying and extracting brushstrokes than other methods.
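The abstract does not detail DStroke’s architecture, but its emphasis on soft transitions suggests predicting per-pixel soft (alpha-like) brushstroke masks rather than hard binary segments. The following is a minimal illustrative sketch of that general idea, not the authors’ method: a hypothetical encoder-decoder (TinyStrokeNet) whose layer sizes and names are assumptions made here purely for illustration.

```python
# A minimal sketch (NOT the authors' DStroke architecture): a tiny
# encoder-decoder that maps a painting crop to a per-pixel soft brushstroke
# mask, illustrating soft (alpha-like) stroke boundaries rather than hard ones.
import torch
import torch.nn as nn

class TinyStrokeNet(nn.Module):
    """Hypothetical network; layer sizes are illustrative only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),  # values in [0, 1] model gradual stroke edges
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    net = TinyStrokeNet()
    painting_crop = torch.rand(1, 3, 256, 256)  # placeholder RGB input
    soft_mask = net(painting_crop)              # shape (1, 1, 256, 256)
    print(soft_mask.shape)
```

In practice such a network would be trained on stroke annotations or synthetic paintings; the abstract states only that DStroke requires no manual annotation at extraction time and is evaluated on both real scans and synthetic paintings.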

