Fast, Accurate, and Automatic Brushstroke Extraction

Abstract
Brushstrokes are viewed as the artist's "handwriting" in a painting. In many applications, such as style learning and transfer, painting imitation, and painting authentication, it is highly desirable to quantitatively and accurately identify brushstroke characteristics from old masters' pieces using computer programs. However, because hundreds or thousands of brushstrokes intermingle in a painting, the task remains challenging. This article proposes an efficient algorithm for brush Stroke extraction based on a Deep neural network, named DStroke. Compared to the state of the art, the main merit of DStroke is that it automatically and rapidly extracts brushstrokes from a painting without manual annotation, while accurately approximating the real brushstrokes with high reliability. In particular, DStroke recovers the faithful soft transitions between brushstrokes, which other methods often ignore. The details of brushstrokes in a masterpiece (e.g., shapes, colors, texture, overlaps) are highly valued by artists, since they hold promise to enhance and extend the artists' powers, just as microscopes extend biologists' powers. To demonstrate the efficiency of DStroke, we apply it to a set of real scans of paintings and a set of synthetic paintings, respectively. Experiments show that DStroke is noticeably faster and more accurate at identifying and extracting brushstrokes than the other methods.
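The page does not describe DStroke's actual network, so the sketch below is not the authors' method. It only illustrates, under our own assumptions, the "soft transitions" idea the abstract emphasizes: instead of a hard binary stroke mask, each pixel receives a fractional alpha value, here computed from RGB distance to a reference stroke color (the function name `soft_stroke_matte` and the bandwidth `sigma` are hypothetical, not from the article):

```python
import numpy as np

def soft_stroke_matte(image, stroke_color, sigma=0.15):
    """Toy per-pixel soft matte: alpha decays smoothly with RGB
    distance from a reference stroke color, preserving the soft
    edge transitions that a hard 0/1 segmentation would discard."""
    diff = np.asarray(image, dtype=float) - np.asarray(stroke_color, dtype=float)
    dist2 = np.sum(diff * diff, axis=-1)          # squared RGB distance
    return np.exp(-dist2 / (2.0 * sigma ** 2))    # Gaussian falloff in [0, 1]

# Tiny 2x2 "painting": a pure-red pixel, a near-red pixel, two blue pixels.
img = np.array([[[1.0, 0.0, 0.0], [0.9, 0.1, 0.0]],
                [[0.0, 0.0, 1.0], [0.0, 0.1, 1.0]]])
alpha = soft_stroke_matte(img, (1.0, 0.0, 0.0))
# The pure-red pixel gets alpha 1.0, the near-red pixel a fractional
# alpha, and the blue pixels an alpha near 0.
```

A learned extractor such as DStroke would predict such soft mattes directly from the painting rather than from a hand-picked reference color; the point here is only the output representation, not the method.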