Abstract
Changing the style of an image or video while preserving its content is a crucial criterion for assessing a neural style transfer algorithm. However, it is very challenging to transfer a map art style to a video whose "content" comprises a map background and animated objects. In this article, we present a comprehensive system that solves the problems of transferring map art style to such videos. Our system takes as input an arbitrary video, a map image, and an off-the-shelf map art image, and generates an artistic video without damaging the functionality of the map or the consistency of its details. To address this challenge, we propose a novel network, the Map Art Video Network (MAViNet), together with tailored objective functions and a training set rich in animation content and diverse map structures. We have evaluated our method on various challenging cases and compared it against related works. Our method substantially outperforms state-of-the-art methods in visual quality and meets the criteria outlined above.
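The abstract does not spell out MAViNet's objective functions, but as background, the perceptual losses that neural style transfer systems commonly optimize (content loss plus a Gram-matrix style loss, after Gatys et al.) can be sketched as follows. This is a minimal NumPy illustration with toy feature maps standing in for deep network (e.g., VGG) activations; it is not the paper's actual network or loss.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a (C, H, W) feature map: channel-wise correlations
    that summarize texture ("style") while discarding spatial layout."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_out, feat_style):
    """Squared Frobenius distance between the two Gram matrices."""
    g_out, g_style = gram_matrix(feat_out), gram_matrix(feat_style)
    return float(np.sum((g_out - g_style) ** 2))

def content_loss(feat_out, feat_content):
    """MSE between feature maps: content is preserved when deep features match."""
    return float(np.mean((feat_out - feat_content) ** 2))

# Toy feature maps standing in for network activations.
rng = np.random.default_rng(0)
f_content = rng.standard_normal((4, 8, 8))
f_style = rng.standard_normal((4, 8, 8))

# An output identical to the content features has zero content loss,
# and matching the style features drives the style term to zero.
assert content_loss(f_content, f_content) == 0.0
assert style_loss(f_style, f_style) == 0.0

# A typical total objective weights the two terms against each other.
total = content_loss(f_content, f_content) + 1e3 * style_loss(f_content, f_style)
print(total > 0.0)  # style mismatch alone makes the objective positive
```

In practice, video methods such as the one described here add further terms (e.g., temporal-consistency and structure-preserving losses) on top of this basic content/style trade-off; the weighting constant above is illustrative only.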
Index Terms: Structure-aware Video Style Transfer with Map Art