Survey

Understanding and Creating Art with AI: Review and Outlook

Published: 16 February 2022

Abstract

Technologies related to artificial intelligence (AI) are strongly influencing changes in research and creative practices in the visual arts. The growing number of research initiatives and creative applications emerging at the intersection of AI and art motivates us to examine and discuss the creative and explorative potential of AI technologies in the context of art. This article provides an integrated review of two facets of AI and art: (1) AI used for art analysis, applied to digitized artwork collections, and (2) AI used for creative purposes, generating novel artworks. In the context of AI-related research for art understanding, we present a comprehensive overview of artwork datasets and of recent works that address a variety of tasks, such as classification, object detection, similarity retrieval, multimodal representations, and computational aesthetics, among others. In relation to the role of AI in creating art, we address various practical and theoretical aspects of AI Art and consolidate related works that deal with these topics in detail. Finally, we provide a concise outlook on the future progression and potential impact of AI technologies on our understanding and creation of art.
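Many of the art-understanding works surveyed here follow a common transfer-learning pattern: a pretrained backbone network turns each digitized artwork into a feature vector, and a small classifier is trained on top to predict labels such as style or genre. The sketch below illustrates only the shape of that pipeline; the "backbone" is a stand-in random projection and the data are synthetic, not a real artwork dataset or any specific surveyed method.

```python
import numpy as np

def backbone_features(images):
    # Stand-in for a frozen pretrained feature extractor (e.g. a CNN
    # trained on ImageNet with its classification head removed).
    # Here: a fixed random projection followed by a nonlinearity.
    proj = np.random.default_rng(42).standard_normal((images.shape[1], 64))
    return np.tanh(images @ proj)

def train_linear_probe(feats, labels, n_classes, lr=0.1, epochs=200):
    # Multinomial logistic regression ("linear probe") trained with
    # plain full-batch gradient descent on the frozen features.
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / len(feats)
    return W

# Two synthetic "styles": clusters of image vectors with different means.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-1.0, 0.3, (50, 32)),
                    rng.normal(1.0, 0.3, (50, 32))])
y = np.array([0] * 50 + [1] * 50)

F = backbone_features(X)
W = train_linear_probe(F, y, n_classes=2)
acc = (np.argmax(F @ W, axis=1) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

In practice, the surveyed works typically replace the random projection with a deep network and either keep it frozen (as here) or fine-tune it end-to-end on the artwork labels.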

REFERENCES

  1. [1] Pavlov G. Goh A. Ramesh, M., and Gray S.. 2021. DALL\(\cdot\)E: Creating Images from Text. Retrieved January 25, 2021 from https://openai.com/blog/dall-e/.Google ScholarGoogle Scholar
  2. [2] Abry Patrice, Wendt Herwig, and Jaffard Stéphane. 2013. When Van Gogh meets Mandelbrot: Multifractal classification of painting’s texture. Signal Processing 93, 3 (2013), 554572. Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. [3] Achlioptas Panos, Ovsjanikov Maks, Haydarov Kilichbek, Elhoseiny Mohamed, and Guibas Leonidas. 2021. ArtEmis: Affective language for visual art. arXiv preprint arXiv:2101.07396 (2021).Google ScholarGoogle Scholar
  4. [4] Agarwal Siddharth, Karnick Harish, Pant Nirmal, and Patel Urvesh. 2015. Genre and style based painting classification. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision (WACV’15). IEEE, Los Alamitos, CA, 588594. https://doi.org/10.1109/WACV.2015.84 Google ScholarGoogle ScholarDigital LibraryDigital Library
  5. [5] Alameda-Pineda Xavier, Ricci Elisa, Yan Yan, and Sebe Nicu. 2016. Recognizing emotions from abstract paintings using non-linear matrix completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 52405248.Google ScholarGoogle ScholarCross RefCross Ref
  6. [6] Amirshahi Seyed Ali, Hayn-Leichsenring Gregor Uwe, Denzler Joachim, and Redies Christoph. 2014. Jenaesthetics subjective dataset: Analyzing paintings by subjective scores. In Proceedings of the European Conference on Computer Vision. 319.Google ScholarGoogle Scholar
  7. [7] Bar Yaniv, Levy Noga, and Wolf Lior. 2014. Classification of artistic styles using binarized features derived from a deep neural network. In Computer Vision—ECCV 2014 Workshops. Lecture Notes in Computer Science, Vol. 8925. Springer, 71–84. https://doi.org/10.1007/978-3-319-16178-5_5Google ScholarGoogle Scholar
  8. [8] Baraldi Lorenzo, Cornia Marcella, Grana Costantino, and Cucchiara Rita. 2018. Aligning text and document illustrations: Towards visually explainable digital humanities. In Proceedings of the 24th International Conference on Pattern Recognition (ICPR’18). IEEE, Los Alamitos, CA, 10971102.Google ScholarGoogle ScholarCross RefCross Ref
  9. [9] Bell Peter and Impett Leonardo. 2019. Ikonographie und interaktion. Computergestützte analyse von posen in bildern der heilsgeschichte. Das Mittelalter 24, 1 (2019), 3153.Google ScholarGoogle ScholarCross RefCross Ref
  10. [10] Bianco Simone, Mazzini Davide, Napoletano Paolo, and Schettini Raimondo. 2019. Multitask painting categorization by deep multibranch neural network. Expert Systems with Applications 135 (2019), 90101.Google ScholarGoogle ScholarDigital LibraryDigital Library
  11. [11] Boden Margaret A.. 2010. Creativity and Art: Three Roads to Surprise. Oxford University Press.Google ScholarGoogle Scholar
  12. [12] Boden Margaret A. and Edmonds Ernest A.. 2009. What is generative art?Digital Creativity 20, 1–2 (2009), 2146.Google ScholarGoogle Scholar
  13. [13] Bongini Pietro, Becattini Federico, Bagdanov Andrew D., and Bimbo Alberto Del. 2020. Visual question answering for cultural heritage. arXiv preprint arXiv:2003.09853 (2020).Google ScholarGoogle Scholar
  14. [14] Brock Andrew, Donahue Jeff, and Simonyan Karen. 2018. Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096 (2018).Google ScholarGoogle Scholar
  15. [15] Bsteh Sheila and Vermeylen Filip. 2021. From Painting to Pixel: Understanding NFT Artworks. Retrieved June 15, 2021 from https://www.researchgate.net/publication/351346278_From_Painting_to_Pixel_Understanding_NFT_artworks.Google ScholarGoogle Scholar
  16. [16] Carneiro Gustavo, Silva Nuno Pinho Da, Bue Alessio Del, and Costeira João Paulo. 2012. Artistic image classification: An analysis on the PRINTART database. In Computer Vision—ECCV 2012. Lecture Notes in Computer Science, Vol. 7575. Springer, 143–157. Google ScholarGoogle ScholarDigital LibraryDigital Library
  17. [17] Castellano Giovanna, Lella Eufemia, and Vessio Gennaro. 2021. Visual link retrieval and knowledge discovery in painting datasets. Multimedia Tools and Applications 80, 5 (2021), 65996616.Google ScholarGoogle ScholarDigital LibraryDigital Library
  18. [18] Castellano Giovanna and Vessio Gennaro. 2020. Towards a tool for visual link retrieval and knowledge discovery in painting datasets. In Digital Libraries: The Era of Big Data and Data Science. Communications in Computer and Information Science, Vol. 1177. Springer, 105–110.Google ScholarGoogle Scholar
  19. [19] Cetinic Eva. 2020. Iconographic image captioning for artworks. In Pattern Recognition. ICPR International Workshops and Challenges. Lecture Notes in Computer Science, Vol. 12663. Springer, 502–516.Google ScholarGoogle Scholar
  20. [20] Cetinic Eva and Grgic Sonja. 2013. Automated painter recognition based on image feature extraction. In Proceedings of the 2013 55th International Symposium (ELMAR’13). IEEE, Los Alamitos, CA, 1922.Google ScholarGoogle Scholar
  21. [21] Cetinic Eva and Grgic Sonja. 2016. Genre classification of paintings. In Proceedings of the 2016 International Symposium (ELMAR’16). IEEE, Los Alamitos, CA, 201204.Google ScholarGoogle ScholarCross RefCross Ref
  22. [22] Cetinic Eva, Lipic Tomislav, and Grgic Sonja. 2018. Fine-tuning convolutional neural networks for fine art classification. Expert Systems with Applications 114 (2018), 107118.Google ScholarGoogle ScholarCross RefCross Ref
  23. [23] Cetinic Eva, Lipic Tomislav, and Grgic Sonja. 2019. A deep learning perspective on beauty, sentiment, and remembrance of art. IEEE Access 7 (2019), 7369473710.Google ScholarGoogle ScholarCross RefCross Ref
  24. [24] Cetinic Eva, Lipic Tomislav, and Grgic Sonja. 2020. Learning the principles of art history with convolutional neural networks. Pattern Recognition Letters 129 (2020), 5662.Google ScholarGoogle ScholarCross RefCross Ref
  25. [25] Chamberlain Rebecca, Mullin Caitlin, Scheerlinck Bram, and Wagemans Johan. 2018. Putting the art in artificial: Aesthetic responses to computer-generated art. Psychology of Aesthetics, Creativity, and the Arts 12, 2 (2018), 177.Google ScholarGoogle ScholarCross RefCross Ref
  26. [26] Ch’ng Eugene. 2019. Art by computing machinery: Is machine art acceptable in the artworld?ACM Transactions on Multimedia Computing, Communications, and Applications 15, 2s (2019), 117. Google ScholarGoogle ScholarDigital LibraryDigital Library
  27. [27] Christie’s. 2018. Is Artificial Intelligence Set to Become Art’s Next Medium? Retrieved December 2, 2020 from https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx.Google ScholarGoogle Scholar
  28. [28] Christie’s. 2021. Monumental Collage by Beeple Is First Purely Digital Artwork NFT to Come to Auction. Retrieved June 15, 2021 from https://www.christies.com/features/Monumental-collage-by-Beeple-is-first-purely-digital-artwork-NFT-to-come-to-auction-11510-7.aspx.Google ScholarGoogle Scholar
  29. [29] Coeckelbergh Mark. 2017. Can machines create art?Philosophy & Technology 30, 3 (2017), 285303.Google ScholarGoogle ScholarCross RefCross Ref
  30. [30] Colton Simon. 2012. The painting fool: Stories from building an automated painter. In Computers and Creativity. Springer, 338. Google ScholarGoogle ScholarDigital LibraryDigital Library
  31. [31] Colton Simon, Pease Alison, and Saunders Rob. 2018. Issues of authenticity in autonomously creative systems. In Proceedings of the 9th International Conference on Computational Creativity.Google ScholarGoogle Scholar
  32. [32] Crowley Elliot J., Parkhi Omkar M., and Zisserman Andrew. 2015. Face painting: Querying art with photos. In Proceedings of the 2015 British Machine Vision Conference (BMVC’15). 65.1–65.13.Google ScholarGoogle ScholarCross RefCross Ref
  33. [33] Crowley Elliot J. and Zisserman Andrew. 2014. In search of art. In Computer Vision—ECCV 2014 Workshops. Lecture Notes in Computer Science, Vol. 8925. Springer, 5470.Google ScholarGoogle Scholar
  34. [34] Crowley Elliot J. and Zisserman Andrew. 2014. The state of the art: Object retrieval in paintings using discriminative regions. In Proceedings of the 2014 British Machine Vision Conference (BMVC’14).Google ScholarGoogle ScholarCross RefCross Ref
  35. [35] Crowley Elliot J. and Zisserman Andrew. 2016. The art of detection. In Computer Vision—ECCV 2016 Workshops. Lecture Notes in Computer Science, Vol. 9913. Springer, 721737.Google ScholarGoogle Scholar
  36. [36] Daniele Antonio and Song Yi-Zhe. 2019. AI + art = human. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES’19). ACM, New York, NY, 155161. Google ScholarGoogle ScholarDigital LibraryDigital Library
  37. [37] David Omid E. and Netanyahu Nathan S.. 2016. DeepPainter: Painter classification using deep convolutional autoencoders. In Artificial Neural Networks and Machine Learning. Lecture Notes in Computer Science, Vol. 9887. Springer, 20–28.Google ScholarGoogle Scholar
  38. [38] Deng Jia, Dong Wei, Socher Richard, Li Li-Jia, Li Kai, and Fei-Fei Li. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’09). IEEE, Los Alamitos, CA, 248255.Google ScholarGoogle ScholarCross RefCross Ref
  39. [39] Deng Yingying, Tang Fan, Dong Weiming, Ma Chongyang, Huang Feiyue, Deussen Oliver, and Xu Changsheng. 2020. Exploring the representativity of art paintings. IEEE Transactions on Multimedia 23 (2020), 2794–2805.Google ScholarGoogle Scholar
  40. [40] Dorin Alan. 2013. Chance and complexity: Stochastic and generative processes in art and creativity. In Proceedings of the 2013 International Virtual Reality Conference. ACM, New York, NY, Article 19, 8 pages. Google ScholarGoogle ScholarDigital LibraryDigital Library
  41. [41] Dorin Alan, McCabe Jonathan, McCormack Jon, Monro Gordon, and Whitelaw Mitchell. 2012. A framework for understanding generative art. Digital Creativity 23, 3–4 (2012), 239259.Google ScholarGoogle ScholarCross RefCross Ref
  42. [42] Efros Alexei A. and Freeman William T.. 2001. Image quilting for texture synthesis and transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH’01). ACM, New York, NY, 341346. Google ScholarGoogle ScholarDigital LibraryDigital Library
  43. [43] Efthymiou Athanasios, Rudinac Stevan, Kackovic Monika, Worring Marcel, and Wijnberg Nachoem. 2021. Graph neural networks for knowledge enhanced visual representation of paintings. arXiv preprint arXiv:2105.08190 (2021). Google ScholarGoogle ScholarDigital LibraryDigital Library
  44. [44] Elgammal Ahmed. 2019. AI is blurring the definition of artist: Advanced algorithms are using machine learning to create art autonomously. American Scientist 107, 1 (2019), 1822.Google ScholarGoogle ScholarCross RefCross Ref
  45. [45] Elgammal Ahmed, Liu Bingchen, Elhoseiny Mohamed, and Mazzone Marian. 2017. CAN: Creative adversarial networks, generating “art” by learning about styles and deviating from style norms. arXiv preprint arXiv:1706.07068 (2017).Google ScholarGoogle Scholar
  46. [46] Elgammal Ahmed, Liu Bingchen, Kim Diana, Elhoseiny Mohamed, and Mazzone Marian. 2018. The shape of art history in the eyes of the machine. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI’18). 21832191. Google ScholarGoogle ScholarDigital LibraryDigital Library
  47. [47] Epstein Ziv, Levine Sydney, Rand David G., and Rahwan Iyad. 2020. Who gets credit for AI-generated art?iScience 23, 9 (2020), 101515.Google ScholarGoogle ScholarCross RefCross Ref
  48. [48] Eshraghian Jason K.. 2020. Human ownership of artificial creativity. Nature Machine Intelligence 2, 3 (2020), 157160.Google ScholarGoogle ScholarCross RefCross Ref
  49. [49] Florea Corneliu, Condorovici Răzvan, Vertan Constantin, Butnaru Raluca, Florea Laura, and Vrânceanu Ruxandra. 2016. Pandora: Description of a painting database for art movement recognition with baselines and perspectives. In Proceedings of the 24th European Signal Processing Conference (EUSIPCO’16). IEEE, Los Alamitos, CA, 918922.Google ScholarGoogle ScholarCross RefCross Ref
  50. [50] Franceschet Massimo, Colavizza Giovanni, Smith T’ai, Finucane Blake, Ostachowski Martin Lukas, Scalet Sergio, Perkins Jonathan, Morgan James, and Hernández Sebástian. 2020. Crypto art: A decentralized view. Leonardo 54, 4 (2020), 18.Google ScholarGoogle Scholar
  51. [51] Galanter Philip. 2003. What is generative art? Complexity theory as a context for art theory. In Proceedings of the 2003 6th Generative Art Conference (GA’03).Google ScholarGoogle Scholar
  52. [52] Garcia Noa and Vogiatzis George. 2018. How to read paintings: Semantic art understanding with multi-modal retrieval. In Computer Vision—ECCV 2018 Workshops. Lecture Notes in Computer Science, Vol. 11130. Springer, 676–691.Google ScholarGoogle Scholar
  53. [53] Garcia Noa, Ye Chentao, Liu Zihua, Hu Qingtao, Otani Mayu, Chu Chenhui, Nakashima Yuta, and Mitamura Teruko. 2020. A dataset and baselines for visual question answering on art. In Computer Vision—ECCV 2020 Workshops. Lecture Notes in Computer Science, Vol. 12536. Springer, 92–108.Google ScholarGoogle Scholar
  54. [54] Gatys Leon A., Ecker Alexander S., and Bethge Matthias. 2016. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 24142423.Google ScholarGoogle ScholarCross RefCross Ref
  55. [55] Gillotte Jessica L.. 2019. Copyright infringement in AI-generated artworks. UC Davis Law Review 53 (2019), 2655.Google ScholarGoogle Scholar
  56. [56] Girshick Ross, Donahue Jeff, Darrell Trevor, and Malik Jitendra. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’14). IEEE, Los Alamitos, CA, 580587. Google ScholarGoogle ScholarDigital LibraryDigital Library
  57. [57] Gonthier Nicolas, Gousseau Yann, and Ladjal Saïd. 2020. An analysis of the transfer learning of convolutional neural networks for artistic images. In Pattern Recognition. Lecture Notes in Computer Science, Vol. 12663. Springer, 546–561.Google ScholarGoogle Scholar
  58. [58] Gonthier Nicolas, Gousseau Yann, Ladjal Said, and Bonfait Olivier. 2018. Weakly supervised object detection in artworks. In Computer Vision—ECCV 2018 Workshops. Lecture Notes in Computer Science, Vol. 11130. Springer, 692–709.Google ScholarGoogle Scholar
  59. [59] Gooch Bruce and Gooch Amy. 2001. Non-Photorealistic Rendering. CRC Press, Boca Raton, FL.Google ScholarGoogle ScholarCross RefCross Ref
  60. [60] Goodfellow Ian, Pouget-Abadie Jean, Mirza Mehdi, Xu Bing, Warde-Farley David, Ozair Sherjil, Courville Aaron, and Bengio Yoshua. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems 27 (2014), 26722680. Google ScholarGoogle ScholarDigital LibraryDigital Library
  61. [61] Graham Daniel J., Hughes James M., Leder Helmut, and Rockmore Daniel N.. 2012. Statistics, vision, and the analysis of artistic style. Wiley Interdisciplinary Reviews: Computational Statistics 4, 2 (2012), 115123. Google ScholarGoogle ScholarDigital LibraryDigital Library
  62. [62] Guadamuz Andres. 2017. Do Androids dream of electric copyright? Comparative analysis of originality in artificial intelligence generated works. Intellectual Property Quarterly. Open access, April 1, 2017.Google ScholarGoogle Scholar
  63. [63] Hayn-Leichsenring Gregor U., Lehmann Thomas, and Redies Christoph. 2017. Subjective ratings of beauty and aesthetics: Correlations with statistical image properties in western oil paintings. i-Perception 8, 3 (2017), 2041669517715474.Google ScholarGoogle ScholarCross RefCross Ref
  64. [64] Hertzmann Aaron. 1998. Painterly rendering with curved brush strokes of multiple sizes. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH’98). ACM, New York, NY, 453460. Google ScholarGoogle ScholarDigital LibraryDigital Library
  65. [65] Hertzmann Aaron. 2018. Can computers create art? In Arts, Vol. 7. Multidisciplinary Digital Publishing Institute, 18.Google ScholarGoogle Scholar
  66. [66] Hertzmann Aaron. 2020. Computers do not make art, people do. Communications of the ACM 63, 5 (2020), 4548. Google ScholarGoogle ScholarDigital LibraryDigital Library
  67. [67] Hertzmann Aaron. 2020. Visual indeterminacy in GAN art. Leonardo 53, 4 (2020), 424428. Google ScholarGoogle ScholarDigital LibraryDigital Library
  68. [68] Hertzmann Aaron, Jacobs Charles E., Oliver Nuria, Curless Brian, and Salesin David H.. 2001. Image analogies. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH’01). ACM, New York, NY, 327340. Google ScholarGoogle ScholarDigital LibraryDigital Library
  69. [69] Hong Joo-Wha. 2018. Bias in perception of art produced by artificial intelligence. In Human-Computer Interaction. Lecture Notes in Computer Science, Vol. 10902. Springer, 290–303.Google ScholarGoogle Scholar
  70. [70] Hong Joo-Wha and Curran Nathaniel Ming. 2019. Artificial intelligence, artists, and art: Attitudes toward artwork produced by humans vs. artificial intelligence. ACM Transactions on Multimedia Computing, Communications, and Applications 15, 2s (2019), 116. Google ScholarGoogle ScholarDigital LibraryDigital Library
  71. [71] Jacobsen C. Robert and Nielsen Morten. 2013. Stylometry of paintings using hidden Markov modelling of contourlet transforms. Signal Processing 93, 3 (2013), 579591. Google ScholarGoogle ScholarDigital LibraryDigital Library
  72. [72] Jenicek Tomas and Chum Ondřej. 2019. Linking art through human poses. In Proceedings of the 2019 International Conference on Document Analysis and Recognition (ICDAR’19). IEEE, Los Alamitos, CA, 1338–1345.Google ScholarGoogle Scholar
  73. [73] Jing Yongcheng, Yang Yezhou, Feng Zunlei, Ye Jingwen, Yu Yizhou, and Song Mingli. 2019. Neural style transfer: A review. IEEE Transactions on Visualization and Computer Graphics 26 (2019), 3365–3385.Google ScholarGoogle Scholar
  74. [74] Johnson Colin G., McCormack Jon, Santos Iria, and Romero Juan. 2019. Understanding aesthetics and fitness measures in evolutionary art systems. Complexity 2019 (2019), 3495962.Google ScholarGoogle ScholarDigital LibraryDigital Library
  75. [75] Karayev Sergey, Trentacoste Matthew, Han Helen, Agarwala Aseem, Darrell Trevor, Hertzmann Aaron, and Winnemoeller Holger. 2014. Recognizing image style. In Proceedings of the 2014 British Machine Vision Conference (BMVC’14).Google ScholarGoogle Scholar
  76. [76] Karras Tero, Laine Samuli, and Aila Timo. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’19). IEEE, Los Alamitos, CA, 44014410.Google ScholarGoogle ScholarCross RefCross Ref
  77. [77] Keren Daniel. 2002. Painter identification using local features and naive Bayes. In Proceedings of the 16th International Conference on Pattern Recognition (ICPR’02). IEEE, Los Alamitos, CA, 474477.Google ScholarGoogle ScholarCross RefCross Ref
  78. [78] Khalili Abdullah and Bouchachia Hamid. 2021. An information theory approach to aesthetic assessment of visual patterns. Entropy 23, 2 (2021), 153.Google ScholarGoogle ScholarCross RefCross Ref
  79. [79] Khan Fahad Shahbaz, Beigpour Shida, Weijer Joost Van de, and Felsberg Michael. 2014. Painting-91: A large scale database for computational painting categorization. Machine Vision and Applications 25, 6 (2014), 13851397. Google ScholarGoogle ScholarDigital LibraryDigital Library
  80. [80] Kim Diana, Liu Bingchen, Elgammal Ahmed, and Mazzone Marian. 2018. Finding principal semantics of style in art. In Proceedings of the 12th IEEE International Conference on Semantic Computing (ICSC’18). IEEE, Los Alamitos, CA, 156163.Google ScholarGoogle ScholarCross RefCross Ref
  81. [81] Kim Daniel, Son Seung-Woo, and Jeong Hawoong. 2014. Large-scale quantitative analysis of painting arts. Scientific Reports 4 (2014), 7370.Google ScholarGoogle ScholarCross RefCross Ref
  82. [82] Kim Diana, Xu Jason, Elgammal Ahmed, and Mazzone Marian. 2019. Computational analysis of content in fine art paintings. In Proceedings of the 10th International Conference on Computational Creativity (ICCC’19). 3340.Google ScholarGoogle Scholar
  83. [83] Klinke Harald. 2020. The digital transformation of art history. In The Routledge Companion to Digital Humanities and Art History. Routledge, 3242.Google ScholarGoogle Scholar
  84. [84] Lang Sabine and Ommer Björn. 2018. Attesting similarity: Supporting the organization and study of art image collections with computer vision. Digital Scholarship in the Humanities 33, 4 (2018), 845856.Google ScholarGoogle ScholarCross RefCross Ref
  85. [85] Lang Sabine and Ommer Bjorn. 2018. Reflecting on how artworks are processed and analyzed by computer vision. In Computer Vision—ECCV 2018 Workshops. Lecture Notes in Computer Science, Vol. 11130. Springer, 647–652.Google ScholarGoogle Scholar
  86. [86] Lecoutre Adrian, Négrevergne Benjamin, and Yger Florian. 2017. Recognizing art style automatically in painting with deep learning. In Proceedings of the 9th Asian Conference on Machine Learning (ACML’17).327342.Google ScholarGoogle Scholar
  87. [87] Lee Byunghwee, Seo Min Kyung, Kim Daniel, Shin In-Seob, Schich Maximilian, Jeong Hawoong, and Han Seung Kee. 2020. Dissecting landscape art history with information theory. Proceedings of the National Academy of Sciences 117, 43 (2020), 2658026590.Google ScholarGoogle ScholarCross RefCross Ref
  88. [88] Lin Hubert, Zuijlen Mitchell Van, Wijntjes Maarten W. A., Pont Sylvia C., and Bala Kavita. 2020. Insights from a large-scale database of material depictions in paintings. In Pattern Recognition. Lecture Notes in Computer Science, Vol. 12663. Springer, 531545.Google ScholarGoogle Scholar
  89. [89] Llano Maria Teresa, d’Inverno Mark, Yee-King Matthew, McCormack Jon, Ilsar Alon, Pease Alison, and Colton Simon. 2020. Explainable computational creativity. In Proceedings of the 11th International Conference on Computational Creativity (ICCC’20). 334–341.Google ScholarGoogle Scholar
  90. [90] Madhu Prathmesh, Kosti Ronak, Mührenberg Lara, Bell Peter, Maier Andreas, and Christlein Vincent. 2019. Recognizing characters in art history using deep learning. In Proceedings of the 1st Workshop on Structuring and Understanding of Multimedia Heritage Contents ([email protected] Multmedia’19). ACM, New York, NY, 1522. Google ScholarGoogle ScholarDigital LibraryDigital Library
  91. [91] Madhu Prathmesh, Marquart Tilman, Kosti Ronak, Bell Peter, Maier Andreas, and Christlein Vincent. 2020. Understanding compositional structures in art historical images using pose and gaze priors. In Computer Vision—ECCV 2020 Workshops. Lecture Notes in Computer Science, Vol. 12536. Springer, 109–125.Google ScholarGoogle Scholar
  92. [92] Mao Hui, Cheung Ming, and She James. 2017. DeepArt: Learning joint representations of visual arts. In Proceedings of the 2017 ACM on Multimedia Conference (MM’17). ACM, New York, NY, 1183–1191. Google ScholarGoogle ScholarDigital LibraryDigital Library
  93. [93] Mather George. 2018. Visual image statistics in the history of western art. Art & Perception 6, 2–3 (2018), 97115.Google ScholarGoogle ScholarCross RefCross Ref
  94. [94] Mazzone Marian and Elgammal Ahmed. 2019. Art, creativity, and the potential of artificial intelligence. In Arts, Vol. 8. Multidisciplinary Digital Publishing Institute, 26.Google ScholarGoogle Scholar
  95. [95] McCormack Jon, Bown Oliver, Dorin Alan, McCabe Jonathan, Monro Gordon, and Whitelaw Mitchell. 2014. Ten questions concerning generative computer art. Leonardo 47, 2 (2014), 135141.Google ScholarGoogle ScholarCross RefCross Ref
  96. [96] McCormack Jon, Gifford Toby, and Hutchings Patrick. 2019. Autonomy, authenticity, authorship and intention in computer generated art. In Computational Intelligence in Music, Sound, Art and Design. Lecture Notes in Computer Science, Vol. 11453. Springer, 35–50.Google ScholarGoogle Scholar
  97. [97] Filippi Joanie Lemercier Addie Wagenknecht Mat Dryhurst Memo Akten, Primavera De. 2021. A Guide to Ecofriendly CryptoArt (NFTs). Retrieved June 15, 2021 from https://github.com/memo/eco-nft.Google ScholarGoogle Scholar
  98. [98] Menis-Mastromichalakis Orfeas, Sofou Natasa, and Stamou Giorgos. 2020. Deep ensemble art style recognition. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN’20). IEEE, Los Alamitos, CA, 18.Google ScholarGoogle ScholarCross RefCross Ref
  99. [99] Mensink Thomas and Gemert Jan Van. 2014. The Rijksmuseum Challenge: Museum-centered visual recognition. In International Conference on Multimedia Retrieval (ICMR’14). ACM, New York, NY, 451. Google ScholarGoogle ScholarDigital LibraryDigital Library
  100. [100] Mermet Alexis, Kitamoto Asanobu, Suzuki Chikahiko, and Takagishi Akira. 2020. Face detection on pre-modern Japanese artworks using R-CNN and image patching for semi-automatic annotation. In Proceedings of the 2nd Workshop on Structuring and Understanding of Multimedia Heritage Contents (SUMAC’20). ACM, New York, NY, 2331. Google ScholarGoogle ScholarDigital LibraryDigital Library
  101. [101] Milani Federico and Fraternali Piero. 2020. A data set and a convolutional model for iconography classification in paintings. arXiv preprint arXiv:2010.11697 (2020). Google ScholarGoogle ScholarDigital LibraryDigital Library
  102. [102] Mohammad Saif and Kiritchenko Svetlana. 2018. Wikiart emotions: An annotated dataset of emotions evoked by art. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC’18).Google ScholarGoogle Scholar
  103. [103] Mordvintsev Alexander, Olah Christopher, and Tyka Mike. 2015. Inceptionism: Going Deeper into Neural Networks. Retrieved 15 June 2021 from http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html.Google ScholarGoogle Scholar
  104. [104] Mzoughi Olfa, Bigand André, and Renaud Christophe. 2018. Face detection in painting using deep convolutional neural networks. In Advanced Concepts for Intelligent Vision Systems. Lecture Notes in Computer Science, Vol. 11182. Springer, 333–341.Google ScholarGoogle Scholar
  105. [105] Notaro Anna. 2020. State-of-the-art: AI through the (artificial) artist’s eye. In Proceedings of the 2020 Conference on Electronic Visualization and the Arts (EVA’20). 322328.Google ScholarGoogle ScholarCross RefCross Ref
  106. [106] Posthumus Etienne. 2020. Brill Iconclass AI Test Set. Retrieved 1 February 2021 from https://labs.brill.com/ictestset/.Google ScholarGoogle Scholar
  107. [107] Qi Hanchao and Hughes Shannon. 2011. A new method for visual stylometry on impressionist paintings. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’11). IEEE, Los Alamitos, CA, 20362039.Google ScholarGoogle ScholarCross RefCross Ref
  108. [108] Radford Alec, Kim J. W., Hallacy Chris, Ramesh A., Goh Gabriel, Agarwal Sandhini, Sastry Girish, Askell Amanda, Mishkin Pamela, Clark J., Krueger Gretchen, and Sutskever Ilya. 2021. Learning transferable visual models from natural language supervision. ArXiv abs/2103.00020 (2021).Google ScholarGoogle Scholar
  109. [109] Ragot Martin, Martin Nicolas, and Cojean Salomé. 2020. AI-generated vs. human artworks. A perception bias towards artificial intelligence? In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI’20). ACM, New York, NY, 110. Google ScholarGoogle ScholarDigital LibraryDigital Library
  110. [110] Rea N.. 2018. Has Artificial Intelligence Brought Us the Next Great Art Movement? Here Are 9 Pioneering Artists Who Are Exploring AI’s Creative Potential. Retrieved December 3, 2020 from https://news.artnet.com/market/9-artists-artificial-intelligence-1384207.Google ScholarGoogle Scholar
  111. [111] Redies Christoph and Brachmann Anselm. 2017. Statistical image properties in large subsets of traditional art, bad art, and abstract art. Frontiers in Neuroscience 11 (2017), 593.Google ScholarGoogle ScholarCross RefCross Ref
  112. [112] Reed Scott, Akata Zeynep, Yan Xinchen, Logeswaran Lajanugen, Schiele Bernt, and Lee Honglak. 2016. Generative adversarial text to image synthesis. In Proceedings of the 33rd International Conference on Machine Learning (ICML’16), Vol. 48. 1060–1069. Google ScholarGoogle ScholarDigital LibraryDigital Library
  113. [113] Risset J. C.. 1982. Stochastic processes in music and art. In Stochastic Processes in Quantum Theory and Statistical Physics. Springer, 281288.Google ScholarGoogle ScholarCross RefCross Ref
  114. [114] Sabatelli Matthia, Kestemont Mike, Daelemans Walter, and Geurts Pierre. 2018. Deep transfer learning for art classification problems. In Computer Vision—ECCV 2018 Workshops. Lecture Notes in Computer Science, Vol. 11130. Springer, 631646.Google ScholarGoogle Scholar
  115. [115] Sandoval Catherine, Pirogova Elena, and Lech Margaret. 2019. Two-stage deep learning approach to the classification of fine-art paintings. IEEE Access 7 (2019), 4177041781.Google ScholarGoogle ScholarCross RefCross Ref
  116. [116] Sargentis G., Dimitriadis Panayiotis, Koutsoyiannis and Demetris. 2020. Aesthetical issues of Leonardo da Vinci’s and Pablo Picasso’s paintings with stochastic evaluation. Heritage 3, 2 (2020), 283305.Google ScholarGoogle ScholarCross RefCross Ref
  117. [117] Seguin Benoit, Striolo Carlotta, Kaplan and Frederic. 2016. Visual link retrieval in a database of paintings. In Proceedings of the European Conference on Computer Vision. 753767.Google ScholarGoogle ScholarCross RefCross Ref
  [118] Shamir Lior, Macura Tomasz, Orlov Nikita, Eckley D. Mark, and Goldberg Ilya G. 2010. Impressionism, expressionism, surrealism: Automated recognition of painters and schools of art. ACM Transactions on Applied Perception 7, 2 (2010), 8.
  [119] Shamir Lior and Tarakhovsky Jane A. 2012. Computer analysis of art. ACM Journal on Computing and Cultural Heritage 5, 2 (2012), Article 7, 11 pages.
  [120] Shen Xi, Efros Alexei A., and Aubry Mathieu. 2019. Discovering visual patterns in art collections with spatially-consistent feature learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’19). IEEE, Los Alamitos, CA, 9278–9287.
  [121] Sheng Shurong and Moens Marie-Francine. 2019. Generating captions for images of ancient artworks. In Proceedings of the 27th ACM International Conference on Multimedia (MM’19). ACM, New York, NY, 2478–2486.
  [122] Sidorova Elena. 2019. The cyber turn of the contemporary art market. In Arts, Vol. 8. Multidisciplinary Digital Publishing Institute, 84.
  [123] Sigaki Higor Y. D., Perc Matjaž, and Ribeiro Haroldo V. 2018. History of art paintings through the lens of entropy and complexity. Proceedings of the National Academy of Sciences 115, 37 (2018), E8585–E8594.
  [124] Srinivasan Ramya and Uchino Kanji. 2021. Biases in generative art: A causal look from the lens of art history. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT’21). 41–51.
  [125] Stefanini Matteo, Cornia Marcella, Baraldi Lorenzo, Corsini Massimiliano, and Cucchiara Rita. 2019. Artpedia: A new visual-semantic dataset with visual and contextual sentences in the artistic domain. In Image Analysis and Processing. Lecture Notes in Computer Science, Vol. 11752. Springer, 729–740.
  [126] Stephensen Jan Løhmann. 2019. Towards a philosophy of post-creative practices? Reading Obvious’ “Portrait of Edmond de Belamy.” Politics of the Machine Beirut 2019 2 (2019), 21–30.
  [127] Strezoski Gjorgji and Worring Marcel. 2018. OmniArt: A large-scale artistic benchmark. ACM Transactions on Multimedia Computing, Communications, and Applications 14, 4 (2018), 1–21.
  [128] Todorov Paul. 2019. A game of dice: Machine learning and the question concerning art. arXiv preprint arXiv:1904.01957 (2019).
  [129] Van Noord Nanne and Postma Eric. 2017. Learning scale-variant and scale-invariant features for deep image classification. Pattern Recognition 61 (2017), 583–592.
  [130] Vaswani Ashish, Shazeer Noam, Parmar Niki, Uszkoreit Jakob, Jones Llion, Gomez Aidan N., Kaiser Lukasz, and Polosukhin Illia. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762 (2017).
  [131] Wang Qin, Li Rujia, Wang Qi, and Chen Shiping. 2021. Non-fungible token (NFT): Overview, evaluation, opportunities and challenges. arXiv preprint arXiv:2105.07447 (2021).
  [132] Wechsler Harry and Toor Andeep S. 2019. Modern art challenges face detection. Pattern Recognition Letters 126 (2019), 3–10.
  [133] Westlake Nicholas, Cai Hongping, and Hall Peter. 2016. Detecting people in artwork with CNNs. In Computer Vision—ECCV 2016 Workshops. Lecture Notes in Computer Science, Vol. 9931. Springer, 825–841.
  [134] Wu Yuheng, Mou Yi, Li Zhipeng, and Xu Kun. 2020. Investigating American and Chinese subjects’ explicit and implicit perceptions of AI-generated artistic work. Computers in Human Behavior 104 (2020), 106186.
  [135] Xu Tao, Zhang Pengchuan, Huang Qiuyuan, Zhang Han, Gan Zhe, Huang Xiaolei, and He Xiaodong. 2018. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR’18). IEEE, Los Alamitos, CA, 1316–1324.
  [136] Yang Heekyung and Min Kyungha. 2020. Classification of basic artistic media based on a deep convolutional approach. Visual Computer 36, 3 (2020), 559–578.
  [137] Yanisky-Ravid Shlomit and Velez-Hernandez Luis Antonio. 2018. Copyrightability of artworks produced by creative robots and originality: The formality-objective model. Minnesota Journal of Law, Science & Technology 19 (2018), 1.
  [138] Yanulevskaya Victoria, Uijlings Jasper, Bruni Elia, Sartori Andreza, Zamboni Elisa, Bacci Francesca, Melcher David, and Sebe Nicu. 2012. In the eye of the beholder: Employing statistical analysis and eye tracking for analyzing abstract paintings. In Proceedings of the 20th ACM Multimedia Conference (MM’12). ACM, New York, NY, 349–358.
  [139] Zhang Jiajing, Miao Yongwei, and Yu Jinhui. 2021. A comprehensive survey on computational aesthetic evaluation of visual art images: Metrics and challenges. IEEE Access 9 (2021), 77164–77187.
  [140] Zhao Lin, Shang Meimei, Gao Fei, Li Rongsheng, Huang Fei, and Yu Jun. 2020. Representation learning of image composition for aesthetic prediction. Computer Vision and Image Understanding 199 (2020), 103024.
  [141] Zhu Jun-Yan, Park Taesung, Isola Phillip, and Efros Alexei A. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV’17). IEEE, Los Alamitos, CA, 2242–2251.

          • Published in

            ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 2 (May 2022), 494 pages.
            ISSN: 1551-6857
            EISSN: 1551-6865
            DOI: 10.1145/3505207

            Publisher

            Association for Computing Machinery, New York, NY, United States

            Publication History

            • Received: 1 February 2021
            • Revised: 1 June 2021
            • Accepted: 1 July 2021
            • Published: 16 February 2022

            Qualifiers

            • survey
            • Refereed
