
CAPTAIN: Comprehensive Composition Assistance for Photo Taking

Published: 27 January 2022

Abstract

Many people are interested in taking striking photos and sharing them with others. Emerging hardware and software have made digital photography ubiquitous and increasingly capable. Because composition matters in photography, researchers have leveraged common composition techniques, such as the rule of thirds and perspective-related techniques, to provide photo-taking assistance. However, the composition techniques developed by professionals are far more diverse than the well-documented ones can cover. We present a new approach that leverages underexplored photography ideas, which are virtually unlimited, diverse, and correlated. We propose a comprehensive fork-join framework, named CAPTAIN (Composition Assistance for Photo Taking), to guide a photographer with a variety of photography ideas. The framework consists of several components: integrated object detection, photo genre classification, artistic pose clustering, and personalized aesthetics-aware image retrieval. CAPTAIN is backed by a large curated dataset crawled from a website containing ideas from photography enthusiasts and professionals. The framework decomposes a given amateur shot into composition ingredients and composes them into a list of useful, related ideas for the photographer. It addresses personal composition preferences through a user-specified preference list of photography ideas. We have conducted extensive experiments on the newly proposed components and report our findings. A user study demonstrates that the framework is useful to photographers.
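The fork-join workflow described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the component functions (`detect_objects`, `classify_genre`, `cluster_pose`, `retrieve_ideas`) and the toy idea bank are hypothetical stand-ins for CAPTAIN's deep-learned models and retrieval backend. The analyses fork concurrently, and their results are joined into composition ingredients that drive a preference-aware idea ranking.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for CAPTAIN's deep-learned components.
def detect_objects(photo):
    return ["person", "tree"]

def classify_genre(photo):
    return "portrait"

def cluster_pose(photo):
    return "standing-three-quarter"

def retrieve_ideas(ingredients, preferences):
    # Toy retrieval: photography ideas keyed by composition ingredients,
    # ranked so that user-preferred ideas come first (stable sort).
    idea_bank = {
        ("portrait", "standing-three-quarter"):
            ["leading lines", "frame in frame", "rule of thirds"],
    }
    ideas = idea_bank.get((ingredients["genre"], ingredients["pose"]), [])
    return sorted(ideas, key=lambda idea: idea not in preferences)

def captain(photo, preferences):
    # Fork: run the component analyses concurrently.
    with ThreadPoolExecutor() as pool:
        objects = pool.submit(detect_objects, photo)
        genre = pool.submit(classify_genre, photo)
        pose = pool.submit(cluster_pose, photo)
        ingredients = {
            "objects": objects.result(),
            "genre": genre.result(),
            "pose": pose.result(),
        }
    # Join: compose the ingredients into a ranked list of photography ideas.
    return retrieve_ideas(ingredients, preferences)

print(captain("shot.jpg", preferences=["rule of thirds"]))
```

A preferred idea such as "rule of thirds" is promoted to the front of the list, mirroring how the paper's user-specified preference list personalizes the retrieved ideas.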



Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 1 (January 2022), 517 pages.
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3505205


          Publisher

          Association for Computing Machinery

          New York, NY, United States

Publication History

• Received: 1 May 2020
• Revised: 1 April 2021
• Accepted: 1 April 2021
• Published: 27 January 2022

          Qualifiers

          • research-article
          • Refereed
