
Revisiting Local Descriptor for Improved Few-Shot Classification

Published: 06 October 2022

Abstract

Few-shot classification studies the problem of quickly adapting a deep learner to novel classes from only a few support images. Recent research efforts have aimed at designing increasingly complex classifiers that measure similarities between query and support images, while leaving the importance of the feature embeddings themselves seldom explored. We show that reliance on sophisticated classifiers is not necessary: a simple classifier applied directly to improved feature embeddings can instead outperform most of the leading methods in the literature. To this end, we present a new method for few-shot classification, named DCAP, in which we investigate how to improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling. Specifically, we first train a learner on base classes with abundant samples to solve a dense classification problem, and then meta-train it on plenty of randomly sampled few-shot tasks to adapt it to the few-shot (i.e., test-time) scenario. During meta-training, we pool feature maps by applying attentive pooling, instead of the widely used global average pooling, to prepare embeddings for few-shot classification. Attentive pooling learns to reweight local descriptors, revealing what the learner looks for as evidence for its decisions. Experiments on two benchmark datasets show the proposed method to be superior in multiple few-shot settings while being simpler and more explainable. Code is publicly available at https://github.com/Ukeyboard/dcap/.
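The attentive pooling idea can be illustrated with a minimal NumPy sketch. Global average pooling collapses an H×W grid of C-dimensional local descriptors with uniform weights; attentive pooling instead scores each spatial location and takes a softmax-weighted sum. Here `w` is a hypothetical scoring vector standing in for the paper's learned attention module, not the actual DCAP implementation.

```python
import numpy as np

def attentive_pool(feature_map, w):
    """Pool a C x H x W feature map into a C-dim embedding.

    Each of the H*W local descriptors receives a weight from a softmax
    over per-location scores. `w` (shape C) is a hypothetical stand-in
    for the learned attention module described in the paper.
    """
    c, h, width = feature_map.shape
    descriptors = feature_map.reshape(c, h * width)  # C x (H*W) local descriptors
    scores = w @ descriptors                         # one score per spatial location
    scores = scores - scores.max()                   # numerical stability for softmax
    alpha = np.exp(scores) / np.exp(scores).sum()    # softmax over H*W locations
    return descriptors @ alpha                       # attention-weighted sum

def global_avg_pool(feature_map):
    """Baseline: uniform weighting of all local descriptors."""
    return feature_map.reshape(feature_map.shape[0], -1).mean(axis=1)
```

Note that global average pooling is the special case where all scores are equal (e.g. `w = 0`), so the softmax weights become uniform; any non-trivial scoring lets the learner emphasize the image regions it uses as evidence.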



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 2s
  June 2022
  383 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3561949
  • Editor:
  • Abdulmotaleb El Saddik

                Publisher

                Association for Computing Machinery

                New York, NY, United States

                Publication History

                • Published: 6 October 2022
                • Online AM: 18 February 2022
                • Accepted: 18 January 2022
                • Revised: 18 December 2021
                • Received: 20 October 2021
Published in TOMM Volume 18, Issue 2s

                Qualifiers

                • research-article
                • Refereed
