Moment is Important: Language-Based Video Moment Retrieval via Adversarial Learning

Published: 16 February 2022

Abstract

The newly emerging language-based video moment retrieval task aims to retrieve a target moment from an untrimmed video given a natural language query. Compared to traditional whole-video retrieval, it is more applicable in practice because it accurately localizes a specific video moment. In this work, we propose a novel solution that thoroughly investigates language-based video moment retrieval under adversarial learning. The key of our solution is to formulate the task as an adversarial learning problem with two tightly connected components. Specifically, reinforcement learning is employed as the generator to produce a set of candidate video moments. Meanwhile, multi-task learning serves as the discriminator, which integrates inter-modal and intra-modal relations in a unified framework via a sequential update strategy. Finally, the generator and the discriminator reinforce each other through adversarial learning, jointly optimizing the performance of both video moment ranking and video moment localization. Extensive experiments on two challenging benchmarks, the Charades-STA and TACoS datasets, demonstrate the effectiveness and rationality of our proposed solution. Moreover, on the larger and unbiased ActivityNet Captions and ActivityNet-CD datasets, our proposed framework exhibits excellent robustness.



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 2
  May 2022, 494 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3505207


          Publisher

          Association for Computing Machinery

          New York, NY, United States

Publication History

          • Published: 16 February 2022
          • Accepted: 1 July 2021
          • Revised: 1 June 2021
          • Received: 1 January 2021

          Published in TOMM Volume 18, Issue 2

          Qualifiers

          • research-article
          • Refereed
