
Egocentric Early Action Prediction via Adversarial Knowledge Distillation

Published: 06 February 2023

Abstract

Egocentric early action prediction aims to recognize actions from the first-person view by observing only a partial video segment, which is challenging due to the limited context information in the partial video. In this article, to tackle the egocentric early action prediction problem, we propose a novel multi-modal adversarial knowledge distillation framework. In particular, our approach involves a teacher network that learns an enhanced representation of the partial video by additionally considering the future unobserved video segment, and a student network that mimics the teacher network to produce a powerful representation of the partial video and, based on it, predict the action label. To promote knowledge distillation between the teacher and the student network, we seamlessly integrate adversarial learning with latent and discriminative knowledge regularizations, encouraging the learned representations of the partial video to be more informative and discriminative toward action prediction. Finally, we devise a multi-modal fusion module to comprehensively predict the action label. Extensive experiments on two public egocentric datasets validate the superiority of our method over state-of-the-art methods. We have released the code and involved parameters to benefit other researchers.1
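To make the described teacher-student setup more concrete, the following is a minimal, hypothetical PyTorch sketch of adversarial knowledge distillation from full-video (teacher) to partial-video (student) representations. It is not the authors' released implementation: the encoder choice (a GRU over frame features), the dimensions, the loss weights, and all function and variable names are illustrative assumptions, and the multi-modal fusion module and the teacher's pretraining are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy GRU encoder mapping a sequence of frame features to a clip representation."""
    def __init__(self, feat_dim=1024, hidden_dim=512):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)

    def forward(self, x):            # x: (batch, frames, feat_dim)
        _, h = self.rnn(x)           # h: (1, batch, hidden_dim)
        return h.squeeze(0)          # (batch, hidden_dim)

class Discriminator(nn.Module):
    """Tells teacher (full-video) representations apart from student (partial-video) ones."""
    def __init__(self, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, z):
        return self.net(z)           # raw logit

def distillation_step(teacher, student, classifier, disc, partial, full, labels,
                      opt_student, opt_disc, lam_latent=1.0, lam_adv=0.1):
    """One illustrative training step combining latent (MSE), adversarial, and
    discriminative (cross-entropy) objectives; the teacher is assumed pretrained and frozen."""
    bce = nn.BCEWithLogitsLoss()
    with torch.no_grad():
        z_t = teacher(full)          # enhanced representation using the unobserved future
    z_s = student(partial)           # student only sees the observed partial segment

    # 1) Discriminator update: teacher representations as "real", student's as "fake".
    d_loss = bce(disc(z_t), torch.ones(z_t.size(0), 1)) + \
             bce(disc(z_s.detach()), torch.zeros(z_s.size(0), 1))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Student update: mimic the teacher (latent regularization), fool the
    #    discriminator (adversarial term), and stay discriminative for action prediction.
    latent_loss = F.mse_loss(z_s, z_t)
    adv_loss = bce(disc(z_s), torch.ones(z_s.size(0), 1))
    cls_loss = F.cross_entropy(classifier(z_s), labels)
    s_loss = cls_loss + lam_latent * latent_loss + lam_adv * adv_loss
    opt_student.zero_grad(); s_loss.backward(); opt_student.step()
    return d_loss.item(), s_loss.item()

# Example usage with random tensors (all sizes are illustrative assumptions).
teacher, student, disc = Encoder(), Encoder(), Discriminator()
classifier = nn.Linear(512, 10)
opt_student = torch.optim.Adam(list(student.parameters()) + list(classifier.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
partial = torch.randn(4, 8, 1024)    # observed partial segment (8 frames)
full = torch.randn(4, 24, 1024)      # full video, available only during training
labels = torch.randint(0, 10, (4,))
print(distillation_step(teacher, student, classifier, disc, partial, full, labels, opt_student, opt_disc))
```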



Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 2
March 2023, 540 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3572860
Editor: Abdulmotaleb El Saddik

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 6 February 2023
      • Online AM: 16 June 2022
      • Accepted: 7 June 2022
      • Revised: 4 April 2022
      • Received: 9 November 2021
Published in TOMM Volume 19, Issue 2


      Qualifiers

      • research-article
      • Refereed
