
Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition

Published: 16 February 2022

Abstract

Skeleton-based action recognition has recently seen rapid progress and strong performance. In this article, we investigate the problem under a cross-dataset setting, a new, pragmatic, and challenging task for real-world scenarios. Following the unsupervised domain adaptation (UDA) paradigm, action labels are available only on the source dataset, not on the target dataset, during training. Unlike conventional adversarial-learning-based approaches to UDA, we employ a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets. Our inspiration is drawn from Cubism, an art genre of the early 20th century that breaks up and reassembles objects to convey a greater context. By segmenting and permuting temporal segments or human body parts, we design two self-supervised classification tasks that exploit the temporal and spatial dependencies of skeleton-based actions and improve the generalization ability of the model. We conduct experiments on six skeleton-based action recognition datasets, including three large-scale ones (NTU RGB+D, PKU-MMD, and Kinetics), on which new cross-dataset settings and benchmarks are established. Extensive results demonstrate that our method outperforms state-of-the-art approaches. The source code of our model and of all compared methods is available at https://github.com/shanice-l/st-cubism.
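To make the Cubism idea concrete, below is a minimal Python/NumPy sketch of the two pretext transforms, under our own assumptions about data layout: a skeleton clip is an array of shape (C, T, V) for coordinate channels, frames, and joints; the temporal task classifies which segment permutation was applied; and the spatial task is reduced here to a simplified binary swapped/not-swapped variant. The function names, shapes, and joint indices are illustrative, not the authors' released code (see the linked repository for that).

```python
import itertools
import random

import numpy as np

# Sketch of Cubism-style pretext tasks on a skeleton clip of shape (C, T, V):
# coordinate channels, frames, joints. Segment count and joint groupings are
# illustrative assumptions, not the paper's exact implementation.

def temporal_cubism(clip, num_segments=3):
    """Cut the clip into (near-)equal temporal segments and shuffle them.

    Returns the permuted clip and the permutation index, which serves as the
    label of the self-supervised classification task.
    """
    perms = list(itertools.permutations(range(num_segments)))
    label = random.randrange(len(perms))
    segments = np.array_split(clip, num_segments, axis=1)  # split along time
    shuffled = np.concatenate([segments[i] for i in perms[label]], axis=1)
    return shuffled, label

def spatial_cubism(clip, part_a, part_b, swap_prob=0.5):
    """Randomly swap two equal-sized body-part joint groups.

    Returns the (possibly) swapped clip and a binary label indicating whether
    the swap happened, again usable as a self-supervised target.
    """
    label = int(random.random() < swap_prob)
    shuffled = clip.copy()
    if label:
        shuffled[:, :, part_a], shuffled[:, :, part_b] = (
            clip[:, :, part_b], clip[:, :, part_a])
    return shuffled, label

# Toy usage on a random 3-channel, 64-frame, 25-joint clip; the arm joint
# indices are hypothetical placeholders, not a real dataset's joint layout.
clip = np.random.randn(3, 64, 25).astype(np.float32)
x_t, y_t = temporal_cubism(clip)
x_s, y_s = spatial_cubism(clip, part_a=[5, 6, 7, 8], part_b=[9, 10, 11, 12])
```

In the UDA setting described above, a shared backbone would then be trained jointly on the supervised action labels of the source domain and on these permutation pseudo-labels computed on both domains, which is what pushes the learned spatio-temporal features toward domain invariance.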



Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 2 (May 2022), 494 pages
ISSN: 1551-6857 | EISSN: 1551-6865 | DOI: 10.1145/3505207

Publisher: Association for Computing Machinery, New York, NY, United States
Publication History

• Received: 1 January 2021
• Revised: 1 June 2021
• Accepted: 1 June 2021
• Published: 16 February 2022

      Qualifiers

      • research-article
      • Refereed
