Exploring Relations in Untrimmed Videos for Self-Supervised Learning

Published: 25 January 2022

Abstract

Existing video self-supervised learning methods rely mainly on trimmed videos for model training, applying and validating their approaches on trimmed datasets such as UCF101 and Kinetics-400. However, trimmed datasets are manually annotated from untrimmed videos, so these methods are not truly unsupervised. In this article, we propose a novel self-supervised method, referred to as Exploring Relations in Untrimmed Videos (ERUV), which can be applied directly to untrimmed (genuinely unlabeled) videos to learn spatio-temporal features. ERUV first generates single-shot videos by shot-change detection. It then uses designed sampling strategies to model relations between video clips; these strategies serve as the self-supervision signals. Finally, the network learns representations by predicting the category of the relation between video clips. ERUV thus compares the differences and similarities of video clips, which is an essential procedure for video-related tasks. We validate the learned models on action recognition, video retrieval, and action similarity labeling tasks with four kinds of 3D convolutional neural networks. Experimental results show that ERUV learns richer representations from untrimmed videos and outperforms state-of-the-art self-supervised methods by significant margins.
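The abstract does not specify how shot changes are detected before clip sampling. As an illustrative sketch only (not the authors' implementation), a common baseline detects cuts by thresholding the color-histogram difference between consecutive frames; the function names, bin count, and threshold below are assumptions chosen for clarity.

```python
import numpy as np

def detect_shot_boundaries(frames, threshold=0.5):
    """Return indices i where a shot change occurs between frame i-1 and i.

    frames: array of shape (T, H, W) or (T, H, W, C) with uint8 pixels.
    A large L1 jump between normalized per-frame histograms marks a cut.
    """
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=32, range=(0, 256))
        hists.append(h / h.sum())  # normalize so frame size does not matter
    boundaries = []
    for i in range(1, len(hists)):
        # identical content -> diff near 0; an abrupt cut -> diff near 2
        if np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            boundaries.append(i)
    return boundaries

def split_into_shots(frames, threshold=0.5):
    """Split a frame sequence into single-shot segments at detected cuts."""
    cuts = [0] + detect_shot_boundaries(frames, threshold) + [len(frames)]
    return [frames[a:b] for a, b in zip(cuts, cuts[1:])]
```

Single-shot segments produced this way could then be sampled with different strategies (e.g., clips from the same shot versus different shots) to form the relation categories the network predicts.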



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 1s
  February 2022
  352 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3505206


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 25 January 2022
      • Accepted: 1 June 2021
      • Revised: 1 May 2021
      • Received: 1 January 2021
Published in TOMM Volume 18, Issue 1s

      Qualifiers

      • research-article
      • Refereed
