research-article

Tri-training for Dependency Parsing Domain Adaptation

Published: 13 December 2021

Abstract

In recent years, research on dependency parsing has focused on improving accuracy on domain-specific (in-domain) test sets and has made remarkable progress. However, the real world contains innumerable scenarios that such datasets do not cover, i.e., out-of-domain data, and parsers that perform well on in-domain data usually suffer significant performance degradation on out-of-domain data. Therefore, to adapt existing high-performing in-domain parsers to new domains, cross-domain transfer learning methods are essential for solving the domain problem in parsing. This paper examines two scenarios for cross-domain transfer learning: semi-supervised and unsupervised cross-domain transfer learning. Specifically, we adopt the pre-trained language model BERT for training on the source-domain (in-domain) data at the subword level and introduce self-training methods derived from tri-training for these two scenarios. Evaluation results on the NLPCC-2019 shared task and the universal dependency parsing task demonstrate the effectiveness of the adopted approaches for cross-domain transfer learning and show the potential of self-training for cross-lingual transfer learning.
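
To make the tri-training-based self-training idea concrete, the sketch below shows a simplified version of the classic tri-training loop (Zhou and Li, 2005) applied to domain adaptation: three parsers are initialized on bootstrap samples of the labeled source-domain data, and a target-domain sentence becomes a pseudo-labeled example for one parser whenever the other two agree on its parse. This is illustrative only; the Parser interface (train/parse), the agree predicate, and the round count are hypothetical placeholders rather than the authors' implementation, and the error-rate checks of the original algorithm are omitted.

```python
# Illustrative, simplified tri-training loop for domain adaptation.
# The Parser objects, their train()/parse() methods, and the agree()
# predicate are hypothetical placeholders, not the paper's actual code.
import random
from typing import Callable, List, Sequence, Tuple


def tri_train(
    parsers: Sequence,        # three parsers exposing .train(data) and .parse(sentence)
    labeled: List[Tuple],     # annotated source-domain (in-domain) sentences
    unlabeled: List,          # raw target-domain (out-of-domain) sentences
    agree: Callable,          # agree(tree_a, tree_b) -> bool, e.g. identical head assignments
    rounds: int = 5,
):
    """Retrain each parser on the labeled data plus pseudo-labels produced
    whenever the other two parsers agree on a target-domain sentence."""
    # Initialize the three parsers on bootstrap samples so they start out diverse.
    for parser in parsers:
        sample = [random.choice(labeled) for _ in range(len(labeled))]
        parser.train(sample)

    for _ in range(rounds):
        new_training = [list(labeled) for _ in parsers]
        for sentence in unlabeled:
            trees = [parser.parse(sentence) for parser in parsers]
            for i in range(len(parsers)):
                j, k = (i + 1) % len(parsers), (i + 2) % len(parsers)
                # If the other two parsers agree, their prediction becomes a
                # pseudo-labeled training example for parser i.
                if agree(trees[j], trees[k]):
                    new_training[i].append((sentence, trees[j]))
        for parser, data in zip(parsers, new_training):
            parser.train(data)
    return parsers
```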



      • Published in

        ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 21, Issue 3
        May 2022
        413 pages
        ISSN: 2375-4699
        EISSN: 2375-4702
        DOI: 10.1145/3505182


        Publisher

        Association for Computing Machinery, New York, NY, United States

        Publication History

        • Published: 13 December 2021
        • Accepted: 1 August 2021
        • Revised: 1 May 2021
        • Received: 1 December 2020
        Published in TALLIP Volume 21, Issue 3
