
Roman-Urdu-Parl: Roman-Urdu and Urdu Parallel Corpus for Urdu Language Understanding

Published: 18 January 2022

Abstract

The availability of corpora is a basic requirement for conducting research in a given language. Unfortunately, for a morphologically rich language like Urdu, despite its use by over 100 million people around the globe, the dearth of corpora is a major reason for the lack of attention and advancement in research. To this end, we present the first large-scale publicly available Roman-Urdu parallel corpus, Roman-Urdu-Parl, comprising 6.37 million sentence pairs. The corpus was collected from diverse sources, annotated using crowd-sourcing techniques, and checked for quality. It contains a total of 92.76 million Roman-Urdu words and 92.85 million Urdu words, with a Roman-Urdu vocabulary of 42.9 K words and an Urdu vocabulary of 43.8 K words. Roman-Urdu-Parl has been built to capture not only the morphological and linguistic features of the language but also the heterogeneity and variations arising from demographic conditions. We validate the authenticity and quality of the corpus by using it to address two natural language processing problems: learning word embeddings and building a machine transliteration system. The corpus leads to exceptional results in both settings; for example, our machine transliteration system sets a new state of the art with a Bilingual Evaluation Understudy (BLEU) score of 84.67. We believe Roman-Urdu-Parl can ignite and advance work in many research areas related to the Urdu language.
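The BLEU score reported for the transliteration system is a corpus-level metric combining modified n-gram precision with a brevity penalty. As a rough illustration only (the paper's actual evaluation toolkit is not specified here; production evaluations typically use sacreBLEU or NLTK), a minimal pure-Python sketch of unsmoothed corpus-level BLEU over tokenized sentence pairs could look like this; the example sentences are hypothetical:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus-level BLEU (0-100): geometric mean of clipped n-gram
    precisions up to max_n, times a brevity penalty. No smoothing, so any
    empty precision bucket yields 0.0."""
    clipped = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # hypothesis n-gram counts, per order
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        ref_len += len(ref)
        hyp_len += len(hyp)
        for n in range(1, max_n + 1):
            hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
            totals[n - 1] += sum(hyp_ng.values())
            # Clip each hypothesis n-gram count by its count in the reference.
            clipped[n - 1] += sum(min(c, ref_ng[g]) for g, c in hyp_ng.items())
    if min(totals) == 0 or min(clipped) == 0:
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    # Brevity penalty: punish hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100 * bp * math.exp(log_prec)

# Hypothetical Urdu transliteration pair, already tokenized.
refs = [["yeh", "kitab", "achi", "hai"]]
hyps = [["yeh", "kitab", "achi", "hai"]]
print(corpus_bleu(refs, hyps))  # a perfect match scores 100.0
```

A score of 84.67, as reported in the abstract, therefore indicates hypothesis transliterations whose n-grams overlap heavily, but not perfectly, with the references.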



• Published in

  ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 21, Issue 1 (January 2022), 442 pages.
  ISSN: 2375-4699, EISSN: 2375-4702, DOI: 10.1145/3494068


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 January 2020
• Revised: 1 May 2020
• Accepted: 1 May 2021
• Published: 18 January 2022, in TALLIP Volume 21, Issue 1

          Qualifiers

          • research-article
          • Refereed
