
Linguistically Driven Multi-Task Pre-Training for Low-Resource Neural Machine Translation

Published: 19 January 2022

Abstract

In the present study, we propose novel sequence-to-sequence pre-training objectives for low-resource neural machine translation (NMT): Japanese-specific sequence to sequence (JASS) for language pairs involving Japanese as the source or target language, and English-specific sequence to sequence (ENSS) for language pairs involving English. JASS focuses on masking and reordering Japanese linguistic units known as bunsetsu, whereas ENSS is based on phrase-structure masking and reordering tasks. Experiments on ASPEC Japanese–English and Japanese–Chinese, Wikipedia Japanese–Chinese, and News English–Korean corpora demonstrate that JASS and ENSS outperform MASS and other existing language-agnostic pre-training methods by up to +2.9 BLEU points for the Japanese–English tasks, up to +7.0 BLEU points for the Japanese–Chinese tasks, and up to +1.3 BLEU points for the English–Korean tasks. Empirical analysis focusing on the relationship between the individual subtasks of JASS and ENSS reveals their complementary nature. Adequacy evaluation using LASER, human evaluation, and case studies reveal that our proposed methods significantly outperform pre-training methods without injected linguistic knowledge, and that they have a larger positive impact on adequacy than on fluency.
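
As a rough illustration only: the abstract describes JASS and ENSS in terms of masking and reordering linguistic units (bunsetsu for Japanese, phrases for English). The Python sketch below shows how a single masking-plus-reordering pre-training pair could be generated from a sentence that has already been segmented into such units. The segmentation, mask token, masking ratio, and function name are assumptions made for illustration; the exact JASS and ENSS task definitions are those given in the article.

    import random

    MASK = "[MASK]"

    def make_mask_reorder_example(units, mask_ratio=0.5, seed=0):
        """Build an illustrative (source, target) pre-training pair from a
        sentence pre-segmented into linguistic units (e.g., bunsetsu or
        phrases). The source masks a contiguous span of units and shuffles
        the remaining units; the target is the original sentence. This is a
        sketch of span masking plus reordering over linguistic units, not
        the exact JASS/ENSS objectives."""
        rng = random.Random(seed)
        n = len(units)
        span = max(1, int(n * mask_ratio))
        start = rng.randrange(0, n - span + 1)

        # Mask a contiguous span of units.
        masked = units[:start] + [MASK] * span + units[start + span:]

        # Shuffle the unmasked units in place to create the reordering task.
        keep_idx = [i for i, u in enumerate(masked) if u != MASK]
        kept = [masked[i] for i in keep_idx]
        rng.shuffle(kept)
        for i, idx in enumerate(keep_idx):
            masked[idx] = kept[i]

        return " ".join(masked), " ".join(units)

    if __name__ == "__main__":
        # Hypothetical phrase segmentation of an English sentence.
        phrases = ["the quick brown fox", "jumps over", "the lazy dog"]
        source, target = make_mask_reorder_example(phrases, mask_ratio=0.34)
        print(source)  # masked and shuffled units
        print(target)  # original sentence as reconstruction target

In such a setup, an encoder-decoder model would be trained to reconstruct the target from the corrupted source before being fine-tuned on parallel data.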



• Published in

  ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 21, Issue 4
  July 2022
  464 pages
  ISSN: 2375-4699
  EISSN: 2375-4702
  DOI: 10.1145/3511099

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 19 January 2022
      • Accepted: 1 November 2021
      • Revised: 1 August 2021
      • Received: 1 March 2021
Published in TALLIP Volume 21, Issue 4


      Qualifiers

      • research-article
      • Refereed
