Hypernymy Detection for Low-resource Languages: A Study for Hindi, Bengali, and Amharic

Published: 04 March 2022

Abstract

Numerous attempts at hypernymy relation detection (e.g., dog “is-a” animal) have been made for resource-rich languages like English, whereas efforts for low-resource languages are scarce, primarily due to the lack of gold-standard datasets and suitable distributional models. We therefore introduce four gold-standard datasets for hypernymy detection for each of two languages, namely, Hindi and Bengali, and two gold-standard datasets for Amharic. Another major contribution of this work is to prepare distributional thesaurus (DT) embeddings for all three languages using three different network embedding methods (DeepWalk, role2vec, and M-NMF), applied to these languages for the first time, and to show their utility for hypernymy detection. Posing this problem as a binary classification task, we experiment with supervised classifiers such as Support Vector Machine and Random Forest, and we show that these classifiers, fed with DT embeddings, obtain promising results when evaluated against the proposed gold-standard datasets, specifically in an experimental setup that counteracts lexical memorization. We further combine DT embeddings and pre-trained fastText embeddings using two different hybrid approaches, both of which produce excellent performance. Additionally, we validate our methodology on gold-standard English datasets as well, where we reach performance comparable to state-of-the-art models for hypernymy detection.
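The classification setup described above can be illustrated with a minimal sketch. This is not the authors' code: the random vectors below are stand-ins for the DT (or fastText) embeddings, the toy word pairs are invented, and representing a pair by concatenating its two word vectors is one common choice, not necessarily the feature scheme used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
dim = 50
vocab = ["dog", "animal", "cat", "car", "vehicle", "tree"]
# Stand-ins for DT embeddings (in the paper these come from DeepWalk,
# role2vec, or M-NMF run over a distributional-thesaurus network).
emb = {w: rng.normal(size=dim) for w in vocab}

# Labeled pairs: 1 = hypernymy holds for (hyponym, hypernym), 0 = not.
pairs = [("dog", "animal", 1), ("cat", "animal", 1),
         ("car", "vehicle", 1), ("dog", "car", 0),
         ("tree", "cat", 0), ("vehicle", "tree", 0)]

# Represent each pair by concatenating the two word vectors.
X = np.array([np.concatenate([emb[a], emb[b]]) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
preds = clf.predict(X)  # one 0/1 hypernymy decision per pair
```

To counteract lexical memorization, the train/test split would additionally keep the vocabularies of the two sets disjoint, so the classifier cannot succeed by memorizing prototypical hypernyms such as “animal”.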


Supplemental Material



    • Published in

      ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 21, Issue 4
      July 2022
      464 pages
      ISSN: 2375-4699
      EISSN: 2375-4702
      DOI: 10.1145/3511099

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 4 March 2022
      • Accepted: 1 October 2021
      • Revised: 1 August 2021
      • Received: 1 April 2021


      Qualifiers

      • research-article
      • Refereed