
Developing a Large Benchmark Corpus for Urdu Semantic Word Similarity

Published: 13 March 2023

Abstract

The semantic word similarity task aims to quantify the degree of similarity between a pair of words. In the literature, considerable effort has gone into creating standard evaluation resources for developing, evaluating, and comparing semantic word similarity methods. The majority of these efforts have focused on English and a handful of other languages; the problem remains largely unexplored for South Asian languages, particularly Urdu. To fill this gap, this study presents a large benchmark corpus of 518 word pairs for the Urdu semantic word similarity task, manually annotated by 12 annotators. To demonstrate how the proposed corpus can be used to develop and evaluate Urdu semantic word similarity systems, we applied two state-of-the-art methods: (1) a word embedding-based method and (2) a Sentence Transformer-based method. As another major contribution, we propose a feature fusion method that combines the Sentence Transformer and word embedding representations. The best results were obtained with this feature fusion method, which combines the best features of both approaches, achieving a Pearson correlation of 0.67. To foster research in Urdu, an under-resourced language, the proposed corpus will be made freely and publicly available for research purposes.
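The evaluation protocol described in the abstract (scoring each word pair by the cosine similarity of its two word vectors, then correlating the system's scores with the averaged human annotations via Pearson's r) can be sketched as follows. This is a minimal illustration only: the word pairs, embedding vectors, and human scores below are hypothetical toy values, not taken from the actual corpus or from any real Urdu embedding model.

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine of the angle between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    std_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    std_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (std_x * std_y)

# Hypothetical word pairs with toy 3-dimensional embeddings.
# A real system would look these up in an Urdu word embedding
# model or encode the words with a Sentence Transformer.
pairs = {
    ("dost", "rafiq"): ([0.9, 0.1, 0.2], [0.8, 0.2, 0.1]),   # near-synonyms
    ("kitab", "pani"): ([0.1, 0.9, 0.0], [0.0, 0.1, 0.9]),   # unrelated
}
human_scores = [0.95, 0.10]  # hypothetical averaged annotator ratings

# System score for each pair, then correlation with human judgments.
system_scores = [cosine_similarity(u, v) for u, v in pairs.values()]
r = pearson(system_scores, human_scores)
print(f"Pearson r = {r:.2f}")
```

A feature fusion approach, as proposed in the paper, would concatenate or otherwise combine the word embedding and Sentence Transformer representations before computing the similarity score; the evaluation step against human judgments stays the same.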



Published in

ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 22, Issue 3 (March 2023), 570 pages
ISSN: 2375-4699; EISSN: 2375-4702
DOI: 10.1145/3579816
Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 7 October 2021
• Revised: 11 August 2022
• Accepted: 4 October 2022
• Online AM: 12 October 2022
• Published: 13 March 2023
