Research Article

Attention Mechanism for Uyghur Personal Pronouns Resolution

Published: 13 October 2020

Abstract

Deep neural network models for Uyghur personal pronoun resolution learn semantic information about personal pronouns and their antecedents, but they tend to be short-sighted: they ignore the relative importance of individual features. In this article, we propose a Uyghur personal pronoun resolution model that combines an attention mechanism, convolutional neural networks, and gated recurrent units (ATCG). Drawing on the grammatical structure and semantic properties of Uyghur, we extract 11 key features for the resolution task. The attention mechanism highlights the importance of individual words within a sentence, while the gated recurrent unit (GRU) captures long-distance feature dependencies. The ATCG model thus compensates for the limitations of relying on content-level features alone and achieves better classification performance. Experimental results on a Uyghur resolution dataset show that our model surpasses state-of-the-art models.
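The two components the abstract emphasizes — a GRU to carry long-distance context and an attention layer to weight the importance of each word — can be illustrated with a minimal numpy sketch. This is not the authors' ATCG implementation (which also includes convolutional layers and the 11 hand-crafted features); all function names, dimensions, and the random weights are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: gates decide how much of the previous state h
    to keep, which is what lets the model carry long-distance context."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

def attention_pool(H, w):
    """Score each hidden state in H (T x d) with vector w (d,),
    normalize with softmax, and return the attention-weighted sum."""
    alpha = softmax(H @ w)                    # per-word importance, sums to 1
    return alpha, (alpha[:, None] * H).sum(axis=0)

# Toy demo: a 5-word "sentence" of 4-dimensional embeddings.
rng = np.random.default_rng(0)
T, d = 5, 4
X = rng.standard_normal((T, d))
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]

h = np.zeros(d)
H = []
for t in range(T):                            # run the GRU over the sentence
    h = gru_step(X[t], h, *Ws)
    H.append(h)
H = np.stack(H)

alpha, rep = attention_pool(H, rng.standard_normal(d))
```

Here `alpha` gives one importance weight per word and `rep` is the sentence representation that, in a model like ATCG, would feed a downstream classifier deciding whether a pronoun and a candidate antecedent corefer.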


Published in

ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 19, Issue 6 (November 2020), 277 pages.
ISSN: 2375-4699 · EISSN: 2375-4702 · DOI: 10.1145/3426881

Copyright © 2020 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 December 2019
• Revised: 1 May 2020
• Accepted: 1 June 2020
• Published: 13 October 2020, in TALLIP Volume 19, Issue 6
