Leveraging Hierarchical Deep Semantics to Classify Implicit Discourse Relations via a Mutual Learning Method

Published: 13 February 2018

Abstract

This article presents a mutual learning method that uses hierarchical deep semantics to classify implicit discourse relations in English. In the absence of explicit discourse markers, traditional approaches to this task rely mainly on discrete linguistic features, which often leads to data sparseness. To alleviate this problem, we propose a mutual learning neural model that exploits multiple levels of semantic information together: the distribution of implicit discourse relations, the semantics of arguments, and the co-occurrence of phrases and words. During training, the model's prediction targets, namely the probability of the discourse relation type and the distributed representation of the semantic components, are learned jointly and optimized mutually. Experimental results show that this method outperforms previous work, especially on multiclass identification, which we attribute to the hierarchical semantic representations and the mutual learning strategy.
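The joint optimization described above can be sketched as a multi-task objective that combines a classification term over relation types with a representation term over the semantic components. The sketch below is illustrative only: the function names, the squared-error representation loss, and the weighting factor `lam` are assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over relation-type scores.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def joint_loss(logits, true_class, pred_repr, target_repr, lam=0.5):
    """Hypothetical mutual-learning objective: cross-entropy over
    discourse relation types plus a weighted squared-error term that
    ties the predicted semantic representation to its target vector.
    Both terms share parameters upstream, so minimizing the sum
    optimizes the two prediction targets jointly."""
    probs = softmax(logits)
    ce = -np.log(probs[true_class])                  # classification term
    mse = np.mean((pred_repr - target_repr) ** 2)    # representation term
    return ce + lam * mse

# Toy example: 4 relation types, an 8-dimensional semantic vector.
loss = joint_loss(np.array([2.0, 0.5, 0.1, -1.0]), 0,
                  np.zeros(8), np.ones(8))
```

With `lam = 0` the objective reduces to a plain relation classifier; raising `lam` shifts capacity toward reconstructing the semantic representation, which is the trade-off a mutual learning schedule would balance during training.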

