
Document-Level Relation Extraction with Path Reasoning

Published: 25 March 2023

Abstract

Document-level relation extraction (DocRE) aims to extract relations among entities across multiple sentences within a document by applying reasoning skills (e.g., pattern recognition, logical reasoning, and coreference reasoning) along the reasoning paths between two entities. However, most advanced DocRE models attend only to the feature representations of the two entities when determining their relation, without considering a complete reasoning path from one entity to the other, which may limit the accuracy of relation extraction. To address this issue, this article proposes a novel method that captures the reasoning path from one entity to another, thereby better simulating the reasoning skills needed to classify the relation between two entities. Furthermore, we introduce an additional attention layer that summarizes multiple reasoning paths, further enhancing the performance of the DocRE model. Experimental results on a large-scale document-level dataset show that the proposed approach achieves a significant performance improvement over a strong heterogeneous-graph-based baseline.
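The aggregation described in the abstract, encoding each reasoning path between a head and a tail entity and then summarizing several candidate paths with an attention layer, can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's architecture: `encode_path` (mean pooling) and the dot-product scoring stand in for learned encoders, and all names, dimensions, and the head+tail query are hypothetical choices for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def encode_path(path_embs):
    # Encode one reasoning path (head -> ... -> tail) by mean-pooling its
    # node embeddings; a stand-in for a learned path encoder.
    return path_embs.mean(axis=0)

def aggregate_paths(paths, query):
    # Attention layer over multiple reasoning paths: score each encoded
    # path against the entity-pair query, then return the weighted sum.
    encoded = np.stack([encode_path(p) for p in paths])  # (num_paths, d)
    weights = softmax(encoded @ query)                   # (num_paths,)
    return weights @ encoded                             # (d,)

rng = np.random.default_rng(0)
d = 8
head, tail = rng.normal(size=d), rng.normal(size=d)
query = head + tail                                   # toy entity-pair query
paths = [rng.normal(size=(n, d)) for n in (3, 4, 2)]  # 3 candidate paths
summary = aggregate_paths(paths, query)               # fused path evidence
```

In the actual model, the path encoder, attention layer, and relation classifier would be trained jointly; the weighted sum above merely shows how evidence from several reasoning paths can be fused into a single entity-pair representation before relation classification.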



• Published in: ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 22, Issue 4 (April 2023), 682 pages
  ISSN: 2375-4699
  EISSN: 2375-4702
  DOI: 10.1145/3588902


Publisher

Association for Computing Machinery, New York, NY, United States

      Publication History

      • Published: 25 March 2023
      • Online AM: 30 November 2022
      • Accepted: 18 November 2022
      • Revised: 23 September 2022
      • Received: 11 March 2022


      Qualifiers

      • research-article