
An Understanding-oriented Robust Machine Reading Comprehension Model

Published: 27 December 2022

Abstract

Although existing machine reading comprehension (MRC) models are making rapid progress on many datasets, they are far from robust. In this article, we propose an understanding-oriented machine reading comprehension model to address three kinds of robustness issues: over-sensitivity, over-stability, and generalization. Specifically, we first use a natural language inference module to help the model understand the precise semantic meanings of input questions, addressing the issues of over-sensitivity and over-stability. Then, in the machine reading comprehension module, we propose a memory-guided multi-head attention method that further deepens the understanding of the semantic meanings of input questions and passages. Third, we propose a multi-language learning mechanism to address the issue of generalization. Finally, these modules are integrated with a multi-task learning-based method. We evaluate our model on three benchmark datasets designed to measure robustness: DuReader (robust) and two SQuAD-related datasets. Extensive experiments show that our model addresses all three kinds of robustness issues well, and it achieves much better results than the compared state-of-the-art models on all these datasets under different evaluation metrics, even under some extreme and unfair evaluations. The source code of our work is available at https://github.com/neukg/RobustMRC.
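The abstract only names the model's components, so the following PyTorch sketch illustrates one plausible reading of a memory-guided multi-head attention layer: standard multi-head attention whose keys and values are augmented with a bank of learned memory slots. This is purely an assumption for illustration; the class name, the memory-slot design, and all dimensions are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class MemoryGuidedMultiHeadAttention(nn.Module):
    """Hypothetical sketch (not the paper's actual formulation):
    multi-head attention over the passage, with a bank of learned
    memory slots appended to the keys/values so that attention can
    also draw on globally learned patterns."""

    def __init__(self, d_model: int = 768, n_heads: int = 8, n_memory: int = 16):
        super().__init__()
        # Learned memory slots, shared across all examples.
        self.memory = nn.Parameter(torch.randn(n_memory, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, question: torch.Tensor, passage: torch.Tensor) -> torch.Tensor:
        # question: (batch, q_len, d_model); passage: (batch, p_len, d_model)
        mem = self.memory.unsqueeze(0).expand(passage.size(0), -1, -1)
        kv = torch.cat([passage, mem], dim=1)  # append memory slots to keys/values
        out, _ = self.attn(query=question, key=kv, value=kv)
        return out  # (batch, q_len, d_model)
```

Under the same caveat, the multi-task integration the abstract describes would then amount to a weighted sum of per-task losses, e.g. `loss = mrc_loss + lambda_nli * nli_loss`, with the NLI and MRC modules sharing an encoder.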

Published in

ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 22, Issue 2 (February 2023), 624 pages
ISSN: 2375-4699
EISSN: 2375-4702
DOI: 10.1145/3572719

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

      Publication History

      • Published: 27 December 2022
      • Online AM: 30 June 2022
      • Accepted: 24 June 2022
      • Revised: 19 February 2022
      • Received: 5 August 2021
Published in TALLIP Volume 22, Issue 2
