A Hybrid Siamese Neural Network for Natural Language Inference in Cyber-Physical Systems

Published: 15 March 2021

Abstract

Cyber-Physical Systems (CPS) are multi-dimensional complex systems that connect the physical world with the cyber world and must process large amounts of heterogeneous data. These demands include Natural Language Inference (NLI) over text from different sources, yet existing research on natural language processing in CPS has not explored this direction. This study therefore proposes a Siamese Network structure that combines stacked residual bidirectional Long Short-Term Memory with an attention mechanism and a Capsule Network, serving as the NLI module in CPS to infer the relationship between text/language data from different sources. As a basic semantic-understanding module in CPS, the model is evaluated in detail on three main NLI benchmarks. Comparative experiments show that the proposed method achieves competitive performance, exhibits a degree of generalization ability, and balances performance against the number of trained parameters.
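To illustrate the Siamese matching scheme the abstract describes, the sketch below shows the core idea in NumPy: both sentences pass through the *same* shared encoder, and the resulting vectors are combined before classification into the three NLI labels. The encoder here is a deliberately simplified stand-in (mean-pooled linear projection with random weights); the paper's actual encoder is a stacked residual bidirectional LSTM with attention and capsule layers, and the `[u; v; |u-v|; u*v]` feature combination is a common NLI heuristic assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB, HID, CLASSES = 50, 64, 3  # entailment / neutral / contradiction

# Shared (Siamese) encoder weights and a classifier over combined features.
W_enc = rng.normal(0, 0.1, (EMB, HID))
W_cls = rng.normal(0, 0.1, (4 * HID, CLASSES))

def encode(tokens):
    """Stand-in sentence encoder: mean-pool a tanh projection of token embeddings.
    The paper's encoder is a stacked residual BiLSTM + attention + capsule network."""
    return np.tanh(tokens @ W_enc).mean(axis=0)  # shape (HID,)

def infer(premise, hypothesis):
    """Siamese matching: both inputs go through the SAME encoder, then the
    sentence vectors are combined as [u; v; |u-v|; u*v] and classified."""
    u, v = encode(premise), encode(hypothesis)
    feats = np.concatenate([u, v, np.abs(u - v), u * v])  # shape (4*HID,)
    logits = feats @ W_cls
    e = np.exp(logits - logits.max())
    return e / e.sum()  # softmax over the three NLI classes

premise = rng.normal(size=(7, EMB))     # 7 tokens of embedding dim EMB
hypothesis = rng.normal(size=(5, EMB))  # 5 tokens
probs = infer(premise, hypothesis)      # probs has shape (CLASSES,) and sums to 1
```

Because the encoder weights are shared, both sentences are mapped into the same representation space, which is what lets the combined features measure their semantic relationship; in the paper this shared branch is replaced by the proposed hybrid encoder.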



Published in

ACM Transactions on Internet Technology, Volume 21, Issue 2 (June 2021), 599 pages
ISSN: 1533-5399 | EISSN: 1557-6051
DOI: 10.1145/3453144
Editor: Ling Liu

        Copyright © 2021 Association for Computing Machinery.

        Publisher

        Association for Computing Machinery

        New York, NY, United States

Publication History

• Received: 1 June 2020
• Revised: 1 July 2020
• Accepted: 1 July 2020
• Published: 15 March 2021

        Qualifiers

        • research-article
        • Refereed
