Abstract
Cyber-Physical Systems (CPS), as multi-dimensional complex systems that connect the physical world and the cyber world, have a strong demand for processing large amounts of heterogeneous data. These processing tasks include Natural Language Inference (NLI) over text from different sources. However, current research on natural language processing in CPS has not explored this area. This study therefore proposes, for the NLI module in CPS, a Siamese network that combines stacked residual bidirectional Long Short-Term Memory (LSTM) with an attention mechanism and a capsule network, used to infer the relationship between text/language data from different sources. Serving as the basic semantic understanding module in CPS, the model is evaluated in detail on three major NLI benchmarks. Comparative experiments show that the proposed method achieves competitive performance, exhibits a degree of generalization ability, and balances performance against the number of trainable parameters.
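The Siamese matching scheme described above can be sketched as follows. This is a minimal, illustrative outline only: the mean-pooling `encode` stub stands in for the paper's stacked residual BiLSTM with attention and capsule layers, and all names, dimensions, and the `[u; v; |u-v|; u*v]` matching vector are assumptions drawn from common Siamese NLI practice, not the authors' exact implementation.

```python
import math
import random

random.seed(0)
DIM = 8  # toy embedding size

def encode(tokens):
    # Placeholder encoder: mean-pool token vectors into one sentence
    # embedding. The paper's encoder is a stacked residual BiLSTM with
    # attention and a capsule layer; this stub only shows the Siamese
    # property that both sentences pass through the SAME encoder.
    return [sum(t[i] for t in tokens) / len(tokens) for i in range(DIM)]

def match_features(u, v):
    # A common Siamese matching vector: [u; v; |u - v|; u * v].
    return (u + v
            + [abs(a - b) for a, b in zip(u, v)]
            + [a * b for a, b in zip(u, v)])

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(feats, weights, bias):
    # One linear layer over the matching vector, softmax over the three
    # NLI labels (entailment / neutral / contradiction).
    logits = [sum(w * f for w, f in zip(row, feats)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

# Toy premise/hypothesis as random token vectors (untrained weights).
premise = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(5)]
hypothesis = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(3)]

u, v = encode(premise), encode(hypothesis)   # shared (Siamese) encoder
feats = match_features(u, v)                 # length 4 * DIM
weights = [[random.gauss(0, 1) for _ in range(4 * DIM)] for _ in range(3)]
bias = [0.0, 0.0, 0.0]
probs = classify(feats, weights, bias)
print(len(probs), round(sum(probs), 6))      # 3 1.0
```

The weight sharing between the two encoder calls is the defining design choice of a Siamese network: both inputs are mapped into the same representation space before their interaction features are compared.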
A Hybrid Siamese Neural Network for Natural Language Inference in Cyber-Physical Systems