Research Article

A Context-focused Attention Evolution Model for Aspect-based Sentiment Classification

Published: 09 May 2023

Abstract

Attention and Long Short-Term Memory (LSTM) mechanisms are widely adopted for Aspect-Based Sentiment Classification (ABSC) because of their inherent capability to semantically align aspects with their context words. However, attention mechanisms struggle to handle long-range word dependencies when a sentence mentions multiple entities. To address this problem, we propose a Context-Focused Aspect-Based Network that aligns attention before the LSTM layer, so the model focuses on aspect-related words and ignores irrelevant ones. This both alleviates attention distraction and strengthens the text representation, improving final classification accuracy. Experiments on two benchmark datasets show that our model performs competitively with state-of-the-art ABSC methods. By adaptively adjusting its focus on the context, our approach has the potential to further improve classification accuracy.
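The core idea described above, re-weighting context words by their relevance to the aspect before the sequence encoder sees them, can be illustrated with a minimal sketch. This is our own PyTorch illustration, not the authors' released implementation; the class name, the bilinear scoring choice, and all dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextFocusedABSC(nn.Module):
    """Hypothetical sketch: attention is applied to the word embeddings
    *before* the LSTM, so aspect-related words dominate the sequence
    representation. Names and dimensions are illustrative assumptions."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bilinear map for scoring context words against the aspect vector.
        self.align = nn.Linear(embed_dim, embed_dim, bias=False)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, context_ids, aspect_ids):
        ctx = self.embed(context_ids)             # (B, T, E) context embeddings
        asp = self.embed(aspect_ids).mean(dim=1)  # (B, E) averaged aspect embedding
        # Relevance score of each context word with respect to the aspect.
        scores = torch.bmm(self.align(ctx), asp.unsqueeze(2)).squeeze(2)  # (B, T)
        weights = F.softmax(scores, dim=1)
        # Re-weight embeddings before sequence encoding: aspect-related
        # words are emphasized, irrelevant words are suppressed.
        focused = ctx * weights.unsqueeze(2)      # (B, T, E)
        _, (h_n, _) = self.lstm(focused)
        return self.classifier(h_n[-1])           # sentiment logits
```

As a usage example, `ContextFocusedABSC(vocab_size=5000)(torch.randint(0, 5000, (2, 20)), torch.randint(0, 5000, (2, 3)))` returns a `(2, 3)` tensor of logits over the usual negative/neutral/positive ABSC label set.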


• Published in

  ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 22, Issue 5 (May 2023), 653 pages
  ISSN: 2375-4699 | EISSN: 2375-4702
  DOI: 10.1145/3596451


Publisher

Association for Computing Machinery, New York, NY, United States

        Publication History

        • Published: 9 May 2023
        • Online AM: 20 March 2023
        • Accepted: 1 March 2023
        • Revised: 27 February 2023
        • Received: 7 May 2022

