Affective Interaction: Attentive Representation Learning for Multi-Modal Sentiment Classification


Abstract

The recent boom in artificial intelligence (AI) applications, e.g., affective robots, human-machine interfaces, and autonomous vehicles, has produced a great number of multi-modal records of human communication. Such data often carry users’ latent subjective attitudes and opinions, which provides a practical and feasible path to connecting human emotion with intelligent services. Sentiment and emotion analysis of multi-modal records is therefore of great value for improving the intelligence of affective services. However, finding an optimal way to learn representations of people’s sentiments and emotions remains a difficult problem, since both involve subtle mental activity. Many approaches have been proposed to solve this problem, but most are insufficient for mining sentiment and emotion because they treat sentiment analysis and emotion recognition as two separate tasks. The interaction between them is neglected, which limits the effectiveness of sentiment and emotion representation learning. In this work, emotion is viewed as the external expression of sentiment, while sentiment is taken as the essential nature of emotion; we thus argue that the two are strongly related, and that judging one aids the decision on the other. The key challenges are multi-modal fused representation and the interaction between sentiment and emotion. To address these issues, we design an external knowledge enhanced multi-task representation learning network, termed KAMT. Its major elements are two attention mechanisms, inter-modal and inter-task attention, and an external knowledge augmentation layer. The external knowledge augmentation layer extracts vectors describing the participant’s gender, age, and occupation, as well as the overall color or shape. Inter-modal attention is mainly used to capture effective multi-modal fused features, while inter-task attention models the correlation between sentiment analysis and emotion classification. We perform experiments on three widely used datasets, and the results demonstrate the effectiveness of the KAMT model.
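The article itself includes no code, but the inter-task coupling described above can be sketched concretely. The following minimal PyTorch sketch shows one plausible way to let sentiment and emotion representations attend to each other on top of fused multi-modal features augmented with an external knowledge vector; all names (`InterTaskAttention`, `KAMTHeads`, `know_dim`, the label counts) are illustrative assumptions, not the authors’ released implementation.

```python
import torch
import torch.nn as nn

class InterTaskAttention(nn.Module):
    """Hypothetical cross-attention between sentiment and emotion task features."""
    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.sent_from_emo = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.emo_from_sent = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, h_sent, h_emo):
        # Each task queries the other's features; residuals keep its own signal.
        sent_ctx, _ = self.sent_from_emo(h_sent, h_emo, h_emo)
        emo_ctx, _ = self.emo_from_sent(h_emo, h_sent, h_sent)
        return h_sent + sent_ctx, h_emo + emo_ctx

class KAMTHeads(nn.Module):
    """Joint sentiment/emotion heads over fused multi-modal features plus an
    external knowledge vector (e.g., gender, age, occupation, color/shape)."""
    def __init__(self, dim: int, know_dim: int, n_sent: int = 3, n_emo: int = 6):
        super().__init__()
        self.proj = nn.Linear(dim + know_dim, dim)
        self.sent_enc = nn.Linear(dim, dim)  # task-specific projections
        self.emo_enc = nn.Linear(dim, dim)
        self.interact = InterTaskAttention(dim)
        self.sent_head = nn.Linear(dim, n_sent)
        self.emo_head = nn.Linear(dim, n_emo)

    def forward(self, fused, knowledge):
        # fused: (batch, seq, dim) output of inter-modal attention
        # knowledge: (batch, know_dim) external knowledge vector, tiled over seq
        k = knowledge.unsqueeze(1).expand(-1, fused.size(1), -1)
        h = torch.tanh(self.proj(torch.cat([fused, k], dim=-1)))
        h_sent, h_emo = self.interact(self.sent_enc(h), self.emo_enc(h))
        # Mean-pool over the sequence before the per-task classifiers.
        return self.sent_head(h_sent.mean(dim=1)), self.emo_head(h_emo.mean(dim=1))
```

Under this sketch, training would sum the cross-entropy losses of both heads, so gradients from each task shape the shared representation, which is the usual motivation for multi-task setups of this kind.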



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 3s (October 2022), 381 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3567476
  Editor: Abdulmotaleb El Saddik


          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 1 November 2022
          • Online AM: 25 March 2022
          • Accepted: 14 March 2022
          • Revised: 22 January 2022
          • Received: 1 November 2021
Published in TOMM Volume 18, Issue 3s


          Qualifiers

          • research-article
          • Refereed
