research-article

Attention-Augmented Memory Network for Image Multi-Label Classification

Published: 25 February 2023

Abstract

The purpose of image multi-label classification is to predict all object categories present in an image. Some recent works exploit graph convolutional networks to capture correlations between labels. Although promising results have been reported, these methods cannot learn salient object features in images and ignore the correlation between channel feature maps. In addition, current research learns feature information only within each individual input image, failing to mine contextual information about the various categories across the dataset to enhance the input feature representation. To address these issues, we propose an Attention-Augmented Memory Network (AAMN) for the image multi-label classification task. Specifically, we first propose a novel categorical memory module that mines contextual information about the various categories from the dataset to augment the current input features. Second, we design a new channel-relation exploration module that captures the inter-channel relationships of features, so as to strengthen the correlation between objects in images. Third, we develop a spatial-relation enhancement module that models second-order statistics of features and captures long-range dependencies between pixels in feature maps, so as to learn salient object features. Experimental results on standard benchmarks, including MS-COCO 2014, PASCAL VOC 2007, and VG-500, demonstrate the effectiveness and superiority of the AAMN model, which outperforms current state-of-the-art methods.
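The paper defines its modules precisely; as a rough illustration only, the two attention ideas the abstract names (inter-channel relationships via second-order feature statistics, and long-range spatial dependencies between positions) can be sketched in a few lines of numpy. This is not the authors' implementation; the function names and the simple Gram-matrix/non-local formulation are our own assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_relation(feat):
    # feat: (C, H, W). Non-local-style attention over the H*W positions:
    # each position aggregates context from all others (long-range dependencies).
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)             # (C, N) with N = H*W
    affinity = softmax(x.T @ x, axis=-1)   # (N, N) pairwise position similarity
    out = x @ affinity.T                   # each column mixes in global context
    return feat + out.reshape(C, H, W)     # residual connection

def channel_relation(feat):
    # Gram-style second-order statistics: a (C, C) matrix of
    # channel-by-channel correlations re-weights the channel responses.
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)
    attn = softmax(x @ x.T, axis=-1)       # (C, C) inter-channel relationship
    out = attn @ x
    return feat + out.reshape(C, H, W)
```

Both functions preserve the (C, H, W) feature shape, so either could in principle be dropped into a backbone between convolutional stages; the paper's actual modules add learned projections and normalization on top of this basic pattern.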



• Published in
ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 3 (May 2023), 514 pages
ISSN: 1551-6857; EISSN: 1551-6865
DOI: 10.1145/3582886
• Editor: Abdulmotaleb El Saddik


Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 25 February 2023
• Online AM: 3 November 2022
• Accepted: 27 October 2022
• Revised: 26 July 2022
• Received: 10 April 2022
