Research Article

Skeleton Sequence and RGB Frame Based Multi-Modality Feature Fusion Network for Action Recognition

Published: 04 March 2022

Abstract

Action recognition is an active topic in computer vision owing to its wide application in vision systems. Previous approaches improve accuracy by fusing the skeleton-sequence and RGB-video modalities. However, such methods face a trade-off between accuracy and efficiency because of the high complexity of the RGB video network. To address this, we propose a multi-modality feature fusion network that combines the skeleton sequence with a single RGB frame rather than the full RGB video, since the key information carried by the combination of the skeleton sequence and an RGB frame is close to that of the skeleton sequence and the RGB video. In this way, complementary information is retained while the complexity is reduced by a large margin. To better exploit the correspondence between the two modalities, the network adopts a two-stage fusion framework. In the early fusion stage, a skeleton attention module projects the skeleton sequence onto the single RGB frame, helping the frame features focus on the limb-movement regions. In the late fusion stage, a cross-attention module fuses the skeleton feature and the RGB feature by exploiting their correlation. Experiments on two benchmarks, NTU RGB+D and SYSU, show that the proposed model achieves performance competitive with state-of-the-art methods while reducing network complexity.
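The late-fusion stage described in the abstract can be illustrated with a small numerical sketch. The paper's actual module architecture is not specified here, so the code below shows only a generic scaled-dot-product cross-attention between two feature sets, where each modality attends to the other and the results are pooled and concatenated; all names, dimensions, and the mean-pool readout are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(skel_feat, rgb_feat):
    """Fuse skeleton and RGB features via mutual cross-attention.

    skel_feat: (n_s, d) skeleton tokens (e.g., one per joint)
    rgb_feat:  (n_r, d) RGB tokens (e.g., one per spatial location)
    Returns a (2*d,) fused vector for a downstream classifier.
    """
    d = skel_feat.shape[-1]
    # Skeleton queries attend over RGB keys/values.
    attn_s = softmax(skel_feat @ rgb_feat.T / np.sqrt(d))   # (n_s, n_r)
    skel_enh = skel_feat + attn_s @ rgb_feat                # residual update
    # RGB queries attend over skeleton keys/values.
    attn_r = softmax(rgb_feat @ skel_feat.T / np.sqrt(d))   # (n_r, n_s)
    rgb_enh = rgb_feat + attn_r @ skel_feat
    # Pool tokens per modality and concatenate.
    return np.concatenate([skel_enh.mean(axis=0), rgb_enh.mean(axis=0)])

rng = np.random.default_rng(0)
fused = cross_attention_fusion(rng.standard_normal((25, 64)),
                               rng.standard_normal((49, 64)))
print(fused.shape)  # (128,)
```

In a trained network the queries, keys, and values would be linear projections of the features rather than the raw features themselves; the sketch omits those learned weights to keep the attention pattern itself visible.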




Published in: ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 3, August 2022, 478 pages.
ISSN: 1551-6857 | EISSN: 1551-6865 | DOI: 10.1145/3505208

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 May 2021
• Revised: 1 October 2021
• Accepted: 1 October 2021
• Published: 4 March 2022


            Qualifiers

            • research-article
            • Refereed
