Soul Dancer: Emotion-Based Human Action Generation

Published: 21 January 2020

Abstract

Body language is one of the most common ways humans express emotion. In this article, we make the first attempt to generate an action video conveying a specific emotion from a single person image. The goal of the emotion-based action generation (EBAG) task is to generate action videos that express a given type of emotion from a single reference image containing a full human body. We divide the task into two parts and propose a two-stage framework. In the first stage, an emotion-based pose sequence generation approach (EPOSE-GAN) translates the emotion into a pose sequence. In the second stage, we generate the target video frames from three inputs: the source pose and the target pose, which provide the motion information, and the source image, which provides the appearance reference, using a conditional GAN model with an online training strategy. Because our framework generates the pose sequence and transfers the action independently, it highlights the fundamental role that high-level pose features play in generating action videos with specific emotions. The proposed method is evaluated on the "Soul Dancer" dataset, which we built for action emotion analysis and generation. The experimental results demonstrate that our framework can effectively solve the emotion-based action generation task. However, a gap in appearance details between the generated action videos and real-world videos remains, indicating that the emotion-based action generation task has great research potential.
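
Although the abstract gives no implementation details, the two-stage pipeline it describes can be summarized in code. Below is a minimal PyTorch-style sketch under stated assumptions: the module names (EPoseGAN, PoseTransferGAN), the 18-joint OpenPose-style skeleton, the sequence length, and the tiny placeholder networks are all hypothetical illustrations, not the paper's actual architecture.

    # Minimal sketch of the two-stage EBAG pipeline described in the abstract.
    # All names, joint counts, and layer choices below are hypothetical
    # placeholders, NOT the paper's actual architecture.
    import torch
    import torch.nn as nn

    NUM_EMOTIONS = 6   # assumption: discrete basic-emotion labels
    NUM_JOINTS = 18    # assumption: OpenPose-style 2D keypoints
    SEQ_LEN = 32       # assumption: frames per generated pose sequence

    class EPoseGAN(nn.Module):
        """Stage 1: translate an emotion label (+ noise) into a pose sequence."""
        def __init__(self, noise_dim=128):
            super().__init__()
            self.noise_dim = noise_dim
            self.embed = nn.Embedding(NUM_EMOTIONS, 64)
            self.net = nn.Sequential(
                nn.Linear(noise_dim + 64, 512), nn.ReLU(),
                nn.Linear(512, SEQ_LEN * NUM_JOINTS * 2),
            )

        def forward(self, emotion):  # emotion: (B,) tensor of label indices
            z = torch.randn(emotion.size(0), self.noise_dim)
            h = torch.cat([z, self.embed(emotion)], dim=1)
            return self.net(h).view(-1, SEQ_LEN, NUM_JOINTS, 2)  # (B, T, J, 2)

    class PoseTransferGAN(nn.Module):
        """Stage 2: render a target frame from the source image plus
        source/target pose heatmaps (a conditional image-to-image GAN)."""
        def __init__(self):
            super().__init__()
            in_ch = 3 + 2 * NUM_JOINTS  # RGB + two stacks of pose heatmaps
            self.generator = nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )

        def forward(self, src_img, src_pose_map, tgt_pose_map):
            x = torch.cat([src_img, src_pose_map, tgt_pose_map], dim=1)
            return self.generator(x)

    def pose_to_heatmap(pose, size=64):
        """Rasterize (J, 2) keypoints in [0, 1] into (J, size, size) maps."""
        maps = torch.zeros(pose.size(0), size, size)
        xy = (pose.clamp(0, 1) * (size - 1)).long()
        for j, (x, y) in enumerate(xy):
            maps[j, y, x] = 1.0
        return maps

    if __name__ == "__main__":
        stage1, stage2 = EPoseGAN(), PoseTransferGAN()
        poses = stage1(torch.tensor([0]))          # one emotion -> (1, T, J, 2)
        src_img = torch.rand(1, 3, 64, 64)         # reference image placeholder
        src_map = pose_to_heatmap(poses[0, 0]).unsqueeze(0)
        tgt_map = pose_to_heatmap(poses[0, 1]).unsqueeze(0)
        frame = stage2(src_img, src_map, tgt_map)  # one rendered frame, (1, 3, 64, 64)
        print(frame.shape)

In the framework described above, both stages would be trained adversarially, and the second stage additionally uses an online training strategy; those training loops are omitted from this sketch.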



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 15, Issue 3s
  Special Issue on Face Analysis for Applications and Special Issue on Affective Computing for Large-Scale Heterogeneous Multimedia Data
  November 2019
  304 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3368027

        Copyright © 2020 ACM

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 21 January 2020
        • Accepted: 1 June 2019
        • Revised: 1 April 2019
        • Received: 1 January 2019
Published in TOMM Volume 15, Issue 3s

        Qualifiers

        • research-article
        • Research
        • Refereed
