
Black-Box Diagnosis and Calibration on GAN Intra-Mode Collapse: A Pilot Study

Published: 23 December 2021

Abstract

Generative adversarial networks (GANs) are nowadays capable of producing images of remarkable realism. This raises two concerns: whether the learned distribution of a state-of-the-art GAN still suffers from mode collapse, and what to do if it does. Existing diversity tests of GAN samples are usually conducted qualitatively, on a small scale, and/or depend on access to the original training data as well as the trained model parameters. This article explores GAN intra-mode collapse and calibrates it in a novel black-box setting: access to neither the training data nor the trained model parameters is assumed. This setting is practically demanded yet rarely explored, and it is significantly more challenging. As a first stab, we devise a set of sampling-based statistical tools that can visualize, quantify, and rectify intra-mode collapse. We demonstrate the effectiveness of the proposed diagnosis and calibration techniques via extensive simulations and experiments on unconditional GAN image generation (e.g., faces and vehicles). Our study reveals that intra-mode collapse remains a prevailing problem in state-of-the-art GANs, and that mode collapse is diagnosable and calibratable in black-box settings. Our code is available at https://github.com/VITA-Group/BlackBoxGANCollapse.
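To illustrate the flavor of sampling-based, black-box collapse diagnosis, the sketch below estimates a near-duplicate rate among generated samples from their embeddings alone: if many independently sampled outputs land unusually close together in feature space, the generator is likely collapsing within a mode. This is a minimal illustration, not the article's actual statistical tools; the `duplicate_rate` function, the distance threshold, and the toy Gaussian "embeddings" standing in for real GAN outputs are all assumptions made for the example.

```python
import numpy as np

def duplicate_rate(features, threshold):
    """Fraction of unordered sample pairs whose embeddings lie within
    `threshold` of each other -- a crude black-box proxy for collapse.
    `features`: (n, d) array of embeddings of generated samples."""
    n = features.shape[0]
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    d2 = np.maximum(d2, 0.0)          # guard against round-off negatives
    dist = np.sqrt(d2)
    iu = np.triu_indices(n, k=1)      # count each unordered pair once
    return float(np.mean(dist[iu] < threshold))

rng = np.random.default_rng(0)
# Diverse embeddings: well-spread points in 16 dimensions.
diverse = rng.normal(size=(200, 16))
# Collapsed embeddings: nearly identical points with tiny jitter.
collapsed = np.repeat(rng.normal(size=(1, 16)), 200, axis=0) \
            + 0.01 * rng.normal(size=(200, 16))

print(duplicate_rate(diverse, threshold=1.0))    # near 0: pairs are far apart
print(duplicate_rate(collapsed, threshold=1.0))  # near 1: almost all pairs collide
```

In a real black-box test, `features` would come from embedding generator outputs with an off-the-shelf feature extractor (e.g., a face-recognition network), so that neither training data nor generator weights are needed.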



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 17, Issue 3s
  October 2021, 324 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3492435


        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 23 December 2021
        • Accepted: 1 June 2021
        • Revised: 1 May 2021
        • Received: 1 December 2020

        Qualifiers

        • research-article
        • Refereed
