
A Spatial Relationship Preserving Adversarial Network for 3D Reconstruction from a Single Depth View

Published: 04 March 2022

Abstract

Recovering the geometry of an object from a single depth image is an interesting yet challenging problem. While previous learning-based approaches have demonstrated promising performance, they do not fully exploit the spatial relationships of objects, which leads to unfaithful and incomplete 3D reconstruction. To address these issues, we propose a Spatial Relationship Preserving Adversarial Network (SRPAN), consisting of a 3D Capsule Attention Generative Adversarial Network (3DCAGAN) and a 2D Generative Adversarial Network (2DGAN), for coarse-to-fine 3D reconstruction from a single depth view of an object. First, 3DCAGAN predicts the coarse geometry using an encoder-decoder based generator and a discriminator. The generator encodes the input as latent capsules, represented as stacked activity vectors that capture local-to-global relationships (i.e., the contribution of components to the whole shape), and then decodes the capsules by modeling local-to-local relationships (i.e., the relationships among components) with an attention mechanism. Afterwards, 2DGAN refines the local geometry slice by slice, using a generator that learns a global structure prior as guidance and stacked discriminators that enforce local geometric constraints. Experimental results show that SRPAN not only outperforms several state-of-the-art methods by a large margin on both synthetic and real-world datasets, but also reconstructs unseen object categories with higher accuracy.
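The coarse-stage idea of attending over latent capsules can be sketched in plain NumPy. This is an illustrative sketch only, not the paper's architecture: the capsule count, vector dimensions, weight initialization, and the toy sigmoid voxel decoder are all hypothetical assumptions chosen to show how stacked activity vectors can be re-weighted by their pairwise affinities (the "local-to-local relationships") before decoding.

```python
# Illustrative sketch (not the authors' code): latent "capsules" are stacked
# activity vectors; self-attention models relationships among them before a
# toy decoder maps them to a coarse occupancy grid. All sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def capsule_attention(caps, Wq, Wk, Wv):
    """Model local-to-local relationships among capsules via self-attention.

    caps: (n_caps, dim) stacked activity vectors from the encoder.
    Returns capsules updated by attention over their pairwise affinities.
    """
    q, k, v = caps @ Wq, caps @ Wk, caps @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (n_caps, n_caps) affinities
    return caps + attn @ v                          # residual update

n_caps, dim, res = 8, 16, 4                  # hypothetical sizes
caps = rng.normal(size=(n_caps, dim))        # stand-in for encoder output
Wq, Wk, Wv = (rng.normal(size=(dim, dim)) * 0.1 for _ in range(3))
W_dec = rng.normal(size=(n_caps * dim, res**3)) * 0.1

refined = capsule_attention(caps, Wq, Wk, Wv)
# Toy decoder: flatten refined capsules to voxel occupancies in (0, 1).
voxels = 1.0 / (1.0 + np.exp(-(refined.reshape(-1) @ W_dec)))
voxels = voxels.reshape(res, res, res)
print(voxels.shape)  # (4, 4, 4)
```

In the paper this role is played by learned attention inside the 3DCAGAN decoder; the sketch only conveys the mechanism of letting each capsule's contribution depend on every other capsule.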



      • Published in

      ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 4
      November 2022, 497 pages
      ISSN: 1551-6857
      EISSN: 1551-6865
      DOI: 10.1145/3514185
      • Editor: Abdulmotaleb El Saddik

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 4 March 2022
      • Accepted: 1 December 2021
      • Revised: 1 November 2021
      • Received: 1 June 2021
      Published in TOMM Volume 18, Issue 4


      Qualifiers

      • research-article
      • Refereed
