
NeAT: Neural Adaptive Tomography

Published: 22 July 2022

Abstract

In this paper, we present Neural Adaptive Tomography (NeAT), the first adaptive, hierarchical neural rendering pipeline for tomography. Through a combination of neural features with an adaptive explicit representation, we achieve reconstruction times far shorter than those of existing neural inverse rendering methods. The adaptive explicit representation improves efficiency by facilitating empty space culling and concentrating samples in complex regions, while the neural features act as a neural regularizer for the 3D reconstruction.
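To make the division of labor concrete, the following is a minimal sketch (not the NeAT implementation; the node fields, emptiness threshold, and sample budget are hypothetical assumptions) of how an adaptive explicit structure lets a renderer skip empty ray segments and spend more samples where the volume is occupied. NeAT additionally attaches learned neural features to its representation; only the culling and sampling idea is illustrated here.

```python
# Minimal sketch, not the NeAT code: an adaptive explicit structure culls empty
# ray segments and concentrates samples in occupied ones. The Node fields,
# density_eps threshold, and sample budget below are hypothetical.
from dataclasses import dataclass

@dataclass
class Node:
    t_near: float        # entry of the ray segment covered by this node
    t_far: float         # exit of the ray segment
    max_density: float   # coarse per-node upper bound on the density

def sample_ray(nodes, base_samples=8, density_eps=1e-3):
    """Return sample positions t along one ray, skipping empty nodes."""
    ts = []
    for node in nodes:
        if node.max_density < density_eps:
            continue  # empty-space culling: no samples in this segment
        # simple heuristic: denser ("more complex") segments get more samples
        n = base_samples * (2 if node.max_density > 0.5 else 1)
        length = node.t_far - node.t_near
        ts += [node.t_near + (i + 0.5) * length / n for i in range(n)]
    return ts

# Toy usage: the middle segment is empty and receives no samples.
nodes = [Node(0.0, 0.3, 0.8), Node(0.3, 0.7, 0.0), Node(0.7, 1.0, 0.2)]
print(sample_ray(nodes))
```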

The NeAT framework is designed specifically for the tomographic setting, which involves semi-transparent volumetric scenes rather than opaque objects. In this setting, NeAT outperforms existing optimization-based tomography solvers in reconstruction quality while being substantially faster.
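For readers unfamiliar with this setting: computed tomography follows the standard absorption-only (Beer-Lambert) image formation model, in which each detector pixel records the attenuated intensity along a straight ray through a semi-transparent density field rather than light reflected off opaque surfaces. The notation below (incident intensity I_0, density/attenuation sigma, ray r(t)) is generic and not taken from the paper.

```latex
% Absorption-only (Beer--Lambert) image formation along a ray r(t), t \in [t_n, t_f]:
I \;=\; I_0 \exp\!\Big(-\int_{t_n}^{t_f} \sigma\big(\mathbf{r}(t)\big)\,\mathrm{d}t\Big)
\quad\Longleftrightarrow\quad
-\log\frac{I}{I_0} \;=\; \int_{t_n}^{t_f} \sigma\big(\mathbf{r}(t)\big)\,\mathrm{d}t .
```

Reconstruction then amounts to recovering the density field sigma from many such measured line integrals captured from different view directions.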

Code: https://github.com/darglein/NeAT


Supplemental Material: 3528223.3530121.mp4 (presentation)




Published in ACM Transactions on Graphics, Volume 41, Issue 4 (July 2022), 1978 pages.
ISSN: 0730-0301, EISSN: 1557-7368
DOI: 10.1145/3528223
Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

