research-article

SADnet: Semi-supervised Single Image Dehazing Method Based on an Attention Mechanism

Published: 16 February 2022

Abstract

Many real-world tasks, such as military reconnaissance and traffic monitoring, require high-quality images. However, images acquired in foggy or hazy weather hinder these tasks, which makes image dehazing an important research problem. To meet the requirements of practical applications, a single image dehazing algorithm must process real-world hazy images effectively and with high computational efficiency. In this article, we present a fast and robust semi-supervised dehazing algorithm named SADnet for practical applications. SADnet is trained on both synthetic datasets and natural hazy images, so it generalizes well to real-world hazy images. Furthermore, because haze is unevenly distributed in the atmosphere, a Channel-Spatial Self-Attention (CSSA) mechanism is presented to enhance the representational power of the proposed SADnet. Extensive experimental results demonstrate that the presented approach achieves good dehazing performance and competitive running times compared with other state-of-the-art image dehazing algorithms.
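The channel-spatial attention idea described in the abstract — gating a feature map along the channel dimension and then along the spatial dimensions — can be sketched in a few lines of NumPy. This is a hypothetical illustration of a generic channel-then-spatial gating scheme; the function name, the use of average pooling, and the sigmoid gates are assumptions for the sketch, not the paper's CSSA implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Generic channel-then-spatial attention sketch (assumed, not the
    paper's CSSA) applied to a (C, H, W) feature map."""
    # Channel attention: global average pooling gives one descriptor
    # per channel, squashed into a per-channel gate in (0, 1).
    chan_desc = feat.mean(axis=(1, 2))            # shape (C,)
    chan_gate = sigmoid(chan_desc)                # shape (C,)
    feat = feat * chan_gate[:, None, None]        # rescale each channel
    # Spatial attention: pooling across channels gives one descriptor
    # per pixel, squashed into a per-pixel gate in (0, 1). Spatially
    # varying gates suit haze that is unevenly distributed in a scene.
    spat_desc = feat.mean(axis=0)                 # shape (H, W)
    spat_gate = sigmoid(spat_desc)                # shape (H, W)
    return feat * spat_gate[None, :, :]

# Toy usage: an 8-channel, 4x4 feature map keeps its shape, with every
# activation attenuated by the two gates.
x = np.random.rand(8, 4, 4).astype(np.float32)
y = channel_spatial_attention(x)
assert y.shape == x.shape
```

Because both gates lie in (0, 1), the module only reweights features rather than creating new ones, which is one reason such attention blocks add little computational overhead.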



Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 2 (May 2022), 494 pages.
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3505207


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 16 February 2022
• Accepted: 1 July 2021
• Revised: 1 June 2021
• Received: 1 December 2020


          Qualifiers

          • research-article
          • Refereed
