research-article

UID2021: An Underwater Image Dataset for Evaluation of No-Reference Quality Assessment Metrics

Published: 27 February 2023

Abstract

Achieving subjective and objective quality assessment of underwater images is of great significance for underwater visual perception and image/video processing. However, the development of underwater image quality assessment (UIQA) has been limited by the lack of publicly available underwater image datasets with human subjective scores and of reliable objective UIQA metrics. To address this issue, we establish a large-scale underwater image dataset, dubbed UID2021, for evaluating no-reference (NR) UIQA metrics. The dataset contains 60 multiply degraded underwater images collected from various sources, covering six common underwater scenes (bluish, blue-green, greenish, hazy, low-light, and turbid), together with 900 corresponding quality-improved versions generated by 15 state-of-the-art underwater image enhancement and restoration algorithms. Mean opinion scores from 52 observers are obtained for each image in UID2021 using the pairwise comparison sorting method. Both in-air and underwater-specific NR IQA algorithms are tested on the constructed dataset to fairly compare their performance and analyze their strengths and weaknesses. UID2021 enables comprehensive evaluation of NR UIQA algorithms and paves the way for further research on UIQA. The dataset is available at https://github.com/Hou-Guojia/UID2021.
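The evaluation pipeline the abstract describes can be illustrated with a minimal sketch (not the authors' code; all data and function names below are hypothetical): pairwise observer judgments are reduced to per-image scores by win counting, a simple stand-in for the pairwise comparison sorting procedure, and a metric's predictions are then compared against those scores with Spearman's rank correlation, a standard figure of merit for NR IQA evaluation.

```python
# Illustrative sketch only: toy pairwise-comparison scoring and SRCC
# evaluation. The dataset, judgments, and metric values are made up.

def scores_from_pairwise(n_images, comparisons):
    """Count wins per image over all pairwise judgments.

    comparisons: list of (winner, loser) index pairs, one per observer
    judgment. A crude proxy for pairwise comparison sorting.
    """
    wins = [0] * n_images
    for winner, _loser in comparisons:
        wins[winner] += 1
    return wins

def spearman_rcc(x, y):
    """Spearman rank correlation coefficient (ties ignored for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Toy example: 4 enhanced versions of one underwater scene.
comparisons = [(0, 1), (0, 2), (0, 3), (2, 1), (2, 3), (3, 1)]
mos = scores_from_pairwise(4, comparisons)       # -> [3, 0, 2, 1]
metric = [0.91, 0.12, 0.75, 0.40]                # hypothetical NR metric outputs
print(mos, round(spearman_rcc(metric, mos), 3))  # ranks agree -> SRCC 1.0
```

In practice, evaluations such as the one in this article also report PLCC (after a nonlinear fit) and handle tied judgments; this sketch only conveys the shape of the computation.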


Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 4 (July 2023), 263 pages.
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3582888
Editor: Abdulmotaleb El Saddik


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 27 February 2023
• Online AM: 6 January 2023
• Accepted: 23 December 2022
• Revised: 19 December 2022
• Received: 19 April 2022
