research-article

Perceptual Quality Assessment of Low-light Image Enhancement

Published: 12 November 2021

Abstract

Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions, such as structure damage, color shift, and noise, into the enhanced images. Despite the many LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, which are generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and the stack-based high dynamic range (HDR) image as references and to evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that the distortions introduced by low-light enhancement differ significantly from those considered in well-studied traditional IQA databases, and that current state-of-the-art FR IQA models are not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index that evaluates image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preservation, which capture the key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. To the best of our knowledge, this article is the first comprehensive study of its kind on low-light image enhancement quality assessment.
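The abstract describes a full-reference index that compares an enhanced image against a well-exposed reference along four aspects and pools the results. The following is a minimal, hypothetical sketch of that idea; every component formula, function name, and the equal weighting below are illustrative assumptions of ours, not the authors' actual LIEQA model.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Hypothetical four-aspect full-reference index in the spirit of LIEQA:
# score an enhanced image `enh` against a reference `ref` (both H x W x 3,
# values in [0, 255]) on luminance, color, noise, and structure, then pool.

def _luminance(img):
    # Rec. 601 luma approximation from RGB channels.
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

def luminance_score(enh, ref):
    # Agreement of mean brightness with the reference (proxy for
    # luminance enhancement quality).
    return 1.0 - abs(_luminance(enh).mean() - _luminance(ref).mean()) / 255.0

def color_score(enh, ref):
    # Agreement of per-channel means (a crude proxy for color rendition).
    diff = np.abs(enh.mean(axis=(0, 1)) - ref.mean(axis=(0, 1)))
    return 1.0 - float(diff.mean()) / 255.0

def noise_score(enh):
    # Penalize high-frequency residual left after a 3x3 box blur.
    lum = _luminance(enh)
    win = sliding_window_view(lum, (3, 3))
    blurred = win.mean(axis=(-1, -2))
    resid = lum[1:-1, 1:-1] - blurred
    return float(np.exp(-resid.std() / 10.0))

def structure_score(enh, ref):
    # Gradient-magnitude correlation as a structure-preservation proxy.
    ge = np.hypot(*np.gradient(_luminance(enh)))
    gr = np.hypot(*np.gradient(_luminance(ref)))
    num = (ge * gr).sum()
    den = np.sqrt((ge ** 2).sum() * (gr ** 2).sum()) + 1e-8
    return float(num / den)

def lieqa_like_index(enh, ref, w=(0.25, 0.25, 0.25, 0.25)):
    # Pool the four aspect scores; equal weights are an assumption here.
    scores = (luminance_score(enh, ref), color_score(enh, ref),
              noise_score(enh), structure_score(enh, ref))
    return float(np.dot(w, scores))
```

Under this sketch, an enhanced image identical to the reference scores near 1, while an under-enhanced (still dark) image is penalized on the luminance and color terms even though its structure term stays high.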



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 17, Issue 4
  November 2021
  529 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3492437


      Publisher

      Association for Computing Machinery, New York, NY, United States

      Publication History

      • Published: 12 November 2021
      • Accepted: 1 March 2021
      • Revised: 1 January 2021
      • Received: 1 July 2019


      Qualifiers

      • research-article
      • Refereed
