Decoupled Low-Light Image Enhancement

Published: 04 March 2022

Abstract

The visual quality of photographs taken under poor lighting conditions can be degraded by multiple factors, e.g., low lightness, imaging noise, and color distortion. Current low-light image enhancement models either focus on improving the low lightness only, or simply treat all the degradation factors as a whole, leading to sub-optimal results. In this article, we propose to decouple the enhancement model into two sequential stages. The first stage focuses on improving scene visibility based on a pixel-wise non-linear mapping. The second stage focuses on improving appearance fidelity by suppressing the remaining degradation factors. The decoupled model facilitates the enhancement in two aspects. On the one hand, the whole low-light enhancement task is divided into two easier subtasks. The first subtask only aims to enhance visibility; it also helps to bridge the large intensity gap between low-light and normal-light images, so that the second subtask can be framed as a local appearance adjustment. On the other hand, since the parameter matrix learned in the first stage is aware of the lightness distribution and the scene structure, it can be incorporated into the second stage as complementary information. In our experiments, the model achieves state-of-the-art performance in both qualitative and quantitative comparisons with other low-light image enhancement models. In addition, ablation studies validate the effectiveness of our model in multiple aspects, such as the model structure and the loss function.
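The two-stage idea above can be sketched in NumPy. This is a minimal illustration, not the paper's method: the fixed gamma curve stands in for the learned per-pixel parameter matrix of stage one, and a 3x3 box filter stands in for the learned refinement of stage two; the `enhance` function name, the single-channel `[0, 1]` input assumption, and all default values are hypothetical.

```python
import numpy as np

def enhance(img, gamma=0.45, denoise_strength=0.3):
    """Two-stage decoupled enhancement (illustrative sketch, grayscale)."""
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)

    # Stage 1 -- visibility: a pixel-wise non-linear mapping. The paper
    # learns a per-pixel parameter matrix; a constant gamma stands in here.
    params = np.full_like(img, gamma)
    lit = np.power(img, params)

    # Stage 2 -- fidelity: local appearance adjustment. A 3x3 box filter
    # stands in for the learned refinement; it is blended in more strongly
    # where stage 1 amplified dark pixels, where noise is most visible.
    padded = np.pad(lit, 1, mode="edge")
    h, w = lit.shape
    smoothed = sum(
        padded[di:di + h, dj:dj + w] for di in range(3) for dj in range(3)
    ) / 9.0
    amplification = lit - img              # brightening applied by stage 1
    weight = denoise_strength * np.clip(amplification, 0.0, 1.0)
    return (1.0 - weight) * lit + weight * smoothed
```

The decoupling shows up in the data flow: stage two receives not only the brightened image but also a signal derived from stage one (here, the amplification map), mirroring how the learned parameter matrix is fed into the second stage as complementary information.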


Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 4 (November 2022), 497 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3514185
Editor: Abdulmotaleb El Saddik

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 May 2021
• Revised: 1 November 2021
• Accepted: 1 November 2021
• Published: 4 March 2022

Qualifiers

• research-article
• Refereed