Exploring Image Enhancement for Salient Object Detection in Low Light Images

Published: 31 March 2021
Abstract

Low light images captured in a non-uniform illumination environment are usually degraded according to the scene depth and the corresponding environment lights. This degradation causes severe loss of object information in the degraded image, which makes salient object detection more challenging due to the low contrast and the influence of artificial light. However, existing salient object detection models are developed under the assumption that images are captured in sufficiently bright environments, which is impractical in real-world scenarios. In this work, we propose an image enhancement approach to facilitate salient object detection in low light images. The proposed model directly embeds a physical lighting model into a deep neural network to describe the degradation of low light images, in which the environment light is treated as a point-wise variable that changes with local content. Moreover, a Non-Local-Block layer is utilized to capture the difference between the local content of an object and its neighboring regions. For quantitative evaluation, we construct a low light image dataset with pixel-level human-labeled ground-truth annotations and report promising results on four public datasets and our benchmark dataset.
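The abstract names two components without detail: a point-wise physical lighting model and a Non-Local-Block layer. As a rough sketch only (not the authors' implementation), the non-local operation commonly used for this purpose follows the embedded-Gaussian form of Wang et al., where each spatial position aggregates features from all other positions weighted by pairwise similarity. The weight matrices `w_theta`, `w_phi`, `w_g`, and `w_out` below are hypothetical placeholders for learned projections; a degradation model of the form I(x) = J(x)·t(x) + A(x)·(1 − t(x)) with point-wise A(x) is a standard assumption, not taken from this paper.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, w_theta, w_phi, w_g, w_out):
    """Embedded-Gaussian non-local operation (a sketch, after Wang et al. 2018).

    x        : (N, C) flattened feature map -- N spatial positions, C channels.
    w_theta,
    w_phi,
    w_g      : (C, C') hypothetical learned projection matrices.
    w_out    : (C', C) hypothetical output projection.
    Returns a feature map of the same shape as x, with a residual connection,
    so each position is informed by its similarity to every other position.
    """
    theta = x @ w_theta                       # (N, C') query embedding
    phi = x @ w_phi                           # (N, C') key embedding
    g = x @ w_g                               # (N, C') value embedding
    attn = softmax(theta @ phi.T, axis=-1)    # (N, N) pairwise similarity
    y = attn @ g                              # aggregate features globally
    return x + y @ w_out                      # residual connection
```

In a salient-object-detection network, such a layer lets a low-contrast object region borrow evidence from distant, better-lit regions with similar content, which is the intuition the abstract appeals to.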

