
Image Quality Assessment–driven Reinforcement Learning for Mixed Distorted Image Restoration

Published: 03 February 2023

Abstract

Because the degradation process is diverse and difficult to model, recovering mixed distorted images remains a challenging problem. A deep learning model trained under one degradation type declines significantly under other degradation conditions. In this article, we explore how a combination of tools can handle mixed distortion. First, we illustrate the limitations of a single deep network in dealing with multiple distortion types and introduce a hierarchical toolkit of distinct, powerful tools. Second, we investigate how an efficient image representation, combined with a reinforcement learning (RL) paradigm, helps handle tool noise during continuous restoration. The proposed method lets the RL agent accurately capture distortion preferences when selecting the optimal recovery tools. Finally, to fully utilize random tools on unknown distortion combinations, we adopt an exploration scheme with various quality evaluation methods to achieve further quality improvements. Experimental results demonstrate that the peak signal-to-noise ratio of the proposed method is 3.30 dB higher than other state-of-the-art RL-based methods on the CSIQ single-distortion dataset and 0.95 dB higher on the DIV2K mixed-distortion dataset.
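The reported gains are measured in peak signal-to-noise ratio (PSNR). As a reference point for interpreting those dB figures, here is a minimal PSNR computation using the standard 10·log10(MAX²/MSE) definition; it is an illustrative sketch, not code from the paper, and the flat-list image representation is a simplification.

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between two equally sized
    images, given here as flat lists of pixel intensities."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: zero error
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy example: an 8x8 flat gray image whose restoration leaves a
# single pixel off by 10 intensity levels.
ref = [128] * 64
restored = [138] + [128] * 63
print(round(psnr(ref, restored), 2))  # ≈ 46.19 dB
```

Note that a roughly 3 dB PSNR improvement, as reported on CSIQ, corresponds to about a halving of the mean squared error.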



Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 1s
February 2023, 504 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3572859
Editor: Abdulmotaleb El Saddik


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 3 February 2023
• Online AM: 29 April 2022
• Accepted: 17 April 2022
• Revised: 2 March 2022
• Received: 9 August 2021

Published in TOMM Volume 19, Issue 1s
