Abstract
Single image super-resolution aims to reconstruct a high-resolution (HR) image from its corresponding low-resolution (LR) image and has been a research hotspot in computer vision and image processing for decades. To improve reconstruction accuracy, many works adopt very deep networks to model the mapping from LR to HR, at the cost of heavy memory and computation consumption. In this article, we design a lightweight dense connection distillation network that combines feature fusion units with dense connection distillation blocks (DCDBs), each of which contains selective cascading and dense distillation components. Dense connections are used both between and within the distillation blocks, providing rich information for image reconstruction by fusing shallow and deep features. In each DCDB, the dense distillation module concatenates the retained feature maps of all previous layers to extract useful information; the selected features are then weighted by the proposed layer contrast-aware channel attention mechanism; and finally the cascade module aggregates the features. The distillation mechanism reduces the number of training parameters and improves training efficiency, while the layer contrast-aware channel attention further improves model performance. Qualitative and quantitative results on several benchmark datasets show that the proposed method achieves a better tradeoff between accuracy and efficiency.
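The per-block pipeline described above (dense distillation followed by contrast-aware channel attention) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the channel-split ratio, and the use of per-channel standard deviation plus mean as the "contrast" statistic are assumptions made for clarity.

```python
import numpy as np

def contrast_channel_attention(feat, w1, w2):
    """Gate channels by an assumed contrast statistic (per-channel std + mean)."""
    # feat: (C, H, W) feature map
    z = feat.std(axis=(1, 2)) + feat.mean(axis=(1, 2))    # (C,) contrast descriptor
    h = np.maximum(0.0, w1 @ z)                           # channel reduction + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))                   # expansion + sigmoid gate in (0, 1)
    return feat * s[:, None, None]                        # rescale each channel

def distill_step(feat, keep_ratio=0.5):
    """Split features: one part is retained (distilled), the rest is passed onward."""
    k = int(feat.shape[0] * keep_ratio)
    return feat[:k], feat[k:]                             # (retained, remaining)

# Hypothetical usage on a random 16-channel feature map
rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8, 8))
w1 = rng.standard_normal((4, 16)) * 0.1                   # squeeze to 4 channels
w2 = rng.standard_normal((16, 4)) * 0.1                   # expand back to 16
attended = contrast_channel_attention(feat, w1, w2)
retained, remaining = distill_step(attended, keep_ratio=0.25)
```

Because the gate is a sigmoid, each channel is scaled by a factor in (0, 1); the distillation split then keeps a fixed fraction of channels for the block output while the remainder feeds the next layer's dense concatenation.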
Index Terms
Lightweight Single Image Super-resolution with Dense Connection Distillation Network