Toward A No-reference Omnidirectional Image Quality Evaluation by Using Multi-perceptual Features

Published: 6 February 2023

Abstract

Compared to ordinary images, omnidirectional images (OIs) usually offer a broader field of view and a higher resolution, and image quality assessment (IQA) helps people understand and improve their visual experience. However, current IQA methods do not achieve good performance on OIs. To address this, we propose a novel visual perception-based no-reference/blind omnidirectional image quality assessment (NR/B-OIQA) model. Gradient-based global structural features and gray-level co-occurrence matrix-based local structural features are combined to capture rich quality-aware structural information. In addition, a novel color descriptor based on the steganalysis rich model is extracted to reflect color information that most IQA models ignore. With multi-scale visual perception, we use image entropy and natural scene statistics features to convey high-level semantics and quantify the unnaturalness of omnidirectional images. Finally, we apply support vector regression to predict the objective quality value from all extracted features, trained on subjective scores. Experiments on the OIQA and CVIQD2018 databases show that our model delivers more reliable performance, is more competitive, and conforms better with subjective values.
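The feature families named in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation; the function names, the 8-level GLCM quantization, and the single pixel offset are illustrative assumptions. It computes gradient-magnitude statistics (global structure), GLCM contrast and energy (Haralick-style local structure), and Shannon entropy on a grayscale image:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset (illustrative)."""
    q = np.clip((img.astype(np.float64) / 256.0 * levels).astype(int), 0, levels - 1)
    h, w = q.shape
    a = q[:h - dy, :w - dx].ravel()   # reference pixels
    b = q[dy:, dx:].ravel()           # neighbor pixels at offset (dx, dy)
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)           # accumulate co-occurrence counts
    return m / m.sum()

def quality_features(img):
    """Hand-crafted features loosely mirroring the families named in the abstract."""
    gx, gy = np.gradient(img.astype(np.float64))
    grad_mag = np.hypot(gx, gy)       # gradient-based global structure
    p = glcm(img)
    idx = np.arange(p.shape[0])
    contrast = (p * (idx[:, None] - idx[None, :]) ** 2).sum()  # GLCM contrast
    energy = (p ** 2).sum()                                    # GLCM energy
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    entropy = -(hist * np.log2(hist)).sum()                    # image entropy
    return np.array([grad_mag.mean(), grad_mag.std(), contrast, energy, entropy])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
feats = quality_features(img)
print(feats.shape)  # (5,)
```

In the full model, a vector of such features (plus the color and NSS descriptors) would be mapped to a quality score by a support vector regressor trained on subjective scores.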


Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 2 (March 2023), 540 pages.
ISSN: 1551-6857. EISSN: 1551-6865. DOI: 10.1145/3572860.
Editor: Abdulmotaleb El Saddik.

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 6 February 2023
      • Online AM: 21 July 2022
      • Accepted: 13 July 2022
      • Revised: 2 June 2022
      • Received: 11 January 2022
Published in TOMM Volume 19, Issue 2

      Qualifiers

      • research-article
      • Refereed