Point Cloud Quality Assessment: Dataset Construction and Learning-based No-reference Metric

Published: 17 February 2023

Abstract

Full-reference (FR) point cloud quality assessment (PCQA) has achieved impressive progress in recent years. However, reference point clouds are often unavailable in practice, so no-reference (NR) metrics have become a research hotspot. Little research on NR-PCQA has been carried out to date, largely due to the lack of a large-scale PCQA dataset. In this article, we first build a large-scale PCQA dataset named LS-PCQA, which includes 104 reference point clouds and more than 22,000 distorted samples. In the dataset, each reference point cloud is augmented with 31 types of impairment (e.g., Gaussian noise, contrast distortion, local missing, and compression loss) at 7 distortion levels. In addition, each distorted point cloud is assigned a pseudo quality score as a substitute for its Mean Opinion Score. Inspired by the hierarchical human perception system and the intrinsic attributes of point clouds, we then propose an NR metric, ResSCNN, based on a sparse convolutional neural network (CNN), to accurately estimate the subjective quality of point clouds. Several experiments evaluating the proposed metric show that ResSCNN achieves state-of-the-art performance among existing NR-PCQA metrics and even outperforms some FR metrics. The dataset will be made publicly accessible at https://smt.sjtu.edu.cn. The source code for the proposed ResSCNN can be found at https://github.com/lyp22/ResSCNN.
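The dataset layout described above (each reference cloud expanded into several impairment types, each at 7 increasing severity levels) can be illustrated with one of the named impairments, Gaussian noise. The sketch below is not the authors' pipeline; it is a minimal illustration assuming XYZ coordinates stored as a NumPy array, with the hypothetical parameter `max_sigma` controlling the strongest level relative to the model's bounding-box diagonal.

```python
import numpy as np

def make_gaussian_noise_levels(points: np.ndarray, num_levels: int = 7,
                               max_sigma: float = 0.02, seed: int = 0) -> list:
    """Produce num_levels distorted copies of a point cloud by adding
    zero-mean Gaussian jitter to XYZ coordinates, with noise strength
    increasing linearly across levels (level 1 = mildest)."""
    rng = np.random.default_rng(seed)
    # Scale sigma by the bounding-box diagonal so severity is comparable
    # across differently sized models (an assumption, not from the paper).
    diag = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
    distorted = []
    for level in range(1, num_levels + 1):
        sigma = diag * max_sigma * level / num_levels
        distorted.append(points + rng.normal(0.0, sigma, size=points.shape))
    return distorted

# Toy "reference" cloud expanded into 7 distortion levels, mirroring the
# per-impairment structure of LS-PCQA (31 impairments x 7 levels each).
cloud = np.random.default_rng(1).uniform(0, 1, size=(1000, 3))
levels = make_gaussian_noise_levels(cloud)
print(len(levels))  # 7
```

Repeating this pattern over 31 impairment generators and 104 reference clouds yields on the order of 104 × 31 × 7 ≈ 22,500 distorted samples, consistent with the "more than 22,000" figure in the abstract.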



Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 2s, April 2023, 545 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3572861
Editor: Abdulmotaleb El Saddik


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 11 July 2021
• Revised: 17 June 2022
• Accepted: 19 July 2022
• Online AM: 28 July 2022
• Published: 17 February 2023
