research-article

Objective Object Segmentation Visual Quality Evaluation: Quality Measure and Pooling Method

Published: 04 March 2022

Abstract

Objective object segmentation visual quality evaluation is an emerging member of the visual quality assessment family. It aims to replace subjective surveys with an objective measure that evaluates object segmentation quality in agreement with human visual perception, providing an important benchmark for assessing and comparing object segmentation methods in terms of visual quality. Despite this essential role, it has received far less study than other visual quality evaluation problems. In this article, we propose a novel full-reference objective measure consisting of a two-level visual quality measure for single-object segmentation and a pooling method for the overall visual quality of multiple-object segmentation. The single-object measure combines a pixel-level sub-measure and a region-level sub-measure to evaluate the similarity of area, shape, and object completeness between the segmentation result and the ground truth in terms of human visual perception. For the proposed pooling method, the rank of each object’s segmentation quality is introduced as a novel factor and integrated into a weighted harmonic mean to evaluate the overall quality. To evaluate the performance of the proposed measure, we tested it on an object segmentation subjective visual quality assessment database. The experimental results demonstrate that the proposed two-level measure and pooling method are robust and match subjective assessments better than other state-of-the-art objective measures.
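The pooling idea in the abstract — weighting each object's quality score by its rank inside a harmonic mean — can be illustrated with a minimal sketch. The weighting below (worse-ranked objects receive larger weights, on the intuition that poorly segmented objects dominate perceived overall quality) is an assumption for illustration only; the paper's exact rank-to-weight mapping is not reproduced here, and the function name is hypothetical.

```python
def pool_overall_quality(scores):
    """Illustrative rank-weighted harmonic-mean pooling (not the paper's
    exact formulation).

    scores: per-object segmentation quality values in (0, 1].
    The worst-scoring object gets the largest integer rank weight, so the
    pooled value is pulled toward the worst segmentations.
    """
    # Sort object indices from best to worst score;
    # rank 1 (smallest weight) goes to the best object.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    weights = [0.0] * len(scores)
    for rank, i in enumerate(order, start=1):
        weights[i] = rank
    # Weighted harmonic mean: sum(w_i) / sum(w_i / s_i)
    return sum(weights) / sum(w / s for w, s in zip(weights, scores))
```

Because the harmonic mean itself is dominated by small values, combining it with rank weights penalizes an image twice for containing one badly segmented object: `pool_overall_quality([0.9, 0.3])` lies well below both the arithmetic mean (0.6) and the unweighted harmonic mean (0.45) of the two scores.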



Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 3
August 2022, 478 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3505208

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 4 March 2022
        • Accepted: 1 September 2021
        • Revised: 1 August 2021
        • Received: 1 May 2021


        Qualifiers

        • research-article
        • Refereed
