research-article

Detection of Moving Object Using Superpixel Fusion Network

Published: 16 March 2023

Abstract

Moving object detection remains a challenging task in complex scenes. Existing deep-learning methods are mainly based on U-Net architectures and have achieved impressive results, but they ignore the local continuity between pixels. To address this problem, this article proposes a method based on a superpixel fusion network (SF-Net). First, a median filter is used to extract the candidate foreground (the pixel features), and the image sequence is segmented into superpixels. Then, histogram features (the superpixel features) are extracted from the candidate foreground superpixels. Finally, the pixel features and the superpixel features are fed into SF-Net as separate inputs. Experiments on 34 image sequences demonstrate the effectiveness of SF-Net, with an average F-measure of 0.84. SF-Net removes more background noise and has stronger representational ability than a network of the same depth.
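The article does not publish code. As a rough illustration only, the preprocessing steps described in the abstract (median-filter background subtraction for pixel features, and per-superpixel histogram features for superpixel features) might be sketched as below. The function names, the deviation threshold, and the 0.5 overlap criterion are illustrative assumptions, not details from the article, and the superpixel labeling is assumed to come from an external segmenter such as SLIC.

```python
import numpy as np

def candidate_foreground(frames, current, thresh=30):
    # Background estimate: per-pixel temporal median over the frame stack.
    background = np.median(np.stack(frames), axis=0)
    # Pixels deviating strongly from the median background form the
    # candidate foreground ("pixel features").
    return (np.abs(current.astype(np.int16) - background) > thresh).astype(np.uint8)

def superpixel_histograms(image, labels, mask, bins=16, overlap=0.5):
    # "Superpixel features": normalized gray-level histograms of the
    # superpixels that mostly overlap the candidate foreground.
    # `labels` is an integer superpixel map (e.g. produced by SLIC).
    feats = {}
    for lab in np.unique(labels):
        region = labels == lab
        if mask[region].mean() >= overlap:
            hist, _ = np.histogram(image[region], bins=bins, range=(0, 256))
            feats[int(lab)] = hist / max(hist.sum(), 1)
    return feats
```

In the article, both feature maps are then fused by the SF-Net itself; that network is not reproduced here.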


• Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 5
September 2023, 262 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3585398
• Editor: Abdulmotaleb El Saddik

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 16 March 2023
            • Online AM: 12 January 2023
            • Accepted: 3 January 2023
            • Revised: 25 December 2022
            • Received: 2 May 2022
