Research Article

PAV-SOD: A New Task towards Panoramic Audiovisual Saliency Detection

Published: 25 February 2023

Abstract

Object-level audiovisual saliency detection in 360° panoramic real-life dynamic scenes is important for exploring and modeling human perception in immersive environments, and for aiding the development of virtual, augmented, and mixed reality applications in fields such as education, social networking, entertainment, and training. To this end, we propose a new task, panoramic audiovisual salient object detection (PAV-SOD), which aims to segment the objects that attract most human attention in 360° panoramic videos reflecting real-life daily scenes. To support the task, we collect PAVS10K, the first panoramic video dataset for audiovisual salient object detection. It consists of 67 4K-resolution equirectangular videos with per-video labels, including hierarchical scene categories and associated attributes that depict specific challenges for conducting PAV-SOD, and 10,465 uniformly sampled video frames with manually annotated object-level and instance-level pixel-wise masks. These coarse-to-fine annotations enable multi-perspective analysis of PAV-SOD modeling. We further systematically benchmark 13 state-of-the-art salient object detection (SOD) and video object segmentation (VOS) methods on PAVS10K. In addition, we propose a new baseline network that exploits both the visual and audio cues of 360° video frames by using a new conditional variational auto-encoder (CVAE). Our CVAE-based audiovisual network, CAV-Net, consists of a spatial-temporal visual segmentation network, a convolutional audio-encoding network, and audiovisual distribution estimation modules. As a result, CAV-Net outperforms all competing models and is able to estimate the aleatoric uncertainties within PAVS10K. Through extensive experiments, we gain several findings about PAV-SOD challenges and insights towards PAV-SOD model interpretability. We hope that our work can serve as a starting point for advancing SOD towards immersive media.
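The abstract's CAV-Net estimates an audiovisual latent distribution with a conditional variational auto-encoder. The two operations at the core of any CVAE are the reparameterized latent sample, z = mu + sigma * eps, and the KL term that regularizes the learned posterior towards a standard normal prior. The following pure-Python sketch illustrates just these two operations on toy statistics; it is an illustrative assumption, not the authors' implementation, and the function names are hypothetical.

```python
import math
import random

def reparameterize(mu, log_var, rnd):
    # Reparameterization trick: z = mu + sigma * eps, sigma = exp(0.5 * log_var),
    # with eps drawn from a standard normal. Keeps sampling differentiable w.r.t.
    # mu and log_var in a real (autodiff-based) implementation.
    return [m + math.exp(0.5 * lv) * rnd.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions:
    # 0.5 * sum( exp(log_var) + mu^2 - 1 - log_var )
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

rnd = random.Random(0)
# Toy posterior statistics for a 4-dimensional latent space. In a CVAE these
# would be predicted by an encoder conditioned on the input (here, the fused
# audiovisual features); zeros correspond to a standard-normal posterior.
mu, log_var = [0.0] * 4, [0.0] * 4
z = reparameterize(mu, log_var, rnd)
kl = kl_to_standard_normal(mu, log_var)
print(len(z), kl)  # 4 latent samples; KL is exactly 0.0 for this posterior
```

In a full model, a decoder would map z (together with the conditioning features) to a segmentation map, and drawing multiple z samples yields the prediction variability used for uncertainty estimation.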

  117. [117] Li Guanbin and Yu Yizhou. 2015. Visual saliency based on multiscale deep features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 54555463.Google ScholarGoogle Scholar
  118. [118] Wang Lijun, Lu Huchuan, Wang Yifan, Feng Mengyang, Wang Dong, Yin Baocai, and Ruan Xiang. 2017. Learning to detect salient objects with image-level supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 136145.Google ScholarGoogle ScholarCross RefCross Ref
  119. [119] Li Guanbin, Xie Yuan, Lin Liang, and Yu Yizhou. 2017. Instance-level salient object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 23862395.Google ScholarGoogle ScholarCross RefCross Ref
  120. [120] Zhang Kaihao, Li Dongxu, Luo Wenhan, Ren Wenqi, Stenger Björn, Liu Wei, Li Hongdong, and Yang Ming-Hsuan. 2021. Benchmarking ultra-high-definition image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 1476914778.Google ScholarGoogle ScholarCross RefCross Ref
  121. [121] Deng Senyou, Ren Wenqi, Yan Yanyang, Wang Tao, Song Fenglong, and Cao Xiaochun. 2021. Multi-scale separable network for ultra-high-definition video deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 1403014039.Google ScholarGoogle ScholarCross RefCross Ref
  122. [122] Xu Mai, Li Chen, Zhang Shanyi, and Callet Patrick Le. 2020. State-of-the-art in 360 video/image processing: Perception, assessment and compression. IEEE J. Select. Topics Sig. Process. 14, 1 (2020), 526.Google ScholarGoogle ScholarCross RefCross Ref
  123. [123] Fan Ching-Ling, Lo Wen-Chih, Pai Yu-Tung, and Hsu Cheng-Hsin. 2019. A survey on 360 video streaming: Acquisition, transmission, and display. ACM Comput. Surv. 52, 4 (2019), 136.Google ScholarGoogle ScholarDigital LibraryDigital Library
  124. [124] Zhang Jing, Fan Deng-Ping, Dai Yuchao, Anwar Saeed, Saleh Fatemeh Sadat, Zhang Tong, and Barnes Nick. 2020. UC-Net: Uncertainty inspired RGB-D saliency detection via conditional variational autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).Google ScholarGoogle ScholarCross RefCross Ref
  125. [125] Zhang Jing, Dai Yuchao, Xiang Mochu, Fan Deng-Ping, Moghadam Peyman, He Mingyi, Walder Christian, Zhang Kaihao, Harandi Mehrtash, and Barnes Nick. 2021. Dense uncertainty estimation. arXiv preprint arXiv:2110.06427 (2021).Google ScholarGoogle Scholar
  126. [126] Kingma Diederik P. and Welling Max. 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013).Google ScholarGoogle Scholar
  127. [127] Ranftl René, Bochkovskiy Alexey, and Koltun Vladlen. 2021. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 1217912188.Google ScholarGoogle ScholarCross RefCross Ref
  128. [128] Dosovitskiy Alexey, Beyer Lucas, Kolesnikov Alexander, Weissenborn Dirk, Zhai Xiaohua, Unterthiner Thomas, Dehghani Mostafa, Minderer Matthias, Heigold Georg, Gelly Sylvain, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020).Google ScholarGoogle Scholar
  129. [129] Wei Jun, Wang Shuhui, and Huang Qingming. 2020. F\(^3\)Net: Fusion, feedback and focus for salient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence. 1232112328.Google ScholarGoogle Scholar
  130. [130] Kingma Diederik P. and Ba Jimmy. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).Google ScholarGoogle Scholar
  131. [131] Wu Zhe, Su Li, and Huang Qingming. 2019. Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 39073916.Google ScholarGoogle ScholarCross RefCross Ref
  132. [132] Wu Zhe, Su Li, and Huang Qingming. 2019. Stacked cross refinement network for edge-aware salient object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 72647273.Google ScholarGoogle ScholarCross RefCross Ref
  133. [133] Pang Youwei, Zhao Xiaoqi, Zhang Lihe, and Lu Huchuan. 2020. Multi-scale interactive network for salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 94139422.Google ScholarGoogle ScholarCross RefCross Ref
  134. [134] Wei Jun, Wang Shuhui, Wu Zhe, Su Chi, Huang Qingming, and Tian Qi. 2020. Label decoupling framework for salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 1302513034.Google ScholarGoogle ScholarCross RefCross Ref
  135. [135] Gao Shang-Hua, Tan Yong-Qiang, Cheng Ming-Ming, Lu Chengze, Chen Yunpeng, and Yan Shuicheng. 2020. Highly efficient salient object detection with 100k parameters. In European Conference on Computer Vision. 702721.Google ScholarGoogle ScholarDigital LibraryDigital Library
  136. [136] Zhao Xiaoqi, Pang Youwei, Zhang Lihe, Lu Huchuan, and Zhang Lei. 2020. Suppress and balance: A simple gated network for salient object detection. In Proceedings of the European Conference on Computer Vision. Springer, 3551.Google ScholarGoogle ScholarDigital LibraryDigital Library
  137. [137] Achanta Radhakrishna, Hemami Sheila, Estrada Francisco, and Susstrunk Sabine. 2009. Frequency-tuned salient region detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 15971604.Google ScholarGoogle ScholarCross RefCross Ref
  138. [138] Perazzi Federico, Krähenbühl Philipp, Pritch Yael, and Hornung Alexander. 2012. Saliency filters: Contrast based filtering for salient region detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 733740.Google ScholarGoogle ScholarCross RefCross Ref
  139. [139] Fan Deng-Ping, Cheng Ming-Ming, Liu Yun, Li Tao, and Borji Ali. 2017. Structure-measure: A new way to evaluate foreground maps. In Proceedings of the IEEE International Conference on Computer Vision. 45484557.Google ScholarGoogle ScholarCross RefCross Ref
  140. [140] Fan Deng-Ping, Gong Cheng, Cao Yang, Ren Bo, Cheng Ming-Ming, and Borji Ali. 2018. Enhanced-alignment measure for binary foreground map evaluation. arXiv preprint arXiv:1805.10421 (2018).Google ScholarGoogle Scholar
  141. [141] He Kaiming, Zhang Xiangyu, Ren Shaoqing, and Sun Jian. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770778.Google ScholarGoogle ScholarCross RefCross Ref
  142. [142] Gao Shang-Hua, Cheng Ming-Ming, Zhao Kai, Zhang Xin-Yu, Yang Ming-Hsuan, and Torr Philip. 2019. Res2Net: A new multi-scale backbone architecture. IEEE Trans. Pattern Anal. Mach. Intell. 43, 2 (2019), 652662.Google ScholarGoogle ScholarDigital LibraryDigital Library
  143. [143] Gu Ke, Zhai Guangtao, Lin Weisi, Yang Xiaokang, and Zhang Wenjun. 2015. No-reference image sharpness assessment in autoregressive parameter space. IEEE Trans. Image Process. 24, 10 (2015), 32183231.Google ScholarGoogle ScholarDigital LibraryDigital Library
  144. [144] Gu Ke, Tao Dacheng, Qiao Jun-Fei, and Lin Weisi. 2017. Learning a no-reference quality assessment model of enhanced images with big data. IEEE Trans. Neural Netw. Learn. Syst. 29, 4 (2017), 13011313.Google ScholarGoogle ScholarCross RefCross Ref
  145. [145] Gu Ke, Lin Weisi, Zhai Guangtao, Yang Xiaokang, Zhang Wenjun, and Chen Chang Wen. 2016. No-reference quality metric of contrast-distorted images based on information maximization. IEEE Trans. Cyber. 47, 12 (2016), 45594565.Google ScholarGoogle ScholarCross RefCross Ref


Published in: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), Volume 19, Issue 3, May 2023, 514 pages. ISSN 1551-6857; EISSN 1551-6865; DOI 10.1145/3582886. Editor: Abdulmotaleb El Saddik.


Publisher: Association for Computing Machinery, New York, NY, United States.

Publication history: Received 18 March 2022; Revised 29 August 2022; Accepted 25 September 2022; Online AM 30 September 2022; Published 25 February 2023 (TOMM Volume 19, Issue 3).
