research-article

Saliency Detection on Light Field: A Multi-Cue Approach

Published: 27 July 2017

Abstract

Saliency detection has recently attracted increasing research interest in high-dimensional data beyond two-dimensional images. Despite the many available capture devices and algorithms, a wide spectrum of challenges must still be addressed before accurate saliency detection is achieved. Inspired by the success of the light-field technique, in this article we propose a new computational scheme that detects salient regions by integrating multiple visual cues from light-field images. First, saliency prior maps are generated from several light-field features based on superpixel-level intra-cue distinctiveness, such as color, depth, and flow inherited from different focal planes and multiple viewpoints. We then introduce a location prior to enhance these saliency maps, and merge them into a single map using a random-search-based weighting strategy. Finally, a two-stage saliency refinement recovers object details to produce the final saliency map.
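The random-search-based weighting strategy can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the convex weight normalization, the F-measure selection objective, the fixed binarization threshold, the β² = 0.3 setting, and the trial count are all assumptions.

```python
import numpy as np

def f_measure(pred, gt, beta2=0.3, thresh=0.5):
    """F-measure between a binarized saliency map and a boolean ground-truth
    mask. beta2 = 0.3 is the weighting commonly used in saliency benchmarks."""
    p = pred >= thresh
    tp = np.logical_and(p, gt).sum()
    prec = tp / max(p.sum(), 1)
    rec = tp / max(gt.sum(), 1)
    if prec + rec == 0:
        return 0.0
    return (1 + beta2) * prec * rec / (beta2 * prec + rec)

def random_search_weights(cue_maps, gt, n_trials=200, seed=0):
    """Sample random convex weight vectors over the cue maps and keep the
    fusion that scores best against the training ground truth."""
    rng = np.random.default_rng(seed)
    best_w, best_score = None, -1.0
    for _ in range(n_trials):
        w = rng.random(len(cue_maps))
        w /= w.sum()                      # normalize to a convex combination
        fused = sum(wi * m for wi, m in zip(w, cue_maps))
        score = f_measure(fused, gt)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score
```

In practice the weights would be searched once on annotated training light fields and then reused to fuse the color, depth, flow, and location prior maps at test time.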

In addition, we present a more challenging benchmark dataset for light-field saliency analysis, named HFUT-Lytro, which consists of 255 light fields, each yielding between 53 and 64 images, and which spans multiple saliency-detection challenges such as occlusion, cluttered backgrounds, and appearance changes. Experimental results show that our approach achieves 0.6--6.7% relative improvement over state-of-the-art methods in terms of the F-measure and precision metrics, demonstrating its effectiveness.
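A minimal sketch of how the reported precision and F-measure numbers are typically computed; the adaptive threshold (twice the mean saliency value) and β² = 0.3 follow common conventions in the saliency-detection literature and are assumptions about this paper's exact protocol.

```python
import numpy as np

def evaluate_saliency(sal, gt, beta2=0.3):
    """Precision, recall, and F-measure of a saliency map against a boolean
    ground-truth mask, binarized at an adaptive threshold of twice the mean
    normalized saliency (clipped to 1.0)."""
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)  # normalize to [0, 1]
    thresh = min(2.0 * sal.mean(), 1.0)
    pred = sal >= thresh
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f = (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-12)
    return precision, recall, f
```

A relative improvement such as the quoted 0.6--6.7% is then simply `(ours - baseline) / baseline` on these scores, averaged over the dataset.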



Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 13, Issue 3 (August 2017), 233 pages
ISSN: 1551-6857, EISSN: 1551-6865
DOI: 10.1145/3104033
Copyright © 2017 ACM

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 September 2016
• Revised: 1 May 2017
• Accepted: 1 May 2017
• Published: 27 July 2017
