A Fast View Synthesis Implementation Method for Light Field Applications

Published: 12 November 2021

Abstract

View synthesis (VS) for light field images is a very time-consuming task due to the large number of pixels involved and the intensive computation required, which may prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis that significantly reduces computation by using compact-resolution (CR) representation and super-resolution (SR) techniques, together with lightweight neural networks. The proposed architecture consists of three cascaded neural networks: a CR network that generates compact representations of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality views at full resolution. All three networks are trained jointly with an integrated loss combining the CR, VS, and SR objectives. Moreover, exploiting the redundancy of deep neural networks, we apply an efficient lightweight strategy that prunes filters to simplify the models and accelerate inference. Experimental results demonstrate that the proposed method greatly reduces processing time and is far more computationally efficient while maintaining competitive image quality.
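The two sources of savings the abstract describes can be illustrated with a minimal back-of-envelope sketch. The assumptions here are hypothetical, since the abstract gives no numbers: view-synthesis cost is taken to scale linearly with pixel count, the downscaling factor is taken to be 2x, and filters are ranked by L1-norm magnitude (a common pruning criterion; the paper's exact pruning strategy is not stated in the abstract):

```python
def vs_cost_ratio(height, width, scale):
    """Relative cost of running view synthesis on compact-resolution
    inputs vs. full resolution, assuming cost is proportional to the
    number of pixels processed (a simplifying assumption)."""
    full = height * width
    compact = (height // scale) * (width // scale)
    return compact / full


def l1_prune_keep(filters, keep_ratio):
    """Rank convolutional filters by the L1 norm of their weights and
    return the (sorted) indices of the filters to keep, i.e. the
    fraction keep_ratio with the largest norms."""
    norms = [sum(abs(w) for w in f) for f in filters]
    order = sorted(range(len(filters)), key=lambda i: norms[i], reverse=True)
    n_keep = max(1, int(len(filters) * keep_ratio))
    return sorted(order[:n_keep])


# 2x downscaling on a 512x512 view: per-view VS cost drops to 25%
# under the linear-in-pixels assumption.
ratio = vs_cost_ratio(512, 512, 2)

# Toy example: keep the half of four flattened "filters" whose
# weights have the largest L1 norms.
kept = l1_prune_keep([[0.1, -0.2], [1.0, 1.5], [0.05, 0.0], [-2.0, 0.3]], 0.5)
```

The same pixel-count argument explains why the pipeline places the VS network between the CR and SR networks: the most expensive stage runs at the lowest resolution, and the cheaper CR/SR stages absorb the down- and up-scaling.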



Published in ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 17, Issue 4 (November 2021), 529 pages. ISSN: 1551-6857, EISSN: 1551-6865. Issue DOI: 10.1145/3492437.


Publisher: Association for Computing Machinery, New York, NY, United States.

Publication History
• Received: 1 October 2020
• Revised: 1 February 2021
• Accepted: 1 March 2021
• Published: 12 November 2021, in TOMM Volume 17, Issue 4

Qualifiers: research-article, refereed.
