
Video Frame Interpolation: A Comprehensive Survey

Published: 11 May 2023

Abstract

Video Frame Interpolation (VFI) is a fascinating and challenging problem in the computer vision (CV) field, aiming to generate non-existent frames between two consecutive video frames. In recent years, many algorithms based on optical flow, kernels, or phase information have been proposed. In this article, we provide a comprehensive review of recent developments in VFI techniques. We first introduce the history of VFI algorithms, the evaluation metrics, and publicly available datasets. We then compare the algorithms in detail, point out their advantages and disadvantages, and compare their interpolation quality and speed on notable benchmark datasets. Because VFI has drawn continuous attention in the CV community, this survey also covers video processing applications built on VFI, such as slow-motion generation, video compression, and video restoration. Finally, we outline the bottlenecks faced by current VFI technology and discuss directions for future research.
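To make the problem statement concrete, here is a minimal sketch of the most naive VFI baseline: linearly blending the two input frames at an intermediate time t. This is not any of the surveyed methods (flow-, kernel-, or phase-based approaches model motion explicitly); it is a hypothetical illustration of the task's inputs and output, and on moving content it produces the ghosting artifacts that motivate motion-aware methods.

```python
import numpy as np

def blend_interpolate(frame0: np.ndarray, frame1: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Naive VFI baseline: blend two consecutive frames at time t in (0, 1).

    Flow-, kernel-, and phase-based VFI methods instead estimate per-pixel
    motion, which avoids the ghosting this pixel-wise average produces.
    """
    if frame0.shape != frame1.shape:
        raise ValueError("frames must have identical shapes")
    # Blend in float to avoid uint8 overflow, then cast back to an image dtype.
    blended = (1.0 - t) * frame0.astype(np.float64) + t * frame1.astype(np.float64)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Two tiny 2x2 grayscale "frames" standing in for consecutive video frames.
f0 = np.array([[0, 100], [200, 50]], dtype=np.uint8)
f1 = np.array([[100, 200], [0, 150]], dtype=np.uint8)
mid = blend_interpolate(f0, f1, 0.5)  # midpoint frame: [[50, 150], [100, 100]]
```

Sampling several values of t between 0 and 1 yields multiple intermediate frames, which is the basis of slow-motion generation mentioned above; the surveyed methods replace the blend with motion-compensated synthesis.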

  125. [125] Wang Demin, Vincent Andre, Blanchfield Philip, and Klepko Robert. 2010. Motion-compensated frame rate up-conversion-Part II: New algorithms for frame interpolation. IEEE Transactions on Broadcasting 56, 2 (2010), 142149.Google ScholarGoogle ScholarCross RefCross Ref
  126. [126] Wang Huiyu, Zhu Yukun, Adam Hartwig, Yuille Alan, and Chen Liang-Chieh. 2021. MaX-DeepLab: End-to-end panoptic segmentation with mask transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 54635474.Google ScholarGoogle ScholarCross RefCross Ref
  127. [127] Wang Xintao, Chan Kelvin C. K., Yu Ke, Dong Chao, and Loy Chen Change. 2019. EDVR: Video restoration with enhanced deformable convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 00.Google ScholarGoogle ScholarCross RefCross Ref
  128. [128] Wang Yuqing, Xu Zhaoliang, Wang Xinlong, Shen Chunhua, Cheng Baoshan, Shen Hao, and Xia Huaxia. 2021. End-to-end video instance segmentation with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 87418750.Google ScholarGoogle ScholarCross RefCross Ref
  129. [129] Wang Zhou, Bovik Alan C., Sheikh Hamid R., and Simoncelli Eero P.. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13, 4 (2004), 600612.Google ScholarGoogle ScholarDigital LibraryDigital Library
  130. [130] Wang Zejin, Liu Jing, Chen Xi, Li Guoqing, and Han Hua. 2021. Sparse self-attention aggregation networks for neural sequence slice interpolation. BioData Mining 14, 1 (2021), 119.Google ScholarGoogle ScholarCross RefCross Ref
  131. [131] Wang Zejin, Sun Guodong, Zhang Lina, Li Guoqing, and Han Hua. 2021. Temporal spatial-adaptive interpolation with deformable refinement for electron microscopic images. arXiv preprint arXiv:2101.06771 (2021).Google ScholarGoogle Scholar
  132. [132] Weinzaepfel Philippe, Revaud Jerome, Harchaoui Zaid, and Schmid Cordelia. 2013. DeepFlow: Large displacement optical flow with deep matching. In Proceedings of the IEEE International Conference on Computer Vision. 13851392.Google ScholarGoogle ScholarDigital LibraryDigital Library
  133. [133] Wen Shiping, Liu Weiwei, Yang Yin, Huang Tingwen, and Zeng Zhigang. 2019. Generating realistic videos from keyframes with concatenated GANs. IEEE Transactions on Circuits and Systems for Video Technology 29, 8 (2019), 23372348. Google ScholarGoogle ScholarCross RefCross Ref
  134. [134] Werlberger Manuel, Pock Thomas, Unger Markus, and Bischof Horst. 2011. Optical flow guided TV-L 1 video interpolation and restoration. In Proceedings of the International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition. Springer, 273286.Google ScholarGoogle ScholarCross RefCross Ref
  135. [135] Wu Chao-Yuan, Singhal Nayan, and Krahenbuhl Philipp. 2018. Video compression through image interpolation. In Proceedings of the European Conference on Computer Vision (ECCV). 416431.Google ScholarGoogle ScholarDigital LibraryDigital Library
  136. [136] Wu Jiyan, Yuen Chau, Cheung Ngai-Man, Chen Junliang, and Chen Chang Wen. 2015. Modeling and optimization of high frame rate video transmission over wireless networks. IEEE Transactions on Wireless Communications 15, 4 (2015), 27132726.Google ScholarGoogle ScholarDigital LibraryDigital Library
  137. [137] Wu Xuanyi, Zhou Zhenkun, and Basu Anup. 2021. DRVI: Dual refinement for video interpolation. IEEE Access 9 (2021), 113566113576.Google ScholarGoogle ScholarCross RefCross Ref
  138. [138] Wu Zhaotao, Wei Jia, Yuan Wenguang, Wang Jiabing, and Tasdizen Tolga. 2020. Inter-slice image augmentation based on frame interpolation for boosting medical image segmentation accuracy. arXiv preprint arXiv:2001.11698 (2020).Google ScholarGoogle Scholar
  139. [139] Xiang Xiaoyu, Tian Yapeng, Zhang Yulun, Fu Yun, Allebach Jan P., and Xu Chenliang. 2020. Zooming slow-mo: Fast and accurate one-stage space-time video super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 33703379.Google ScholarGoogle ScholarCross RefCross Ref
  140. [140] Xing Jinbo, Hu Wenbo, Zhang Yuechen, and Wong Tien-Tsin. 2021. Flow-aware synthesis: A generic motion model for video frame interpolation. Computational Visual Media (2021), 113.Google ScholarGoogle Scholar
  141. [141] Xu Xiangyu, Siyao Li, Sun Wenxiu, Yin Qian, and Yang Ming-Hsuan. 2019. Quadratic video interpolation. Advances in Neural Information Processing Systems 32 (2019), 16471656.Google ScholarGoogle Scholar
  142. [142] Xue Fanyong, Li Jie, Liu Jiannan, and Wu Chentao. 2021. BWIN: A bilateral warping method for video frame interpolation. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME). 16.Google ScholarGoogle ScholarCross RefCross Ref
  143. [143] Xue Tianfan, Chen Baian, Wu Jiajun, Wei Donglai, and Freeman William T.. 2019. Video enhancement with task-oriented flow. International Journal of Computer Vision 127, 8 (2019), 11061125.Google ScholarGoogle ScholarDigital LibraryDigital Library
  144. [144] Xue Wei, Ai Hong, Sun Tianyu, Song Chunfeng, Huang Yan, and Wang Liang. 2020. Frame-GAN: Increasing the frame rate of gait videos with generative adversarial networks. Neurocomputing 380 (2020), 95104.Google ScholarGoogle ScholarDigital LibraryDigital Library
  145. [145] Yan Bo, Tan Weimin, Lin Chuming, and Shen Liquan. 2020. Fine-grained motion estimation for video frame interpolation. IEEE Transactions on Broadcasting (2020).Google ScholarGoogle Scholar
  146. [146] Yang Kai-Chieh, Huang Ai-Mei, Nguyen Truong Q., Guest Clark C., and Das Pankaj K.. 2008. A new objective quality metric for frame interpolation used in video compression. IEEE Transactions on Broadcasting 54, 3 (2008), 68011.Google ScholarGoogle ScholarCross RefCross Ref
  147. [147] Yu Songhyun, Park Bumjun, and Jeong Jechang. 2019. PoSNet: 4x video frame interpolation using position-specific flow. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 35033511.Google ScholarGoogle ScholarCross RefCross Ref
  148. [148] Yu Zhefei, Li Houqiang, Wang Zhangyang, Hu Zeng, and Chen Chang Wen. 2013. Multi-level video frame interpolation: Exploiting the interaction among different levels. IEEE Transactions on Circuits and Systems for Video Technology 23, 7 (2013), 12351248.Google ScholarGoogle ScholarDigital LibraryDigital Library
  149. [149] Yuan Liangzhe, Chen Yibo, Liu Hantian, Kong Tao, and Shi Jianbo. 2019. Zoom-in-to-check: Boosting video interpolation via instance-level discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1218312191.Google ScholarGoogle ScholarCross RefCross Ref
  150. [150] Zhang Haoxian, Wang Ronggang, and Zhao Yang. 2019. Multi-frame pyramid refinement network for video frame interpolation. IEEE Access 7 (2019), 130610130621.Google ScholarGoogle ScholarCross RefCross Ref
  151. [151] Zhang Haoxian, Zhao Yang, and Wang Ronggang. 2020. A flexible recurrent residual pyramid network for video frame interpolation. In European Conference on Computer Vision. Springer, 474491.Google ScholarGoogle ScholarDigital LibraryDigital Library
  152. [152] Zhang Lin, Shen Ying, and Li Hongyu. 2014. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Transactions on Image Processing 23, 10 (2014), 42704281.Google ScholarGoogle ScholarCross RefCross Ref
  153. [153] Zhang Richard, Isola Phillip, Efros Alexei A., Shechtman Eli, and Wang Oliver. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 586595.Google ScholarGoogle ScholarCross RefCross Ref
  154. [154] Zhang Yulun, Li Kunpeng, Li Kai, Wang Lichen, Zhong Bineng, and Fu Yun. 2018. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV). 286301.Google ScholarGoogle ScholarDigital LibraryDigital Library
  155. [155] Zhang Youjian, Wang Chaoyue, and Tao Dacheng. 2020. Video frame interpolation without temporal priors. Advances in Neural Information Processing Systems 33 (2020), 1330813318.Google ScholarGoogle Scholar
  156. [156] Zhao Bin and Li Xuelong. 2021. EA-Net: Edge-aware network for flow-based video frame interpolation. arXiv preprint arXiv:2105.07673 (2021).Google ScholarGoogle Scholar
  157. [157] Zhao Lei, Wang Shiqi, Zhang Xinfeng, Wang Shanshe, Ma Siwei, and Gao Wen. 2018. Enhanced CTU-level inter-prediction with deep frame rate up-conversion for high efficiency video coding. In Proceedings of the2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 206210.Google ScholarGoogle ScholarCross RefCross Ref
  158. [158] Zheng Minghang, Gao Peng, Wang Xiaogang, Li Hongsheng, and Dong Hao. 2020. End-to-end object detection with adaptive clustering transformer. arXiv preprint arXiv:2011.09315 (2020).Google ScholarGoogle Scholar
  159. [159] Zhou Chengcheng, Lu Zongqing, Li Linge, Yan Qiangyu, and Xue Jing-Hao. 2021. How Video Super-Resolution and Frame Interpolation Mutually Benefit. Association for Computing Machinery, New York, NY, USA, 54455453.Google ScholarGoogle Scholar
  160. [160] Zhou Jingyue, Wang Yihuai, Ota Kaoru, and Dong Mianxiong. 2019. AAIoT: Accelerating artificial intelligence in IoT systems. IEEE Wireless Communications Letters 8, 3 (2019), 825828.Google ScholarGoogle ScholarCross RefCross Ref
  161. [161] Zhu Michael and Gupta Suyog. 2017. To prune, or not to prune: Exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878 (2017).Google ScholarGoogle Scholar
  162. [162] Zhu Xizhou, Hu Han, Lin Stephen, and Dai Jifeng. 2019. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 93089316.Google ScholarGoogle ScholarCross RefCross Ref
  163. [163] Zhu Zezhi, Zhao Lili, Lin Xuhu, Guo Xuezhou, and Chen Jianwen. 2021. Deep inter prediction via reference frame interpolation for blurry video coding. In Proceedings of the 2021 International Conference on Visual Communications and Image Processing (VCIP). 15.Google ScholarGoogle ScholarCross RefCross Ref


• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 2s (April 2023), 545 pages.
  ISSN: 1551-6857. EISSN: 1551-6865. DOI: 10.1145/3572861
  Editor: Abdulmotaleb El Saddik

  Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

  Publisher

  Association for Computing Machinery, New York, NY, United States

  Publication History

  • Published: 11 May 2023
  • Online AM: 31 January 2023
  • Accepted: 30 July 2022
  • Revised: 13 June 2022
  • Received: 24 February 2022
