Context-aware Pseudo-true Video Interpolation at 6G Edge

Published: 01 November 2022

Abstract

In the 6G network, numerous edge devices facilitate low-latency video transmission. However, with their limited processing and storage capabilities, these edge devices cannot afford to reconstruct the vast volume of video data. Under the edge-computing conditions of the 6G network, this article fuses a self-similarity-based context feature into Frame Rate Up-Conversion (FRUC) to generate pseudo-true video sequences at a high frame rate; its core is the extraction of a context layer for each video frame. First, we extract the patch centered at each pixel and use the self-similarity descriptor to generate a correlation surface. Then, the statistical expectation or skewness of the correlation surface is computed to represent its context feature. By attaching an expectation or a skewness value to each pixel, the context layer is constructed and added to the video frame as a new channel. According to the context layer, we predict the motion vector field of the absent frame by bidirectional context matching and finally produce the interpolated frame. The experimental results show that, when the proposed FRUC algorithm is deployed on edge devices, the output pseudo-true video sequences have satisfactory objective and subjective quality.
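The context-layer construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size, window size, and the exponential SSD-to-similarity mapping are assumptions chosen for clarity, and the `context_layer` function name is hypothetical.

```python
import numpy as np

def context_layer(frame, patch=3, window=7, use_skewness=False):
    """Build a per-pixel context layer from self-similarity correlation
    surfaces (a sketch; patch/window sizes and the SSD-based similarity
    are assumptions, not the paper's exact formulation).
    """
    h, w = frame.shape
    pr, wr = patch // 2, window // 2
    pad = pr + wr
    f = np.pad(frame.astype(np.float64), pad, mode="reflect")
    ctx = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            cy, cx = y + pad, x + pad
            # patch centered at the current pixel
            center = f[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1]
            surface = []
            for dy in range(-wr, wr + 1):
                for dx in range(-wr, wr + 1):
                    cand = f[cy + dy - pr:cy + dy + pr + 1,
                             cx + dx - pr:cx + dx + pr + 1]
                    ssd = np.sum((center - cand) ** 2)
                    # map SSD to a similarity value in (0, 1]
                    surface.append(np.exp(-ssd / (patch * patch * 255.0)))
            s = np.asarray(surface)
            if use_skewness:
                # skewness of the correlation surface as the context feature
                sd = s.std()
                ctx[y, x] = 0.0 if sd == 0 else np.mean(((s - s.mean()) / sd) ** 3)
            else:
                # expectation (mean) of the correlation surface
                ctx[y, x] = s.mean()
    return ctx

# attach the context layer to the frame as a new channel
frame = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(np.float64)
layer = context_layer(frame)
augmented = np.dstack([frame, layer])
print(augmented.shape)  # (16, 16, 2)
```

The augmented two-channel frame is what the bidirectional context match would then operate on to estimate the motion vector field of the absent frame.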


Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 3s
October 2022, 381 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3567476
Editor: Abdulmotaleb El Saddik

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 1 November 2022
• Online AM: 12 August 2022
• Accepted: 3 August 2022
• Revised: 15 July 2022
• Received: 11 December 2021

        Qualifiers

        • research-article
        • Refereed
