
Animating Still Natural Images Using Warping

Published: 5 January 2023

Abstract

From a single still image, a looping video can be generated by imparting subtle motion to objects in the image. The results are a hybrid of photography and video: they contain gentle motion in some objects while the rest of the image remains still. Existing techniques succeed in animating such images, but some drawbacks remain, such as the large computation time needed to retrieve matching videos, or the difficulty of controlling the desired motion not only within a single region but also consistently across regions. In this work, we address these issues by proposing an interactive system with a novel warping method. The key idea of our approach is to use the user's annotations to impart motion to selected objects. A looping video is generated through two proposed phases: preserve-curve warping and cycle warping. We demonstrate the effectiveness of our method through various challenging experimental results and evaluations. We show that, with a simple and lightweight method, our system handles the problems of animating a still image and produces realistic motion and appealing videos. In addition, our system makes it easy to create plausible animation from simple user annotations, without referencing a video database or machine-learning models, and allows ordinary users with minimal expertise to produce compelling results.
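As a rough illustration of the looping idea only (this is not the paper's actual algorithm, and `cyclic_warp` and its parameters are hypothetical), cycle warping can be thought of as displacing pixels along a user-annotated flow field with a phase that returns to zero, so the first and last frames match and the video loops seamlessly:

```python
import numpy as np

def cyclic_warp(image, flow, t, period):
    """Warp `image` by a displacement field scaled with a looping phase.

    image  : (H, W, C) float array
    flow   : (H, W, 2) per-pixel displacement (dy, dx), e.g. derived
             from user annotations
    t      : current frame index
    period : loop length in frames
    """
    h, w = image.shape[:2]
    # A sinusoidal phase is zero at t = 0 and t = period, so the
    # generated sequence starts and ends at the same frame.
    phase = np.sin(2.0 * np.pi * t / period)
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    src_y = np.clip(ys - phase * flow[..., 0], 0, h - 1)
    src_x = np.clip(xs - phase * flow[..., 1], 0, w - 1)
    # Nearest-neighbour resampling keeps the sketch dependency-free;
    # a real system would use bilinear interpolation.
    return image[src_y.round().astype(int), src_x.round().astype(int)]
```

Generating `period` frames with this helper and concatenating them yields a video whose last frame flows back into its first, which is the basic requirement for a seamless loop.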



Published in: ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 1, January 2023. 505 pages.
ISSN: 1551-6857. EISSN: 1551-6865. DOI: 10.1145/3572858.
Editor: Abdulmotaleb El Saddik

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 5 January 2023
• Online AM: 18 February 2022
• Accepted: 17 January 2022
• Revised: 11 December 2021
• Received: 4 September 2021

Qualifiers: research-article (refereed)