
Dynamosaicing: Mosaicing of Dynamic Scenes

Published: 01 October 2007

Abstract

This paper explores the manipulation of time in video editing, enabling control over the chronological time of events. These time manipulations include slowing down (or postponing) some dynamic events while speeding up (or advancing) others. When a video camera scans a scene, aligning all the events to a single time interval results in a panoramic movie. Time manipulations are obtained by first constructing an aligned space-time volume from the input video, and then sweeping a continuous 2D slice (time front) through that volume, generating a new sequence of images. For dynamic scenes, aligning the input video frames poses an important challenge. We propose to align dynamic scenes using a new notion of "dynamics constancy", which is more appropriate for this task than the traditional assumption of "brightness constancy". Another challenge is to avoid visual seams inside moving objects and other visual artifacts that result from sweeping the space-time volume with time fronts of arbitrary geometry. To avoid such artifacts, we formulate the problem of finding the optimal time front geometry as one of finding a minimal cut in a 4D graph, and solve it using max-flow methods.
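
The time-front sweep described above can be made concrete with a small sketch. The following Python snippet is a minimal illustration, not the authors' implementation: it assumes the aligned space-time volume is already available as a NumPy array, and that a time front is given as a per-pixel map of input time indices for each output frame. The function name, array shapes, and integer sampling (no interpolation or seam optimization) are assumptions made for illustration only.

    import numpy as np

    def sweep_time_front(volume, time_front):
        # volume: aligned space-time volume of shape (T, H, W, 3), i.e. all input
        #         frames warped into a common panoramic coordinate system.
        # time_front: array of shape (K, H, W); entry (k, y, x) is the input time
        #             index sampled at pixel (x, y) of output frame k.
        # Returns the K output frames as an array of shape (K, H, W, 3).
        T, H, W, _ = volume.shape
        K = time_front.shape[0]
        ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
        frames = np.empty((K, H, W, 3), dtype=volume.dtype)
        for k in range(K):
            # Clamp the front to valid time indices, then pick each pixel
            # from its own input time.
            t = np.clip(np.round(time_front[k]), 0, T - 1).astype(int)
            frames[k] = volume[t, ys, xs]
        return frames

A flat time front (constant over the image for each k) reproduces ordinary playback, while a slanted or curved front advances some scene regions in time relative to others, which is what yields the panoramic and non-chronological effects described in the abstract.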
