Thinking Like a Director: Film Editing Patterns for Virtual Cinematographic Storytelling

Published: 23 October 2018

Abstract

This article introduces Film Editing Patterns (FEP), a language for formalizing film editing practices and stylistic choices found in movies. FEP constructs are constraints, expressed over one or more shots from a movie sequence, that characterize changes in cinematographic visual properties such as shot size, camera angle, or the layout of actors on the screen. We present the vocabulary of the FEP language, introduce its usage in analyzing styles from annotated film data, and describe how it can support users in the creative design of film sequences in 3D. More specifically, (i) we define the FEP language, (ii) we present an application for crafting filmic sequences from 3D animated scenes that uses FEPs as a high-level means of selecting cameras and performing cuts between cameras that follow best practices in cinema, and (iii) we evaluate the benefits of FEPs through user experiments in which professional filmmakers and amateurs had to create cinematographic sequences. The evaluation suggests that users generally appreciate the idea of FEPs, and that FEPs can effectively help novice and moderately experienced users craft film sequences with little training.
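To make the idea concrete, an FEP-style construct can be thought of as a predicate over a sequence of annotated shots. The sketch below is illustrative only and not the paper's actual syntax: the `Shot` record, the `matches_intensify` check, and the shot-size vocabulary are assumptions, showing how a pattern such as "successive shots on the same actor move to tighter framings" could be expressed as a constraint.

```python
from dataclasses import dataclass

# Ordered from widest to tightest framing (vocabulary is illustrative).
SHOT_SIZES = ["long", "medium-long", "medium", "medium-close", "close-up"]

@dataclass
class Shot:
    size: str    # one of SHOT_SIZES
    angle: str   # e.g. "high", "eye", "low"
    target: str  # main actor framed by the shot

def matches_intensify(shots):
    """One example of an FEP-style constraint: True if successive shots
    frame the same target with strictly tighter shot sizes."""
    if len(shots) < 2:
        return False
    if any(s.target != shots[0].target for s in shots):
        return False
    ranks = [SHOT_SIZES.index(s.size) for s in shots]
    return all(a < b for a, b in zip(ranks, ranks[1:]))

seq = [Shot("long", "eye", "A"), Shot("medium", "eye", "A"), Shot("close-up", "eye", "A")]
print(matches_intensify(seq))  # True
```

A library of such predicates could be matched against annotated film data for style analysis, or used generatively to filter candidate cameras and cuts in a 3D scene, which is the role FEPs play in the application described in the article.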




• Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 14, Issue 4
Special Section on Deep Learning for Intelligent Multimedia Analytics
November 2018, 221 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3282485

Copyright © 2018 ACM

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 23 October 2018
            • Accepted: 1 July 2018
            • Revised: 1 May 2018
            • Received: 1 December 2017


            Qualifiers

            • research-article
            • Research
            • Refereed
