DOI: 10.1145/3532836.3536237

invited-talk

Accelerating facial motion capture with video-driven animation transfer

Published: 24 July 2022

ABSTRACT

We describe a hybrid pipeline that leverages: 1) video-driven animation transfer [Moser et al. 2021] for regressing high-quality animation under partially-controlled conditions from a single input image, and 2) a marker-based tracking approach [Moser et al. 2017] that, while more complex and slower, is capable of handling the most challenging scenarios seen in the capture set. By applying the best-suited approach to each shot, we obtain an overall pipeline that, without loss of quality, is faster and requires less user intervention. We also improve the prior work [Moser et al. 2021] with augmentations during training, making it more robust in the Head Mounted Camera (HMC) scenario. The new pipeline is currently being integrated into our offline and real-time workflows.
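The per-shot routing described in the abstract can be sketched as below. This is a minimal illustration, not the authors' pipeline: the function names, the dictionary-based shot representation, and the occlusion-score heuristic for deciding which shots are "challenging" are all hypothetical stand-ins.

```python
def is_challenging_shot(shot):
    # Hypothetical heuristic: route heavily occluded shots (e.g. hands,
    # props, extreme poses) to the slower marker-based tracker.
    return shot.get("occlusion", 0.0) > 0.5

def regress_animation(shot):
    # Fast path: single-image video-driven animation transfer
    # (in the spirit of Moser et al. 2021).
    return {"shot": shot["name"], "method": "video-driven transfer"}

def track_with_markers(shot):
    # Slow path: marker-based tracking for the hardest capture
    # conditions (in the spirit of Moser et al. 2017).
    return {"shot": shot["name"], "method": "marker-based tracking"}

def process_capture_set(shots):
    """Route each shot in the capture set to the most suitable solver."""
    return [
        track_with_markers(s) if is_challenging_shot(s) else regress_animation(s)
        for s in shots
    ]

shots = [
    {"name": "shot_010", "occlusion": 0.1},
    {"name": "shot_020", "occlusion": 0.8},
]
results = process_capture_set(shots)
```

The design point is simply that the expensive solver runs only on the minority of shots that need it, which is where the overall speed and reduced user intervention come from.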

Supplemental Material

siggraph2022_FinalSequence.mp4 (supplemental video)

References

  1. DD. 2022. Digital Domain's Masquerade Offline Capture. Retrieved Feb 20, 2022 from https://digitaldomain.com/technology/masquerade-offline-capture/
  2. DI4D. 2022. DI4D Pro. Retrieved Feb 20, 2022 from https://www.di4d.com/di4d-pro/
  3. Disney. 2022. Anyma. Retrieved Feb 20, 2022 from https://studios.disneyresearch.com/anyma/
  4. Martin Klaudiny, Steven McDonagh, Derek Bradley, Thabo Beeler, and Kenny Mitchell. 2017. Real-Time Multi-View Facial Capture with Synthetic Training. Computer Graphics Forum 36 (2017).
  5. Lucio Moser, Chinyu Chien, Mark Williams, Jose Serra, Darren Hendler, and Doug Roble. 2021. Semi-Supervised Video-Driven Facial Animation Transfer for Production. ACM Trans. Graph. 40, 6 (2021).
  6. Lucio Moser, Darren Hendler, and Doug Roble. 2017. Masquerade: Fine-Scale Details for Head-Mounted Camera Motion Capture Data. In ACM SIGGRAPH 2017 Talks. New York, NY, USA, Article 18, 2 pages.
  7. Weta. 2022. Weta Digital - FACETS. Retrieved Feb 20, 2022 from https://www.wetafx.co.nz/research-and-tech/technology/facets/
  8. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. 2017. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In Computer Vision (ICCV), 2017 IEEE International Conference on.

  • Published in

    SIGGRAPH '22: ACM SIGGRAPH 2022 Talks
    July 2022, 108 pages
    ISBN: 9781450393713
    DOI: 10.1145/3532836

    Copyright © 2022 Owner/Author

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Qualifiers

    • invited-talk
    • Research
    • Refereed limited

    Acceptance Rates

    Overall acceptance rate: 1,822 of 8,601 submissions, 21%