research-article

Online modeling for realtime facial animation

Published: 21 July 2013

Abstract

We present a new algorithm for realtime face tracking on commodity RGB-D sensing devices. Our method requires no user-specific training or calibration, nor any other form of manual assistance, thus enabling a range of new applications in performance-based facial animation and virtual interaction at the consumer level. The key novelty of our approach is an optimization algorithm that jointly solves for a detailed 3D expression model of the user and the corresponding dynamic tracking parameters. Realtime performance and robust computations are facilitated by a novel subspace parameterization of the dynamic facial expression space. We provide a detailed evaluation showing that our approach significantly simplifies the performance capture workflow while achieving accurate facial tracking for realtime applications.
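To give a flavor of the kind of per-frame computation involved in tracking expressions in a blendshape-style subspace, here is a minimal illustrative sketch (not the paper's actual joint optimization): given a linear expression basis, the per-frame weights can be recovered from observed geometry by regularized least squares. All names and the regularization scheme below are assumptions for illustration only.

```python
import numpy as np

def fit_blendshape_weights(neutral, blendshapes, target, reg=1e-2):
    """Fit per-frame expression weights by regularized least squares.

    neutral:     (3n,) flattened neutral-face vertex positions
    blendshapes: (3n, k) delta blendshapes (expression minus neutral)
    target:      (3n,) observed vertex positions for the current frame
    reg:         Tikhonov regularization weight that keeps the solve
                 well-conditioned when blendshapes are nearly redundant
    """
    B = blendshapes
    d = target - neutral                     # observed displacement
    k = B.shape[1]
    # Normal equations: (B^T B + reg * I) x = B^T d
    x = np.linalg.solve(B.T @ B + reg * np.eye(k), B.T @ d)
    # Blendshape weights are conventionally clamped to [0, 1].
    return np.clip(x, 0.0, 1.0)
```

In a full tracker this solve would alternate with rigid pose alignment (e.g. ICP against the depth map) and, in the paper's setting, with refinement of the expression model itself; the sketch shows only the inner weight fit.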


Supplemental Material

tp062.mp4



• Published in

  ACM Transactions on Graphics, Volume 32, Issue 4
  July 2013
  1215 pages
  ISSN: 0730-0301
  EISSN: 1557-7368
  DOI: 10.1145/2461912

          Copyright © 2013 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



