Online modeling for realtime facial animation

Abstract
We present a new algorithm for realtime face tracking on commodity RGB-D sensing devices. Our method requires no user-specific training or calibration, nor any other form of manual assistance, enabling a range of new applications in performance-based facial animation and virtual interaction at the consumer level. The key novelty of our approach is an optimization algorithm that jointly solves for a detailed 3D expression model of the user and the corresponding dynamic tracking parameters. Realtime performance and robust computation are facilitated by a novel subspace parameterization of the dynamic facial expression space. We provide a detailed evaluation showing that our approach significantly simplifies the performance-capture workflow while achieving accurate facial tracking for realtime applications.
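The joint model-building optimization is beyond the scope of an abstract, but the tracking side can be illustrated with a minimal sketch: given a personalized blendshape model, per-frame expression tracking reduces to a regularized least-squares fit of blendshape weights to observed geometry. The function below is a hypothetical simplification under stated assumptions (the names, the closed-form solve, and the simple clamping are illustrative; the paper's actual solver also handles rigid pose and the subspace correctives described above).

```python
import numpy as np

def fit_blendshape_weights(neutral, blendshapes, target, reg=1e-3):
    """Regularized least-squares fit of blendshape weights to target points.

    neutral:     (3n,) neutral-face vertex positions, flattened
    blendshapes: (3n, k) matrix of per-blendshape displacement vectors
    target:      (3n,) observed vertex positions (e.g., from a depth map)

    Solves  min_w ||B w - (target - neutral)||^2 + reg * ||w||^2
    in closed form, then clamps the weights to the valid [0, 1] range.
    """
    B = blendshapes
    d = target - neutral                          # observed displacement
    A = B.T @ B + reg * np.eye(B.shape[1])        # normal equations + Tikhonov term
    w = np.linalg.solve(A, B.T @ d)
    return np.clip(w, 0.0, 1.0)
```

In a full tracker this per-frame solve would alternate with a rigid ICP-style alignment step; a proper treatment also constrains the weights inside the optimization rather than clamping afterwards.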
Supplemental Material
Supplemental material is available for download.