Abstract
In computer graphics, the motion of the jaw is commonly modelled as up-down and left-right rotation around a fixed pivot plus a forward-backward translation, yielding a three-dimensional rig that is well suited to intuitive artistic control. The anatomical motion of the jaw is, however, much more complex, since the joints that connect the jaw to the skull exhibit both rotational and translational components. In reality the jaw does not move in a three-dimensional subspace but on a constrained manifold in six dimensions. We analyze this manifold in the context of computer animation and show how it can be parameterized with three degrees of freedom, providing a novel jaw rig that preserves intuitive control while offering more accurate jaw positioning. The chosen parameterization furthermore places anatomically correct limits on the motion, preventing the rig from entering physiologically infeasible poses. Our new jaw rig is empirically designed from accurate capture data, and we provide a simple method to retarget the rig to new characters, both human and fantasy.
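For orientation, the conventional three-degree-of-freedom jaw rig that the abstract contrasts against can be sketched as a rigid transform: two rotations about a fixed pivot (open/close and left/right) followed by a forward translation. This is a minimal illustrative sketch, not the paper's empirical rig; the function name, parameter names, and axis conventions are our own assumptions.

```python
import numpy as np

def conventional_jaw_rig(pitch, yaw, protrusion, pivot):
    """4x4 rigid transform for a conventional 3-DOF jaw rig.

    pitch      -- open/close rotation about the x axis (radians)
    yaw        -- left/right rotation about the y axis (radians)
    protrusion -- forward translation along z
    pivot      -- fixed 3D pivot point the jaw rotates about
    """
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # open/close
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # left/right
    R = Ry @ Rx
    # Rotate about the pivot (not the origin), then protrude along z.
    t = pivot - R @ pivot + np.array([0.0, 0.0, protrusion])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Because the pivot is fixed, a pure rotation leaves the pivot point itself unmoved; the paper's observation is that real condylar motion couples rotation with translation, so no single fixed pivot reproduces the measured six-dimensional trajectories.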
An empirical rig for jaw animation