An empirical rig for jaw animation

Published: 30 July 2018

Abstract

In computer graphics, the motion of the jaw is commonly modelled by up-down and left-right rotation around a fixed pivot plus a forward-backward translation, yielding a three-dimensional rig that is well suited to intuitive artistic control. The anatomical motion of the jaw is, however, much more complex, since the joints that connect the jaw to the skull exhibit both rotational and translational components. In reality, the jaw does not move in a three-dimensional subspace but on a constrained manifold embedded in six dimensions. We analyze this manifold in the context of computer animation and show how it can be parameterized with three degrees of freedom, providing a novel jaw rig that preserves intuitive control while positioning the jaw more accurately. The chosen parameterization furthermore places anatomically correct limits on the motion, preventing the rig from entering physiologically infeasible poses. Our new jaw rig is empirically designed from accurate capture data, and we provide a simple method to retarget the rig to new characters, both human and fantasy.
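For contrast, the sketch below implements the conventional three-parameter rig the abstract describes: up-down and left-right rotation about a fixed pivot plus a forward-backward slide. This is a minimal illustration in Python/NumPy, not code from the paper; the axis conventions, parameter names, and pivot placement are assumptions made for clarity.

    # Minimal sketch of the conventional fixed-pivot jaw rig (illustrative;
    # not the paper's method). Assumed axes: x = right, y = up, z = forward.
    import numpy as np

    def conventional_jaw_rig(pitch, yaw, protrusion, pivot):
        """Return a 4x4 rigid transform for the mandible.

        pitch      -- open/close rotation about the x-axis (radians)
        yaw        -- left/right rotation about the y-axis (radians)
        protrusion -- forward/backward translation along z (model units)
        pivot      -- 3-vector, the fixed pivot near the condyles (assumed)
        """
        pivot = np.asarray(pivot, dtype=float)
        cx, sx = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
        Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
        R = Ry @ Rx
        T = np.eye(4)
        T[:3, :3] = R
        # Rotate about the pivot (x -> R(x - p) + p), then slide forward.
        T[:3, 3] = pivot - R @ pivot + np.array([0.0, 0.0, protrusion])
        return T

    # Example: open the jaw ~20 degrees about a hypothetical pivot location.
    jaw_xform = conventional_jaw_rig(np.radians(-20.0), 0.0, 0.0,
                                     [0.0, 5.0, -4.5])

Because the pivot is fixed, every pose this rig can reach lies in a three-dimensional subspace of rigid transforms; the paper's observation is that real condylar motion leaves this subspace, motivating a three-degree-of-freedom parameterization of the full six-dimensional manifold learned from capture data.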


Supplemental Material

059-517.mp4


Published in

ACM Transactions on Graphics, Volume 37, Issue 4
August 2018
1670 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3197517

        Copyright © 2018 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

