
Facial hair tracking for high fidelity performance capture

Published: 22 July 2022

Abstract

Facial hair is a largely overlooked topic in facial performance capture. Most production pipelines in the entertainment industry do not have a way to automatically capture facial hair or track the skin underneath it. Thus, actors are asked to shave clean before face capture, which is very often undesirable. Capturing the geometry of individual facial hairs is very challenging, and their presence makes it harder to capture the deforming shape of the underlying skin surface. Some attempts have already been made at automating this task, but only for static faces with relatively sparse 3D hair reconstructions. In particular, current methods lack the temporal correspondence needed when capturing a sequence of video frames depicting facial performance. The problem of robustly tracking the skin underneath also remains unaddressed. In this paper, we propose the first multiview reconstruction pipeline that tracks both the dense 3D facial hair, as well as the underlying 3D skin for entire performances. Our method operates with standard setups for face photogrammetry, without requiring dense camera arrays. For a given capture subject, our algorithm first reconstructs a dense, high-quality neutral 3D facial hairstyle by registering sparser hair reconstructions over multiple frames that depict a neutral face under quasi-rigid motion. This custom-built, reference facial hairstyle is then tracked throughout a variety of changing facial expressions in a captured performance, and the result is used to constrain the tracking of the 3D skin surface underneath. We demonstrate the proposed capture pipeline on a variety of different facial hairstyles and lengths, ranging from sparse and short to dense full-beards.
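
The abstract describes a two-stage pipeline: first build a dense reference hairstyle from neutral frames, then track it through the performance and use it to constrain skin tracking. The Python sketch below illustrates only that stage structure; every name in it is a hypothetical placeholder (the paper does not publish code, and the actual registration, tracking, and skin-constraint algorithms are its contribution and are not reproduced here).

# A minimal structural sketch (not the authors' code) of the pipeline
# described in the abstract. All names are hypothetical placeholders.

from typing import Iterator, List, Tuple

Frame = object        # placeholder: one multiview image set (one time step)
HairStrands = list    # placeholder: a set of 3D hair strands
SkinMesh = object     # placeholder: triangle mesh of the skin surface


def reconstruct_sparse_hairs(frame: Frame) -> HairStrands:
    """Per-frame multiview 3D hair reconstruction (sparse)."""
    return HairStrands()


def build_reference_hairstyle(neutral_frames: List[Frame]) -> HairStrands:
    """Stage 1: register sparse reconstructions of a neutral face under
    quasi-rigid motion into one dense, high-quality reference hairstyle."""
    reference = HairStrands()
    for frame in neutral_frames:
        # Accumulate per-frame sparse strands; the cross-frame registration
        # that densifies them is the paper's method and is omitted here.
        reference.extend(reconstruct_sparse_hairs(frame))
    return reference


def track_performance(reference: HairStrands,
                      frames: List[Frame]) -> Iterator[Tuple[HairStrands, SkinMesh]]:
    """Stage 2: track the reference hairstyle through changing expressions,
    then use the tracked hairs to constrain the 3D skin surface underneath."""
    for frame in frames:
        tracked = reference      # per-frame hair deformation omitted
        skin = SkinMesh()        # hair-constrained skin tracking omitted
        yield tracked, skin

Tracking a custom-built reference hairstyle, rather than re-reconstructing hair independently per frame, is what provides the temporal correspondence the abstract notes is missing from prior static-reconstruction methods.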



      • Published in

        ACM Transactions on Graphics, Volume 41, Issue 4
        July 2022
        1978 pages
        ISSN: 0730-0301
        EISSN: 1557-7368
        DOI: 10.1145/3528223

        Copyright © 2022 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 22 July 2022
        • Published in ACM Transactions on Graphics (TOG), Volume 41, Issue 4


        Qualifiers

        • research-article
