DOI: 10.1145/3532836.3536273

High-fidelity facial reconstruction from a single photo using photo-realistic rendering

Published: 24 July 2022

ABSTRACT

We propose a fully automated method for realistic 3D face reconstruction from a single frontal photo that produces a high-resolution head mesh and a diffuse map. The photo is fed to a convolutional neural network that estimates the weights of a morphable model, producing an initial head shape that is then refined through landmark-guided deformation. Two key features of the method are: 1) the network is trained exclusively on synthetic photos that are photo-realistic enough for it to learn shape-predictive features that transfer to real images, so no real facial photos with corresponding 3D scans are needed for training; 2) the statistical errors of the landmark detector are incorporated into the reconstruction to maximize accuracy. Although the method relies on very little real data, we show that it robustly and quickly produces plausible face reconstructions from real photos.
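As a rough illustration only (not the authors' implementation), the two-stage pipeline described above can be sketched in Python as follows. The network cnn, the morphable-model mean and basis, and the per-landmark error lm_sigma are hypothetical placeholders; the error-weighted pull toward the detected landmarks is one simple way to fold detector error statistics into the deformation.

import numpy as np

def reconstruct(photo, cnn, mean_shape, basis, lm_idx, lm_2d, lm_sigma):
    # Stage 1: the CNN regresses morphable-model weights from the frontal photo,
    # giving an initial head shape as a linear combination of basis shapes.
    weights = cnn(photo)                                   # (k,) model weights
    verts = (mean_shape + basis @ weights).reshape(-1, 3)  # (V, 3) vertices

    # Stage 2: landmark-guided deformation. Each mesh landmark is pulled toward
    # its detected 2D position; landmarks with a larger statistical error sigma
    # receive a weaker pull, so unreliable detections distort the mesh less.
    # (Placeholder weighting, not the paper's exact error model.)
    trust = 1.0 / (1.0 + lm_sigma ** 2)                    # confidence in (0, 1]
    for j, v in enumerate(lm_idx):
        verts[v, :2] += trust[j] * (lm_2d[j] - verts[v, :2])
    return verts

# Toy run with random placeholders standing in for the trained network,
# the morphable model, and the landmark detector.
V, k, L = 5000, 80, 68
rng = np.random.default_rng(0)
mesh = reconstruct(
    photo=None,
    cnn=lambda img: rng.normal(size=k),
    mean_shape=rng.random(3 * V),
    basis=rng.random((3 * V, k)) * 0.01,
    lm_idx=rng.choice(V, size=L, replace=False),
    lm_2d=rng.random((L, 2)),
    lm_sigma=np.full(L, 0.5),
)
print(mesh.shape)  # (5000, 3)

Down-weighting uncertain landmarks keeps noisy detections from distorting the refined mesh; the paper's actual deformation scheme and error model may differ from this sketch.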


Supplemental Material

siggraph22talks-48-demo.mp4

Video demonstrating several aspects of the proposed method, along with additional results.


  • Published in

    SIGGRAPH '22: ACM SIGGRAPH 2022 Talks
    July 2022
    108 pages
    ISBN: 9781450393713
    DOI: 10.1145/3532836

    Copyright © 2022 Owner/Author

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • invited-talk
    • Research
    • Refereed limited

    Acceptance Rates

    Overall Acceptance Rate: 1,822 of 8,601 submissions (21%)
