research-article

High-Fidelity Face Reenactment Via Identity-Matched Correspondence Learning

Published: 25 February 2023

Abstract

Face reenactment aims to animate a source face with the poses and expressions of a target face. Although recent methods have made remarkable progress by exploiting generative adversarial networks, they remain limited in generating high-fidelity, identity-preserving results due to inappropriate driving information and ineffective animation strategies. In this work, we propose a novel face reenactment framework that achieves both high-fidelity generation and identity preservation. Instead of sparse face representations (e.g., facial landmarks and keypoints), we use the Projected Normalized Coordinate Code (PNCC), which better preserves facial details. To factor out the target identity, we reconstruct the PNCC from the source identity parameters and the target pose and expression parameters estimated by 3D face reconstruction. Adopting this reconstructed representation as the driving information resolves the identity-mismatch problem. To exploit the driving information effectively, we establish a correspondence between the reconstructed representation and the source representation based on features extracted by an encoder network. This identity-matched correspondence then animates the source face through a novel feature transformation strategy, and the generator network is further enhanced by the proposed geometry-aware skip connection. Once trained, our model can be applied to previously unseen faces without further training or fine-tuning. Extensive experiments demonstrate the effectiveness of our method in face reenactment and show that it outperforms state-of-the-art approaches both qualitatively and quantitatively. Moreover, the proposed PNCC reconstruction module can be easily inserted into other methods to improve their performance in cross-identity face reenactment.
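The two core ideas of the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the 3DMM dimensions, basis matrices, feature shapes, and function names below are all hypothetical, and the dense correspondence is shown as a simple softmax attention over flattened feature maps rather than the paper's full architecture.

```python
import numpy as np

# Hypothetical 3DMM dimensions (assumptions for illustration only).
N_VERTS, N_ID, N_EXP = 1000, 80, 64
rng = np.random.default_rng(0)

# Stand-ins for a morphable model's mean shape and identity/expression bases.
mean_shape = rng.normal(size=(N_VERTS * 3,))
id_basis = rng.normal(size=(N_VERTS * 3, N_ID))
exp_basis = rng.normal(size=(N_VERTS * 3, N_EXP))

def recombine(alpha_id_src, alpha_exp_tgt, pose_tgt):
    """Rebuild a mesh from SOURCE identity and TARGET expression/pose,
    factoring the target's identity out of the driving signal.
    The posed mesh would then be rasterized into a PNCC map."""
    shape = mean_shape + id_basis @ alpha_id_src + exp_basis @ alpha_exp_tgt
    verts = shape.reshape(-1, 3)
    R, t = pose_tgt                       # target rotation (3x3) and translation (3,)
    return verts @ R.T + t

def warp_by_correspondence(f_drive, f_src, v_src, tau=0.01):
    """Correspondence-based animation: cosine similarity between encoder
    features of the driving map (f_drive) and the source map (f_src),
    softmax over source positions, then a weighted gather that re-poses
    the source appearance values v_src."""
    fd = f_drive / np.linalg.norm(f_drive, axis=1, keepdims=True)
    fs = f_src / np.linalg.norm(f_src, axis=1, keepdims=True)
    corr = fd @ fs.T                      # (n_drive, n_src) similarity matrix
    w = np.exp((corr - corr.max(axis=1, keepdims=True)) / tau)
    w /= w.sum(axis=1, keepdims=True)     # soft attention over source positions
    return w @ v_src                      # source appearance warped to driving pose
```

With zero coefficients `recombine` returns the posed mean shape; with real fitted parameters it yields a driving mesh whose identity matches the source, which is the identity-matched driving signal the abstract describes.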


Published in: ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 3 (May 2023), 514 pages
ISSN: 1551-6857 | EISSN: 1551-6865
DOI: 10.1145/3582886
Editor: Abdulmotaleb El Saddik


Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 24 June 2022
• Revised: 2 October 2022
• Accepted: 6 November 2022
• Online AM: 23 November 2022
• Published: 25 February 2023

Published in TOMM Volume 19, Issue 3
