MarioNETte: Few-shot Face Reenactment Preserving Identity of Unseen Targets

by Sungjoo Ha, et al.
Hyperconnect, Inc.

When there is a mismatch between the target identity and the driver identity, face reenactment suffers severe degradation in the quality of the result, especially in a few-shot setting. The identity preservation problem, where the model loses the detailed information of the target leading to a defective output, is the most common failure mode. The problem has several potential sources such as the identity of the driver leaking due to the identity mismatch, or dealing with unseen large poses. To overcome such problems, we introduce components that address the mentioned problem: image attention block, target feature alignment, and landmark transformer. Through attending and warping the relevant features, the proposed architecture, called MarioNETte, produces high-quality reenactments of unseen identities in a few-shot setting. In addition, the landmark transformer dramatically alleviates the identity preservation problem by isolating the expression geometry through landmark disentanglement. Comprehensive experiments are performed to verify that the proposed framework can generate highly realistic faces, outperforming all other baselines, even under a significant mismatch of facial characteristics between the target and the driver.









Figure 1: Examples of identity preservation failures and improved results generated by the proposed method. Each row shows (a) driver shape interference, (b) losing details of target identity, and (c) failure of warping at large poses.
Figure 2: The overall architecture of MarioNETte.

Given a target face and a driver face, face reenactment aims to synthesize a reenacted face which is animated by the movement of a driver while preserving the identity of the target.

Many approaches make use of generative adversarial networks (GANs), which have demonstrated great success in image generation tasks. [xu-arxiv-2017-facetransfer] and [wu-eccv-2018-reenactgan] achieved high-fidelity face reenactment results by exploiting CycleGAN [zhu-cvpr-2017-cyclegan]. However, CycleGAN-based approaches require at least a few minutes of training data for each target and can only reenact predefined identities, which is less attractive in the wild, where reenactment of unseen targets cannot be avoided.

The few-shot face reenactment approaches, therefore, try to reenact any unseen target by utilizing operations such as adaptive instance normalization (AdaIN) [zakharov-arxiv-2019-samsung] or warping modules [wiles-eccv-2018-x2face, siarohin-cvpr-2019-monkeynet]. However, current state-of-the-art methods suffer from what we call the identity preservation problem: the inability to preserve the identity of the target, leading to defective reenactments. As the identity of the driver diverges from that of the target, the problem is exacerbated even further.

Examples of flawed and successful face reenactments, generated by previous approaches and the proposed model, respectively, are illustrated in Figure 1. The failures of previous approaches can, for the most part, be broken down into three different modes:

  1. Neglecting the identity mismatch may lead to the identity of the driver interfering with the face synthesis, such that the generated face resembles the driver (Figure 1a).

  2. Insufficient capacity of a compressed vector representation (e.g., the AdaIN layer) to preserve the information of the target identity may lead the produced face to lose detailed characteristics (Figure 1b).

  3. The warping operation incurs defects when dealing with large poses (Figure 1c).

We propose a framework called MarioNETte, which aims to reenact the face of unseen targets in a few-shot manner while preserving the identity without any fine-tuning. We adopt an image attention block and target feature alignment, which allow MarioNETte to directly inject features from the target when generating an image. In addition, we propose a novel landmark transformer which further mitigates the identity preservation problem by adjusting for the identity mismatch in an unsupervised fashion. Our contributions are as follows:

  • We propose a few-shot face reenactment framework called MarioNETte, which preserves the target identity even in situations where the facial characteristics of the driver differ widely from those of the target. Utilizing the image attention block, which allows the model to attend to relevant positions of the target feature maps, together with target feature alignment, which includes multiple feature-level warping operations, the proposed method improves the quality of face reenactment under different identities.

  • We introduce a novel method of landmark transformation which copes with varying facial characteristics of different people. The proposed method adapts the landmark of a driver to that of the target in an unsupervised manner, thereby mitigating the identity preservation problem without any additional labeled data.

  • We compare against the state-of-the-art methods when the target and the driver identities coincide and differ, using the VoxCeleb1 [nagrani-interspeech-2017-voxceleb] and CelebV [wu-eccv-2018-reenactgan] datasets, respectively. Our experiments, including user studies, show that the proposed method outperforms the state-of-the-art methods.

MarioNETte Architecture

Figure 2 illustrates the overall architecture of the proposed model. A conditional generator synthesizes the reenacted face given the driver and target images, and the discriminator predicts whether the image is real or not. The generator consists of the following components:

  • The preprocessor utilizes a 3D landmark detector [bulat-iccv-2017-facealignment] to extract facial keypoints and renders them into landmark images corresponding to the driver and the target inputs, respectively. Note that the proposed landmark transformer is included in the preprocessor. Since we normalize the scale, translation, and rotation of landmarks before using them in the landmark transformer, we utilize 3D landmarks instead of 2D ones.

  • The driver encoder extracts pose and expression information from the driver input and produces the driver feature map.

  • The target encoder adopts a U-Net architecture to extract style information from the target inputs, generating the target feature map along with warped target feature maps.

  • The blender receives the driver feature map and the target feature maps to produce a mixed feature map. The proposed image attention block is the basic building block of the blender.

  • The decoder utilizes the warped target feature maps and the mixed feature map to synthesize the reenacted image. The decoder improves the quality of the reenacted image by exploiting the proposed target feature alignment.

For further details, refer to Supplementary Material A1.

Image attention block

Figure 3: Architecture of the image attention block. Red boxes conceptually visualize how positions of the driver and target feature maps are associated. Our attention can attend to different positions of each target feature map with different importance.

To transfer the style information of targets to the driver, previous studies encoded target information as a vector and mixed it with the driver feature by concatenation or AdaIN layers [liu-arxiv-2019-funit, zakharov-arxiv-2019-samsung]. However, encoding targets as a spatially agnostic vector loses the spatial information of the targets. In addition, these methods lack an innate design for multiple target images; thus, summary statistics (e.g., mean or max) are used to deal with multiple targets, which may lose details of the target.

We suggest the image attention block (Figure 3) to alleviate the aforementioned problems. The proposed attention block is inspired by the encoder-decoder attention of the transformer [vaswani-nips-2017-transformer], where the driver feature map acts as an attention query and the target feature maps act as attention memory. The proposed attention block attends to proper positions of each feature map (red boxes in Figure 3) while handling multiple target feature maps.

Given the driver feature map z_x and the target feature maps S_1, …, S_n (stacked as S), the attention is calculated as follows:

    Q = f(z_x + P_x) W_q,   K = f(S + P_y) W_k,   V = f(S) W_v
    A(z_x, S) = softmax(Q Kᵀ / √d) V

where f is a flattening function, the W matrices are linear projections that map to the proper number of channels at the last dimension, d is the channel dimension of the keys, and P_x and P_y are sinusoidal positional encodings which encode the coordinates of the feature maps (further details of the sinusoidal positional encodings we used are described in Supplementary Material A2). Finally, the output is reshaped back to the spatial dimensions of the driver feature map.

Instance normalization, a residual connection, and a convolution layer follow the attention layer to generate the output feature map. The image attention block offers a direct mechanism for transferring information from multiple target images to the pose of the driver.
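The core of the block is scaled dot-product attention between a flattened driver feature map (query) and flattened target feature maps (memory). Below is a minimal NumPy sketch of that computation; positional encodings, normalization layers, and the paper's exact projection shapes are omitted, and all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def image_attention(driver_feat, target_feats, Wq, Wk, Wv):
    """Attention between a driver feature map (H, W, C) acting as the query
    and K target feature maps (K, H, W, C) acting as the memory."""
    H, W, C = driver_feat.shape
    q = driver_feat.reshape(H * W, C) @ Wq       # flatten spatial dims -> queries
    mem = target_feats.reshape(-1, C)            # (K*H*W, C): all target positions
    k, v = mem @ Wk, mem @ Wv                    # keys and values
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (H*W, K*H*W) attention weights
    out = attn @ v                               # mix target features per driver position
    return out.reshape(H, W, -1)
```

Each driver position can thus draw from every spatial position of every target image at once, which is what allows the block to pick the most relevant target regions.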

Target feature alignment

Figure 4: Architecture of target feature alignment.

The fine-grained details of the target identity can be preserved through the warping of low-level features [siarohin-cvpr-2019-monkeynet]. Unlike previous approaches that estimate a warping flow map or an affine transform matrix by computing the difference between keypoints of the target and the driver [balakrishnan-cvpr-2018-synthesizing, siarohin-cvpr-2018-deformablegan, siarohin-cvpr-2019-monkeynet], we propose target feature alignment (Figure 4), which warps the target feature maps in two stages: (1) target pose normalization generates pose-normalized target feature maps, and (2) driver pose adaptation aligns the normalized target feature maps to the pose of the driver. The two-stage process allows the model to better handle the structural disparities of different identities. The details are as follows:

  1. Target pose normalization. In the target encoder, the encoded feature maps are warped into pose-normalized feature maps by an estimated normalization flow map of the target and a warping function (step 1 in Figure 4). The following warp-alignment block in the decoder can then treat them in a target pose-agnostic manner.

  2. Driver pose adaptation. The warp-alignment block in the decoder receives the pose-normalized target feature maps and the output of the previous decoder block. In a few-shot setting, we average resolution-compatible feature maps from the different target images. To adapt the pose-normalized feature maps to the pose of the driver, we generate an estimated flow map of the driver using a convolution that takes the previous decoder output as input. Alignment by the warping function follows (step 2 in Figure 4). The result is then concatenated with the decoder feature and fed into the following residual upsampling block.
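Both stages rely on warping a feature map by an estimated flow with a bilinear sampler (which keeps the operation differentiable in the full model). A minimal NumPy sketch of the sampling arithmetic follows; the flow convention here, per-pixel (row, col) offsets with border clamping, is our assumption rather than a detail stated in the paper.

```python
import numpy as np

def bilinear_warp(feat, flow):
    """Warp a feature map (H, W, C) by a flow field (H, W, 2) that gives,
    for each output pixel, the (row, col) offset to sample from."""
    H, W, _ = feat.shape
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_r = np.clip(rows + flow[..., 0], 0, H - 1)   # sample coordinates,
    src_c = np.clip(cols + flow[..., 1], 0, W - 1)   # clamped to the border
    r0, c0 = np.floor(src_r).astype(int), np.floor(src_c).astype(int)
    r1, c1 = np.clip(r0 + 1, 0, H - 1), np.clip(c0 + 1, 0, W - 1)
    wr, wc = (src_r - r0)[..., None], (src_c - c0)[..., None]
    # Interpolate between the four neighboring feature vectors.
    top = feat[r0, c0] * (1 - wc) + feat[r0, c1] * wc
    bot = feat[r1, c0] * (1 - wc) + feat[r1, c1] * wc
    return top * (1 - wr) + bot * wr
```

With a zero flow the warp is the identity; the normalization and adaptation stages differ only in which flow map (target's or driver's) is fed in.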

Landmark Transformer

Large structural differences between two facial landmarks may lead to severe degradation of the quality of the reenactment. The usual approach to such a problem has been to learn a transformation for every identity [wu-eccv-2018-reenactgan] or to prepare paired landmark data with the same expressions [zhang-arxiv-2019-faceswapnet]. However, these methods are unnatural in a few-shot setting where we handle unseen identities, and moreover, labeled data is hard to acquire. To overcome this difficulty, we propose a novel landmark transformer which transfers the facial expression of the driver to an arbitrary target identity. The landmark transformer utilizes multiple videos of unlabeled human faces and is trained in an unsupervised manner.

Landmark decomposition

Given video footage of different identities, we denote the t-th frame of the i-th video by f(i, t) and its 3D facial landmark by x(i, t). We first transform every landmark into a normalized landmark x̄(i, t) by normalizing the scale, translation, and rotation. Inspired by 3D morphable face models [blanz-siggraph-1999-3dmm], we assume that normalized landmarks can be decomposed as follows:

    x̄(i, t) = x̄_m + x̄_id(i) + x̄_exp(i, t)

where x̄_m is the average facial landmark geometry computed by taking the mean over all landmarks, x̄_id(i) denotes the landmark geometry of identity i, computed by x̄_id(i) = (1/T_i) Σ_t x̄(i, t) − x̄_m, where T_i is the number of frames of the i-th video, and x̄_exp(i, t) corresponds to the expression geometry of the t-th frame. The decomposition leads to Σ_t x̄_exp(i, t) = 0.
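When enough frames per identity are available, the decomposition reduces to a pair of means. A NumPy sketch (the 68-point, 3D landmark shape is illustrative):

```python
import numpy as np

def decompose_landmarks(videos):
    """videos: list of arrays, each (T_i, 68, 3) of normalized 3D landmarks.
    Returns the mean geometry, per-identity geometries, and per-frame
    expression geometries."""
    mean_geom = np.mean(np.concatenate(videos), axis=0)       # average geometry
    id_geoms = [v.mean(axis=0) - mean_geom for v in videos]   # identity terms
    exp_geoms = [v - mean_geom - g for v, g in zip(videos, id_geoms)]
    return mean_geom, id_geoms, exp_geoms
```

By construction the expression terms of each video sum to zero over its frames, which is the property stated above.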

Given a target landmark x̄(i, ·) and a driver landmark x̄(j, t), we wish to generate the following landmark:

    x̄(i→j, t) = x̄_m + x̄_id(i) + x̄_exp(j, t)

i.e., a landmark with the identity of the target and the expression of the driver. Computing x̄_id and x̄_exp is possible if enough images of an identity are given, but in a few-shot setting, it is difficult to disentangle the landmark of an unseen identity into the two terms.

Landmark disentanglement

To decouple the identity and the expression geometry in a few-shot setting, we introduce a neural network to regress the coefficients for linear bases. Previously, such an approach has been widely used in modeling complex face geometries [blanz-siggraph-1999-3dmm]. We separate expression landmarks into semantic groups of the face (e.g., mouth, nose, and eyes) and perform PCA on each group to extract the expression bases from the training data:

    x̄_exp(i, t) = Σ_k α_k(i, t) b_k

where b_k and α_k represent the k-th basis and the corresponding coefficient, respectively.
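Per-group bases can be extracted with standard PCA. A NumPy sketch using the SVD; the group indices and number of bases are illustrative, not the paper's exact choices:

```python
import numpy as np

def expression_bases(exp_geoms, group_idx, n_basis=4):
    """PCA bases for one semantic landmark group (e.g., the mouth).
    exp_geoms: (N, 68, 3) per-frame expression geometries;
    group_idx: indices of the landmarks belonging to the group.
    Returns (n_basis, len(group_idx)*3) orthonormal basis rows."""
    X = exp_geoms[:, group_idx, :].reshape(len(exp_geoms), -1)
    X = X - X.mean(axis=0)                      # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_basis]                         # top principal directions
```

Projecting a new expression geometry onto these rows yields the coefficients α_k that the disentangler network is trained to regress.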

The proposed neural network, a landmark disentangler, estimates the coefficients α given an image and a landmark. Figure 5 illustrates the architecture of the landmark disentangler. Once the model is trained, the identity and the expression geometry can be computed as follows:

    x̄_exp(i, t) = λ Σ_k α_k(i, t) b_k,   x̄_id(i) = x̄(i, t) − x̄_m − x̄_exp(i, t)

where λ is a hyperparameter that controls the intensity of the predicted expressions from the network. Image features extracted by a ResNet-50, together with the landmark, are fed into a 2-layer MLP to predict α.

During inference, the target and the driver landmarks are processed according to Equation 6. When multiple target images are given, we take the mean value over all estimated identity geometries. Finally, the landmark transformer converts the landmark as:

    x̄(i→j, t) = x̄_m + x̄_id(i) + x̄_exp(j, t)

Denormalization to recover the original scale, translation, and rotation is followed by rasterization, which generates a landmark image adequate for the generator to consume. Further details of the landmark transformer are described in Supplementary Material B.
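The transformation itself is just a sum of the three geometry terms, with the few-shot identity estimates averaged over the given target images. A sketch with a hypothetical helper (denormalization and rasterization are omitted):

```python
import numpy as np

def transform_landmark(mean_geom, target_id_geoms, driver_exp):
    """Combine the target's identity geometry with the driver's expression.
    target_id_geoms: list of per-target-image identity estimates, averaged
    in the few-shot case; driver_exp: expression geometry of the driver frame."""
    target_id = np.mean(target_id_geoms, axis=0)
    return mean_geom + target_id + driver_exp
```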

Figure 5: Architecture of the landmark disentangler. Note that the landmark input is a set of points but is visualized as an image in the figure.

Experimental Setup


We trained our model and the baselines on VoxCeleb1 [nagrani-interspeech-2017-voxceleb], which contains videos of 1,251 different identities. We utilized the test split of VoxCeleb1 and CelebV [wu-eccv-2018-reenactgan] for evaluating self-reenactment and reenactment under a different identity, respectively. We created the test set by sampling 2,083 image sets from 100 randomly selected videos of the VoxCeleb1 test split, and uniformly sampled 2,000 image sets from every identity in CelebV. The CelebV data includes videos of five different celebrities with widely varying characteristics, which we utilize to evaluate the performance of the models when reenacting unseen targets, similar to an in-the-wild scenario. Further details of the loss function and the training method can be found in Supplementary Material A3 and A4.

Figure 6: Images generated by the proposed method and baselines, reenacting a different identity on CelebV in the one-shot setting.


MarioNETte variants, with and without the landmark transformer (MarioNETte+LT and MarioNETte, respectively), are compared with state-of-the-art models for few-shot face reenactment. Details of each baseline are as follows:

  • X2Face [wiles-eccv-2018-x2face]. X2face utilizes direct image warping. We used the pre-trained model provided by the authors, trained on VoxCeleb1.

  • Monkey-Net [siarohin-cvpr-2019-monkeynet]. Monkey-Net adopts feature-level warping. We used the implementation provided by the authors. Due to the structure of the method, Monkey-Net can only receive a single target image.

  • NeuralHead [zakharov-arxiv-2019-samsung]. NeuralHead exploits AdaIN layers. Since a reference implementation is absent, we made an honest attempt to reproduce the results. Our implementation is a feed-forward version of their model (NeuralHead-FF) where we omit the meta-learning as well as fine-tuning phase, because we are interested in using a single model to deal with multiple identities.


We compare the models based on the following metrics to evaluate the quality of the generated images. Structural similarity (SSIM) [wang-tip-2004-ssim] and peak signal-to-noise ratio (PSNR) evaluate the low-level similarity between the generated image and the ground-truth image. We also report the masked-SSIM (M-SSIM) and masked-PSNR (M-PSNR), where the measurements are restricted to the facial region.
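PSNR and its masked variant reduce to simple arithmetic on pixel errors. A sketch assuming 8-bit images and a binary face mask; the exact masking convention is our assumption, not spelled out in the paper:

```python
import numpy as np

def psnr(gt, pred, mask=None, max_val=255.0):
    """PSNR between ground-truth and generated images. With a binary
    facial-region mask, this gives the masked-PSNR (M-PSNR) variant."""
    err = (gt.astype(np.float64) - pred.astype(np.float64)) ** 2
    if mask is not None:
        mse = err[mask.astype(bool)].mean()   # restrict to facial region
    else:
        mse = err.mean()
    return 10 * np.log10(max_val ** 2 / mse)
```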

In the absence of a ground-truth image, i.e., when a different identity drives the target face, the following metrics are more relevant. Cosine similarity (CSIM) of embedding vectors generated by a pre-trained face recognition model [deng-cvpr-2019-csim] is used to evaluate the quality of identity preservation. To inspect the capability of the model to properly reenact the pose and the expression of the driver, we compute PRMSE, the root mean square error of the head pose angles, and AUCON, the ratio of identical facial action unit values, between the generated images and the driving images. OpenFace [baltrusaitis-ieee-2018-openface] is utilized to compute the pose angles and action unit values.
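Given per-frame pose angles and binarized action units from OpenFace, the two driver-consistency metrics can be sketched as follows; the exact aggregation across frames is our reading of the metric definitions:

```python
import numpy as np

def prmse(gen_angles, drv_angles):
    """Root mean square error between head-pose angles (e.g., yaw, pitch,
    roll) of generated and driving frames."""
    gen, drv = np.asarray(gen_angles, float), np.asarray(drv_angles, float)
    return float(np.sqrt(np.mean((gen - drv) ** 2)))

def aucon(gen_aus, drv_aus):
    """Ratio of identical (binarized) facial action-unit activations
    between generated and driving frames."""
    gen, drv = np.asarray(gen_aus, bool), np.asarray(drv_aus, bool)
    return float(np.mean(gen == drv))
```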

Experimental Results

Models were compared under self-reenactment and reenactment of different identities, including a user study. Ablation tests were conducted as well. All experiments were conducted under two different settings: one-shot and few-shot, where one or eight target images were used respectively.


Model (# targets) CSIM↑ SSIM↑ M-SSIM↑ PSNR↑ M-PSNR↑ PRMSE↓ AUCON↑
X2face (1) 0.689 0.719 0.941 22.537 31.529 3.26 0.813
Monkey-Net (1) 0.697 0.734 0.934 23.472 30.580 3.46 0.770
NeuralHead-FF (1) 0.229 0.635 0.923 20.818 29.599 3.76 0.791
MarioNETte (1) 0.755 0.744 0.948 23.244 32.380 3.13 0.825
X2face (8) 0.762 0.776 0.956 24.326 33.328 3.21 0.826
NeuralHead-FF (8) 0.239 0.645 0.925 21.362 29.952 3.69 0.795
MarioNETte (8) 0.828 0.786 0.958 24.905 33.645 2.57 0.850
Table 1: Evaluation result of self-reenactment setting on VoxCeleb1. Upward/downward pointing arrows correspond to metrics that are better when the values are higher/lower.

Table 1 shows the evaluation results of the models under the self-reenactment setting on VoxCeleb1. MarioNETte surpasses the other models in every metric under the few-shot setting, and outperforms them in every metric except PSNR under the one-shot setting. Nevertheless, MarioNETte shows the best performance in M-PSNR, which implies that it performs better on the facial region than the baselines. The low CSIM yielded by NeuralHead-FF is indirect evidence of the lack of capacity in AdaIN-based methods.

Reenacting Different Identity

Model (# target) CSIM PRMSE AUCON
X2face (1) 0.450 3.62 0.679
Monkey-Net (1) 0.451 4.81 0.584
NeuralHead-FF (1) 0.108 3.30 0.722
MarioNETte (1) 0.520 3.41 0.710
MarioNETte+LT (1) 0.568 3.70 0.684
X2face (8) 0.484 3.15 0.709
NeuralHead-FF (8) 0.120 3.26 0.723
MarioNETte (8) 0.608 3.26 0.717
MarioNETte+LT (8) 0.661 3.57 0.691
Table 2: Evaluation result of reenacting a different identity on CelebV. Bold and underlined values correspond to the best and the second-best value of each metric, respectively.
Model (# targets) vs. Ours vs. Ours+LT Realism
X2Face (1) 0.07 0.09 0.093
Monkey-Net (1) 0.05 0.09 0.100
NeuralHead-FF (1) 0.17 0.17 0.087
MarioNETte (1) - 0.51 0.140
MarioNETte+LT (1) - - 0.187
X2Face (8) 0.09 0.07 0.047
NeuralHead-FF (8) 0.15 0.16 0.080
MarioNETte (8) - 0.52 0.147
MarioNETte+LT (8) - - 0.280
Table 3: User study results of reenacting different identity on CelebV. Ours stands for our proposed model, MarioNETte, and Ours+LT stands for MarioNETte+LT.
Model (# target) CSIM PRMSE AUCON
AdaIN (1) 0.063 3.47 0.724
+Attention (1) 0.333 3.17 0.729
+Alignment (1) 0.530 3.44 0.700
MarioNETte (1) 0.520 3.41 0.710
AdaIN (8) 0.069 3.40 0.723
+Attention (8) 0.472 3.22 0.727
+Alignment (8) 0.605 3.27 0.709
MarioNETte (8) 0.608 3.26 0.717
Table 4: Comparison of ablation models for reenacting different identity on CelebV.

Table 2 displays the evaluation results of reenacting a different identity on CelebV, and Figure 6 shows images generated by the proposed method and the baselines. MarioNETte and MarioNETte+LT preserve the target identity adequately, thereby outperforming the other models in CSIM. The proposed method alleviates the identity preservation problem regardless of whether the driver has the same identity or not. While NeuralHead-FF exhibits slightly better performance in terms of PRMSE and AUCON compared to MarioNETte, the low CSIM of NeuralHead-FF portrays its failure to preserve the target identity. The landmark transformer significantly boosts identity preservation at the cost of a slight decrease in PRMSE and AUCON. The decrease may be due to the PCA bases for the expression disentanglement not being diverse enough to span the whole space of expressions. Moreover, the disentanglement of identity and expression is itself a non-trivial problem, especially in a one-shot setting.

User Study

Two types of user studies are conducted to assess the performance of the proposed model:

  • Comparative analysis. Given three example images of the target and a driver image, we displayed two images generated by different models and asked human evaluators to select the image with higher quality. The users were asked to assess the quality of an image in terms of (1) identity preservation, (2) reenactment of the driver's pose and expression, and (3) photo-realism. We report the winning ratio of baseline models compared to our proposed models. We believe that user-reported scores reflect the quality of different models better than indirect metrics.

  • Realism analysis. Similar to the user study protocol of [zakharov-arxiv-2019-samsung], three images of the same person, where two of the photos were taken from a video and the remaining one was generated by a model, were presented to human evaluators. Users were instructed to choose the image that differs from the other two in terms of identity, under a three-second time limit. We report the ratio of deception, which demonstrates the identity preservation and photo-realism of each model.

For both studies, 150 examples were sampled from CelebV, which were evenly distributed to 100 different human evaluators.

Table 3 illustrates that our models are preferred over existing methods, winning the realism comparison by a large margin. The results demonstrate the capability of MarioNETte to create photo-realistic reenactments while preserving the target identity in terms of human perception. We see a slight preference for MarioNETte over MarioNETte+LT, which agrees with Table 2, as MarioNETte+LT has better identity preservation capability at the expense of a slight degradation in expression transfer. Since MarioNETte+LT surpasses all other models in realism score, almost twice that of even MarioNETte in the few-shot setting, we consider the minor decline in expression transfer a good compromise.

Ablation Test

Figure 7: (a) Driver and target images overlapped with attention map. Brightness signifies the intensity of the attention. (b) Failure case of +Alignment and improved result generated by MarioNETte.

We performed an ablation test to investigate the effectiveness of the proposed components. While keeping all other things the same, we compare the following configurations reenacting different identities: (1) MarioNETte is the proposed method where both the image attention block and target feature alignment are applied. (2) AdaIN corresponds to the same model as MarioNETte, where the image attention block is replaced with an AdaIN residual block and the target feature alignment is omitted. (3) +Attention is MarioNETte with only the image attention block applied. (4) +Alignment employs only the target feature alignment.

Table 4 shows the results of the ablation test. For identity preservation (i.e., CSIM), AdaIN has a hard time combining style features when depending solely on AdaIN residual blocks. +Attention alleviates the problem immensely in both one-shot and few-shot settings by attending to proper coordinates. While +Alignment exhibits a higher CSIM than +Attention, it struggles to generate plausible images for unseen poses and expressions, leading to worse PRMSE and AUCON. Taking advantage of both attention and target feature alignment, MarioNETte outperforms +Alignment in every metric under consideration.

Entirely relying on target feature alignment for reenactment, +Alignment is vulnerable to failures caused by large pose differences between the target and the driver, which MarioNETte can overcome. Given a single driver image along with three target images (Figure 7a), +Alignment produces defects on the forehead (denoted by arrows in Figure 7b). This is due to (1) warping low-level features from a large-pose input and (2) aggregating features from multiple targets with diverse poses. MarioNETte, on the other hand, gracefully handles the situation by attending to the proper images among the several target images, as well as to adequate spatial coordinates within each target image. The attention map, highlighting the area the image attention block focuses on, is illustrated in white in Figure 7a. Note that MarioNETte attends to the forehead and to the adequate target images (Targets 2 and 3 in Figure 7a), which have a pose similar to the driver's.

Related Works

The classical approach to face reenactment commonly involves explicit 3D modeling of human faces [blanz-siggraph-1999-3dmm], where the 3DMM parameters of the driver and the target are computed from a single image and eventually blended [thies-siggraphasia-2015-realtimeexpression, thies-cvpr-2016-face2face]. Image warping is another popular approach, where the target image is modified using an estimated flow obtained from 3D models [cao-ieee-2013-facewarehouse] or sparse landmarks [averbuch-2017-siggraph-portraitstolife]. Face reenactment studies have embraced the recent success of neural networks, exploring different image-to-image translation architectures [isola-cvpr-2017-pix2pix], such as the works of [xu-arxiv-2017-facetransfer] and [wu-eccv-2018-reenactgan], which combined the cycle consistency loss [zhu-cvpr-2017-cyclegan]. A hybrid of the two approaches has been studied as well: [kim-siggraph-2018-deepvideoportraits] trained an image translation network which maps a reenacted render of a 3D face model into a photo-realistic output.

Architectures capable of blending the style information of the target with the spatial information of the driver have been proposed recently. The AdaIN layer [huang-cvpr-2017-adain, huang-eccv-2018-munit, liu-arxiv-2019-funit], attention mechanisms [zhu-2019-cvpr-progressiveattention, lathuiliere-arxiv-2019-attentionfusion, park-arxiv-2019-styleattention], deformation operations [siarohin-cvpr-2018-deformablegan, dong-2018-nips-softgatedgan], and GAN-based methods [bao2018towards] have all seen wide adoption. Similar ideas have been applied to few-shot face reenactment, such as image-level [wiles-eccv-2018-x2face] and feature-level [siarohin-cvpr-2019-monkeynet] warping, and AdaIN layers in conjunction with meta-learning [zakharov-arxiv-2019-samsung]. The identity mismatch problem has been studied through methods such as CycleGAN-based landmark transformers [wu-eccv-2018-reenactgan] and landmark swappers [zhang-arxiv-2019-faceswapnet]. While effective, these methods either require an independent model per person or a dataset with image pairs that may be hard to acquire.


Conclusion

In this paper, we have proposed a framework for few-shot face reenactment. Our proposed image attention block and target feature alignment, together with the landmark transformer, allow us to handle the identity mismatch caused by using the landmarks of a different person. The proposed method does not need an additional fine-tuning phase for identity adaptation, which significantly increases its usefulness when deployed in the wild. Our experiments, including human evaluation, demonstrate the strength of the proposed method.

One exciting avenue for future work is to improve the landmark transformer to better handle the landmark disentanglement to make the reenactment even more convincing.


Appendix A MarioNETte Architecture Details

Architecture design

Given a driver image and target images, the proposed few-shot face reenactment framework, which we call MarioNETte, first generates 2D landmark images. We utilize a 3D landmark detector [bulat-iccv-2017-facealignment] to extract facial keypoints, which include information about pose and expression. We further rasterize the 3D landmarks into an image with a rasterizer.

We utilize a simple rasterizer that orthogonally projects the 3D landmark points onto the 2D xy-plane and groups the projected landmarks into 8 categories: left eye, right eye, contour, nose, left eyebrow, right eyebrow, inner mouth, and outer mouth. For each group, lines are drawn between a predefined order of points with predefined colors (e.g., red, red, green, blue, yellow, yellow, cyan, and cyan, respectively), resulting in a rasterized image as shown in Figure 8.

Figure 8: Example of the rasterized facial landmarks.

MarioNETte consists of a conditional image generator and a projection discriminator. The discriminator determines whether the given image is a real image from the data distribution, taking into account the conditional input of the rasterized landmarks and the identity.

The generator is further broken down into four components: namely, the target encoder, driver encoder, blender, and decoder. The target encoder takes the target image and generates an encoded target feature map together with warped target feature maps. The driver encoder receives a driver image and creates a driver feature map. The blender combines the encoded feature maps to produce a mixed feature map. The decoder generates the reenacted image. The input image and the landmark image are concatenated channel-wise and fed into the target encoder.

The target encoder adopts a U-Net [ronneberger-miccai-2015-unet] style architecture with five downsampling blocks and four upsampling blocks with skip connections. Among the five feature maps generated by the downsampling blocks, the most downsampled feature map is used as the encoded target feature map, while the others are transformed into pose-normalized feature maps by the warping function and an estimated normalization flow map. The flow map is generated at the end of the upsampling blocks, followed by an additional convolution layer and a hyperbolic tangent activation layer, thereby producing a 2-channel feature map where each channel denotes the flow in the horizontal and vertical direction, respectively.

We adopt a bilinear-sampler-based warping function, which is widely used together with neural networks due to its differentiability [jaderberg-nips-2015-stn, balakrishnan-cvpr-2018-synthesizing, siarohin-cvpr-2019-monkeynet]. Since each feature map has a different width and height, average pooling is applied to downsample the flow map to match the size of each feature map.
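A minimal sketch of such a bilinear-sampler warp, written in plain Python for a single-channel map for brevity (a real implementation would operate on batched tensors, e.g., via `torch.nn.functional.grid_sample`):

```python
# Bilinear-sampler warp: sample a feature map at positions displaced
# by a per-pixel (dy, dx) flow field. Single channel, nested lists.

def bilinear_sample(fmap, y, x):
    """Bilinearly interpolate fmap at continuous coordinates (y, x)."""
    h, w = len(fmap), len(fmap[0])
    # Clamp to the valid range so out-of-bounds samples reuse border values.
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    top = fmap[y0][x0] * (1 - dx) + fmap[y0][x1] * dx
    bot = fmap[y1][x0] * (1 - dx) + fmap[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def warp(fmap, flow):
    """Warp fmap by a per-pixel (dy, dx) flow field of the same size."""
    h, w = len(fmap), len(fmap[0])
    return [
        [bilinear_sample(fmap, i + flow[i][j][0], j + flow[i][j][1])
         for j in range(w)]
        for i in range(h)
    ]
```

Because the interpolation weights are continuous in the flow values, gradients flow through the warp, which is what makes this sampler usable inside an end-to-end trained network.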

The driver encoder, which consists of four residual downsampling blocks, takes the driver landmark image and generates the driver feature map.

The blender produces the mixed feature map by blending the positional information of the driver feature map with the target style feature maps. We stack three image attention blocks to build our blender.

The decoder consists of four warp-alignment blocks followed by residual upsampling blocks. Note that the last upsampling block is followed by an additional convolution layer and a hyperbolic tangent activation function.

The discriminator consists of five residual downsampling blocks without self-attention layers. We adopt a projection discriminator with a slight modification: we remove the global sum-pooling layer from the original structure. Without the global sum-pooling layer, the discriminator produces scores over multiple patches, similar to the PatchGAN discriminator [isola-cvpr-2017-pix2pix].

We adopt the residual upsampling and downsampling blocks proposed by [brock-iclr-2019-biggan] to build our networks. All batch normalization layers are substituted with instance normalization, except in the target encoder and the discriminator, where normalization is absent. We use ReLU as the activation function. The number of channels is doubled (or halved) when the output is downsampled (or upsampled), with a minimum of 64 and a maximum of 512 channels for every layer. Note that the input image, used as input for the target encoder, driver encoder, and discriminator, is first projected through a convolutional layer to a channel size of 64.
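The channel schedule described above can be sketched as follows; the doubling/halving rule and the [64, 512] clamp are from the text, while the function name is illustrative:

```python
# Channel schedule: inputs are first projected to 64 channels, then
# channels double at each downsampling step (and halve when upsampling),
# clamped to the range [64, 512].

MIN_CH, MAX_CH = 64, 512

def channels_after(num_downsamples, start=MIN_CH):
    """Channel count after a given number of downsampling blocks."""
    ch = start
    for _ in range(num_downsamples):
        ch = min(ch * 2, MAX_CH)
    return ch
```

For example, the fifth downsampling block of the target encoder already sits at the 512-channel cap.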

Positional encoding

We utilize a sinusoidal positional encoding introduced by [vaswani-nips-2017-transformer] with a slight modification. First, we divide the number of channels of the positional encoding in half. Then, we utilize one half of them to encode the horizontal coordinate and the other half to encode the vertical coordinate. To encode the relative position, we normalize the absolute coordinate by the width and the height of the feature map. Thus, given a feature map, the corresponding positional encoding is computed as follows:


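A sketch of this encoding, assuming a Transformer-style sin/cos scheme with the usual 10000 scaling constant (the paper's exact formula was lost in extraction, so the frequency schedule here is an assumption):

```python
import math

# Modified sinusoidal positional encoding: half of the C channels encode
# the horizontal coordinate, half the vertical one, and absolute
# coordinates are normalized by the feature-map width/height.
# The 10000 constant follows the original Transformer (an assumption).

def positional_encoding(h, w, c):
    """Return an (h, w, c) nested-list positional encoding; c divisible by 4."""
    assert c % 4 == 0
    half = c // 2  # channels devoted to each axis

    def encode(pos, size):
        """sin/cos encoding of a coordinate normalized to [0, 1)."""
        rel = pos / size
        enc = []
        for i in range(half // 2):
            freq = 1.0 / (10000 ** (2 * i / half))
            enc.append(math.sin(rel * freq))
            enc.append(math.cos(rel * freq))
        return enc

    # First half of the channels: horizontal; second half: vertical.
    return [[encode(x, w) + encode(y, h) for x in range(w)]
            for y in range(h)]
```

Normalizing by the width and height makes the encoding depend only on the relative position within the map, so the same scheme applies to feature maps of different resolutions.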
Loss functions

Our model is trained in an adversarial manner using a projection discriminator [miyato-arxiv-2018-projdisc]. The discriminator aims to distinguish between a real image of an identity and a synthesized image produced by the generator. Since paired target and driver images from different identities cannot be acquired without explicit annotation, we train our model using target and driver images extracted from the same video. Thus, the identities of the target and the driver are always the same for every target and driver image pair during training.

We use the hinge GAN loss [lim-2017-arxiv-geometricgan] to optimize the discriminator as follows:

\mathcal{L}_D = \mathbb{E}_{x \sim p_{data}}\left[\max(0,\, 1 - D(x, y))\right] + \mathbb{E}_{\hat{x} \sim p_G}\left[\max(0,\, 1 + D(\hat{x}, y))\right]
The loss function of the generator consists of four components: the GAN loss, the two perceptual losses, and the feature matching loss. The GAN loss is the generator part of the hinge GAN loss, defined as follows:

\mathcal{L}_{GAN} = -\mathbb{E}_{\hat{x} \sim p_G}\left[D(\hat{x}, y)\right]
The perceptual loss [johnson-eccv-2016-perceptual-loss] is calculated by averaging the distances between the intermediate features of a pre-trained network computed on the ground-truth image and on the generated image. We use two different networks for the perceptual losses, extracting features from VGG19 and VGG-VD-16, trained on the ImageNet classification task [simonyan-2014-arxiv-vgg] and on a face recognition task [parkhi-bmvc-2015-vggface], respectively. We use features from the following layers to compute the perceptual losses: relu1_1, relu2_1, relu3_1, relu4_1, and relu5_1. The feature matching loss is the sum of the distances between the intermediate features of the discriminator computed on the ground-truth image and on the generated image, which helps to stabilize the adversarial training. The overall generator loss is the weighted sum of the four losses:

\mathcal{L}_G = \mathcal{L}_{GAN} + \lambda_{P}\,\mathcal{L}_{P} + \lambda_{PF}\,\mathcal{L}_{PF} + \lambda_{FM}\,\mathcal{L}_{FM},

where \mathcal{L}_{P} and \mathcal{L}_{PF} denote the two perceptual losses and \mathcal{L}_{FM} the feature matching loss.


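The perceptual and feature matching computations above can be sketched as follows. The norm subscript was lost in extraction, so an L1 distance is assumed here, and features are flat lists for brevity (in practice they are CNN activation maps):

```python
# Perceptual loss: average of per-layer feature distances.
# Feature matching loss: sum of per-layer feature distances.
# An L1 distance is assumed (the paper's norm subscript was lost).

def mean_abs_diff(a, b):
    """Mean absolute difference between two equally-sized feature lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def perceptual_loss(feats_real, feats_fake):
    """Average the per-layer distances over all selected layers."""
    layers = len(feats_real)
    return sum(mean_abs_diff(r, f)
               for r, f in zip(feats_real, feats_fake)) / layers

def feature_matching_loss(feats_real, feats_fake):
    """Sum (rather than average) the per-layer distances."""
    return sum(mean_abs_diff(r, f) for r, f in zip(feats_real, feats_fake))
```

Here `feats_real` and `feats_fake` would hold the intermediate activations of the fixed perceptual network (or of the discriminator, for feature matching) on the ground-truth and generated images, respectively.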
Training details

To stabilize the adversarial training, we apply spectral normalization [miyato-iclr-2018-spectralnorm] to every layer of the discriminator and the generator. In addition, we use the convex hull of the facial landmarks as a facial region mask and give three-fold weight to the masked positions while computing the perceptual loss. We use the Adam optimizer, with separate learning rates for the discriminator and for the generator and the style encoder. Unlike the setting of [brock-iclr-2019-biggan], we update the discriminator only once for every few generator updates. We set the three loss weights to 10, 0.01, and 10, and the number of target images to 4 during training.

Appendix B Landmark Transformer Details

Landmark decomposition

Formally, landmark decomposition is calculated as:

\ell_{i,j} = \bar{\ell} + \ell^{id}_{i} + \ell^{exp}_{i,j}, \qquad \bar{\ell} = \frac{1}{\sum_{i=1}^{M} T_i} \sum_{i=1}^{M} \sum_{j=1}^{T_i} \ell_{i,j}, \qquad \ell^{id}_{i} = \frac{1}{T_i} \sum_{j=1}^{T_i} \ell_{i,j} - \bar{\ell},

where M is the number of videos, T_i is the number of frames of the i-th video, \ell_{i,j} is the landmark of the j-th frame of the i-th video, and \ell^{exp}_{i,j} = \ell_{i,j} - \bar{\ell} - \ell^{id}_{i}. We can easily compute the components shown in Equation 13 from the training dataset.

However, when an image of an unseen identity is given, the decomposition of the identity and the expression shown in Equation 13 is not possible, since the expression component will be zero for a single image. Even when a few frames of an unseen identity are given, it will be zero (or near zero) if the expressions in the given frames are not diverse enough. Thus, to perform the decomposition shown in Equation 13 even under the one-shot or few-shot settings, we introduce the landmark disentangler.
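The decomposition can be sketched as follows, treating each landmark as a flat coordinate list and splitting it into an overall mean, a per-video identity component, and a per-frame expression residual (consistent with the observation above that the expression component vanishes for a single frame):

```python
# Landmark decomposition: landmark = overall mean + identity component
# + expression residual. Landmarks are flat coordinate lists.

def mean(vectors):
    """Element-wise mean of a list of equally-sized vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def decompose(videos):
    """videos: list of videos, each a list of per-frame landmark vectors."""
    all_frames = [f for video in videos for f in video]
    overall = mean(all_frames)
    identities, expressions = [], []
    for video in videos:
        vid_mean = mean(video)
        # Identity: how this person's mean shape deviates from the
        # population mean.
        identities.append([m - o for m, o in zip(vid_mean, overall)])
        # Expression: per-frame deviation from this person's mean shape.
        expressions.append([[f - m for f, m in zip(frame, vid_mean)]
                            for frame in video])
    return overall, identities, expressions
```

Note that for a single-frame video the per-video mean equals the frame itself, so the expression residual is exactly zero, which is why the decomposition fails in the one-shot setting.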

Landmark disentanglement

To compute the expression basis, using the expression geometry obtained from the VoxCeleb1 training data, we divide the landmarks into five groups (left eye, right eye, eyebrows, mouth, and the rest) and perform PCA on each group. We use PCA dimensions of 8, 8, 8, 16, and 8 for the respective groups, resulting in a total of 48 expression bases.
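A sketch of this grouped PCA, assuming NumPy is available; the per-group dimensions (8, 8, 8, 16, 8) are from the text, while the coordinate slices marking which entries belong to which group are illustrative:

```python
import numpy as np

# Grouped PCA expression basis: run PCA separately on each landmark
# group's expression vectors and keep 8/8/8/16/8 components,
# 48 bases in total.

GROUP_DIMS = {"left_eye": 8, "right_eye": 8, "eyebrows": 8,
              "mouth": 16, "other": 8}

def grouped_pca_bases(expr, group_slices):
    """expr: (N, D) matrix of expression vectors; returns per-group bases."""
    bases = {}
    for name, sl in group_slices.items():
        data = expr[:, sl]
        data = data - data.mean(axis=0, keepdims=True)
        # Principal directions are the right singular vectors of the
        # centered data matrix.
        _, _, vt = np.linalg.svd(data, full_matrices=False)
        k = min(GROUP_DIMS[name], vt.shape[0])
        bases[name] = vt[:k]
    return bases
```

Projecting an expression residual onto these 48 basis vectors yields the expression parameters that the landmark disentangler is trained to regress.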

We train the landmark disentangler on the VoxCeleb1 training set separately. Before training the landmark disentangler, we normalize each expression parameter to follow a standard normal distribution for the ease of regression training. We employ a ResNet50 pre-trained on ImageNet [he-2016-cvpr-deep] and extract features from the first layer up to the last layer right before the global average pooling layer. The extracted image features are concatenated with the normalized landmark minus the mean landmark and fed into a 2-layer MLP with a ReLU activation. The whole network is optimized by minimizing the MSE loss between the predicted and the target expression parameters, using the Adam optimizer. We use gradient clipping with a maximum gradient norm of 1 during training. We set the expression intensity parameter to 1.5.

Appendix C Additional Ablation Tests

Quantitative results

In Table 1 and Table 2 of the main paper, MarioNETte shows better PRMSE and AUCON under the self-reenactment setting on VoxCeleb1 compared to NeuralHead-FF, which, however, is reversed under the reenactment of a different identity on CelebV. We provide an explanation of this phenomenon through an ablation study.

Table 5 illustrates the evaluation results of the ablation models under the self-reenactment setting on VoxCeleb1. Unlike the evaluation results of reenacting a different identity on CelebV (Table 4 of the main paper), +Alignment and MarioNETte show better PRMSE and AUCON than AdaIN. The phenomenon may be attributed to the characteristics of the training dataset as well as the different inductive biases of the models. VoxCeleb1 consists of short video clips (usually 5-10s long), leading to similar poses and expressions between drivers and targets. Unlike the AdaIN-based model, which is unaware of spatial information, the proposed image attention block and target feature alignment encode spatial information from the target image. We suspect that this may lead the proposed model to overfit to same-identity pairs with a similar pose and expression.

Model (# target) CSIM PRMSE AUCON
AdaIN (1) 0.183 3.719 0.781
+Attention (1) 0.611 3.257 0.825
+Alignment (1) 0.756 3.069 0.827
MarioNETte (1) 0.755 3.125 0.825
AdaIN (8) 0.188 3.649 0.787
+Attention (8) 0.717 2.909 0.843
+Alignment (8) 0.826 2.563 0.845
MarioNETte (8) 0.828 2.571 0.850
Table 5: Comparison of ablation models for self-reenactment setting on VoxCeleb1 dataset.

Qualitative results

Figure 9 and Figure 10 illustrate the results of ablation models reenacting a different identity on CelebV under the one-shot and few-shot settings, respectively. While AdaIN fails to generate an image that resembles the target identity, +Attention successfully maintains the key characteristics of the target. The target feature alignment module adds fine-grained details to the generated image. However, MarioNETte tends to generate more natural images in a few-shot setting, while +Alignment struggles to deal with multiple target images with diverse poses and expressions.

Appendix D Inference Time

In this section, we report the inference time of our model. We measured the latency of the proposed method while generating images with different numbers of target images K. We ran each setting 300 times and report the average speed. We used an Nvidia Titan Xp GPU with PyTorch 1.0.1.post2. As mentioned in the main paper, we used the open-sourced implementation of [bulat-iccv-2017-facealignment] to extract 3D facial landmarks.

Description Symbol Inference time (ms)
3D Landmark Detector 101
Target Encoder 44 (K=1), 111 (K=8)
Target Landmark Transformer 22 (K=1), 19 (K=8)
Generator 35 (K=1), 36 (K=8)
Driver Landmark Transformer 26
Table 6: Inference speed of each component of our model.
Model Target encoding Driver generation
Table 7: Inference speed of the full model for generating single image with target images.

Table 6 displays the inference time breakdown of our models. The total inference time of the proposed models, MarioNETte+LT and MarioNETte, can be derived as shown in Table 7. While generating reenactment videos, the quantities used to compute the target encoding are generated only once at the beginning. Thus, we divide our inference pipeline into a target encoding part and a driver generation part.

Since we perform batched inference for multiple target images, the inference time of the proposed components (e.g., the target encoder and the target landmark transformer) scales sublinearly with the number of target images K. On the other hand, the open-source 3D landmark detector processes images sequentially, and thus its processing time scales linearly.

Appendix E Additional Examples of Generated Images

We provide additional qualitative results of the baseline methods and the proposed models on the VoxCeleb1 and CelebV datasets. We report qualitative results for both the one-shot and few-shot (8 target images) settings, except for Monkey-Net, which is designed to use only a single image. In the case of few-shot reenactment, we display only one target image due to limited space.

Figure 11 and Figure 12 compare different methods for self-reenactment on VoxCeleb1 in the one-shot and few-shot settings, respectively. Examples of one-shot and few-shot reenactment on VoxCeleb1, where the driver's and the target's identities do not match, are shown in Figure 13 and Figure 14, respectively.

Figure 15, Figure 16, and Figure 17 depict the qualitative results on the CelebV dataset. One-shot and few-shot self-reenactment results of various methods are compared in Figure 15 and Figure 16, respectively. The results of reenacting a different identity on CelebV under the few-shot setting can be found in Figure 17.

Figure 18 reveals failure cases generated by MarioNETte+LT while performing a one-shot reenactment under different identity setting on VoxCeleb1. Large pose difference between the driver and the target seems to be the main reason for the failures.

Figure 9: Qualitative results of ablation models of one-shot reenactment under different identity setting on CelebV.
Figure 10: Qualitative results of ablation models of few-shot reenactment under different identity setting on CelebV.
Figure 11: Qualitative results of one-shot self-reenactment setting on VoxCeleb1.
Figure 12: Qualitative results of few-shot self-reenactment setting on VoxCeleb1.
Figure 13: Qualitative results of one-shot reenactment under different identity setting on VoxCeleb1.
Figure 14: Qualitative results of few-shot reenactment under different identity setting on VoxCeleb1.
Figure 15: Qualitative results of one-shot self-reenactment setting on CelebV.
Figure 16: Qualitative results of few-shot self-reenactment setting on CelebV.
Figure 17: Qualitative results of few-shot reenactment under different identity setting on CelebV.
Figure 18: Failure cases generated by MarioNETte+LT while performing one-shot reenactment under different identity setting on VoxCeleb1.