Animating Through Warping: an Efficient Method for High-Quality Facial Expression Animation

08/01/2020 ∙ by Zili Yi, et al. ∙ 0

Advances in deep neural networks have considerably improved the art of animating a still image without operating in the 3D domain. However, prior methods can only animate small images (typically no larger than 512×512) due to memory limitations, the difficulty of training and the lack of high-resolution (HD) training datasets, which significantly reduces their potential for applications in movie production and interactive systems. Motivated by the idea that HD images can be generated by adding high-frequency residuals to low-resolution results produced by a neural network, we propose a novel framework known as Animating Through Warping (ATW) to enable efficient animation of HD images. Specifically, the proposed framework consists of two modules: a novel two-stage neural-network generator and a novel post-processing module known as Residual Warping (ResWarp). The framework only requires the generator to be trained on small images and can run inference on an image of any size. During inference, an HD input image is decomposed into a low-resolution component (128×128) and its corresponding high-frequency residuals. The generator predicts the low-resolution result as well as the motion field that warps the input face to the desired status (e.g., expression categories or action units). Finally, the ResWarp module warps the residuals based on the motion field and adds the warped residuals to the naively up-sampled low-resolution results to generate the final HD results. Experiments show the effectiveness and efficiency of our method in generating high-resolution animations. Our proposed framework successfully animates a 4K facial image, which has never been achieved by prior neural models. In addition, our method generally preserves the temporal coherency of the generated animations. Source code will be made publicly available.







1 Introduction

Animating a still image of an object or a scene in a controllable, efficient manner enables many interesting applications in the fields of image editing/enhancement, movie post-production and human-computer interaction. One popular direction in this domain involves parametric 3D face modeling followed by animation in the 3D domain, e.g., [9]. Approaches in this stream often require post-processing with computer graphics techniques, expensive equipment and significant amounts of labor to produce realistic results. In order to drive down the cost and time required to produce high-quality animations, researchers are looking into fully automatic 2D-based methods using machine learning techniques. Recent years have seen considerable progress of 2D-based methods in animating faces/heads [48, 13, 43, 39, 34, 32, 37, 49, 50, 18], human bodies [48, 45, 3, 58, 27], cartoon characters [29, 42], other objects [40] and scenes [48, 46, 8, 31, 35].

Among these methods, some animate a still image in a video-prediction manner [46, 45, 58] and thus do not need a driving sequence to direct the animation. Other methods use either exemplars or interpretable attributes to drive the animation. Depending on the application scenario, various forms of driving signals are used, including but not limited to key points [43, 16, 32, 37, 44, 50, 3, 14], label maps [48, 57], facial action units [34, 44, 18], skeletons [48], edge maps [48], expression categories [49, 44], RGB images [13, 50], motion hints [42, 8, 40, 41] and speech/audio [47, 50, 18, 29].

The majority of methods employ an end-to-end neural network to generate one frame at a time; merging a sequence of consecutive frames then forms an animation. One of the earliest methods was proposed by Shi et al. [54], who use a ConvLSTM generator to forecast future rainfall intensity in a local region based on the current rainfall intensity. The generator is trained in a supervised manner on real video sequences. [45, 46] inherit the idea of training a generator in a supervised manner on real video datasets, and innovatively introduce adversarial losses [15] to improve image quality. In 2018, a novel architecture known as Ganimation [34] adopted an image-to-image translation paradigm instead of a sequential model. In particular, it conditions the generation of an image on action units (AUs) [12] (a descriptor of facial actions), and produces temporally coherent animation sequences by continuously altering the AU codes. Different from prior models that are trained on real video datasets, Ganimation is trained with a set of discrete images and weakly labeled attributes (action units) of those images. [50, 3, 47, 16, 48, 44] continue to use the image-to-image translation paradigm, while employing different forms of driving signals (e.g., key points, edge maps, audio).

Since existing methods apply convolutional layers directly to the raw input, they cannot efficiently handle large images: memory usage during training and inference becomes extremely high as the input size increases. Even assuming training were feasible, collecting large amounts of high-resolution training data would be tedious and expensive. Motivated by the idea that HD images can be generated by adding high-frequency residuals to low-resolution results produced by a neural network, we propose a novel Animating-Through-Warping (ATW) framework that enables animation of an HD image with limited resources. The proposed framework inherits the image-to-image translation paradigm and is trained with a set of discrete images with labeled attributes.

Our proposed ATW framework consists of two modules: a neural-network-based generator and a Residual Warping (ResWarp) module. First, the raw input face is decomposed into a low-resolution component and high-frequency residuals. The generator processes the low-resolution component and predicts the low-resolution animation result as well as a motion field that warps the input face to the desired status. The ResWarp module then warps the residuals based on the up-sampled motion field and adds the warped residuals to the up-sampled low-resolution result to generate a large, sharp result. Since the generator only operates on low-resolution images, memory cost and computing time are significantly reduced. Moreover, as the generator can be trained on low-resolution images, the need for HD training datasets is alleviated. In theory, our method can animate a still image of any size. Exemplar results are shown in Figure 1. The contributions of the paper are summarized as below:

  • We propose a novel Animating-Through-Warping (ATW) framework that enables efficient animation of an HD image with satisfying quality. The framework has been successfully verified in terms of the application of one-shot facial expression animation. It successfully animates a 4K face, which is intractable for prior neural models.

  • As part of the ATW framework, we propose a novel two-stage generator that first predicts a coarse result by warping and then a refined result. In addition, a novel training loss known as the refine loss is proposed to enforce minimal change between the coarse and refined results, which is shown to improve the temporal coherency and image quality of generated image sequences.

  • We propose a novel multiscale ResWarp variation that warps image residuals at multiple scales, which brings noticeable improvements over vanilla ResWarp as demonstrated in experiments.

2 Related work

2.1 GANs

The generator used in our method is a conditional GAN [15, 30], in which the resulting image is conditioned on the input image to be animated and on driving signals that define the desired status. The driving signals can take various forms, including but not limited to expression categories, action units, key points and edge maps.

2.2 Predicting motion field for image editing

In the domain of facial image editing, methods such as [1] and [52] propose to predict a motion field that warps a face to the target expression. Although our method also predicts a motion field, we target animation rather than image editing, which additionally requires handling temporal coherency. In addition, in our proposed framework the motion field is used to warp both the low-resolution image and the high-frequency residual maps.

2.3 Animating still images through warping

Prior works such as [40, 8, 56, 41] explicitly predict optic flows to facilitate animating still images. However, our method differs from these in several aspects. In [40, 41], motion fields are refined from a sparse motion representation of key points and motion information extracted from the driving video, and in [56] motion fields are likewise estimated from existing image pairs, whereas we predict motion fields from scratch. [8] trains its model with ground-truth optic flows estimated from real videos, whereas our model is trained in a self-supervised manner with a set of discrete images. Lastly, none of these methods explicitly separates the warping of the low- and high-frequency components of the target image, and thus they are unable to reenact HD images. The maximum image sizes experimented with are 64×64, 256×256, 640×360 and 512×512 in [40], [41], [8] and [56], respectively.

2.4 Face reenactment/retargeting

Face reenactment/retargeting aims at generating movements of a target face based on a driving sequence. Typically the target face is provided through a video or a handful of photographs [59, 55, 53, 56, 23, 50]. When only one image is provided to define the target face, this is the same as our task. In the experimental section, we compare our method with face reenactment methods such as x2face [50] and few-shot vid2vid [56].

2.5 Video prediction

Predicting a video from previous frames is an important but challenging task [11, 2, 7]. When video prediction is conditioned on the first frame only (e.g., [46, 45, 58]), it is exactly the same task as animating a still image. However, video prediction does not involve a driving video or driving sequence, and thus does not allow flexible control over the generated videos, which differs from our task.

2.6 Image residuals and Laplacian Pyramid

The difference between an image and a blurred version of itself represents the high-frequency component of the image. We follow this concept to decompose an image into low- and high-frequency components [51]. The low-frequency component can be obtained by blurring and down-sampling, while the high-frequency component (i.e., the image residuals) can be acquired by subtracting the blurred version from the original image. A more sophisticated way to decompose an image is the Laplacian Pyramid [5], in which an image is down-sampled and decomposed at multiple levels, forming a pyramid of scale-disentangled representations. We exploit these techniques in the proposed Residual Warping module.

Figure 2: Overall Pipeline of the Proposed Method.

3 Method

3.1 Overall pipeline

Figure 2 (a) illustrates the overall pipeline of the proposed ATW framework, where the generator is the only trainable module. Given an HD input image, we first down-sample it to 128×128 and then up-sample it back to its original size to obtain a blurry large image. The residual image is computed by subtracting the blurry image from the input. Note that the height and width of the input image are not necessarily equal but must be multiples of 128, as the input image is evenly divided into 128×128 grids and pixel values in each grid are averaged during down-sampling. The generator takes the low-resolution image I (i.e., 128×128) and generates a manipulated image based on the driving signal (e.g., expression categories, action units). Meanwhile, a motion field is predicted by the Motion Network of the generator. The predicted motion field is up-sampled to the same size as the input and then used to warp the residual image. Finally, adding the warped residuals to the up-sampled low-resolution result yields a high-resolution, sharp output.
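The decomposition and recombination steps above can be sketched in a few lines. In the following NumPy snippet the function names are ours, and nearest-neighbour up-sampling stands in for the bilinear interpolation used in the paper; it illustrates that the blur-plus-residual split is exactly invertible:

```python
import numpy as np

def downsample_avg(img, k):
    """Average-pool an (H, W) image over k x k grids, as in the paper's
    down-sampling step (H and W must be multiples of k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample_nearest(img, k):
    """Nearest-neighbour up-sampling (the paper uses bilinear, which only
    changes how the blur looks, not the decomposition identity below)."""
    return img.repeat(k, axis=0).repeat(k, axis=1)

# Decompose an HD image into a blurry base and a high-frequency residual.
hd = np.random.rand(512, 512)
k = 512 // 128                      # down to the generator's 128x128 input
low = downsample_avg(hd, k)         # what the generator actually sees
blur = upsample_nearest(low, k)     # naively up-sampled low-res component
residual = hd - blur                # high-frequency residuals

# The decomposition is exactly invertible: blur + residual == hd.
assert np.allclose(blur + residual, hd)
```

In the full pipeline, the generator manipulates `low` and the residual is warped before the final addition; the identity above is what guarantees no detail is lost by the decomposition itself.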

Figure 2 presents the process of generating a single frame. We can generate consecutive frames by either continuously changing the value of the driving signals or linearly interpolating the motion field.

3.2 Network architecture

The network architecture of the generator is shown in Figure 2 (b). The generator is made up of two sub-networks: the Motion Network predicts a motion field that warps the input image to the desired status defined by the driving signal, and the Refinement Network refines the warping result to make it visually more realistic. The generator takes the image I and a driving signal as inputs and predicts the desired image. The inputs and outputs of the generator are expected to be of size 128×128. The driving signal is a vector that defines what is desired in the output; it may take various forms, including expression categories and action units. The vector is repeated and tiled to a 128×128×d tensor (d being the dimensionality of the vector) before being concatenated to I. The predicted motion field indicates the horizontal and vertical displacement of each pixel. A coarse result is then obtained by warping the input image I with the motion field, and the Refinement Network processes the coarse result to predict a finer one. Both the Motion Network and the Refinement Network are designed with an encoder-decoder architecture, in which the input is convolved and down-sampled twice, processed by bottleneck layers, and finally deconvolved and up-sampled twice back to the original size. The bottleneck layers are made up of several residual blocks (6 for the Motion Network, 4 for the Refinement Network).
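The warping step at the heart of the generator can be illustrated with a minimal backward-warping routine. This NumPy sketch uses nearest-neighbour sampling and a sign convention of our choosing; a real implementation would use differentiable bilinear sampling (e.g., torch.nn.functional.grid_sample) so the Motion Network can be trained end to end:

```python
import numpy as np

def warp(img, flow):
    """Backward-warp a single-channel image by a per-pixel displacement
    field (flow[..., 0] = dx, flow[..., 1] = dy): output pixel (y, x)
    samples the input at (y - dy, x - dx), rounded to the nearest pixel
    and clipped at the image borders."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return img[src_y, src_x]

img = np.arange(16.0).reshape(4, 4)
shift = np.zeros((4, 4, 2))
shift[..., 0] = 1.0                     # move every pixel one step right
out = warp(img, shift)
assert np.array_equal(out[:, 1:], img[:, :-1])
```

The same routine, applied with the up-sampled motion field, is what the ResWarp module uses on the residual maps.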

Instead of leaving the last layer of the Motion Network unbounded, we use tanh as the activation function so that the output is bounded within [-1, 1]. A scaling factor is then multiplied with the estimated motion values to adapt them to specific image sizes. We use instance normalization and ReLU for each layer in the Motion Network, while the Refinement Network uses ELU and no normalization layers to preserve color consistency between the coarse and refined results, as ReLU and normalization layers have been found to deteriorate color consistency [20]. Pixel values of all images processed by the network lie within the range [-1, 1].

3.3 Training methodology

3.3.1 Training losses

The training objective we define consists of five terms: the adversarial loss, the driving loss, the warp loss, the refine loss and the cycle loss (see Figure 2 (b)). The adversarial loss forces the distribution of generated images to comply with that of real images; the driving loss drives the refined result to satisfy what is desired by the driving signal; the warp loss is similar to the driving loss, pushing the warping result to fit the driving signal; the refine loss minimizes the change of the refined result from the coarse result; and the cycle loss ensures that the input image I can be recovered from the generated result by using the input's own attributes as the driving signal.

To facilitate the training of the generator, we use two discriminators: D_w acting on the warping result and D_r on the refined result. Discriminator D_r takes an image as input and predicts two logits: the adversarial logit D_adv, which rates the realism of an image, and the driving logit D_drv, which maps the input image to the corresponding driving signal. As we use the WGAN-GP [17] loss as our adversarial loss, the adversarial loss for Discriminator D_r is written as:

L_adv = E_{Î}[D_adv(Î)] − E_{I}[D_adv(I)] + λ_gp E_{Ī}[(‖∇_Ī D_adv(Ī)‖_2 − 1)^2]

where D_adv is the adversarial logit of D_r; I, Î and Ī are real images, generated images and the interpolations between them, respectively; the expectations are taken over the distributions of real images I and driving signals; and λ_gp is the coefficient for the gradient penalty term, typically set to 10.

The driving loss for discriminator D_r is written as:

L_drv = E_{I,c}[ d(c, D_drv(I)) ]

where d(·,·) measures the disparity between the ground-truth annotation c and the predicted signal vector D_drv(I). The form of d depends on the form of the driving signal. In our experimental setting, we use cross entropy to indicate disparities of expression categories and mean squared error (MSE) to measure distances between action units.
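As a concrete illustration of the disparity measure d, the two variants described above might look as follows (the function names are hypothetical):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def disparity_categorical(true_idx, logits):
    """Cross-entropy: used when the driving signal is an expression category."""
    return -np.log(softmax(logits)[true_idx])

def disparity_au(true_aus, pred_aus):
    """Mean squared error: used when the driving signal is an AU vector."""
    return np.mean((np.asarray(true_aus) - np.asarray(pred_aus)) ** 2)

# A perfect AU prediction has zero disparity; a confident correct
# classification has lower cross-entropy than an uncertain one.
assert disparity_au([0.3, 0.7], [0.3, 0.7]) == 0.0
assert disparity_categorical(0, np.array([10.0, 0.0, 0.0])) < \
       disparity_categorical(0, np.array([0.0, 0.0, 0.0]))
```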

The training loss for Discriminator D_r is written as:

L_{D_r} = λ_drv L_drv + λ_adv L_adv

where λ_drv and λ_adv are coefficients for the driving loss term and adversarial loss term, respectively.

Discriminator D_w has only one head, which maps an image to the corresponding driving signal. Therefore, it is trained with the driving loss only:

L_{D_w} = E_{I,c}[ d(c, D_w(I)) ]
The adversarial loss for the generator is defined as below:

L_adv^G = −E_{Î}[ D_adv(Î) ]

where Î is the refined result produced by the generator.
We also employ a reconstruction loss (also known as a cycle loss [59, 55]) to force consistency of the prediction with the original image. We use the L1 distance to compute this loss, as it has been shown to encourage sharper results. The cycle loss is written as:

L_cyc = E_{I,c}[ ‖ I − G(G(I, c), c_I) ‖_1 ]

where ‖·‖_1 is the L1 norm and c_I is the driving signal describing the original image I.

The driving loss and warp loss force consistency of the outputs with the target driving signal c. They are written as:

L_drv^G = E_{I,c}[ d(c, D_drv(Î)) ],    L_warp^G = E_{I,c}[ d(c, D_w(Ĩ)) ]

where Î and Ĩ denote the refined and coarse warping results, respectively.
The refine loss forces minimal modification of the refined result Î upon the coarse warping result Ĩ, and is thus written as:

L_ref = E_{I,c}[ ‖ Î − Ĩ ‖_1 ]

The proposed refine loss has two benefits: (1) it maximizes the change brought by warping, so that motion and appearance are fully decoupled and the motion field is well aligned with the low- and high-frequency components; (2) it minimizes the color variance caused by refinement to avoid color flickering, which helps improve the temporal coherency of generated image sequences.

The Motion Network and the Refinement Network are trained jointly with a weighted sum of the five losses, as shown in Equation 10:

L_G = λ_adv L_adv^G + λ_drv L_drv^G + λ_warp L_warp^G + λ_ref L_ref + λ_cyc L_cyc

where λ_adv, λ_drv, λ_warp, λ_ref and λ_cyc are coefficients for the loss terms. The values used in our final implementation were chosen empirically; we suggest further tuning of the configuration based on specific requirements.

Figure 3: Processing pipeline of multiscale ResWarp.

3.3.2 Training procedure

We follow the standard WGAN-GP training procedure [17], in which the discriminators and the generator are alternately trained with the losses defined in Equations 4, 3 and 10: see Algorithm 1.

while G has not converged do
        for i = 1,…,5 do
                Sample a batch of images I from the training data;
                Randomly sample driving signals c;
                Get the manipulated image G(I, c);
                Sample a random number α ∼ U(0, 1);
                Get the interpolation Ī = αI + (1 − α)G(I, c);
                Update discriminator D_r with loss L_{D_r};
                Update discriminator D_w with loss L_{D_w};
        end for
        Sample a batch of images I from the training data;
        Randomly sample driving signals c;
        Get the result G(I, c);
        Update generator G with loss L_G;
end while
Algorithm 1 Training of our proposed network
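The control flow of Algorithm 1, with five discriminator updates per generator update as in standard WGAN-GP training, can be sketched as a skeleton in which the sampling and update steps are placeholder callables (all names here are ours, not from the paper's code):

```python
def train_loop(update_discriminators, update_generator, has_converged,
               n_critic=5):
    """Skeleton of Algorithm 1: n_critic discriminator updates per
    generator update. The three callables stand in for the real
    batch-sampling, interpolation and gradient-step code."""
    steps = {"d": 0, "g": 0}
    while not has_converged(steps):
        for _ in range(n_critic):
            update_discriminators()   # sample batch, real/fake/interpolated
            steps["d"] += 1
        update_generator()            # sample batch, update G with L_G
        steps["g"] += 1
    return steps

# Toy run: stop after 3 generator steps; 5 critic steps precede each one.
counts = train_loop(lambda: None, lambda: None, lambda s: s["g"] >= 3)
assert counts == {"d": 15, "g": 3}
```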

3.3.3 Data preprocessing/augmentation

Some prior works (e.g., [34]) only process tightly cropped faces, which limits the range of applications. In our experiments, we enlarge the facial region to ensure that the whole headshot is covered by the image. During training, we also randomly crop/flip the images and adjust color values (e.g., brightness, contrast and hue) to augment the data.

3.4 Residual warping module

As demonstrated in Figure 2 (a), the key idea of the ResWarp module is to warp the input residual map and then add the warped residuals onto the up-sampled low-resolution result to generate a large, sharp result. In Figure 2 (a), the raw input is down-sampled once, yielding only one residual map; we refer to this variant as vanilla ResWarp.

An alternative is to down-sample the raw input multiple times and compute a pyramid of residual maps. As illustrated in Figure 3, each time we halve the size of the input image by down-sampling and compute a residual map between the images before and after down-sampling. We repeat the process until the image size reaches 128×128. After obtaining the low-resolution animated result from the generator, we up-sample it by a factor of 2 and add a warped residual map to it, repeating the up-sampling and addition until the result reaches the raw size. All residual maps are warped by up-sampled versions of the same motion field. Note that the up-sampling of the motion field and images is based on bilinear interpolation, while down-sampling is based on neighbor averaging. As the residual maps are warped at multiple scales and the final result is obtained in a coarse-to-fine manner, we call this variant multiscale ResWarp. We experimented with both the multiscale and vanilla ResWarp variants and report the analysis in the experimental results.
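A minimal sketch of the pyramid decomposition and coarse-to-fine recombination used by multiscale ResWarp, assuming square inputs and nearest-neighbour resampling in place of the bilinear/averaging operators described above; in the full module, each residual would additionally be warped by the up-sampled motion field before the addition:

```python
import numpy as np

def build_pyramid(img, base=128):
    """Halve the image until `base`, keeping a residual at each scale
    (a Laplacian-pyramid-style decomposition)."""
    residuals = []
    while img.shape[0] > base:
        h, w = img.shape
        low = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        up = low.repeat(2, axis=0).repeat(2, axis=1)  # paper uses bilinear
        residuals.append(img - up)                    # finest residual first
        img = low
    return img, residuals

def reconstruct(base_img, residuals):
    """Coarse-to-fine reconstruction: up-sample by 2 and add a residual
    at each level (multiscale ResWarp would warp each residual first)."""
    out = base_img
    for res in reversed(residuals):
        out = out.repeat(2, axis=0).repeat(2, axis=1) + res
    return out

img = np.random.rand(512, 512)
base, residuals = build_pyramid(img)
assert base.shape == (128, 128) and len(residuals) == 2
assert np.allclose(reconstruct(base, residuals), img)
```

With an identity motion field, the reconstruction is exact; applying the warp at each level is what distinguishes multiscale ResWarp from this plain pyramid.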

4 Experimental Results

4.1 Implementation details

We evaluate the proposed method on three datasets: CelebA [28], EmotionNet [10] and RaFD [25] [36]. All facial images in these datasets are aligned, cropped and resized to 128×128 before being used for training. All models are trained on an NVIDIA 1080 Ti GPU with images of resolution 128×128 and a batch size of 10. After training, the models are tested on unseen images at resolutions ranging from 512×512 to 4K on one GPU. The final model has a total of 2.7M to 9.4M parameters, varying by dataset. The model is implemented using PyTorch 0.4.1 and TorchVision 0.2.1.

In particular, we trained on CelebA for both attribute-driven and example-based facial animation. For attribute-driven facial animation, we use expression categories as the driving signal. Since the CelebA dataset only provides labels for two categories (non-smiling and smiling), we train a model that can drive a given face from non-smile to smile, or vice versa. As expression categories are not continuous, we cannot generate an animation sequence by changing the driving signal. Therefore, we first generate a motion field for the target expression and then produce the animation sequence by linearly interpolating the motion field [34].
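Linearly interpolating the motion field amounts to scaling the predicted target field by a sequence of factors, one per frame; a sketch (function name ours):

```python
import numpy as np

def interpolate_flow(flow, n_frames):
    """Linearly scale a target motion field to get one intermediate
    field per frame, from no motion (factor 0) to full motion (factor 1).
    Warping the input with each field yields the animation sequence."""
    return [a * flow for a in np.linspace(0.0, 1.0, n_frames)]

flow = np.random.rand(128, 128, 2)       # predicted target motion field
fields = interpolate_flow(flow, 6)
assert np.allclose(fields[0], 0.0)       # first frame: the input itself
assert np.allclose(fields[-1], flow)     # last frame: full target motion
```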

To enable animating a face based on exemplars, we use action units (AUs) as driving signal. A third-party AU estimator, OpenFace [38], is employed to generate annotations for training images. Once trained, the model can drive a still face to mimic the target facial action using a continuous sequence of AUs extracted from an exemplar video.

The EmotionNet and RaFD datasets provide abundant categorical expression labels (26 categories for EmotionNet, 8 for RaFD). Therefore, we mainly experimented with attribute-driven facial animation on these two datasets, training a separate model for each dataset. During inference, a single model per dataset is able to generate animations for all available expression categories.

4.2 Intermediate and final results

In Figure 4, we visualize the learned motion fields, coarse warping results, refined results, warped residual maps and final outputs. Figure 4 demonstrates that our Motion Network produces reasonable motion fields and warping results. However, without refinement, the warping results may exhibit distortions and artifacts, and suffer from a lack of detail in certain regions. The refined results given by the Refinement Network look more realistic and have fewer artifacts. In summary, the Motion Network aims to generate reasonable geometric deformations (e.g., lips stretched for smiling faces) and is able to identify which parts of the face must be deformed to achieve the target facial expression. The Refinement Network further suppresses noise and artifacts and adds textures to the warping results to make images look more realistic. The ResWarp module transfers the motion field to the residuals. The final result is obtained by adding the warped residual map to the up-sampled refined result. Note that all results shown in Figure 4 are produced with an input size of 512×512.

Figure 4: Illustration of intermediate and final results produced with our method. The input size is 512×512. The motion fields and low-resolution results are originally 128×128 and are up-sampled by a factor of 4 for visualization purposes. The top, middle and bottom rows are produced by models trained on EmotionNet, RaFD and CelebA, respectively. The driving signal in this setting is expression categories. In particular, the source and target expressions from top to bottom are: angrily surprised→sadly surprised, disgusted→sad, neutral→smile.

In addition, Figure 5 illustrates results of attribute-driven facial animation, in which the animation sequence is generated by linearly interpolating the motion field. Figure 6 illustrates example-based animation results obtained by using exemplar AUs as driving signals. Figure 1 demonstrates high-resolution animation results at sizes ranging from 512×512 to 4K. More animation results are provided in the supplementary materials.

Figure 5: Attribute-driven animation results, produced by linearly interpolating the motion fields with increasing interpolation factors. Source and target expressions from top to bottom are: sadly fearful→appalled, disgusted→angry, smile→neutral.
Figure 6: Example-based animation results, produced with different driving videos as examples.

4.3 Comparisons with facial reenactment methods

We compare our method with state-of-the-art methods including Ganimation [34], x2face [50] and few-shot vid2vid [56] on the application of example-based facial animation, where an input image is animated based on a driving video. To guarantee fairness, we use the same set of 100 image-video pairs, in which the 100 images are randomly selected from the FFHQ dataset [22] and the driving videos are randomly selected from the original split of the FaceForensics dataset [36]. Note that the input images have a size of 1024×1024 and are unseen by all contesting models.

As the officially provided pretrained Ganimation model was trained on tightly cropped faces of size 128×128, we crop the face regions from the input images and resize them to 128×128 before testing. Once processed, we resize the animated faces back to their original sizes and paste them into the original images. The official pretrained model of x2face was trained on the VoxCeleb video dataset with an image size of 256×256. Few-shot vid2vid is trained on FaceForensics [36] with an image size of 128×128. As official pretrained models of few-shot vid2vid [56] are not provided, we use the official source code to train a new model. Limited by GPU memory, we can only train this model on 128×128 images with a single GTX 1080 Ti GPU, which has a total memory of 11 GB. Our ATW model is trained on the CelebA dataset with an image size of 128×128 using AUs as the driving signal, and it can run inference on the 1024×1024 test images. Note that few-shot vid2vid and x2face can perform either one-shot or many-shot facial animation, but here we only evaluate the task of one-shot animation.

The driving signals of Ganimation, x2face and few-shot vid2vid differ: action units, driving vectors extracted from RGB images, and edge maps connecting facial landmarks, respectively. We extract this information from the driving videos using either external tools (OpenFace [38] for AUs, Face-Alignment [4] for facial landmarks) or utilities provided by the method itself (the driving network provided in [50] for driving vectors).

4.3.1 Qualitative comparison

Visual comparisons between all contesting methods are shown in Figure 7. We find that our method produces the sharpest and most realistic results, while the identity of the generated faces is well preserved. x2face generates unrealistic expressions, while Ganimation produces visible artifacts in the mouth regions. Few-shot vid2vid produces realistic expressions but fails to accurately preserve the input identity.

Figure 7: Comparison against the state of the art for example-based facial animation. All results are generated with the same input image-video pair and resized to 1024×1024 for fair comparison. The driving signal for each resulting image is visualized in the bottom left corner. Please zoom in to see more details.
Figure 8: Ablation studies on training losses in the task of example-based facial animation. "No X" indicates that loss term X was removed, and "Full" indicates that all loss terms were included.
Method | train dataset | driving signal | train size | test size | FID | CSS (ID) | FWE | AU Err. | Human | Mem | Time
Ganimation [34] | CelebA | AU | 128×128 | 128×128 | 114.6 | 0.798 | 12.5 | 8.1 | 19.0% | 653MB | 16.8ms
Few-shot vid2vid [56] | FaceForensics | edge map | 128×128 | 128×128 | 146.7 | 0.675 | 9.4 | 6.4 | 26.6% | 1559MB | 29.3ms
x2face [50] | VoxCeleb | embedding | 256×256 | 256×256 | 126.8 | 0.702 | 12.0 | 10.4 | 19.6% | 1079MB | 4.1ms
Ours | CelebA | AU | 128×128 | 1024×1024 | 107.0 | 0.776 | 6.2 | 4.5 | 34.8% | 669MB | 21.3ms
Table 1: Evaluation results on 100 image-video pairs. The first five columns give model details, the next five quality metrics, and the last two efficiency. Note that memory consumption and time cost are measured per frame.

4.3.2 Quantitative comparison

We use four objective metrics to measure the quality of generated animations. First, FID [19] is used to measure the feature distribution discrepancy between real and generated faces. Second, the correctness of generated expressions is evaluated by comparing the AU features of the driving faces with those of the generated faces; specifically, the mean of the sum of squared errors between the AU vector of a generated expression and that of the corresponding driving expression indicates the correctness of expression imitation. Third, we use the Flow Warping Error (FWE) as defined in [24] to measure the temporal coherency of generated animations, with FlowNet2 [21] as the optic flow estimator. Finally, we employ a pretrained face recognizer, VGGFace [33], to assess the preservation of facial identity in animated sequences. Specifically, the activations of the second-to-last (bottleneck) layer of the VGGFace model are used as the embedding of face identity. The embedding vector of the input face is compared with that of each animated face, and a cosine similarity score (CSS) between them is computed. The mean CSS over all animated faces is used as the indicator of identity preservation (the higher the better).
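The CSS metric itself is just a cosine similarity between embedding vectors; a sketch, with toy vectors standing in for the VGGFace bottleneck activations:

```python
import numpy as np

def css(a, b):
    """Cosine similarity between two identity embeddings."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Mean CSS over all animated frames indicates identity preservation.
ref = np.array([1.0, 0.0, 1.0])                       # input-face embedding
frames = [np.array([1.0, 0.1, 1.0]), np.array([0.9, 0.0, 1.1])]
mean_css = float(np.mean([css(ref, f) for f in frames]))
assert 0.9 < mean_css <= 1.0                          # identity well preserved
```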

Results in Table 1 demonstrate that our framework achieves the lowest FID score among all methods, implying that the distribution of our generated images is closest to that of real images. In addition, the lowest Flow Warping Error and AU Error achieved by our method indicate its effectiveness in preserving the temporal coherency of generated animation sequences and imitating the target expression. Both Ganimation and our model achieve high identity similarity and preserve face identities well. In terms of test efficiency, our method costs time and memory comparable to the competitors while generating significantly higher-resolution results.

Figure 9: Comparisons of multiscale ResWarp against vanilla ResWarp for example-based facial expression animation. The multiscale ResWarp appears to produce more realistic textures.

We also conduct a user study comparing the methods on 100 randomly selected image-AU pairs. Given an input image, 20 Amazon Mechanical Turk workers are instructed to choose the best result based on the quality of the generated animation, perceptual realism and preservation of identity. The percentage of ratings favoring each method is then computed and compared: see Table 1. The human assessment results demonstrate that our method obtains the highest average rating among the facial animation methods.

4.4 Ablation Study

4.4.1 Analysis of training losses

We carried out ablation studies on the training losses for the generator. We trained the generator with a specific loss term removed from Eq. 10 and observed how the results change, as shown in Figure 8. We observe that without the driving loss, the generated image is almost identical to the input and fails to achieve what is desired by the driving signal. Without the adversarial loss, the result suffers from severe artifacts. Without the cycle loss, the identity is less well preserved. Without the refine loss and warp loss, some minor artifacts become visible: refer to the eyes in Figure 8. We also conducted quantitative evaluations, as shown in Table 2, which further verify the contribution of each loss term. The proposed refine loss contributes to improvements in temporal coherency and image quality. In practice, we recommend careful tuning of the hyper-parameters based on specific requirements.

Configuration FID ID FWE AU Error
No driving loss 0.2 0.99 0.6 9.8
No adversarial loss 146.7 0.512 7.3 5.7
No cycle loss 106.2 0.714 6.2 4.1
No refine loss 112.4 0.773 6.9 4.4
No warp loss 109.1 0.782 6.0 4.8
Full 107.0 0.776 6.2 4.5
Table 2: Quantitative evaluation results on 100 image-video pairs when using different configurations of training losses. Drawbacks of each configuration are highlighted in red color.
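Conceptually, each ablated configuration simply switches off one term of the full objective. The sketch below is a generic illustration, not the paper's code: the weight values and term names are placeholders standing in for the hyper-parameters of Eq. 10.

```python
# Placeholder weights: the actual values are the hyper-parameters of Eq. 10.
WEIGHTS = {"driving": 1.0, "adversarial": 1.0, "cycle": 1.0, "refine": 1.0, "warp": 1.0}

def total_loss(terms, weights=WEIGHTS):
    """Weighted sum of the generator's loss terms."""
    return sum(weights[name] * value for name, value in terms.items())

def ablate(weights, removed):
    """Return a copy of the weights with one term switched off,
    as done for each row of Table 2."""
    w = dict(weights)
    w[removed] = 0.0
    return w
```

Each "No" configuration in Table 2 then corresponds to training with one weight zeroed while all others are left unchanged.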

4.4.2 Analysis of multiscale and vanilla ResWarp

As mentioned in Sect. 3.4, we experimented with two variants of ResWarp: multiscale ResWarp and vanilla ResWarp. Multiscale ResWarp requires computing a Laplacian pyramid and warps the residuals at multiple scales, making it slightly less efficient than vanilla ResWarp. In terms of quality, we observe little difference between them for images smaller than 1024x1024. However, as the image resolution grows, multiscale ResWarp generates visually better results: see Figure 9, especially the left eye of the young man in the first row. This matches our intuition that vanilla ResWarp may fail to capture enough high-frequency detail to improve the final target image, especially for high-resolution images. In terms of efficiency, multiscale ResWarp is about 2ms slower than vanilla ResWarp when running inference on 1K images on our machine (Intel Core i7-8700K CPU@3.70GHz). In practice, one needs to strike an appropriate balance between final quality and computational efficiency.
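The multiscale variant can be sketched as follows. This is a minimal NumPy illustration under our own naming, not the paper's implementation: it uses box-filter down-sampling and nearest-neighbour resampling and warping for brevity (the actual module uses bilinear up-sampling), and operates on single-channel arrays.

```python
import numpy as np

def box_down(img):
    """2x down-sampling by 2x2 box averaging (a stand-in for Gaussian reduction)."""
    h, w = img.shape
    return img[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def resample(img, shape):
    """Nearest-neighbour resampling to `shape` (the paper uses bilinear)."""
    ys = np.arange(shape[0]) * img.shape[0] // shape[0]
    xs = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(ys, xs)]

def laplacian_pyramid(img, levels):
    """Split `img` into fine-to-coarse residual bands plus a low-frequency base."""
    bands, cur = [], img
    for _ in range(levels):
        low = box_down(cur)
        bands.append(cur - resample(low, cur.shape))  # residual at this scale
        cur = low
    return bands, cur

def backward_warp(img, flow):
    """Backward warping with nearest-neighbour sampling; flow[..., 0] is dy, flow[..., 1] is dx."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    ys = np.clip((yy + flow[..., 0]).round().astype(int), 0, h - 1)
    xs = np.clip((xx + flow[..., 1]).round().astype(int), 0, w - 1)
    return img[ys, xs]

def multiscale_reswarp(hd_img, low_res_out, flow, levels):
    """Replace the pyramid base with the generator's low-resolution output, then
    reconstruct coarse-to-fine, warping each residual band by the motion field
    resized (and rescaled) to that band's resolution."""
    bands, _ = laplacian_pyramid(hd_img, levels)
    out = low_res_out  # must match the base resolution of the pyramid
    for band in reversed(bands):
        h, w = band.shape
        scale = h / flow.shape[0]
        f = np.stack([resample(flow[..., 0], (h, w)),
                      resample(flow[..., 1], (h, w))], axis=-1) * scale
        out = resample(out, (h, w)) + backward_warp(band, f)
    return out
```

A convenient sanity check: with a zero motion field and the pyramid's own base as the low-resolution input, the reconstruction recovers the input image exactly.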

4.5 More experimental results

More experimental results, including comparisons of ResWarp and super-resolution methods, comparisons of our two-stage generator with other warping-based generators, ablation studies on the design of the ResWarp module, and failure examples of our method, are provided in the supplementary materials.

5 Conclusion

We presented the novel Animating-Through-Warping (ATW) framework, which enables animating HD facial images with limited resources. We evaluated our method on three datasets and two applications (attribute-driven and example-based facial expression animation), verifying its effectiveness and efficiency. To date, our method is the only neural model that can successfully animate an ultra-high-resolution image. Qualitative and quantitative comparisons with existing methods show that, for one-shot facial expression animation, our method generally improves the temporal coherency and image quality of generated animation sequences without compromising efficiency.

However, our method also fails in some cases, especially when the generated faces require synthesis of novel content (e.g., teeth). Some typical failures are provided in the supplementary material. So far, we have only applied our method to the animation of facial expressions. In the future, we intend to extend it to reenactment of head poses and even arbitrary classes of objects or scenes.


We want to acknowledge Rui Ma, Daesik Jang, James Gregson, Shao Hua Chen, Kevin Cannons and Peng Deng in Huawei Technologies for their help and support in the project.


  • [1] S. Athar, Z. Shu, and D. Samaras (2019) Self-supervised deformation modeling for facial expression editing. arXiv preprint arXiv:1911.00735. Cited by: §2.2.
  • [2] M. Babaeizadeh, C. Finn, D. Erhan, R. H. Campbell, and S. Levine (2017) Stochastic variational video prediction. arXiv preprint arXiv:1710.11252. Cited by: §2.5.
  • [3] G. Balakrishnan, A. Zhao, A. V. Dalca, F. Durand, and J. Guttag (2018) Synthesizing images of humans in unseen poses. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8340–8348. Cited by: §1, §1, §1.
  • [4] A. Bulat and G. Tzimiropoulos (2017) How far are we from solving the 2d & 3d face alignment problem?(and a dataset of 230,000 3d facial landmarks). In Proceedings of the IEEE International Conference on Computer Vision, pp. 1021–1030. Cited by: §4.3.
  • [5] P. Burt and E. Adelson (1983) The laplacian pyramid as a compact image code. IEEE Transactions on communications 31 (4), pp. 532–540. Cited by: §2.6.
  • [6] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3213–3223. Cited by: Figure 13.
  • [7] E. Denton and R. Fergus (2018) Stochastic video generation with a learned prior. arXiv preprint arXiv:1802.07687. Cited by: §2.5.
  • [8] Y. Endo, Y. Kanamori, and S. Kuriyama (2019) Animating landscape: self-supervised learning of decoupled motion and appearance for single-image video synthesis. arXiv preprint arXiv:1910.07192. Cited by: §0.A.4, §1, §1, §2.3.
  • [9] J. Thies et al. (2016) Face2face: real-time face capture and reenactment of rgb videos. In CVPR, pp. 2387–2395. Cited by: §1.
  • [10] C. Fabian Benitez-Quiroz, R. Srinivasan, and A. M. Martinez (2016) Emotionet: an accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5562–5570. Cited by: §4.1.
  • [11] C. Finn, I. Goodfellow, and S. Levine (2016) Unsupervised learning for physical interaction through video prediction. In Advances in neural information processing systems, pp. 64–72. Cited by: §2.5.
  • [12] E. Friesen and P. Ekman (1978) Facial action coding system: a technique for the measurement of facial movement. Palo Alto 3. Cited by: Animating Through Warping: an Efficient Method for High-Quality Facial Expression Animation, §1.
  • [13] A. Gabbay and Y. Hoshen (2019) Style generator inversion for image enhancement and animation. arXiv preprint arXiv:1906.11880. Cited by: §1, §1.
  • [14] J. Geng, T. Shao, Y. Zheng, Y. Weng, and K. Zhou (2018) Warp-guided gans for single-photo facial animation. ACM Transactions on Graphics (TOG) 37 (6), pp. 1–12. Cited by: §1.
  • [15] I. Goodfellow (2016) NIPS 2016 tutorial: generative adversarial networks. arXiv preprint arXiv:1701.00160. Cited by: §1, §2.1.
  • [16] K. Gu, Y. Zhou, and T. Huang (2019) FLNet: landmark driven fetching and learning network for faithful talking facial animation synthesis. arXiv preprint arXiv:1911.09224. Cited by: §1, §1.
  • [17] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville (2017) Improved training of wasserstein gans. In Advances in neural information processing systems, pp. 5767–5777. Cited by: §3.3.1, §3.3.2.
  • [18] M. Hassan, Y. Liu, L. Kong, Z. Wang, and G. Chen (2019) GANemotion: increase vitality of characters in videos by generative adversary networks. In 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. Cited by: §1, §1.
  • [19] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in neural information processing systems, pp. 6626–6637. Cited by: §4.3.2.
  • [20] S. Iizuka, E. Simo-Serra, and H. Ishikawa (2017) Globally and locally consistent image completion. ACM Transactions on Graphics (ToG) 36 (4), pp. 1–14. Cited by: §3.2.
  • [21] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox (2017) Flownet 2.0: evolution of optical flow estimation with deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2462–2470. Cited by: §4.3.2.
  • [22] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410. Cited by: §4.3.
  • [23] I. Korshunova, W. Shi, J. Dambre, and L. Theis (2017) Fast face-swap using convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3677–3685. Cited by: §2.4.
  • [24] W. Lai, J. Huang, O. Wang, E. Shechtman, E. Yumer, and M. Yang (2018) Learning blind video temporal consistency. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 170–185. Cited by: §4.3.2.
  • [25] O. Langner, R. Dotsch, G. Bijlstra, D. H. Wigboldus, S. T. Hawk, and A. Van Knippenberg (2010) Presentation and validation of the radboud faces database. Cognition and emotion 24 (8), pp. 1377–1388. Cited by: §4.1.
  • [26] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681–4690. Cited by: §0.A.1.
  • [27] C. Lin, Y. Huang, and T. K. Shih (2019) Creating waterfall animation on a single image. Multimedia Tools and Applications 78 (6), pp. 6637–6653. Cited by: §1.
  • [28] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, pp. 3730–3738. Cited by: §4.1.
  • [29] P. A. McKenzie, D. Kang, D. B. Mercredi, and A. Nech (2019-June 18) Techniques for animating stickers with sound. Google Patents. Note: US Patent 10,325,395 Cited by: §1, §1.
  • [30] M. Mirza and S. Osindero (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: §2.1.
  • [31] S. Nam, C. Ma, M. Chai, W. Brendel, N. Xu, and S. J. Kim (2019) End-to-end time-lapse video synthesis from a single outdoor image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1409–1418. Cited by: §1.
  • [32] Y. Nirkin, Y. Keller, and T. Hassner (2019) Fsgan: subject agnostic face swapping and reenactment. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7184–7193. Cited by: §1, §1.
  • [33] O. M. Parkhi, A. Vedaldi, and A. Zisserman (2015) Deep face recognition. Cited by: §4.3.2.
  • [34] A. Pumarola, A. Agudo, A. M. Martinez, A. Sanfeliu, and F. Moreno-Noguer (2018) Ganimation: anatomically-aware facial animation from a single image. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 818–833. Cited by: §1, §1, §1, §3.1, §3.3.3, §4.1, §4.3, Table 1.
  • [35] D. Roache, B. Collar, and L. Ostrover (2019-December 12) Adding motion effects to digital still images. Google Patents. Note: US Patent App. 16/447,892 Cited by: §1.
  • [36] A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner (2018) Faceforensics: a large-scale video dataset for forgery detection in human faces. arXiv preprint arXiv:1803.09179. Cited by: Figure 13, §4.1, §4.3, §4.3.
  • [37] E. Sanchez and M. Valstar (2018) Triple consistency loss for pairing distributions in gan-based face synthesis. arXiv preprint arXiv:1811.03492. Cited by: §1, §1.
  • [38] F. Schroff, D. Kalenichenko, and J. Philbin (2015) Facenet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 815–823. Cited by: §4.1, §4.3.
  • [39] G. Shen, W. Huang, C. Gan, M. Tan, J. Huang, W. Zhu, and B. Gong (2019) Facial image-to-video translation by a hidden affine transformation. In Proceedings of the 27th ACM international conference on Multimedia, pp. 2505–2513. Cited by: §1.
  • [40] A. Siarohin, S. Lathuilière, S. Tulyakov, E. Ricci, and N. Sebe (2019) Animating arbitrary objects via deep motion transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2377–2386. Cited by: §0.A.4, §1, §1, §2.3.
  • [41] A. Siarohin, S. Lathuilière, S. Tulyakov, E. Ricci, and N. Sebe (2019) First order motion model for image animation. In Advances in Neural Information Processing Systems, pp. 7137–7147. Cited by: §1, §2.3.
  • [42] Z. P. Sin, P. H. Ng, S. C. Shiu, F. Chung, and H. V. Leong (2019) 2D character animating networks: bringing static characters to move via motion transfer. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pp. 196–203. Cited by: §1, §1.
  • [43] K. Songsri-in and S. Zafeiriou (2019) Face video generation from a single image and landmarks. arXiv preprint arXiv:1904.11521. Cited by: §1, §1.
  • [44] S. Tripathy, J. Kannala, and E. Rahtu (2020) Icface: interpretable and controllable face reenactment using gans. In The IEEE Winter Conference on Applications of Computer Vision, pp. 3385–3394. Cited by: §1, §1.
  • [45] R. Villegas, J. Yang, Y. Zou, S. Sohn, X. Lin, and H. Lee (2017) Learning to generate long-term future via hierarchical prediction. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3560–3569. Cited by: §1, §1, §1, §2.5.
  • [46] C. Vondrick, H. Pirsiavash, and A. Torralba (2016) Generating videos with scene dynamics. In Advances in neural information processing systems, pp. 613–621. Cited by: §1, §1, §1, §2.5.
  • [47] K. Vougioukas, S. Petridis, and M. Pantic (2019) Realistic speech-driven facial animation with gans. International Journal of Computer Vision, pp. 1–16. Cited by: §1, §1.
  • [48] T. Wang, M. Liu, A. Tao, G. Liu, J. Kautz, and B. Catanzaro (2019) Few-shot video-to-video synthesis. arXiv preprint arXiv:1910.12713. Cited by: §1, §1, §1.
  • [49] X. Wang, Y. Wang, and W. Li (2019) U-net conditional gans for photo-realistic and identity-preserving facial expression synthesis. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 15 (3s), pp. 1–23. Cited by: §1, §1.
  • [50] O. Wiles, A. Sophia Koepke, and A. Zisserman (2018) X2face: a network for controlling face generation using images, audio, and pose codes. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 670–686. Cited by: §1, §1, §1, §2.4, §4.3, §4.3, Table 1.
  • [51] H. Winnemöller, J. E. Kyprianidis, and S. C. Olsen (2012) XDoG: an extended difference-of-gaussians compendium including advanced image stylization. Computers & Graphics 36 (6), pp. 740–753. Cited by: §2.6.
  • [52] R. Wu, X. Tao, X. Gu, X. Shen, and J. Jia (2019) Attribute-driven spontaneous motion in unpaired image translation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5923–5932. Cited by: §2.2.
  • [53] W. Wu, Y. Zhang, C. Li, C. Qian, and C. Change Loy (2018) Reenactgan: learning to reenact faces via boundary transfer. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 603–619. Cited by: §2.4.
  • [54] S. Xingjian, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo (2015) Convolutional lstm network: a machine learning approach for precipitation nowcasting. In Advances in neural information processing systems, pp. 802–810. Cited by: §1.
  • [55] Z. Yi, H. Zhang, P. Tan, and M. Gong (2017) Dualgan: unsupervised dual learning for image-to-image translation. In Proceedings of the IEEE international conference on computer vision, pp. 2849–2857. Cited by: §2.4, §3.3.1.
  • [56] E. Zakharov, A. Shysheya, E. Burkov, and V. Lempitsky (2019) Few-shot adversarial learning of realistic neural talking head models. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9459–9468. Cited by: §0.A.4, §2.3, §2.4, §4.3, §4.3, Table 1.
  • [57] Y. Zhang, S. Zhang, Y. He, C. Li, C. C. Loy, and Z. Liu (2019) One-shot face reenactment. arXiv preprint arXiv:1908.03251. Cited by: §1.
  • [58] Y. Zhao, M. C. Oveneke, D. Jiang, and H. Sahli (2019) A video prediction approach for animating single face image. Multimedia Tools and Applications 78 (12), pp. 16389–16410. Cited by: §1, §1, §2.5.
  • [59] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pp. 2223–2232. Cited by: §2.4, §3.3.1.

Appendix 0.A Appendix

0.a.1 Comparisons against super-resolution methods

The ResWarp module in our framework is responsible for constructing a high-resolution result from a low-resolution one, which can also be achieved with super-resolution methods. We compared our final results against those produced by super-resolution methods applied to the same low-resolution results, as shown in Figure 10. Note that SRGAN [26] is a learning-based method that performs 4x super-resolution, and the official pretrained model was trained on DIV2K. We observe that the textures and details recovered with our ResWarp module look more realistic than those from super-resolution methods such as SRGAN [26] or naive up-sampling with Nearest Neighbor or Bicubic interpolation. We also conducted quantitative evaluations of these methods on the same 100 image-video pairs as used in Section 4.3.2: see Table 3. Since these methods share the same low-frequency components, we need not compare AU Error or identity preservation. The quantitative results show that ResWarp achieves the lowest FID and the best temporal coherency.

Figure 10: Comparisons of ResWarp against super-resolution methods. NN refers to Nearest Neighbor interpolation. We observe that results by NN contain tiling artifacts, and those produced by Bicubic interpolation suffer lack of sharpness. Results by SRGAN are sharp but look unnatural. Please zoom in to see more details.
Method FID FWE
NN 118.2 7.9
BICUBIC 115.3 6.4
SRGAN 110.8 7.2
ResWarp 107.0 6.2
Table 3: Quantitative evaluation results on 100 image-video pairs when using different super-resolution methods.

0.a.2 More Ablation Study

0.a.2.1 Analysis of up-sampling techniques in ResWarp

In the ResWarp module, one down-sampling and two up-sampling operations are performed: see Figure 2 (a). For down-sampling, we evenly split the raw input image into a 128x128 grid of patches and average each patch to form one pixel. For up-sampling, we experimented with three techniques: Nearest Neighbor (NN), Bilinear, and Bicubic. Figure 11 shows how the choice of up-sampling technique affects the final results. Given the same input image and driving signal, we highlight the differences between results generated with each technique. We observe that NN tends to generate tiling artifacts, and Bicubic generates some minor artifacts at boundaries. Bilinear produces the most pleasing results, so our final implementation uses bilinear up-sampling in the ResWarp module.

Figure 11: Comparisons of various up-sampling techniques used in ResWarp module in the task of attribute-driven facial animation (only last frame is shown). Note that NN refers to Nearest Neighbor interpolation. Differences of results are highlighted with blue boxes.
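The two resampling operations above can be sketched in a few lines of NumPy. This is an illustrative approximation under our own naming (single-channel arrays, H and W assumed to be exact multiples of 128), not the module's actual implementation.

```python
import numpy as np

def patch_average_downsample(img, size=128):
    """Down-sample by averaging each (H/size) x (W/size) patch into one pixel;
    assumes H and W are multiples of `size`."""
    h, w = img.shape
    return img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))

def bilinear_upsample(img, out_h, out_w):
    """Plain bilinear up-sampling, the variant chosen in the final implementation."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Nearest-neighbor and bicubic variants drop into the same place; only the interpolation kernel changes.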

0.a.3 Failure examples

Figure 12 shows some typical failure cases of our method. When the results involve synthesis of novel content (e.g., teeth when opening the mouth), our method is prone to blurriness, as the corresponding residuals cannot be warped from the source image: see the bottom row of Figure 12. In addition, our method occasionally predicts an invalid motion field, causing motion artifacts: see the top row of Figure 12.

Figure 12: Failure examples of our method. Sizes of raw input images are 1024x1024.

0.a.4 Combination of prior warping-based model and ResWarp

Since existing warping-based models such as [40, 8, 56] also predict a motion field that warps the input to the output, combining them with our proposed ResWarp module could make their methods applicable to high-resolution images as well. That is, we can use such a model to generate a low-resolution result and motion field, and then employ our ResWarp module to warp the raw input residuals and produce the high-resolution result. We experimented with combining few-shot vid2vid and the ResWarp module on the tasks of animating faces and street views: see Figure 13. Some results suffer from misalignment between the residual map and the low-frequency component (see the lips and glasses in the 1st row of Figure 13), and others suffer severe regional or global blurriness (2nd-4th rows of Figure 13). The reason is that the few-shot vid2vid model is not well aligned with the ATW framework: the motion field and the warped result do not play a dominant role in its generation of the low-frequency component.

Figure 13: Results of combining few-shot vid2vid and ResWarp. Top two rows are generated by combining the few-shot vid2vid model trained on FaceForensics [36] and the ResWarp module, and bottom two are produced by combining the model trained on CityScape [6] and ResWarp. Sizes of all input images are 1024x1024.
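To make the plug-in idea concrete, the following sketch treats the external warping-based model as a black box that maps a low-resolution frame to an animated frame plus a motion field. All function names are ours, nearest-neighbour resampling stands in for the bilinear operations used in practice, and this is not the few-shot vid2vid interface.

```python
import numpy as np

def box_down(img, size):
    """Patch-average `img` down to (size, size); H and W must be multiples of `size`."""
    h, w = img.shape
    return img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))

def nn_up(img, shape):
    """Nearest-neighbour up-sampling (bilinear in the actual module)."""
    ys = np.arange(shape[0]) * img.shape[0] // shape[0]
    xs = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(ys, xs)]

def warp_nn(img, flow):
    """Backward warping with nearest-neighbour sampling."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    ys = np.clip((yy + flow[..., 0]).round().astype(int), 0, h - 1)
    xs = np.clip((xx + flow[..., 1]).round().astype(int), 0, w - 1)
    return img[ys, xs]

def animate_hd(hd_img, model, size=128):
    """ResWarp as a post-process around any warping-based model: the model sees
    only the low-resolution frame; the high-frequency residuals are warped by
    the up-scaled motion field and added back."""
    low = box_down(hd_img, size)
    residual = hd_img - nn_up(low, hd_img.shape)   # high-frequency residuals
    low_out, low_flow = model(low)                 # external black-box model
    scale = hd_img.shape[0] / size                 # rescale flow magnitudes
    flow = np.stack([nn_up(low_flow[..., 0], hd_img.shape),
                     nn_up(low_flow[..., 1], hd_img.shape)], axis=-1) * scale
    return nn_up(low_out, hd_img.shape) + warp_nn(residual, flow)
```

For this combination to work well, the model's low-resolution output should itself be dominated by the warped input, which, as noted above, is not the case for few-shot vid2vid.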