Caricature is an artistic creation produced by exaggerating some prominent characteristics of a face image while preserving its identity. Caricatures are widely used in social media and daily life for a variety of purposes. For example, they can be used as profile images or to express certain emotions and sentiments on social networks. Due to the prosperity of social media, automatic caricature creation has become an increasingly attractive research problem. In this paper, given an arbitrary face image of a person, our primary goal is to generate satisfactory or plausible caricatures of that person with reasonable exaggerations and an appropriate caricature style.
To this end, we identify and define four key aspects that need to be taken into account for caricature generation:
Identity Preservation: The generated caricature should share the same identity as the input face;
Plausibility: The generated caricature should be visually satisfactory or plausible; the style of the generated image should be consistent with normal cartoons or caricatures;
Exaggeration: Different parts of the input face should be deformed in a reasonable way to exaggerate the prominent characteristics of the face;
Diversity: Given an input face, diverse caricatures with different styles should be generated.
Several previous studies [1, 20, 23, 6, 29, 39] have made attempts to solve this problem. These studies mainly focus on the generation of sketch caricatures [7, 26, 36], black-and-white illustration caricatures, and outline caricatures. Most of them adopt low-level image transformations and computer graphics techniques [2, 28, 42, 45] to generate new images. They are either semi-automatic or complicated with multiple stages, making them difficult to apply to large-scale caricature generation applications. Moreover, although they can generate correct deformations on some facial parts, their results are usually visually unappealing, e.g., lacking rich colors and vivid details. As shown in the second column of Figure 1, the conventional approaches based on low-level geometric deformation can only generate one specifically exaggerated caricature for one input face. Often, the content, texture and style of the generated caricature are plain and less interesting.
Recently, with the progress of conditional generative adversarial networks (GANs) [14, 34, 37] and their success in image generation, image translation and editing tasks [9, 40, 19, 49, 50, 5], it is possible to use a GAN model to learn transformations from the data itself and produce plausible caricatures from input face images. However, although typical GAN-based models such as Pix2Pix can generate realistic images, directly applying them to this task fails to produce satisfactory outputs. Most of the previous methods cannot address all four key aspects together. Quite often, the generated image is visually almost the same as the input face with only minor changes in color, lacking sufficient exaggerations of facial parts. As shown in the third column of Figure 1, there is almost no exaggeration of facial features, which does not satisfy the primary goal of caricature generation. In addition, many GAN-based image-to-image translation models require strictly paired training images, i.e., the transformation should be a bijective pixel-to-pixel mapping. However, such paired data are quite difficult to obtain. For caricature generation, using pixel-wisely paired data is not feasible for practical purposes.
Inspired by the power of the conditional GANs, this paper proposes an end-to-end model named CariGAN to solve the problems encountered by the conventional GAN models. The goal is to address as much as possible the four key aspects of caricature generation.
Due to the difficulty of obtaining strictly paired training data, we introduce a new setting for training GANs, i.e., weakly paired training. Specifically, a pair of input face and ground-truth caricature only share the same identity but have no pixel-to-pixel or pose-to-pose correspondence. This setting is much more challenging than the pixel-wisely paired training setting. We will describe this setting in detail in Section 3.1.
Furthermore, as shown in Figure 1, although conventional GAN-based models such as BicycleGAN can produce caricatures with correct identities, they fail to produce reasonable exaggerations. It is worth emphasizing that exaggeration is a vital aspect of a vivid caricature. In our model, we retain the advantage of conventional models, i.e., employing a U-net as the generator to keep the identity unchanged during the transformation. In addition, we introduce a facial mask as a condition of the GAN to precisely guide the deformations of faces, so that the generated images can have reasonable exaggerations.
For the plausibility issue, although the GAN-based models can produce plausible images by forcing the distribution of the generated caricatures to be close to that of the ground truth, there are still many artifacts that decrease the degree of plausibility. To enhance the plausibility of the generated caricatures, a new image fusion mechanism is proposed. By adopting this mechanism, we can encourage the model to concentrate on refining not only the global appearance, but also the important local facial parts of the generated caricature images.
Finally, many conditional GAN models suffer from the so-called “mode collapse” problem, i.e., different inputs, especially random noises, can be mapped to the same mode. The diversity of the outputs is greatly reduced by this problem. To address it, a novel diversity loss is proposed to enforce that the input random noise plays a more important role in determining the styles and colors of the generated caricatures.
In summary, the main contributions of this paper are as follows:
We introduce a new weakly paired training setting for GANs and propose a CariGAN model that can successfully generate plausible caricatures under this challenging setting.
We propose a new image fusion mechanism to encourage the model to focus on both the global and local appearance of the generated caricatures, and pay more attention to the key facial parts.
We propose a novel diversity loss to encourage our model to generate caricatures with larger diversity in style, color and content.
The rest of this paper is organized as follows: Section 2 introduces the related work on caricature generation, conditional generative adversarial networks, and multimodality encoding in GANs. Section 3 introduces the proposed model in detail. Experimental settings and results of different models are provided in Section 4, and the last section concludes the whole paper.
2 Related work
2.1 Caricature Generation
Early work on caricature generation mainly focuses on low-level image processing and computer graphics approaches. The typical process for this kind of approach can be summarized as follows: (1) detect facial feature points (i.e., facial landmarks) and extract a facial sketch from an input face; (2) find the distinctive characteristics and exaggerate the facial shape; (3) warp the original face image to the exaggerated one to get a caricature.
There are two major types of earlier work: rule-based methods and example-based methods. Rule-based methods generate caricatures by simulating the rules of caricature drawing, i.e., the notion of “exaggerating the difference from the mean” (EDFM). In general, an average face or a standard face model is taken as a reference, and then the difference is exaggerated. Representative methods include [6, 36, 12, 27]. Chiang et al. formalized caricature generation as a metamorphosis process that generates caricatures by leveraging one caricature as a reference. Mo et al. extended the notion of EDFM by considering both the feature DFM (Difference From Mean) and the feature variance. Unlike rule-based methods, example-based methods rely on a face-caricature dataset and generate caricatures based on similar examples. For example, Liang et al. proposed a prototype-based exaggeration model by analyzing the correlation between face-caricature pairs. Liu et al. [45] took both the spatial relationship among facial components and the shape of facial components into account and proposed a new example-based method. Zhang et al. proposed a data-driven framework for generating cartoon faces by selecting and assembling facial components from a database.
2.2 Conditional Generative Adversarial Networks
Caricature generation can be seen as an image translation problem and thus can be modeled with conditional generative adversarial networks (cGANs) [14, 34]. A conditional GAN takes a random noise and some prior knowledge as inputs to generate data whose conditional distribution is similar to that of the ground-truth data. Recently, cGANs [3, 32, 43, 8, 38, 33, 31] have shown great capacity in learning transformations from data and generating realistic images. Typical supervised models such as Pix2Pix and BicycleGAN perform well on the image-to-image translation problem, especially when the input image and the output image have a pixel-wise correspondence. To relieve the requirement of strictly paired training data, CycleGAN, DiscoGAN, and DualGAN demonstrated that such tasks can even be accomplished in an unsupervised way.
However, directly applying these supervised or unsupervised GAN models to the caricature generation task may fail to generate plausible caricatures due to the weakly paired nature of our task, e.g., different facial poses between face images and caricatures, and varying degrees of exaggeration and deformation among facial components in caricatures. To tackle this problem, our CariGAN model is conditioned not only on the input face image, but also on a facial mask which indicates the landmarks of the target caricature. Through the facial mask condition, the generated caricature is encouraged to have a similar exaggeration and viewpoint as the ground-truth caricature. Similar to our model, some GAN-based models use an additional person pose mask to guide the generation process. For example, Ma et al. used a person pose to guide a two-stage GAN-based model to generate realistic person images. In the first stage, it adopted a reconstruction loss to generate a coarse image, which was then refined in the second stage by a GAN model. One major difference between their model and ours is that reconstruction plays a key role in their model, which may lead to blurry results. On the contrary, our model takes full advantage of adversarial learning and is able to generate sharper images. Another difference is that they use multiple stages, which is more complicated, while our model is an end-to-end, one-stage model.
Another closely related work is also based on GANs for caricature generation. The major differences are that: (1) our model is trained on weakly paired face-caricature images, while theirs requires strictly paired images with the same facial viewpoint for training; (2) our model is conditioned on a face image and a facial mask, which can control the exaggeration of the output, while theirs is only conditioned on the input face, lacking the ability to control the exaggeration.
2.3 Multimodality Encoding in GANs
One major issue regarding cGANs is the “mode collapse” problem. To relieve this problem, the key point is how to learn richer modalities of the outputs and avoid multiple inputs being mapped to the same output. Some prior studies addressing this problem [50, 24, 10, 4, 35, 16, 25] have been proposed. One simple and effective way to alleviate it is to use a latent code as an additional input to explicitly encode the modes. For example, a one-hot vector representing the facial viewpoint has been introduced as an input to generate faces with different poses. In our work, we use a facial mask to guide the generation of caricatures.
Another typical approach to relieving the “mode collapse” problem is to enforce a tight connection between the latent codes and the output data. A few previous studies have investigated this idea by introducing an additional encoder to map the generated image back to the input random noise, so that the mapping from the random noise to the output becomes bijective [50, 24, 10]. However, the encoder brings additional computation, and the simultaneous optimization of the generator and the encoder is non-trivial. We provide a new perspective on this problem: in addition to using the facial mask as guidance, we enforce the differences between the output images to be a linear function of the differences between the input random noises, so that a change of noise can greatly influence the styles of the output images.
3 Our Model
As illustrated in Figure 2, CariGAN takes a face image, a facial mask and a random noise as inputs. It then tries to generate a plausible caricature that has the same identity as the input face and meaningful exaggeration as indicated by the input facial mask. To produce satisfactory caricatures, CariGAN uses a generative adversarial network to model the translation from a face to a caricature. To encourage the model to generate realistic caricatures with more reasonable exaggerations, we introduce an image fusion mechanism that makes the model focus more on the important facial parts of the generated image. We also design a diversity loss to address the “mode collapse” problem. The diversity loss enforces the differences between the output images to be a linear function of the differences between the input random noises.
3.1 Adversarial Learning with Weak Pairs
Weakly paired training setting Let $(x, y)$ be a pair of training data, where $x$ represents the input face image and $y$ represents the corresponding ground-truth caricature. $x$ and $y$ are of the same resolution and belong to the same person. It should be noted that they are not pixel-wisely or pose-wisely paired. This setting is quite different from the conventional paired training setting, where the input image and the ground-truth are usually pixel-wisely and bijectively mapped [19, 50]. This is because there are multiple face images and various caricatures with different artistic styles for one person, which means that one face image can be paired with multiple caricatures and one caricature can also correspond to multiple face images of the same identity. Thus, in an input pair, the face image and the caricature can have totally different viewpoints (i.e., facial poses), which makes the task extremely challenging. Beyond the viewpoint, there is inherently no pixel-wise correspondence between the faces and the caricatures, as many facial parts are exaggerated. Hence, we call such a pair a weak pair, and define this setting as a new training setting, namely weakly paired training, in the image-to-image translation task.
Adversarial loss The goal of our task is to map an input face image $x$ to a caricature image $\hat{y}$ such that the distribution of $\hat{y}$ is close to that of the weakly paired ground-truth caricature image $y$. To this end, we build the CariGAN model on cGANs to handle this image-to-image translation task. Our CariGAN is composed of a generator $G$ and a discriminator $D$. With an input face image $x$ and a random noise $z$, $G$ tries to generate a caricature image $\hat{y} = G(x, z)$. The goal of $G$ is to make $\hat{y}$ as plausible as possible, so as to fool the discriminator $D$, while the discriminator tries to distinguish the generated image $\hat{y}$ from the ground-truth $y$. Specifically, following the usage of the noise variable in BicycleGAN, we first sample a noise vector $z$ from a Gaussian distribution. Then it is duplicated at every spatial location to get a noise map $Z$. We then directly concatenate $x$ and $Z$ as the input of our generator. The adversarial loss of such a conditional GAN can be formulated as:

$\mathcal{L}_{adv} = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))]$
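As a rough sketch of the noise handling described above (the function name, array layout and sizes are our own assumptions, not from the paper), the tiling and channel-wise concatenation could look like:

```python
import numpy as np

def make_generator_input(face, noise):
    """Tile a noise vector across all spatial locations and
    concatenate it with the face image along the channel axis.

    face:  (H, W, C) float array, the input face image
    noise: (d,) float vector sampled from a Gaussian
    """
    h, w, _ = face.shape
    # Duplicate the noise at every pixel -> an (H, W, d) noise map Z.
    noise_map = np.broadcast_to(noise, (h, w, noise.shape[0]))
    # Channel-wise concatenation gives a (C + d)-channel generator input.
    return np.concatenate([face, noise_map], axis=-1)

face = np.random.rand(64, 64, 3).astype(np.float32)
z = np.random.randn(8).astype(np.float32)
g_in = make_generator_input(face, z)  # shape (64, 64, 11)
```

Every spatial position carries the same copy of $z$, so the generator can read the style code at any layer that sees the input.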
Facial mask as an additional condition Unfortunately, only conditioning on the input face makes it difficult to learn reasonable exaggerations in the output caricature for the following reasons: (1) One input face can actually be mapped to caricatures with arbitrary exaggerations. This uncertainty may confuse the generator. (2) Although the input noise can be used to model a wider distribution, it is difficult to encode viewpoints, exaggerations and styles at the same time.
To reduce the uncertainty, we use a facial mask $m$ as an additional condition and feed it into the generator along with $x$ and $Z$. We encourage the model to generate a caricature that has similar exaggerations as indicated by this mask. The facial mask is a binary image constructed from facial landmarks: each landmark is represented by a small square block, and we fill the pixels inside the blocks with ones and the background pixels with zeros. The facial mask encodes two aspects of a face. The first is the exaggeration of local facial parts, such as the eyes, mouth, etc. The second is the viewpoint of the whole face.
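A minimal sketch of the mask construction just described (mask size and block size are illustrative assumptions):

```python
import numpy as np

def landmarks_to_mask(landmarks, size=128, block=3):
    """Render facial landmarks as a binary mask: a small square block
    of ones around each landmark, zeros elsewhere.

    landmarks: iterable of (row, col) integer coordinates
    """
    mask = np.zeros((size, size), dtype=np.float32)
    r = block // 2
    for row, col in landmarks:
        top, left = max(row - r, 0), max(col - r, 0)
        mask[top:row + r + 1, left:col + r + 1] = 1.0
    return mask

# Three hypothetical landmarks: two eyes and a mouth center.
mask = landmarks_to_mask([(40, 50), (40, 78), (90, 64)])
```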
During training, we directly use the facial mask $m$ of the ground-truth $y$ as input and constrain the output of the generator to be similar to $y$ with regard to facial exaggeration and viewpoint. In this way, the major appearance of the output image is roughly determined, except for some variations in styles, textures and colors. The success of previous conditional GAN models [34, 19, 49] has indicated that random noise sampled from a Gaussian distribution is able to model the variation of different styles. Hence, we also use the random noise to encode the style of the generated caricature. In fact, we use the facial mask as an additional condition for both the generator and the discriminator. Specifically, we directly concatenate $x$, $Z$ and $m$ to form a multi-channel map as the input of our generator. The input of the discriminator is a concatenation of $(x, m, y)$ or $(x, m, \hat{y})$. With the facial mask as an additional input, the adversarial loss of our model is as follows:

$\mathcal{L}_{adv} = \mathbb{E}_{x,m,y}[\log D(x, m, y)] + \mathbb{E}_{x,m,z}[\log(1 - D(x, m, G(x, m, z)))]$
As the distribution of the generated fake pair $(x, m, \hat{y})$ is encouraged to be close to the distribution of the real pair $(x, m, y)$, the generated image $\hat{y}$ is not only enforced to have an appearance similar to the ground-truth $y$, but also enforced to have a similar exaggeration as indicated by $m$. If we only used $m$ as a condition of $G$ and ignored it in the discriminator $D$, then $\hat{y}$ would only be constrained to mimic the distribution of $y$, and the input pose condition would tend to be ignored during training.
Content loss Previous work on image-to-image translation shows that combining a pixel-wise loss between the generated fake image and the ground-truth can boost the performance of cGANs. Although in our task the pixels of the ground-truth and the generated image are not bijectively mapped, we discover that using an $L_1$ loss can stabilize the training of GANs. Hence, we also use this pixel-wise loss as a constraint on the content of the generated image. The content loss is formulated as:

$\mathcal{L}_{con} = \| y - G(x, m, z) \|_1,$

where $\| \cdot \|_1$ denotes the $L_1$ norm of a matrix.
3.2 Focus on Important Local Regions
Although the conditional GAN is able to generate visually appealing images, there are still many local artifacts in the output images, such as the absence of eyes. The reason may be that conventional conditional GANs only constrain the global appearance of the generated image to look like real caricatures on average, but cannot guarantee that each local facial part is present and realistic. To encourage the model to generate reasonable facial parts, we propose a new image fusion (IF for short) mechanism that forces the model to focus more on important local regions. We fuse the background parts of the ground-truth and the key local parts of the generated fake images to create new additional fake images. The basic idea is illustrated in Figure 3.
Specifically, we use the input facial mask $m$ as a guidance for selecting the important regions. We create a Gaussian blob for each landmark in the facial mask and obtain a one-channel heatmap $h$. Using this heatmap, we replace the regions of the ground-truth $y$ around the landmarks with the corresponding regions of the generated image $\hat{y}$, and keep the other unimportant parts, such as the background pixels, unchanged. In this way, we generate an additional fused fake image $\tilde{y}$, which is formulated as:

$\tilde{y} = h \odot G(x, m, z) + (1 - h) \odot y,$
where $\odot$ denotes pixel-wise multiplication. With $\tilde{y}$ generated, it is fed into the discriminator $D$, which tries to distinguish not only $\hat{y}$ but also $\tilde{y}$ from $y$. The adversarial loss is now changed to:

$\mathcal{L}_{adv} = \mathbb{E}_{x,m,y}[\log D(x, m, y)] + \frac{1}{2}\mathbb{E}_{x,m,z}[\log(1 - D(x, m, \hat{y}))] + \frac{1}{2}\mathbb{E}_{x,m,z}[\log(1 - D(x, m, \tilde{y}))],$

where $\hat{y}$ is the fake caricature generated by the generator $G$, i.e., $\hat{y} = G(x, m, z)$, and $\tilde{y}$ is the additional fake caricature constructed by our image fusion module according to Eq. (4). The two fakes $\hat{y}$ and $\tilde{y}$ have the same weight, i.e., $\frac{1}{2}$, and both of them try to fool the discriminator $D$, while $D$ tries to distinguish them from the ground-truth $y$.
The image fusion mechanism improves not only the global appearance of the generated caricature, but also its local appearance. On one hand, the discriminator distinguishes $\hat{y}$ from $y$, forcing the generator to produce images that mimic the global appearance of the ground-truth. On the other hand, since most parts of $\tilde{y}$ are exactly the same as $y$, the discriminator only needs to judge whether the focused regions look realistic. This encourages the generator to pay more attention to the local facial parts and improve them to fool the discriminator.
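A sketch of the fusion step in Eq. (4) (heatmap size, blob width and function names are our own illustrative assumptions):

```python
import numpy as np

def gaussian_heatmap(landmarks, size=128, sigma=4.0):
    """One-channel heatmap h: a Gaussian blob per landmark, clipped to [0, 1]."""
    ys, xs = np.mgrid[0:size, 0:size]
    h = np.zeros((size, size), dtype=np.float64)
    for row, col in landmarks:
        h += np.exp(-((ys - row) ** 2 + (xs - col) ** 2) / (2 * sigma ** 2))
    return np.clip(h, 0.0, 1.0)

def fuse(fake, real, heatmap):
    """y_fused = h * y_fake + (1 - h) * y_real: keep the generated pixels
    around the landmarks and the ground-truth pixels elsewhere."""
    h = heatmap[..., None]  # broadcast over the color channels
    return h * fake + (1.0 - h) * real

lm = [(40, 50), (40, 78), (90, 64)]          # hypothetical landmarks
h = gaussian_heatmap(lm)
fake = np.ones((128, 128, 3))                 # stand-in for G(x, m, z)
real = np.zeros((128, 128, 3))                # stand-in for y
fused = fuse(fake, real, h)
```

At a landmark the fused image equals the generated image; far from all landmarks it equals the ground truth.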
With the image fusion mechanism introduced, the content loss is modified accordingly. Our model is encouraged to focus more on the important regions, so the content loss is modified to the following form:

$\mathcal{L}_{con} = \| h \odot (y - G(x, m, z)) \|_1,$

where $h$ is the heatmap created from the facial mask $m$. Compared with Eq. (3), this heatmap-guided content loss encourages the network to put more effort into generating the important facial parts, such as the eyes, mouth, nose and so on. However, in the experiments we discover that the two losses have no significant difference in their influence on the quality of the generated caricatures, although this loss does make the model slightly more stable than Eq. (3) during training.
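A minimal sketch of a heatmap-weighted $L_1$ content loss (the exact weighting scheme is our assumption; the paper only states that the heatmap guides the loss toward the important regions):

```python
import numpy as np

def content_loss(fake, real, heatmap):
    """Heatmap-guided L1 content loss: pixel errors near the facial
    landmarks are weighted by the heatmap values, so background errors
    contribute little."""
    w = heatmap[..., None]              # broadcast over the color channels
    return np.abs(w * (real - fake)).sum()

real = np.zeros((4, 4, 3))
fake = np.ones((4, 4, 3))
h = np.zeros((4, 4))
h[1, 1] = 1.0                           # only one "important" pixel
loss = content_loss(fake, real, h)      # 3.0: one pixel, three channels
```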
3.3 Diversity Loss
In our proposed model, the random noise controls the colors and styles of the generated images. However, in practice, the model may suffer from the “mode collapse” problem, i.e., the input noise may not be able to affect the final results.
To address the “mode collapse” problem, we propose a diversity loss that forces our model to generate images with larger diversity. The basic idea is to encourage the difference between two fake caricatures generated from two different noises (but with the same input face and facial mask) to be a linear function of the difference between these two noises. Suppose the generator is given a face image $x$ and a binary facial mask $m$, but with two different noises $z_1$ and $z_2$. The generator outputs two fake caricatures for these two inputs: $\hat{y}_1 = G(x, m, z_1)$ and $\hat{y}_2 = G(x, m, z_2)$.
We then extract features of these two fake caricatures from the last convolutional layer of the discriminator $D$. Denote the extracted features as $f_1$ and $f_2$. The extracted feature encodes the identity, pose as well as style of the generated image. However, as the two features are extracted from two fake caricatures with the same identity and viewpoint, it is reasonable to treat the difference between these two features as the difference between the styles and other unimportant attributes. We therefore force the difference between the two features to be a linear function of the difference between the two input noises. In this way, the diversity of styles can be explicitly controlled by the input noise. Our diversity loss is formulated as:

$\mathcal{L}_{div} = \left| \frac{\| f_1 - f_2 \|_1}{\| f_1 \|_1 + \| f_2 \|_1} - \frac{\| z_1 - z_2 \|_1}{\| z_1 \|_1 + \| z_2 \|_1} \right|,$

where the differences of the features and the noises are normalized by the feature norms and noise norms respectively so that they have similar magnitudes. The overall loss of our proposed CariGAN model can be formulated as:

$\mathcal{L} = \mathcal{L}_{adv} + \lambda_{con} \mathcal{L}_{con} + \lambda_{div} \mathcal{L}_{div},$

where $\lambda_{con}$ and $\lambda_{div}$ balance the three terms.
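A sketch of the normalized diversity term (the exact normalization is our reconstruction of the description above, assuming $L_1$ norms):

```python
import numpy as np

def diversity_loss(f1, f2, z1, z2):
    """Encourage the normalized feature difference to match the
    normalized noise difference; both sides are divided by their own
    L1 norms so the two terms have comparable magnitudes."""
    feat_diff = np.abs(f1 - f2).sum() / (np.abs(f1).sum() + np.abs(f2).sum())
    noise_diff = np.abs(z1 - z2).sum() / (np.abs(z1).sum() + np.abs(z2).sum())
    return abs(feat_diff - noise_diff)

f = np.array([1.0, -1.0, 2.0])   # stand-in discriminator features
z = np.array([0.5, 0.5])         # stand-in noise vectors
# Identical features with identical noises: both sides are zero.
loss_same = diversity_loss(f, f, z, z)
```

Minimizing this term penalizes the generator when two distinct noises collapse to nearly identical features while the noises themselves differ.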
To make our approach more understandable, we summarize the whole training procedure of CariGAN in Algorithm 1.
4 Experiments
4.1 Basic Settings
Dataset All experiments in this study are performed on the WebCaricature dataset, which contains photograph and caricature images of celebrities and is currently the largest caricature dataset. Images of a subset of the celebrities are used for training and the remaining celebrities are held out for testing. All images are aligned according to the provided facial landmarks as follows: (1) rotate each image to make the two eyes lie on a horizontal line; (2) resize each image so that the distance between the two eyes is 75 pixels; (3) crop the primary facial part as the face image and resize it to a fixed resolution. Moreover, random flipping is performed for augmentation. We construct weak pairs entirely within the training set, obtaining a large set of weakly paired face-caricature images for training. During training, we randomly select a pair of face and caricature images as the input and ground truth, respectively. Each face or caricature image is associated with manually annotated facial landmarks, from which we generate a binary mask $m$ and a heatmap $h$.
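The rotation and scale for step (1) and (2) of the alignment can be derived from the two eye landmarks; a sketch (function name and coordinate convention are our assumptions):

```python
import numpy as np

def eye_alignment_params(left_eye, right_eye, target_dist=75.0):
    """Return the rotation angle (degrees, to be undone so the eyes lie
    on a horizontal line) and the scale factor that makes the inter-eye
    distance equal to target_dist pixels.

    left_eye, right_eye: (x, y) landmark coordinates
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))  # rotate the image by -angle
    dist = np.hypot(dx, dy)
    scale = target_dist / dist
    return angle, scale

# Eyes already level and 150 px apart: no rotation, downscale by half.
angle, scale = eye_alignment_params((100.0, 120.0), (250.0, 120.0))
```

These parameters would then feed a similarity transform (e.g., an affine warp) before cropping the facial region.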
Baselines We compare our model with state-of-the-art models in the field of image-to-image translation, i.e., Pix2Pix, BicycleGAN and PG. Pix2Pix integrates an image-conditioned GAN with the $L_1$ loss for pixel-wise transformation. It can be seen as a base version of the proposed model without the guidance of the facial mask, the image fusion mechanism, or the diversity loss. BicycleGAN improves Pix2Pix by introducing a conditional VAE and a latent regressor for diversified image-to-image transformation, whereas in this work we achieve such a non-deterministic transformation through the diversity loss. PG explicitly introduces body pose information into image generation. We implement all the baseline methods using their publicly released code for a fair comparison.
Note that in Figure 1, we have already shown the performance of geometric deformation based methods on our task. We conclude from the figure that although geometric deformation based methods can generate visually pleasing caricatures, the output is usually deterministic, lacking diversity in styles and exaggerations, while the GAN-based approaches can generate more diverse outputs. Hence in this section, we only compare our model with the state-of-the-art GAN-based models.
Implementation Details We use a network similar to Pix2Pix. The generator is a U-net-like network which takes a random noise, a face image and a facial mask as input. The intermediate convolutional and deconvolutional layers are connected through skip-connections. The discriminator is also composed of several convolutional layers. Each convolutional layer is followed by a Batch Normalization layer and a Leaky ReLU layer, and each deconvolutional layer is followed by a Batch Normalization layer and a ReLU layer. We use Tanh as the activation function of the output layer of the generator and Sigmoid for the last layer of the discriminator. Adam is used as the optimizer to update the parameters of the entire model, with a fixed momentum. The learning rate is 0.0002 and is fixed during the training procedure.
4.2 Ablation Study
We first perform an ablation study to test the influence of each individual module of the proposed model. Specifically, we investigate the performance of the following models: Base GAN, Mask-G, Mask-G-D, Mask+IF and Mask+IF+diverse, where image fusion is abbreviated as IF. Here, Base GAN is the model trained directly with the cGAN and $L_1$ losses; it is essentially identical to the Pix2Pix model. Mask-G denotes the Base GAN model with the facial mask as an additional input condition only to the generator $G$. Mask-G-D denotes the Base GAN model with the facial mask as an input condition to both the generator $G$ and the discriminator $D$. Mask+IF is the facial-mask-conditioned model trained with the image fusion mechanism. The Mask+IF+diverse model, which is the full model with the diversity loss, is discussed specifically in Section 4.3. Note that for a fair comparison, all the models use the content loss to stabilize the training.
The qualitative results of these models for the ablation study are shown in Figure 4. As can be seen, the Base GAN model, which uses none of the proposed designs, gives perceptually the worst outputs. In particular, we notice that the outputs of the Base GAN and Mask-G models are aligned with the pose of the original input faces, while the results of the other models are well aligned with the given target caricature landmarks, with reasonable exaggerations and correct viewpoints. This demonstrates that using a facial mask as conditional information can help disentangle the exaggeration from other attributes of the faces and yield better exaggerated outputs. It also illustrates that the facial mask should be used for both the generator $G$ and the discriminator $D$. We can also observe that the Mask+IF model produces the best details around the facial landmarks, and hence the overall look of its generated caricatures is more realistic. This means the image fusion mechanism is indeed effective: it helps the generator focus on generating images at the key locations of the target subject.
4.3 Evaluation on Diversity Loss
We further evaluate the effectiveness of the proposed diversity loss. To highlight its benefit, we compare the visual difference between the outputs of the Mask+IF and Mask+IF+diverse models side by side under the same setting. Given a face image, we first randomly draw several noise samples $z$ from a Gaussian distribution. Then we feed the face image and the noise samples into the Mask+IF and Mask+IF+diverse models with the same facial mask. Figure 5 shows the caricature images generated under different noises but with the same facial mask. The results demonstrate that the Mask+IF model produces deterministic outputs with negligible changes. Such an observation is consistent with previous work on the noise ignorance problem. In contrast, the outputs of the Mask+IF+diverse model are more diversified, showing that our diversity loss deals well with this problem. Meanwhile, we can see vivid details in the results of both models, indicating that our model is able to enhance the diversity without sacrificing the visual quality of the generated caricatures.
4.4 Comparison with Baselines
We also compare the proposed model with the state-of-the-art models. The qualitative comparison results are given in Figure 6. Please note that the ground-truth should only be used as a reference in the evaluation because caricature generation is not a unique-solution problem with an exact pixel-level mapping. In other words, there can be many plausible caricatures for a given face image.
The figure reveals that due to the weakly-paired nature of our problem, models originally designed for pixel-wise image translation either cannot converge well, such as Pix2Pix, or generate somewhat identical images as the inputs, such as BicycleGAN. PG produces images with better exaggerations with respect to the given caricature landmarks. However, as it heavily relies on the reconstruction loss, its outputs are blurry. In contrast, the outputs of our model are sharper. More importantly, they balance much better between the plausibility, identity and exaggeration, and therefore are visually much better than the results by the state-of-the-art methods.
In addition to the qualitative comparison, we also quantitatively measure the performance of our model against the state-of-the-art models in a user study. Since a face image may have multiple caricature counterparts, traditional evaluation metrics used for image-to-image translation models, such as SSIM and PSNR, are not applicable. Instead, we use human judgments for a more perceptually reliable evaluation. We randomly pick several face images for each person from the test set. For each face image, we generate a caricature image using each of the state-of-the-art models and our model. Then we ask participants to score the generated images. Each participant is assigned 50 groups of images, with each group containing the corresponding face image, the ground-truth caricature image and a generated caricature image. The participants are required to score each generated image according to the following three aspects: (1) plausibility, whether the image is plausible enough; (2) identity preservation, whether the image has the same identity as the input face and the ground-truth caricature; (3) exaggeration, whether the generated image has a similar exaggeration (and a correct viewpoint) as the ground-truth caricature image. For each aspect, a caricature image receives a numeric score, and we average the scores of all the participants.
We propose CariGAN, a model based on conditional generative adversarial networks, to address the four fundamental aspects of the caricature generation task, i.e., identity preservation, plausibility, exaggeration and diversity. Experiments demonstrate that using a facial mask as a condition of the cGAN model is crucial for generating appropriate exaggerations. The proposed image fusion mechanism is also shown to regularize our model toward caricatures that are visually appealing in both global and local appearance. The diversity loss further encourages the model to produce diverse outputs for different random noise while preserving vivid exaggerations and accurate identity. Our model generates promising caricatures that address all four aspects of this task to a large degree, and clearly outperforms the state-of-the-art models in a user study. In future work, we plan to further improve performance on these four aspects and extend the model to generate higher-resolution caricatures.
-  E. Akleman. Making caricatures with morphing. In ACM SIGGRAPH 97 Visual Proceedings: The art and interdisciplinary programs (SIGGRAPH), page 145. ACM, 1997.
-  E. Akleman, J. Palmer, and R. Logan. Making extreme caricatures with a new interactive 2d deformation technique with simplicial complexes. In Proceedings of Visual, pages 165–170, 2000.
-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning (ICML), pages 214–223, 2017.
-  M. I. Belghazi, A. Baratin, S. Rajeshwar, S. Ozair, Y. Bengio, A. Courville, and R. D. Hjelm. Mutual information neural estimation. In International Conference on Machine Learning (ICML), pages 530–539, 2018.
-  D. Berthelot, T. Schumm, and L. Metz. BEGAN: boundary equilibrium generative adversarial networks. arXiv:1703.10717, 2017.
-  S. E. Brennan. Caricature generator: The dynamic exaggeration of faces by computer. Leonardo, 40(4):392–400, 1985.
-  H. Chen, Y. Xu, H. Shum, S. C. Zhu, and N. Zheng. Example-based facial sketch generation with non-parametric sampling. In International Conference on Computer Vision (ICCV), pages 433–438, 2001.
-  T.-H. Chen, Y.-H. Liao, C.-Y. Chuang, W.-T. Hsu, J. Fu, and M. Sun. Show, adapt and tell: Adversarial training of cross-domain image captioner. In International Conference on Computer Vision (ICCV), volume 2, 2017.
-  X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2172–2180, 2016.
-  J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. arXiv:1605.09782, 2016.
-  T. Fujiwara, M. Tominaga, K. Murakami, and H. Koshimizu. Web-picasso: internet implementation of facial caricature system picasso. In Advances in Multimodal Interfaces (ICMI), pages 151–159. Springer, 2000.
-  B. Gooch, E. Reinhard, and A. Gooch. Human facial illustrations: Creation and psychophysical evaluation. ACM Transactions on Graphics, 23(1):27–44, 2004.
-  I. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv:1701.00160, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances In Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision (ECCV), pages 630–645. Springer, 2016.
-  X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz. Multimodal unsupervised image-to-image translation. In European Conference on Computer Vision (ECCV), 2018.
-  J. Huo, W. Li, Y. Shi, Y. Gao, and H. Yin. WebCaricature: a benchmark for caricature face recognition. arXiv:1703.03230, 2017.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pages 448–456, 2015.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  S. Iwashita, Y. Takeda, and T. Onisawa. Expressive facial caricature drawing. In International Conference on Fuzzy Systems (FUZZ), volume 3, pages 1597–1602. IEEE, 1999.
-  T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. In International Conference on Machine Learning (ICML), pages 1857–1865, 2017.
-  D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
-  H. Koshimizu, M. Tominaga, T. Fujiwara, and K. Murakami. On kansei facial image processing for computerized facial caricaturing system picasso. In IEEE International Conference on Systems, Man, and Cybernetics (SMC), volume 6, pages 294–299. IEEE, 1999.
-  A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv:1512.09300, 2015.
-  A. X. Lee, R. Zhang, F. Ebert, P. Abbeel, C. Finn, and S. Levine. Stochastic adversarial video prediction. arXiv:1804.01523, 2018.
-  L. Liang, H. Chen, Y.-Q. Xu, and H.-Y. Shum. Example-based caricature generation with exaggeration. In Pacific Conference on Computer Graphics and Applications (PG), pages 386–393. IEEE, 2002.
-  P.-Y. Chiang, W.-H. Liao, and T.-Y. Li. Automatic caricature generation by analyzing facial features. In Asia Conference on Computer Vision (ACCV), volume 2, 2004.
-  J. Liu, Y. Chen, and W. Gao. Mapping learning in eigenspace for harmonious caricature generation. In ACM International Conference on Multimedia (ACM MM), pages 683–686. ACM, 2006.
-  J. Liu, Y. Chen, J. Xie, X. Gao, and W. Gao. Semi-supervised learning of caricature pattern from manifold regularization. In International Multimedia Modeling Conference (MMM), pages 413–424, 2009.
-  L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. In Advances in Neural Information Processing Systems (NIPS), pages 405–415, 2017.
-  S. Ma, J. Fu, C. W. Chen, and T. Mei. DA-GAN: Instance-level image translation by deep attention generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5657–5666, 2018.
-  M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. arXiv:1511.05440, 2015.
-  L. Mescheder, A. Geiger, and S. Nowozin. Which training methods for GANs do actually converge? In International Conference on Machine Learning (ICML), pages 3478–3487, 2018.
-  M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
-  T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. arXiv:1802.05957, 2018.
-  Z. Mo, J. P. Lewis, and U. Neumann. Improved automatic caricature by feature normalization and exaggeration. In ACM SIGGRAPH, page 57. ACM, 2004.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. International Conference on Learning Representations (ICLR), 2016.
-  S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning (ICML), pages 1060–1069, 2016.
-  S. B. Sadimon, M. S. Sunar, D. Mohamad, and H. Haron. Computer generated caricature: A survey. In International Conference on Cyberworlds (CW), pages 383–390. IEEE, 2010.
-  J. T. Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv:1511.06390, 2015.
-  L. Tran, X. Yin, and X. Liu. Disentangled representation learning gan for pose-invariant face recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 3, page 7, 2017.
-  C.-C. Tseng, J.-J. J. Lien, et al. Colored exaggerative caricature creation using inter-and intra-correlations of feature shapes and positions. Image and Vision Computing, 30(1):15–25, 2012.
-  W. Xiong, W. Luo, L. Ma, W. Liu, and J. Luo. Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks. arXiv:1709.07592, 2017.
-  B. Xu, N. Wang, T. Chen, and M. Li. Empirical evaluation of rectified activations in convolutional network. arXiv:1505.00853, 2015.
-  W. Yang, M. Toyoura, J. Xu, F. Ohnuma, and X. Mao. Example-based caricature generation with exaggeration control. The Visual Computer, 32(3):383–392, 2016.
-  Z. Yi, H. R. Zhang, P. Tan, and M. Gong. DualGAN: Unsupervised dual learning for image-to-image translation. In International Conference on Computer Vision (ICCV), pages 2868–2876, 2017.
-  Y. Zhang, W. Dong, C. Ma, X. Mei, K. Li, F. Huang, B. Hu, and O. Deussen. Data-driven synthesis of cartoon faces using different styles. IEEE Transactions on Image Processing, 26(1):464–478, 2017.
-  Z. Zheng, H. Zheng, Z. Yu, Z. Gu, and B. Zheng. Photo-to-caricature translation on faces in the wild. arXiv:1711.10735, 2017.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In International Conference on Computer Vision (ICCV), 2017.
-  J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems (NIPS), pages 465–476, 2017.