CariGAN: Caricature Generation through Weakly Paired Adversarial Learning

11/01/2018 · Wenbin Li et al. · Nanjing University and University of Rochester

Caricature generation is an interesting yet challenging task. The primary goal is to generate plausible caricatures with reasonable exaggerations given face images. Conventional caricature generation approaches mainly use low-level geometric transformations such as image warping to generate exaggerated images, which lack richness and diversity in terms of content and style. The recent progress in generative adversarial networks (GANs) makes it possible to learn an image-to-image transformation from data, so that richer contents and styles can be generated. However, directly applying the GAN-based models to this task leads to unsatisfactory results because there is a large variance in the caricature distribution. Moreover, some models require strictly paired training data which largely limits their usage scenarios. In this paper, we propose CariGAN to overcome these problems. Instead of training on paired data, CariGAN learns transformations only from weakly paired images. Specifically, to enforce reasonable exaggeration and facial deformation, facial landmarks are adopted as an additional condition to constrain the generated image. Furthermore, an attention mechanism is introduced to encourage our model to focus on the key facial parts so that more vivid details in these regions can be generated. Finally, a Diversity Loss is proposed to encourage the model to produce diverse results to help alleviate the `mode collapse' problem of the conventional GAN-based models. Extensive experiments on a new large-scale `WebCaricature' dataset show that the proposed CariGAN can generate more plausible caricatures with larger diversity compared with the state-of-the-art models.


1 Introduction

Figure 1: Results by different types of models for caricature generation. From left to right: input face images (Col. 1), a geometric deformation based model (Col. 2), BicycleGAN (Col. 3) and our CariGAN model (remaining columns), whose results are produced by adapting different poses to the same input image.

Caricature is an artistic creation produced by exaggerating some prominent characteristics of a face image while preserving its identity. Caricatures are widely used in social media and daily life for a variety of purposes. For example, they can be used as profile images or to express certain emotions and sentiments on social networks. Due to the prosperity of social media, automatic caricature creation has become an increasingly attractive research problem. In this paper, given an arbitrary face image of a person, our primary goal is to generate satisfactory or plausible caricatures of that person with reasonable exaggerations and an appropriate caricature style.

To this end, we identify and define four key aspects that need to be taken into account for caricature generation:

  • Identity Preservation: The generated caricature should share the same identity as the input face;

  • Plausibility: The generated caricature should be visually satisfactory or plausible; the style of the generated image should be consistent with normal cartoons or caricatures;

  • Exaggeration: Different parts of the input face should be deformed in a reasonable way to exaggerate the prominent characteristics of the face;

  • Diversity: Given an input face, diverse caricatures with different styles should be generated.

Several previous studies [1, 20, 23, 6, 29, 39] have attempted to solve this problem. These studies mainly focus on the generation of sketch caricatures [7, 26, 36], black-and-white illustration caricatures [12], and outline caricatures [11]. Most of them adopt low-level image transformations and computer graphics techniques [2, 28, 42, 45] to generate new images. They are either semi-automatic or complicated with multiple stages, making them difficult to apply to large-scale caricature generation applications. Moreover, although they can generate correct deformations of some facial parts, their results are usually visually unappealing, e.g., lacking rich colors and vivid details. As shown in the second column of Figure 1, the conventional low-level geometric deformation based approaches [42] can only generate one specifically exaggerated caricature for one input face. Often, the content, texture and style of the generated caricature are plain and less interesting.

Recently, with the progress of conditional generative adversarial networks (GANs) [14, 34, 37] and their success in image generation, image translation and editing tasks [9, 40, 19, 49, 50, 5], it has become possible to use a GAN model to learn transformations from data itself to produce plausible caricatures from input face images. However, although typical GAN-based models such as Pix2Pix [19] can generate realistic images, directly applying these models to this task fails to produce satisfactory outputs. Most of the previous methods cannot address all four key aspects together. Quite often, the generated image is visually almost the same as the input face with only minor changes in color, lacking sufficient exaggerations of facial parts. As shown in the third column of Figure 1, there is almost no exaggeration of facial features, which does not satisfy the primary goal of caricature generation. In addition, many GAN-based image-to-image translation models require strictly paired training images, i.e., the transformation should be a bijective pixel-to-pixel mapping. However, such paired data are quite difficult to obtain, and for caricature generation, collecting pixel-wise paired data is not feasible in practice.

Inspired by the power of the conditional GANs, this paper proposes an end-to-end model named CariGAN to solve the problems encountered by the conventional GAN models. The goal is to address as much as possible the four key aspects of caricature generation.

Due to the difficulty in obtaining strictly paired training data, we introduce a new setting for training GANs, i.e., weakly paired training. Specifically, a pair of input face and ground-truth caricature only share the same identity but have no pixel-to-pixel or pose-to-pose correspondence. This setting is much more challenging than the pixel-wise paired training setting. We will describe this setting in detail in Section 3.1.

Furthermore, as shown in Figure 1, although conventional GAN-based models such as BicycleGAN [50] can produce caricatures with correct identities, they fail to produce reasonable exaggerations. It is worth emphasizing that the exaggeration is a vital aspect to make a vivid caricature. In our model, we retain the advantage of conventional models, i.e., employing a U-net as the generator to keep the identity unchanged during the transformation. In addition, we introduce a facial mask as a condition of GAN to precisely guide the deformations of faces, so that the generated images can have reasonable exaggerations.

For the plausibility issue, although the GAN-based models can produce plausible images by forcing the distribution of the generated caricatures to be close to that of the ground truth, there are still many artifacts that decrease the degree of plausibility. To enhance the plausibility of the generated caricatures, a new image fusion mechanism is proposed. By adopting this mechanism, we can encourage the model to concentrate on refining not only the global appearance, but also the important local facial parts of the generated caricature images.

Finally, many conditional GAN models suffer from the so-called “mode collapse” problem, i.e., different inputs, especially random noise, can be mapped to the same mode [19]. The diversity of the outputs will be greatly reduced due to this problem. To address this problem, a novel diversity loss is proposed to enforce that the input random noise should play a more important role in generating the styles and colors of the generated caricatures.

In summary, the main contributions of this paper are as follows:

  • We introduce a new weakly paired training setting for GANs and propose a CariGAN model that can successfully generate plausible caricatures under this challenging setting.

  • We propose a new image fusion mechanism to encourage the model to focus on both the global and local appearance of the generated caricatures, and pay more attention to the key facial parts.

  • We propose a novel diversity loss to encourage our model to generate caricatures with larger diversity in style, color and content.

The rest of this paper is organized as follows: Section 2 introduces the related work on caricature generation, conditional generative adversarial networks and multimodality encoding in GANs. Section 3 introduces the proposed model in detail. Experimental settings and results of different models are provided in Section 4, and the last section concludes the whole paper.

2 Related work

2.1 Caricature Generation

Early work on caricature generation mainly focuses on low-level image processing and computer graphics approaches. The typical pipeline of these approaches can be summarized as follows: (1) detect facial feature points (i.e., facial landmarks) and extract a facial sketch from an input face; (2) find the distinctive characteristics and exaggerate the facial shape; (3) warp the original face image to the exaggerated shape to obtain a caricature.

There are two major types of these earlier works: rule-based methods and example-based methods. Rule-based methods generate caricatures by simulating the rules of caricature drawing, i.e., the notion of "exaggerating the difference from the mean" (EDFM). In general, an average face or a standard face model is taken as a reference, and the differences from it are exaggerated. Representative methods include [6, 36, 12, 27]. In [27], Chiang et al. formulated caricature generation as a metamorphosis process that generates caricatures by leveraging one caricature as a reference. In [36], Mo et al. extended the notion of EDFM by considering both the feature DFM (Difference From Mean) and the feature variance. Unlike rule-based methods, example-based methods rely on a face-caricature dataset and generate caricatures based on similar examples. For example, Liang et al. [26] proposed a prototype-based exaggeration model by analyzing the correlation between face-caricature pairs. Liu et al. [28] adopted principal component analysis (PCA) to obtain the principal components, and then employed support vector regression (SVR) to learn a mapping model to generate caricatures. Recently, Yang et al. [45] took both the spatial relationships among facial components and the shapes of facial components into account and proposed a new example-based method. Zhang et al. [47] proposed a data-driven framework for generating cartoon faces by selecting and assembling facial components from a database.
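
To make the EDFM idea concrete, here is a toy NumPy sketch that exaggerates a set of facial landmarks away from a mean face; the exaggeration factor k is an illustrative choice and the snippet does not reproduce any of the cited systems.

```python
import numpy as np

def edfm_exaggerate(landmarks, mean_landmarks, k=1.5):
    """Exaggerate-the-Difference-From-the-Mean (EDFM) on 2D landmarks.

    landmarks, mean_landmarks: arrays of shape (num_points, 2).
    k > 1 amplifies whatever makes this face deviate from the average face.
    """
    diff = landmarks - mean_landmarks        # what is distinctive about this face
    return mean_landmarks + k * diff         # push the distinctive parts further out

# The exaggerated landmarks would then drive an image warp from the original
# face shape to the deformed shape to produce the caricature.
```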

Figure 2: The overall framework of the proposed CariGAN. The input of the generator is a concatenation of a random noise map, a face image and a facial mask. The input data is fed into a U-net generator to generate a fake caricature. Then, using our image fusion mechanism, we combine the ground-truth and the generated fake caricature to get an additional fused fake caricature. These caricatures are concatenated with the input facial mask, respectively, and fed into the discriminator. The discriminator then tries to distinguish the real from the fakes. In addition to the adversarial loss, a diversity loss is proposed to constrain the outputs to be more diverse in style and content.

2.2 Conditional Generative Adversarial Networks

Caricature generation can be seen as an image translation problem and thus can be modeled with conditional generative adversarial networks (cGANs) [14, 34]. A conditional GAN takes a random noise and some prior knowledge as inputs to generate data whose conditional distribution is similar to the one of the ground-truth data. Recently, cGANs [3, 32, 43, 8, 38, 33, 31] have shown great capacity in learning transformations from data and generating realistic images. Typical supervised models such as Pix2Pix [19] and BicycleGAN [50] perform well on the image-to-image translation problem, especially when the input image and the output image have a pixel-wise correspondence. To relieve the requirement of strictly paired training data, CycleGAN [49], DiscoGAN [21], and Dual GAN [46] demonstrated that such tasks can even be accomplished in an unsupervised way.

However, directly applying these supervised or unsupervised GAN models to the caricature generation task may fail to generate plausible caricatures due to the weakly paired nature of our task, e.g., different facial poses between face images and caricatures, and varying degrees of exaggeration and deformation among facial components in caricatures. To tackle this problem, our CariGAN model is conditioned not only on the input face image, but also on a facial mask which indicates the landmarks of the target caricature. Through the facial mask condition, the generated caricature is encouraged to have a similar exaggeration and viewpoint as the ground-truth caricature. Similar to our model, some GAN-based models use an additional person pose mask to guide the generation process. For example, Ma et al. [30] used a person pose to guide a two-stage GAN-based model to generate realistic person images. In the first stage, it adopted a reconstruction loss to generate a coarse image, which was then refined in the second stage by a GAN model. One major difference between their model and ours lies in that reconstruction plays a key role in their model, which may lead to blurry results [19]. On the contrary, our model takes full advantage of adversarial learning and is able to generate sharper images. Another difference is that they use multiple stages, which is more complicated, while ours is an end-to-end, one-stage model.

Another closely related work is [48], which is also based on GANs for caricature generation. The major differences are that: (1) our model is trained on weakly paired face-caricature images, while [48] requires strictly paired images with the same facial viewpoint for training; (2) our model is conditioned on a face image and a facial mask, which can control the exaggeration of the output, while [48] is only conditioned on the input face, lacking the ability to control the exaggeration.

2.3 Multimodality Encoding in GANs

One major issue regarding cGANs is the “mode collapse” problem [13]. In order to relieve this problem, the key point is on how to learn richer modalities of the outputs and avoid multiple inputs being mapped to the same output. Some prior studies addressing this problem [50, 24, 10, 4, 35, 16, 25] have been proposed. One simple and effective way to alleviate this problem is to use a latent code as an additional input to explicitly encode the modes. For example, a one-hot vector representing the facial viewpoint is introduced as an input to generate faces with different poses [41]. In our work, we use a facial mask to guide the generation of caricatures.

Another typically applied approach to relieve the “mode collapse” problem is to enforce a tight connection between the latent codes and the output data. A few previous studies have investigated this idea by introducing an additional encoder to map the generated image back to the input random noise, so that the mapping from the random noise to the output can be bijective [50, 24, 10]. However, the encoder brings additional computation, and the simultaneous optimization of the generator and encoder is non-trivial. We provide a new perspective to solve this problem. In addition to using the facial mask as a guidance, we enforce the differences between the output images to be a linear function of the differences between the input random noises, so that the change of noise can greatly influence the styles of the output images.

3 Our Model

As illustrated in Figure 2, CariGAN takes a face image, a facial mask and a random noise as inputs. It then tries to generate a plausible caricature that has the same identity as the input face and a meaningful exaggeration as indicated by the input facial mask. To produce satisfactory caricatures, CariGAN uses a generative adversarial network to model the translation from a face to a caricature. To encourage the model to generate realistic caricatures with more reasonable exaggerations, we introduce an image fusion mechanism that makes the model focus more on the important facial parts of the generated image. We also design a diversity loss to address the "mode collapse" problem. The diversity loss enforces the differences between the output images to be a linear function of the differences between the input random noises.

3.1 Adversarial Learning with Weak Pairs

Weakly paired training setting   Let (x, y) be a pair of training data, where x represents the input face image and y represents the corresponding ground-truth caricature. x and y are of the same resolution and belong to the same person. It should be noted that they are not pixel-wisely or pose-wisely paired. This setting is quite different from the conventional paired training setting, where the input image and the ground-truth are usually pixel-wisely and bijectively mapped [19, 50]. This is because there are multiple face images and various caricatures with different artistic styles for one person, which means that one face image can be paired with multiple caricatures and one caricature can also correspond to multiple face images of the same identity. Thus, in an input pair, the face image and the caricature can have totally different viewpoints (i.e., facial poses), which makes the task extremely challenging. In addition to the viewpoint, there is inherently no pixel-wise correspondence between faces and caricatures, as many facial parts are exaggerated. Hence, we call such a pair a weak pair, and define this setting as a new training setting for the image-to-image translation task, namely weakly paired training.

Adversarial loss   The goal of our task is to map an input face image x to a caricature image ŷ such that the distribution of ŷ is close to that of the weakly paired ground-truth caricature image y. To this end, we build the CariGAN model on cGANs to handle this image-to-image translation task. Our CariGAN is composed of a generator G and a discriminator D. Given an input face image x and a random noise z, G tries to generate a caricature image ŷ = G(x, z). The goal of G is to make ŷ as plausible as possible, so as to fool the discriminator D, while D tries to distinguish the generated image ŷ from the ground-truth y. Specifically, following the usage of the noise variable in BicycleGAN [50], we first sample a noise vector from a Gaussian distribution. It is then duplicated at all spatial locations to obtain a noise map z, which is directly concatenated with x as the input of our generator. The adversarial loss of such a conditional GAN can be formulated as:

L_adv(G, D) = E_{x,y}[ log D(x, y) ] + E_{x,z}[ log(1 − D(x, G(x, z))) ]        (1)
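
For concreteness, the spatial duplication of the noise vector and its concatenation with the face image could look like the following PyTorch sketch; the noise length z_dim is an assumed hyperparameter, since the paper's value is not reproduced here.

```python
import torch

def make_generator_input(x, z_dim=8):
    """Tile a per-image noise vector over all spatial locations and concatenate it with the face.

    x: face images of shape (B, 3, H, W); returns a (B, 3 + z_dim, H, W) tensor and the noise map.
    """
    B, _, H, W = x.shape
    z = torch.randn(B, z_dim, 1, 1, device=x.device)   # one noise vector per image
    z_map = z.expand(B, z_dim, H, W)                    # duplicated at every spatial location
    return torch.cat([x, z_map], dim=1), z_map
```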

Facial mask as an additional condition   Unfortunately, only conditioning on the input face makes it difficult to learn reasonable exaggerations in the output caricature for the following reasons: (1) One input face can actually be mapped to caricatures with arbitrary exaggerations. This uncertainty may confuse the generator. (2) Although the input noise can be used to model a wider distribution, it is difficult to encode viewpoints, exaggerations and styles at the same time.

To reduce this uncertainty, we use a facial mask m as an additional condition and feed it into the generator along with x and z. We encourage the model to generate a caricature that has similar exaggerations as indicated by this mask. The facial mask is a binary image composed of facial landmarks. In the mask, each landmark is represented by a square block; we fill the pixels in the blocks with ones and the background pixels with zeros. The facial mask encodes two aspects of a face. The first aspect is the exaggeration of local facial parts, such as the eyes, mouth, etc. The second is the viewpoint of the whole face.
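
The facial mask described above could be constructed roughly as follows (a NumPy sketch; the block half-width is an illustrative value, not the paper's setting):

```python
import numpy as np

def facial_mask(landmarks, height, width, half_block=3):
    """Binary mask with a small square block of ones around each facial landmark."""
    mask = np.zeros((height, width), dtype=np.float32)
    for (lx, ly) in landmarks:                                   # landmark coordinates (x, y)
        x0, x1 = max(0, int(lx) - half_block), min(width,  int(lx) + half_block + 1)
        y0, y1 = max(0, int(ly) - half_block), min(height, int(ly) + half_block + 1)
        mask[y0:y1, x0:x1] = 1.0                                 # fill the block with ones
    return mask
```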

During training, we directly use the facial mask m of the ground-truth caricature as input and constrain the output of the generator to be similar to y with regard to facial exaggeration and viewpoint. In this way, the major appearance of the output image is roughly determined, except for some variations in styles, textures and colors. The success of previous conditional GAN models [34, 19, 49] has indicated that a random noise sampled from a Gaussian distribution is able to model the variation of different styles. Hence, we also use the random noise to encode the style of the generated caricature. In fact, we use the facial mask as an additional condition for both the generator and the discriminator. Specifically, we directly concatenate x, m and z to form a multi-channel map as the input of our generator. The input of the discriminator is a concatenation of (x, m, y) or (x, m, ŷ). With the facial mask as an additional input, the adversarial loss of our model is as follows:

L_adv(G, D) = E_{x,m,y}[ log D(x, m, y) ] + E_{x,m,z}[ log(1 − D(x, m, G(x, m, z))) ]        (2)

As the distribution of the generated fake pair (x, m, ŷ) is encouraged to be close to the distribution of the real pair (x, m, y), the generated image ŷ is not only enforced to have a similar appearance to the ground-truth y, but is also enforced to have a similar exaggeration as indicated by m. If we only used m as a condition in G and ignored it in the discriminator D, then ŷ would only be constrained to mimic the distribution of y, and the input pose condition would tend to be ignored during training.
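
As a rough illustration of how both networks consume the mask, the following PyTorch-style sketch forms the real and fake triplets and computes the two sides of Eq. (2); the function and tensor names are our own assumptions, not the authors' code, and D is assumed to end in a Sigmoid as described in Section 4.1.

```python
import torch
import torch.nn.functional as F

def cgan_losses(G, D, x, m, z, y):
    """Mask-conditioned adversarial terms in the spirit of Eq. (2).

    D is assumed to return probabilities (Sigmoid output) per image or per patch.
    """
    y_fake = G(torch.cat([x, m, z], dim=1))                 # generated caricature ŷ
    p_real = D(torch.cat([x, m, y], dim=1))                 # real pair (x, m, y)
    p_fake = D(torch.cat([x, m, y_fake.detach()], dim=1))   # fake pair, detached for the D step

    # Discriminator: push real pairs toward 1 and fake pairs toward 0.
    d_loss = F.binary_cross_entropy(p_real, torch.ones_like(p_real)) + \
             F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake))

    # Generator: make the (non-detached) fake pair look real to D.
    g_loss = F.binary_cross_entropy(D(torch.cat([x, m, y_fake], dim=1)),
                                    torch.ones_like(p_fake))
    return d_loss, g_loss, y_fake
```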

Content loss   Previous work on image-to-image translation [19] shows that combining a pixel-wise loss between the generated fake image and the ground-truth can boost the performance of cGANs. Although in our task the pixels of the ground-truth and the generated image are not related by a bijective mapping, we discover that using an ℓ1 loss can stabilize the training of GANs. Hence, we also use this pixel-wise loss as a constraint on the content of the generated image. The content loss is formulated as:

L_con = ‖ y − G(x, m, z) ‖₁        (3)

where ‖·‖₁ denotes the ℓ1 norm.

Figure 3: Illustration of the image fusion procedure.

3.2 Focus on Important Local Regions

Although the conditional GAN is able to generate visually appealing images, there are still many local artifacts in the output images, such as missing eyes. The reason may be that the conventional conditional GANs only constrain the global appearance of the generated image to look like real caricatures on average, but they cannot guarantee that each local facial part is present and realistic. To encourage the model to generate reasonable facial parts, we propose a new image fusion (IF for short) mechanism to force the model to focus more on important local regions. We fuse the background parts of the ground-truth with the key local parts of the generated fake images to create new additional fake images. The basic idea is illustrated in Figure 3.

Specifically, we use the input facial mask m as a guidance for selecting the important regions. We create a Gaussian blob for each landmark in the facial mask and obtain a one-channel heatmap h. Using this heatmap, we replace the regions of the ground-truth y around the landmarks with the corresponding regions of the generated image ŷ, and keep the other unimportant parts, such as the background pixels, unchanged. In this way, we generate an additional fused fake image ŷ_f, which is formulated as:

ŷ_f = h ⊙ ŷ + (1 − h) ⊙ y        (4)

where ⊙ denotes pixel-wise multiplication. Once ŷ_f is generated, it is fed into the discriminator D, which tries to distinguish y not only from ŷ but also from ŷ_f. The adversarial loss is now changed to:

L_adv(G, D) = E_{x,m,y}[ log D(x, m, y) ] + w · E_{x,m,z}[ log(1 − D(x, m, ŷ)) ] + w · E_{x,m,z}[ log(1 − D(x, m, ŷ_f)) ]        (5)

where ŷ = G(x, m, z) is the fake caricature generated by the generator G, and ŷ_f is the additional fake caricature constructed by our image fusion module according to Eq. (4). Specifically, ŷ and ŷ_f are assigned the same weight w, and both of them try to fool the discriminator D, while D tries to distinguish them from the ground-truth y.
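
A minimal NumPy sketch of the fusion step in Eq. (4) is given below, assuming a landmark-centered Gaussian heatmap; the blob width sigma is an illustrative choice rather than a value from the paper.

```python
import numpy as np

def landmark_heatmap(landmarks, height, width, sigma=5.0):
    """Build a one-channel heatmap h in [0, 1] with a Gaussian blob per landmark."""
    ys, xs = np.mgrid[0:height, 0:width]
    h = np.zeros((height, width), dtype=np.float32)
    for (lx, ly) in landmarks:                      # landmark coordinates (x, y)
        blob = np.exp(-((xs - lx) ** 2 + (ys - ly) ** 2) / (2.0 * sigma ** 2))
        h = np.maximum(h, blob)                     # keep the strongest blob at each pixel
    return h[..., None]                             # shape (H, W, 1), broadcastable over RGB

def fuse(y_real, y_fake, h):
    """Eq. (4): keep generated pixels near landmarks, ground-truth pixels elsewhere."""
    return h * y_fake + (1.0 - h) * y_real
```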

The image fusion mechanism can improve not only the global appearance of the generated caricature, but also its local appearance. On one hand, the discriminator distinguishes ŷ from y, forcing the generator to produce images that mimic the global appearance of the ground-truth. On the other hand, since most parts of ŷ_f are exactly the same as y, the discriminator only needs to judge whether the focused regions look realistic. This encourages the generator to pay more attention to the local facial parts and to improve them in order to fool the discriminator.

With the image fusion mechanism introduced, the content loss is modified accordingly. Our model is encouraged to focus more on the important regions, so the content loss is modified to the following form:

L_con = ‖ h ⊙ (y − G(x, m, z)) ‖₁        (6)

where h is the heatmap created from the facial mask m. Compared with Eq. (3), this heatmap-guided content loss encourages the network to put more effort into generating the important facial parts, such as the eyes, mouth and nose. In our experiments, however, we discover that the two losses make no significant difference to the quality of the generated caricatures, although this loss does make training slightly more stable than Eq. (3).
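
For illustration, the heatmap-guided content loss of Eq. (6) could be computed as in the short PyTorch sketch below, under the assumption that h broadcasts over the colour channels; the mean is used in place of the raw ℓ1 sum only to keep the value scale-independent.

```python
import torch

def heatmap_l1(y_real, y_fake, h):
    """Heatmap-weighted L1 content loss (cf. Eq. 6); h has shape (B, 1, H, W)."""
    return torch.mean(h * torch.abs(y_real - y_fake))
```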

  Set learning rates α_G and α_D for the generator G and the discriminator D, respectively.
  Initialize the parameters θ_G of G and θ_D of D.
  for number of iterations do
     Updating D while fixing G:
      Sample a batch of weakly paired training data (x, y).
      Get the facial mask m (pose) of the ground-truth.
      Generate fake samples ŷ = G(x, m, z).
      Generate fused fake samples ŷ_f from ŷ and y according to Eq. (4).
      Update θ_D by ascending its stochastic gradient of the adversarial loss in Eq. (5).
     Updating G while fixing D:
      Update θ_G by descending its stochastic gradient of the overall loss in Eq. (8).
  end for
Algorithm 1 The training procedure of the CariGAN model.
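
For readers who prefer code, the following PyTorch-style sketch mirrors Algorithm 1; the loss callables, noise dimensionality and loss weights are illustrative assumptions rather than the authors' released implementation.

```python
import torch

def train(G, D, loader, opt_G, opt_D, losses, num_epochs, z_dim=8, device="cuda"):
    """Alternating D/G updates mirroring Algorithm 1 (illustrative sketch only)."""
    adv_d, adv_g, content, diversity = losses        # hypothetical loss callables
    for _ in range(num_epochs):
        for x, y, m, h in loader:                    # face, caricature, facial mask, heatmap
            x, y, m, h = x.to(device), y.to(device), m.to(device), h.to(device)
            B, _, H, W = x.shape
            z = torch.randn(B, z_dim, 1, 1, device=device).expand(B, z_dim, H, W)

            # --- update D while fixing G ---
            y_fake = G(torch.cat([x, m, z], dim=1))
            y_fused = h * y_fake + (1 - h) * y       # Eq. (4)
            opt_D.zero_grad()
            d_loss = adv_d(D, x, m, y, y_fake.detach(), y_fused.detach())   # Eq. (5)
            d_loss.backward()
            opt_D.step()

            # --- update G while fixing D ---
            opt_G.zero_grad()
            g_loss = (adv_g(D, x, m, y_fake, y_fused)
                      + content(y, y_fake, h)        # Eq. (6)
                      + diversity(G, D, x, m))       # Eq. (7); loss weights omitted
            g_loss.backward()
            opt_G.step()
```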

3.3 Diversity Loss

In our proposed model, the random noise controls the colors and styles of the images. However, in practice, the proposed model may suffer from the "mode collapse" problem, i.e., the input noise may not be able to affect the final results.

To address the "mode collapse" problem, we propose a diversity loss to force our model to generate images with larger diversity. The basic idea of the diversity loss is to encourage the difference between two fake caricatures generated from two different noises (but with the same input face and facial mask) to be a linear function of the difference between these two noises. Suppose the generator is given a human face image x and a binary facial mask m, together with two different noises z1 and z2. The generator outputs two fake caricatures ŷ1 and ŷ2 for these two inputs, respectively, i.e., ŷ1 = G(x, m, z1) and ŷ2 = G(x, m, z2).

We then extract features of these two fake caricatures from the last convolutional layer of the discriminator D, and denote the extracted features as f1 and f2, respectively. Each extracted feature encodes the identity, pose and style of the generated image. However, as the two features are extracted from two fake caricatures with the same identity and viewpoint, it is reasonable to treat the difference between these two features as the difference between the styles and other unimportant attributes. We therefore force the difference between the two features to be a linear function of the difference between the two input noises. In this way, the diversity of styles can be explicitly controlled by the input noise. Our diversity loss is formulated as:

L_div = | ‖f1 − f2‖₁ / (‖f1‖₁ + ‖f2‖₁) − ‖z1 − z2‖₁ / (‖z1‖₁ + ‖z2‖₁) |        (7)

where the differences of the features and of the noises are normalized by the feature norms and noise norms, respectively, so that they have similar magnitudes. The overall loss of our proposed CariGAN model can be formulated as:

L = L_adv + λ_con · L_con + λ_div · L_div        (8)
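
As an illustration of how such a diversity term could be computed, the sketch below draws two noise maps, extracts features from the last convolutional layer of D, and penalizes the mismatch between the normalized feature difference and the normalized noise difference; the exact normalization in Eq. (7) is our reading of the text, so treat this as an assumption.

```python
import torch

def diversity_loss(G, D_features, x, m, z1, z2, eps=1e-8):
    """Encourage the feature difference to track the noise difference (cf. Eq. 7).

    D_features: hypothetical callable returning the last-conv-layer features of the discriminator.
    """
    f1 = D_features(G(torch.cat([x, m, z1], dim=1)))   # features of ŷ1
    f2 = D_features(G(torch.cat([x, m, z2], dim=1)))   # features of ŷ2

    feat_diff = torch.abs(f1 - f2).sum() / (torch.abs(f1).sum() + torch.abs(f2).sum() + eps)
    noise_diff = torch.abs(z1 - z2).sum() / (torch.abs(z1).sum() + torch.abs(z2).sum() + eps)
    return torch.abs(feat_diff - noise_diff)
```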

To make our approach more understandable, we summarize the whole training procedure of CariGAN in Algorithm 1.

4 Experiments

4.1 Basic Settings

Dataset   All the experiments in this study are performed on the WebCaricature dataset [17], which contains photographs and caricature images of celebrities and is currently the largest caricature dataset. Images of one subset of the celebrities are used for training, and the remaining celebrities are held out for testing. All the images are aligned according to the provided facial landmarks as follows: (1) rotate each image so that the two eyes lie on a horizontal line; (2) resize each image so that the distance between the two eyes is 75 pixels; (3) crop the primary facial part as the face image and resize it to a fixed resolution. Moreover, random flipping is performed for augmentation. We construct weak pairs entirely within the training set, obtaining the weakly paired face-caricature images used for training. During training, we randomly select a pair of face and caricature images as the input and ground truth, respectively. Each face or caricature image is associated with manually annotated facial landmarks, from which we generate a binary mask m and a heatmap h.
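
A rough sketch of the alignment procedure described above (rotation to level the eyes, scaling to a fixed inter-ocular distance of 75 pixels, then cropping) is shown below; the output crop size and the OpenCV-based implementation are illustrative assumptions.

```python
import cv2
import numpy as np

def align_face(image, left_eye, right_eye, eye_dist=75.0, out_size=256):
    """Rotate so the eyes are horizontal, scale to the target inter-ocular distance, then crop."""
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))            # rotation needed to level the eyes
    scale = eye_dist / np.hypot(dx, dy)               # scale to the target inter-ocular distance
    center = ((left_eye[0] + right_eye[0]) / 2.0, (left_eye[1] + right_eye[1]) / 2.0)

    M = cv2.getRotationMatrix2D(center, angle, scale)
    # Shift so the eye midpoint lands at a fixed position in the output crop.
    M[0, 2] += out_size / 2.0 - center[0]
    M[1, 2] += out_size * 0.35 - center[1]            # 0.35: assumed vertical eye position
    return cv2.warpAffine(image, M, (out_size, out_size), flags=cv2.INTER_LINEAR)
```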

Baselines   We compare our model with other state-of-the-art models in the field of image-to-image translation, i.e., Pix2Pix [19], BicycleGAN [50] and PG [30]. Pix2Pix integrates an image-conditioned GAN together with an ℓ1 loss for pixel-wise transformation. It can be seen as a base version of the proposed model without the guidance of the facial mask, the image fusion mechanism and the diversity loss. BicycleGAN improves Pix2Pix by introducing a conditional VAE [22] and a latent regressor [10] for diversified image-to-image transformation, while in this work we achieve such a non-deterministic transformation through the diversity loss. PG explicitly introduces body pose information into image-to-image generation. We implement all the baseline methods using their publicly released code for a fair comparison.

Note that in Figure 1, we have already shown the performance of geometric deformation based methods on our task. We conclude from the figure that although geometric deformation based methods can generate visually pleasing caricatures, the output is usually deterministic, lacking diversity in styles and exaggerations, while the GAN-based approaches can generate more diverse outputs. Hence in this section, we only compare our model with the state-of-the-art GAN-based models.

Figure 4: Qualitative comparison of different variants of the proposed model. From top to bottom: input, ground truth, facial mask, Base GAN, Mask-G, Mask-G-D and Mask+IF. We suggest the readers pay more attention to the facial parts, such as the eyes, mouth, etc. Best viewed in color in screen with zoom-in.
Figure 5: Interpolating on the noise z. The first and second columns are the input images and ground truth, respectively. The rest are the outputs with linearly interpolated noises. Odd rows: our model without the diversity loss. Even rows: our model with the diversity loss. Please pay special attention to the colors and textures of the generated caricatures.

Implementation Details   We use a network similar to Pix2Pix [19]. The generator is a U-net-like network which takes a random noise map, a face image and a facial mask as input. The intermediate convolutional and deconvolutional layers are connected through skip connections [15]. The discriminator is also composed of several convolutional layers. Each convolutional layer is followed by a Batch Normalization layer and a Leaky ReLU layer, and each deconvolutional layer is followed by a Batch Normalization layer [18] and a ReLU layer [44]. We use Tanh as the activation function of the output layer of the generator and Sigmoid for the last layer of the discriminator. Adam is used as the optimizer to update the parameters of the entire model, with its momentum terms kept at fixed values. The learning rate is 0.0002 and is fixed during the training procedure.
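
As an illustration of the layer pattern described above, a minimal PyTorch sketch of one encoder block and one decoder block follows; the kernel sizes, channel counts and the Adam betas are common GAN defaults assumed here, not values taken from the paper.

```python
import torch.nn as nn

def down_block(in_ch, out_ch):
    """Encoder block: strided convolution + BatchNorm + LeakyReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def up_block(in_ch, out_ch):
    """Decoder block: strided deconvolution + BatchNorm + ReLU."""
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# Output activations: Tanh for the generator, Sigmoid for the discriminator, as stated above.
# Optimizer (betas are an assumed common GAN setting, not taken from the paper):
# opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.999))
```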

4.2 Ablation Study

We first perform an ablation study to test the influence of each individual module of the proposed model. Specifically, we investigate the performance of the following models: Base GAN, Mask-G, Mask-G-D, Mask+IF and Mask+IF+diverse (image fusion is abbreviated as IF). Here, Base GAN is the model trained directly using the cGAN loss and the ℓ1 loss; it is essentially identical to the Pix2Pix model. Mask-G denotes the Base GAN model with the facial mask as an additional input condition only to the generator G. Mask-G-D denotes the Base GAN model with the facial mask as an input condition to both the generator G and the discriminator D. Mask+IF is the facial-mask-conditioned model trained with the image fusion mechanism, i.e., Eq. (5). Mask+IF+diverse is the full model with the diversity loss, which we discuss separately in Section 4.3. Note that for a fair comparison, all the models use the content loss to stabilize training.

The qualitative results of these models for the ablation study are shown in Figure 4. As can be seen, the Base GAN model, which uses none of the proposed designs, gives perceptually the worst outputs. In particular, we notice that the outputs of the Base GAN model and the Mask-G model are aligned with the pose of the original input faces, while the results of the other models are well aligned with the given target caricature landmarks, with reasonable exaggerations and correct viewpoints. This demonstrates that using a facial mask as the conditional information can help disentangle the exaggeration from other attributes of the faces and yield better exaggerated outputs. It also illustrates that the facial mask should be used for both the generator G and the discriminator D. We can also observe that the Mask+IF model produces the best detailed outputs around the facial landmarks, and hence the overall look of the generated caricatures is more realistic. This means the image fusion mechanism is indeed effective: it helps the generator focus on generating images at key locations of the target subject.

4.3 Evaluation on Diversity Loss

We further evaluate the effectiveness of the proposed diversity loss. To highlight its benefit, we compare the visual difference between the outputs of the Mask+IF and Mask+IF+diverse models side by side under the same setting. Given a face image, we first randomly draw several noise samples z from a Gaussian distribution. Then we feed the face image and the noise samples into the Mask+IF and Mask+IF+diverse models with the same facial mask. Figure 5 shows the generated caricature images under different noises but with the same facial mask. The results demonstrate that the Mask+IF model produces deterministic outputs with negligible changes. Such an observation is consistent with previous work [19] on the noise ignorance problem. On the other hand, the outputs of the Mask+IF+diverse model are more diversified, showing that our diversity loss deals well with this problem. Meanwhile, we can see vivid details in the results of both models, indicating that our model is able to enhance the diversity without sacrificing the visual quality of the generated caricatures.

Figure 6: Qualitative comparison of our model against the baseline methods. From left to right: input, ground-truth, facial mask, results of Pix2Pix, BicycleGAN, PG, and our model. Please pay attention to the exaggerated facial regions, including eyes, mouth, nose, and so on.

4.4 Comparison with Baselines

We also compare the proposed model with the state-of-the-art models. The qualitative comparison results are given in Figure 6. Please note that the ground-truth should only be used as a reference in the evaluation because caricature generation is not a unique-solution problem with an exact pixel-level mapping. In other words, there can be many plausible caricatures for a given face image.

The figure reveals that due to the weakly-paired nature of our problem, models originally designed for pixel-wise image translation either cannot converge well, such as Pix2Pix, or generate somewhat identical images as the inputs, such as BicycleGAN. PG produces images with better exaggerations with respect to the given caricature landmarks. However, as it heavily relies on the reconstruction loss, its outputs are blurry. In contrast, the outputs of our model are sharper. More importantly, they balance much better between the plausibility, identity and exaggeration, and therefore are visually much better than the results by the state-of-the-art methods.

In addition to the qualitative comparison, we also quantitatively measure the performance of our model against the state-of-the-art models in a user study. Since a face image may have multiple caricature counterparts, traditional evaluation metrics for image-to-image translation models, such as SSIM and PSNR, are not applicable. Instead, we use human judgments for a more perceptually reliable evaluation. We randomly pick a fixed number of face images for each person from the test set. For each face image, we generate a caricature image using each of the state-of-the-art models and our model. We then ask participants to score the generated images. Each participant is assigned 50 groups of images, with each group containing the corresponding face image, the ground-truth caricature image and a generated caricature image. The participants are required to score each generated image according to the following three aspects: (1) plausibility, whether the image is plausible enough; (2) identity preservation, whether the image has the same identity as the input face and the ground-truth caricature; (3) exaggeration, whether the generated image has a similar exaggeration (and a correct viewpoint) as the ground-truth caricature image. For each aspect, a caricature image receives a score on a fixed numeric scale, and we average the scores of all the participants.

The perceptual scores are given in Table 1. The Pix2Pix model receives the worst perceptual scores in all three aspects. Because of its pixel-wise translation, BicycleGAN performs well in the plausibility and identity preservation aspects while performing poorly in the exaggeration aspect. On the other hand, the PG model addresses exaggeration well but has low scores in the plausibility and identity preservation aspects (due to its blurry outputs). Overall, our model performs well in all three perceptual aspects and produces the best all-around perceptual performance in terms of user rating scores.

Methods Plausibility Identity Exaggeration Avg.
Pix2Pix 1.92 1.63 2.15 1.90
BicycleGAN 5.32 6.91 3.67 5.30
PG 2.86 2.85 5.08 3.60
Ours 5.45 5.34 6.18 5.66
Table 1: Perceptual scores of different models. A higher score indicates that a perceptual aspect is better preserved, and the lowest score indicates that the aspect is totally lost.

5 Conclusions

We propose a CariGAN model based on conditional generative adversarial networks to address the four fundamental aspects of the caricature generation task, i.e., Identity Preservation, Plausibility, Exaggeration and Diversity. Experiments demonstrate that using a facial mask as a condition of the cGAN model is crucial for generating appropriate exaggerations. It is also shown that the proposed image fusion mechanism can regularize our model to generate caricatures that are visually appealing in both global and local appearance. The diversity loss further encourages the model to produce diversified outputs given different random noises while preserving vivid exaggerations and accurate identity. Our model generates promising caricatures that handle all four aspects of this task to a large degree, and it clearly outperforms the state-of-the-art models in a user study. In future work, we plan to further improve the performance in terms of these four aspects and extend the model to generate higher-resolution caricatures.

References