Disentangled Makeup Transfer with Generative Adversarial Network

07/02/2019
by   Honglun Zhang, et al.
Shanghai Jiao Tong University

Facial makeup transfer is a widely used technique that aims to transfer the makeup style from a reference face image to a non-makeup face. Existing methods leverage an adversarial loss so that the generated faces are of high quality and as realistic as real ones, but they are only able to produce fixed outputs. Inspired by recent advances in disentangled representation, in this paper we propose DMT (Disentangled Makeup Transfer), a unified generative adversarial network that handles different scenarios of makeup transfer. Our model contains an identity encoder as well as a makeup encoder to disentangle the personal identity and the makeup style of arbitrary face images. Based on the outputs of the two encoders, a decoder is employed to reconstruct the original faces, and a discriminator is applied to distinguish real faces from fake ones. As a result, our model can not only transfer the makeup styles from one or more reference face images to a non-makeup face with controllable strength, but also produce diverse outputs with styles sampled from a prior distribution. Extensive experiments demonstrate that our model outperforms existing methods, generating high-quality results for different scenarios of makeup transfer.


1 Introduction

Makeup is a widely used skill for improving one's facial appearance, but it is never easy to become a professional makeup artist, as cosmetic products and tools differ greatly in brand, category and usage. As a result, it has become increasingly popular to try different makeup styles on photos or short videos with virtual makeup software. Facial Makeup Transfer [Tong et al.2007] provides an effective solution by naturally transferring the makeup style from a well-suited reference face image to a non-makeup face, and can be utilized in a wide range of applications such as photography, video, entertainment and fashion.

In contrast to traditional image processing methods [Guo and Sim2009, Li et al.2015, Tong et al.2007] such as image gradient editing and physics-based manipulation, recent literature on facial makeup transfer mostly employs deep neural networks [Bengio et al.2013] to learn the mapping from non-makeup face images to makeup ones, and leverages the adversarial loss of GAN (Generative Adversarial Network) [Goodfellow et al.2014] to generate realistic fake images. In order to accurately capture the makeup style, several methods [Liu et al.2016, Li et al.2018] have been proposed to evaluate the differences between the generated face and the reference face on crucial cosmetic components such as foundation, eyebrow, eye shadow and lipstick.

However, existing approaches mainly focus on makeup transfer between two face images and can only produce fixed outputs, which we denote as pair-wise makeup transfer, as Fig. 1 illustrates. In fact, there are several other scenarios of makeup transfer, such as controlling the strength of the makeup style (interpolated makeup transfer), blending the makeup styles of two or more reference images (hybrid makeup transfer) and producing diverse outputs from a single non-makeup face without any reference images (multi-modal makeup transfer). To the best of our knowledge, these scenarios have received little attention and cannot be handled well by existing methods.

Figure 1: Different scenarios of makeup transfer. Most related work focuses only on pair-wise makeup transfer. In contrast, our model can achieve all of the scenarios.

In this paper, we propose DMT (Disentangled Makeup Transfer), a unified generative adversarial network to achieve different scenarios of makeup transfer. Inspired by recent advances in Disentangled Representation [Huang et al.2018, Ma et al.2018, Lee et al.2018], our model utilizes an identity encoder as well as a makeup encoder to disentangle the personal identity and the makeup style for arbitrary face images. Based on the outputs of the two encoders, we further employ a decoder to reconstruct the original faces. We also apply a discriminator to distinguish real face images from fake ones. Thanks to such a disentangled architecture, our model can not only transfer the makeup styles from one or more reference face images to a non-makeup face with controllable strength, but also produce various outputs with styles sampled from a prior distribution. Furthermore, we leverage the attention mask [Chen et al.2018, Mejjati et al.2018, Yang et al.2018, Zhang et al.2018] to refine the transfer results so that the makeup-unrelated content is well preserved. We perform extensive experiments on a dataset that contains both makeup and non-makeup face images [Li et al.2018]. Both qualitative and quantitative results demonstrate that our model is superior to existing literature by generating high-quality faces for different makeup transfer scenarios.

Our contributions are summarized as follows.

  • We propose DMT, a unified model to achieve different scenarios of makeup transfer. To the best of our knowledge, we are the first to integrate disentangled representation to solve the task of facial makeup transfer.

  • With such a disentangled architecture, our model is able to conduct different scenarios of makeup transfer, including pair-wise, interpolated, hybrid and multi-modal transfer, which cannot all be achieved by related methods.

  • Extensive experiments demonstrate the superiority of our model over state-of-the-art methods, both qualitatively and quantitatively.

2 Related Work

Generative Adversarial Network (GAN) [Goodfellow et al.2014] is a powerful method for training generative models of complex data and has been proved effective in a wide range of computer vision tasks, including image generation [Radford et al.2015, Zhang et al.2017, Gulrajani et al.2017], image-to-image translation [Isola et al.2017, Zhu et al.2017, Choi et al.2017], inpainting [Liu et al.2018], super-resolution [Ledig et al.2017] and so on. In this paper, we leverage the adversarial loss of GAN to generate realistic faces that are indistinguishable from real ones.

Facial Makeup Transfer aims to transfer the makeup style from a reference face image to a non-makeup face. Traditional methods include [Guo and Sim2009] and [Li et al.2015], which decompose face images into several layers and conduct makeup transfer within each layer. [Liu et al.2016] proposes an optimization-based deep localized makeup transfer network that applies different transfer methods to different cosmetic components. In contrast, [Li et al.2018] trains a learning-based model with dual inputs and outputs to achieve pair-wise makeup transfer and only requires a forward pass at inference time. Other related topics include Unpaired Image-to-Image Translation [Zhu et al.2017, Kim et al.2017, Yi et al.2017], where images of two domains are translated bi-directionally, and Style Transfer [Gatys et al.2015, Johnson et al.2016], where a transfer image is synthesized from a content image and a style image. However, an image-to-image translation model trained on makeup and non-makeup faces can only learn domain-level mappings, thus producing a fixed output for a given non-makeup face regardless of the reference image. Style transfer models can conduct makeup transfer by treating the makeup face as the style image and the non-makeup face as the content image, but they only learn global features of the whole image and fail to focus on crucial cosmetic components.

Disentangled Representation means decomposing the original input into several independent hidden codes so that the features of different components can be better learned. [Huang et al.2018, Lee et al.2018] disentangle images into domain-invariant content codes and domain-specific style codes to obtain multi-modal outputs for unpaired image-to-image translation tasks. [Ma et al.2018] introduces a disentangled representation of three components, the foreground, the background and the body pose, to manipulate images of pedestrians and fashion models. In this paper, we propose to disentangle an arbitrary face image into two independent components, the personal identity and the makeup style.

Attention Mask is an effective mechanism widely used in image-to-image translation [Chen et al.2018, Mejjati et al.2018, Yang et al.2018] and image editing [Zhang et al.2018] tasks, which learns to localize the region of interest and preserve the unrelated content. In this paper, we employ the attention mask in our model so that the makeup-unrelated region, including the hair, the clothing and the background, remains unchanged after transfer.

Figure 2: The disentangled architecture of DMT, which contains four modules: the Identity Encoder $E_I$, the Makeup Encoder $E_M$, the Decoder $G$ and the Discriminator $D$.

3 Methodology

3.1 Disentangled Makeup Transfer

For a given face image $x$, which can be either a non-makeup face or a makeup face with an arbitrary style, we propose to disentangle it into two components that are independent of each other: the personal identity and the makeup style. Facial makeup transfer can then be achieved by combining the same personal identity with different makeup styles, just as a person may wear different clothes.

Based on the above assumption, we propose DMT (Disentangled Makeup Transfer), a unified and flexible generative adversarial network that conducts different scenarios of makeup transfer. As Fig. 2 shows, our model contains four modules: the Identity Encoder $E_I$, the Makeup Encoder $E_M$, the Decoder $G$ and the Discriminator $D$. For the given face image $x$, we obtain the corresponding identity code $i_x$ and makeup code $m_x$ with $E_I$ and $E_M$ as follows:

$$i_x = E_I(x), \qquad m_x = E_M(x)$$

We suppose that $m_x$ captures the makeup style of $x$, including crucial cosmetic components like foundation, eyebrow, eye shadow and lipstick, whereas $i_x$ conveys the makeup-unrelated content such as personal identity, clothing and background. Furthermore, $i_x$ and $m_x$ should be independent of each other, as they describe different aspects of $x$, which satisfies the definition of disentangled representation. Based on $i_x$ and $m_x$, we leverage $G$ to obtain the reconstructed image $\tilde{x} = G(i_x, m_x)$, which should lose no information after encoding and decoding. This is regulated by the following reconstruction loss:

$$\mathcal{L}_{rec} = \big\lVert \tilde{x} - x \big\rVert_1$$

where $\lVert \cdot \rVert_1$ is the $L_1$ norm used to calculate the absolute difference between $\tilde{x}$ and $x$.
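As a concrete illustration, the following minimal NumPy sketch (not the authors' code; the encoder and decoder are hypothetical callables standing in for $E_I$, $E_M$ and $G$) shows the encode-decode round trip and the $L_1$ reconstruction term:

```python
import numpy as np

def l1_reconstruction_loss(x, x_rec):
    """Mean absolute difference between a face image and its reconstruction."""
    return float(np.mean(np.abs(x.astype(np.float64) - x_rec.astype(np.float64))))

def reconstruct(x, identity_encoder, makeup_encoder, decoder):
    """Encode a face into identity / makeup codes and decode them back.

    identity_encoder, makeup_encoder and decoder are hypothetical callables
    that map NumPy arrays to NumPy arrays.
    """
    i_x = identity_encoder(x)   # identity code
    m_x = makeup_encoder(x)     # makeup code
    return decoder(i_x, m_x)    # reconstructed face
```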

3.2 Pair-wise Makeup Transfer

Pair-wise makeup transfer aims to swap the makeup styles of two face images, thus producing an after-makeup face and an anti-makeup face, as Fig. 1 shows. Given another face image $y$, we apply $E_M$ again to obtain the corresponding makeup code, as Fig. 2 shows:

$$m_y = E_M(y)$$

Based on $i_x$ and $m_y$, we obtain the transfer result $\hat{x}$ as follows, which is supposed to preserve the personal identity of $x$ and synthesize the makeup style of $y$:

$$\hat{x} = G(i_x, m_y)$$

It should be noted that both $x$ and $y$ can be either makeup or non-makeup faces, leading to four different cases of pair-wise makeup transfer, as Table 1 shows, which well cover the objectives investigated in most related work. For training, we randomly set $x$ and $y$ as makeup or non-makeup images with equal probability, which helps our model learn to handle all the cases.

x    y    Objective
-    -    -
-    ✓    add makeup
✓    -    remove makeup
✓    ✓    swap makeup
Table 1: Four different cases of pair-wise makeup transfer, where - means non-makeup and ✓ means makeup.

As for personal identity preservation, it is improper to directly compare $\hat{x}$ and $x$ at the raw pixel level. Instead, we utilize a VGG-16 [Simonyan and Zisserman2015] model pre-trained on the ImageNet dataset [Russakovsky et al.2015] to compare their activations in a certain hidden layer, as deep neural networks have proved effective in extracting high-level features [Gatys et al.2015]. In order to preserve the personal identity of $x$, we employ the following perceptual loss to measure the difference between $\hat{x}$ and $x$ in the $l$-th layer of VGG-16:

$$\mathcal{L}_{per} = \big\lVert F_l(\hat{x}) - F_l(x) \big\rVert_2$$

where $\lVert \cdot \rVert_2$ is the $L_2$ norm and $F_l(\cdot)$ denotes the output of the $l$-th layer. By minimizing the above perceptual loss, we ensure that the original high-level features of $x$ are well preserved in $\hat{x}$.
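A minimal sketch of such a perceptual loss, assuming a squared $L_2$ distance on VGG-16 relu4_1 activations (named block4_conv1 in the Keras weights); this is an illustration rather than the authors' implementation:

```python
import tensorflow as tf

# Pre-trained VGG-16 truncated at relu4_1 (block4_conv1 in Keras naming).
_vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
_feat = tf.keras.Model(_vgg.input, _vgg.get_layer("block4_conv1").output)

def perceptual_loss(x, x_hat):
    """Mean squared distance between relu4_1 activations of x and the transfer result x_hat.

    x, x_hat: float32 tensors of shape (batch, H, W, 3) with RGB values in [0, 255].
    """
    fx = _feat(tf.keras.applications.vgg16.preprocess_input(x))
    fy = _feat(tf.keras.applications.vgg16.preprocess_input(x_hat))
    return tf.reduce_mean(tf.square(fx - fy))
```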

Figure 3: Examples of parsing masks and cosmetic regions.
Figure 4: Calculation of the makeup loss. We first perform histogram matching between the corresponding cosmetic regions of $x$ and $y$ to produce a ground truth that shares the same color distribution as $y$ on each region while preserving the shape information of $x$, and then calculate the makeup loss between the transfer result and this ground truth on each cosmetic region.

Another challenge is how to evaluate the instance-level consistency between $\hat{x}$ and $y$ in makeup style. Here we leverage the makeup loss proposed by [Li et al.2018]. As Fig. 3 shows, we obtain the parsing mask for each face image, which consists of the following semantic parts: background, face, left / right eyebrow, left / right eye, nose, upper / lower lip, mouth, hair, left / right ear and neck, and can be obtained by training a semantic segmentation model [Zhao et al.2017, Yu et al.2018] on face parsing datasets [Smith et al.2013, Lee et al.2019]. Based on the parsing mask, we extract the following four regions to cover the crucial cosmetic components of each face image.

  • Face covers the foundation, including face, nose, left / right ear, neck.

  • Brow covers the eyebrow, including left / right eyebrow.

  • Eye covers the eye shadow. We extract two rectangle regions enclosing the eyes and exclude overlapping content of hair, left / right eye, left / right eyebrow.

  • Lip covers the lipstick, including upper / lower lip.

As Fig.3 shows, the makeup style of each cosmetic region mainly depends on its color distribution. For example, adding lipstick to the non-makeup face in Fig.3 can be achieved simply by replacing the lip color with that of the makeup face. Therefore, the transfer result $\hat{x}$ is supposed to share a similar color distribution with $y$ on each cosmetic region. To meet this requirement, we first perform histogram matching on the corresponding regions of $x$ and $y$ to produce a ground truth $x^{HM}$ as Fig.4 shows, which shares the same color distribution as $y$ on each region and preserves the shape information of $x$. Then we calculate the makeup loss between $\hat{x}$ and $x^{HM}$ on the different cosmetic regions with the $L_2$ norm:

$$\mathcal{L}_{makeup} = \sum_{r \in \{face,\, brow,\, eye,\, lip\}} \lambda_r \big\lVert \hat{x} \odot M_r - x^{HM} \odot M_r \big\rVert_2$$

where $M_r$ denotes the mask of cosmetic region $r$, $\odot$ is element-wise multiplication, and $\lambda_{face}$, $\lambda_{brow}$, $\lambda_{eye}$ and $\lambda_{lip}$ are the weights used to combine the different loss terms.
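To make the construction of the ground truth concrete, here is a small NumPy sketch of single-channel histogram matching and a per-region penalty; it assumes a mean-squared difference and per-channel application, and is only an illustration of the idea rather than the authors' exact procedure:

```python
import numpy as np

def match_histogram(src, ref):
    """Remap the intensities of `src` so its histogram matches that of `ref` (one channel)."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt).astype(np.float64) / src.size
    r_cdf = np.cumsum(r_cnt).astype(np.float64) / ref.size
    matched = np.interp(s_cdf, r_cdf, r_vals)       # map the source CDF onto reference values
    return matched[s_idx].reshape(src.shape)

def region_makeup_loss(x_hat, ground_truth, region_mask):
    """Mean squared difference between the transfer result and the histogram-matched
    ground truth, restricted to one cosmetic region (boolean mask)."""
    m = region_mask.astype(bool)
    diff = x_hat[m].astype(np.float64) - ground_truth[m].astype(np.float64)
    return float(np.mean(diff ** 2))
```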

Based on the perceptual loss and the makeup loss, the transfer result $\hat{x}$ generated by $G$ not only preserves the personal identity of $x$ but also satisfies the makeup style of $y$. As Fig. 2 shows, we apply the encoders again to $\hat{x}$ to obtain the corresponding identity code and makeup code:

$$\hat{i} = E_I(\hat{x}), \qquad \hat{m} = E_M(\hat{x})$$

To ensure one-to-one mappings between face images and identity / makeup codes, we employ the following Identity Makeup Reconstruction Loss (abbreviated as IMRL in Fig. 2), so that the disentangled representation remains unchanged after decoding and encoding:

$$\mathcal{L}_{IMRL} = \lambda_i \big\lVert \hat{i} - i_x \big\rVert_1 + \lambda_m \big\lVert \hat{m} - m_y \big\rVert_1$$

where $\lambda_i$ and $\lambda_m$ are the weights of the identity term and the makeup term.

3.3 Interpolated Makeup Transfer

Interpolated makeup transfer is a natural extension of pair-wise makeup transfer, as it aims to control the strength of the makeup style. Based on the disentangled representation discussed in the previous sections, we can easily achieve this by combining the makeup codes of $x$ and $y$ with a controlling parameter $\alpha \in [0, 1]$. As $\alpha$ increases from 0 to 1, the makeup style of the transfer result transits from that of $x$ to that of $y$ accordingly:

$$\hat{x}_\alpha = G\big(i_x,\ (1 - \alpha)\, m_x + \alpha\, m_y\big)$$
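A minimal sketch of this interpolation on the makeup codes (the decoder is a hypothetical callable standing in for $G$):

```python
import numpy as np

def interpolate_makeup(decoder, i_x, m_x, m_y, alpha):
    """Decode the source identity with a makeup code blended between source and reference.

    alpha = 0 keeps the source style, alpha = 1 fully adopts the reference style.
    """
    m_blend = (1.0 - alpha) * np.asarray(m_x) + alpha * np.asarray(m_y)
    return decoder(i_x, m_blend)
```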

3.4 Hybrid Makeup Transfer

We can also achieve hybrid makeup transfer by blending multiple makeup styles. Given $K$ reference images $y_1, \dots, y_K$, we obtain their makeup codes $m_{y_1}, \dots, m_{y_K}$ and perform hybrid makeup transfer with controlling weights $\alpha_1, \dots, \alpha_K$ that sum to one:

$$\hat{x}_{hybrid} = G\Big(i_x,\ \sum_{k=1}^{K} \alpha_k\, m_{y_k}\Big)$$
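Analogously, a sketch of hybrid transfer that blends an arbitrary number of reference makeup codes (again with a hypothetical decoder callable):

```python
import numpy as np

def hybrid_makeup(decoder, i_x, reference_codes, weights):
    """Blend K reference makeup codes with normalized weights and decode the result."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                                   # enforce that the weights sum to 1
    m_blend = np.tensordot(w, np.stack(reference_codes), axes=1)
    return decoder(i_x, m_blend)
```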

3.5 Multi-Modal Makeup Transfer

Multi-modal makeup transfer aims to produce diverse outputs from a single non-makeup face without any reference image. As Fig. 2 shows, we randomly sample a makeup style $z$ from a prior distribution such as the standard normal distribution $\mathcal{N}(0, I)$ and obtain the corresponding decoded result:

$$\hat{x}_z = G(i_x, z)$$
Figure 5: Detailed structures of $E_I$, $E_M$, $G$ and $D$, where blocks of different colors denote different types of neural layers.

As a result, $\hat{x}_z$ only depends on $x$ and the random style $z$, and multi-modal makeup transfer can be achieved by sampling multiple styles to generate different outputs.
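A sketch of this sampling procedure, assuming the prior is the standard normal distribution over the makeup-code dimension (the decoder is again a hypothetical callable):

```python
import numpy as np

def multimodal_transfer(decoder, i_x, code_dim, n_samples, seed=0):
    """Generate several makeup results for one face by sampling styles from N(0, I)."""
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_samples):
        z = rng.standard_normal(code_dim)    # random makeup code drawn from the prior
        outputs.append(decoder(i_x, z))
    return outputs
```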

3.6 Attention Mask

We leverage the attention mask [Chen et al.2018, Mejjati et al.2018, Yang et al.2018, Zhang et al.2018], widely used in image-to-image translation tasks, to protect the makeup-unrelated content from being altered. Fig. 5 illustrates the network structure of DMT in detail, where the model conducts pair-wise makeup transfer between $x$ and $y$. Apart from generating the face image $\hat{x}$, $G$ also learns to produce an attention mask $A$ that localizes the makeup-related region, where higher values indicate stronger relevance. Based on this definition of $A$, we obtain the refined result $\hat{x}^A$ by selectively extracting the related content from $\hat{x}$ and copying the rest from the original face $x$:

$$\hat{x}^A = A \odot \hat{x} + (1 - A) \odot x$$

where $\odot$ denotes element-wise multiplication and $1 - A$ inverts the mask to obtain the unrelated region.
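The refinement itself is a simple convex combination; a sketch (attention values assumed in [0, 1] and broadcastable over channels):

```python
import numpy as np

def refine_with_attention(generated, original, attention):
    """Keep the makeup-related region from the generated face and the rest from the input.

    generated, original: (H, W, 3) float arrays; attention: (H, W, 1) values in [0, 1].
    """
    return attention * generated + (1.0 - attention) * original
```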

Figure 6: Examples of the makeup-related region and the generated attention mask.

As the parsing mask of each face image is available, we manually obtain the makeup-related region as Fig. 6 shows by excluding background, left / right eye and hair from the parsing mask. This region serves as the ground truth for $A$ via the following attention loss:

$$\mathcal{L}_{att} = \big\lVert A - A^{gt} \big\rVert_2$$

where $A^{gt}$ denotes the makeup-related region derived from the parsing mask.

3.7 Other Loss Functions

In this section, we briefly discuss some other loss functions that are necessary or beneficial to train our model.

Adversarial Loss. As Fig. 2 shows, $D$ learns to distinguish real faces from fake ones by minimizing the following adversarial loss [Goodfellow et al.2014]:

$$\mathcal{L}_{adv}^{D} = \mathbb{E}_{x}\big[(D(x) - 1)^2\big] + \mathbb{E}_{\hat{x}}\big[D(\hat{x})^2\big]$$

where the LSGAN objectives [Mao et al.2017] are applied to stabilize the training process and generate faces of higher quality. In contrast, $G$ tries to synthesize fake images to fool $D$, so the adversarial loss of the generator acts in the opposite direction:

$$\mathcal{L}_{adv}^{G} = \mathbb{E}_{\hat{x}}\big[(D(\hat{x}) - 1)^2\big]$$
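For reference, a sketch of the LSGAN objectives with the common 0/1 targets (the exact target values and scaling are an assumption, not taken from the paper):

```python
import tensorflow as tf

def lsgan_d_loss(real_logits, fake_logits):
    """Least-squares discriminator loss: push real outputs toward 1 and fake outputs toward 0."""
    return 0.5 * (tf.reduce_mean(tf.square(real_logits - 1.0))
                  + tf.reduce_mean(tf.square(fake_logits)))

def lsgan_g_loss(fake_logits):
    """Least-squares generator loss: push the discriminator's output on fakes toward 1."""
    return 0.5 * tf.reduce_mean(tf.square(fake_logits - 1.0))
```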

KL Loss. As the random style $z$ is sampled from a prior distribution, the learned makeup codes $m_x$ and $m_y$ should also follow the same distribution:

$$\mathcal{L}_{KL} = D_{KL}\big(p(m) \,\|\, \mathcal{N}(0, I)\big)$$

where $D_{KL}(\cdot\,\|\,\cdot)$ is the KL divergence and $p(m)$ denotes the distribution of the learned makeup codes.

Total Variation Loss. To encourage spatial smoothness of the attention mask, we impose the total variation loss [Pumarola et al.2018] on $A$:

$$\mathcal{L}_{tv} = \sum_{h,w} \Big[ (A_{h+1,w} - A_{h,w})^2 + (A_{h,w+1} - A_{h,w})^2 \Big]$$
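A sketch of a standard total variation penalty on the attention mask, using squared differences between neighboring values (the exact form used in the paper may differ):

```python
import tensorflow as tf

def total_variation_loss(mask):
    """Penalize differences between neighboring attention values to encourage smooth masks.

    mask: tensor of shape (batch, H, W, 1) with values in [0, 1].
    """
    dh = mask[:, 1:, :, :] - mask[:, :-1, :, :]   # vertical neighbors
    dw = mask[:, :, 1:, :] - mask[:, :, :-1, :]   # horizontal neighbors
    return tf.reduce_mean(tf.square(dh)) + tf.reduce_mean(tf.square(dw))
```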

Full Objective. By combining the above losses, the full objectives for adversarial learning are defined as follows: the discriminator $D$ minimizes its adversarial loss, while the generator modules ($E_I$, $E_M$ and $G$) jointly minimize a weighted sum of the generator adversarial loss and the reconstruction, perceptual, makeup, identity-makeup reconstruction, attention, KL and total variation losses, where the weights balance the different loss terms.

4 Implementation

We implement DMT with TensorFlow (https://www.tensorflow.org/) and conduct all the experiments on an NVIDIA Tesla P100 GPU. We have published an open-source release of our code as well as the pre-trained model at https://github.com/Honlan/DMT.

Figure 7: Ablation study by removing different loss terms from DMT respectively.

In Fig. 5, we use blocks of different colors to denote different types of neural layers and illustrate the network structures of $E_I$, $E_M$, $G$ and $D$ in detail. The settings of the convolution layers, i.e. the number of filters, the kernel size and the stride, are specified by the attached text. We apply instance normalization [Ulyanov et al.2016] in $E_I$, adaptive instance normalization (AdaIN) [Huang and Belongie2017] as well as layer normalization [Ba et al.2016] in $G$, and use relu as the default nonlinearity for $E_I$, $E_M$ and $G$. No normalization layers are applied in $E_M$, as they would remove the original mean and variance that contain important makeup information. In contrast, $D$ consists of six convolution layers with leaky relu.

The makeup code of each face image is a low-dimensional vector, as Fig. 5 shows. In order to blend the identity information with the makeup information, we use a multilayer perceptron that takes the makeup code as input and produces two hidden codes, which serve as the dynamic mean and variance for the AdaIN layers of $G$. Lastly, $G$ contains two branches that produce the face image with a tanh activation and the attention mask with a sigmoid activation, which are further combined with the original face according to the attention-based refinement described in Section 3.6.
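The AdaIN mechanism referred to above can be sketched as follows: the per-channel mean and variance of the decoder features are replaced by statistics predicted from the makeup code (the MLP itself is omitted; shapes are assumptions for illustration):

```python
import tensorflow as tf

def adain(content_feat, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization: re-normalize content features with style statistics.

    content_feat: (batch, H, W, C) decoder features;
    style_mean, style_std: (batch, C) tensors produced by an MLP from the makeup code.
    """
    mean, var = tf.nn.moments(content_feat, axes=[1, 2], keepdims=True)
    normalized = (content_feat - mean) / tf.sqrt(var + eps)
    return normalized * style_std[:, None, None, :] + style_mean[:, None, None, :]
```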

5 Experiments

In this section, we first conduct ablation study to investigate the individual contributions of each component. Then we demonstrate the superiority of our model by comparing against state-of-the-arts. Lastly, we apply our model to perform different scenarios of makeup transfer.

5.1 Dataset

We utilize the MT (Makeup Transfer) dataset released by [Li et al.2018] to conduct all the experiments, which contains both non-makeup and makeup female face images along with the corresponding parsing masks. We follow the splitting strategy of [Li et al.2018] by randomly selecting a subset of non-makeup and makeup images as the test set and using all the other images for training.

5.2 Training

The training images are resized, randomly cropped and randomly flipped horizontally for data augmentation. All the neural parameters are initialized with the He initializer [He et al.2015], and we employ the Adam optimizer [Kingma and Ba2014] for training.

We set several of the loss weights following the configurations of [Huang et al.2018, Lee et al.2018, Pumarola et al.2018]. As for the other weights, we tried several settings and finally arrived at a proper combination under which all the loss terms are sufficiently learned.

The relu4_1 layer of VGG-16 is used to calculate the perceptual loss. We train DMT for a fixed number of epochs, keeping the learning rate constant at first and then linearly decaying it over the remaining epochs, with a fixed batch size. For each iteration, we randomly select two training images, makeup or not, and randomly assign them to $x$ and $y$.

5.3 Baselines

We compare our model against the following baselines.

  • DFM: Digital Face Makeup [Guo and Sim2009] is an early model based on image processing methods.

  • DTN: Deep localized makeup Transfer Network [Liu et al.2016] is an optimization-based model that transfers different cosmetic components separately.

  • BG: BeautyGAN [Li et al.2018] is the state-of-the-art for facial makeup transfer by training a generator with dual inputs and dual outputs.

  • CG: CycleGAN [Zhu et al.2017] can be utilized to achieve facial makeup transfer by treating makeup and non-makeup faces as two domains.

  • ST: Style Transfer [Gatys et al.2015] can be utilized to achieve facial makeup transfer by treating the makeup and non-makeup faces as the style and the content.

  • DIA: Deep Image Analogy [Liao et al.2017] achieves visual attribute transfer by image analogy to match high-level features extracted from deep neural networks.

Figure 8: Ablation study of the attention mask, the attention loss and the perceptual loss.
Figure 9: Transfer results of DMT against the baselines. DMT can achieve high-quality results and well preserve makeup-unrelated content.

5.4 Ablation Study

We construct several variants of DMT to investigate the individual contributions of different mechanisms. As Fig. 7 shows, the model fails to accurately add makeup to certain cosmetic components when trained without the corresponding loss terms. We also investigate the impacts of the attention mask, the attention loss and the perceptual loss, as Fig. 8 shows, where the residual image

$$r = \big\lvert \hat{x}^A - x \big\rvert$$

is employed to visualize the difference between the original non-makeup image $x$ and the transfer result $\hat{x}^A$.

Without the attention mask (the attention loss is removed accordingly), we observe that the background is wrongly modified, as the residual image shows. When the attention mask is applied without the attention loss, DMT can still learn the makeup-related region in an unsupervised manner, but the background is slightly altered (zoom in to see the details). No significant difference is observed without the perceptual loss. However, when both the attention mask and the perceptual loss are removed, the background suffers from obvious changes, which demonstrates that both the attention mask and the perceptual loss contribute to the preservation of makeup-unrelated content.

Figure 10: Transfer results and residual images of DMT against BG for more makeup styles.

5.5 Qualitative Comparison

Fig.9 illustrates the qualitative comparisons of DMT against the baselines on the test set, where the transfer results of DFM, ST, DTN, DIA and CG are provided by [Li et al.2018]. The results of DFM, DTN and DIA can capture the makeup styles more or less, but all suffer from severe artifacts. ST and CG can generate realistic faces, but fail to add makeup corresponding to the reference images. In contrast, both BG and DMT can produce realistic results of higher quality by properly transferring different cosmetic components. Furthermore, our model is superior to BG by also transferring the eyebrows and better preserving makeup-unrelated content including eyes, hair and background. In subsequent experiments, we mainly compare our model against BG as it outperforms the other five baselines significantly.

We display more comparisons of our model against BG in Fig.10. BG can produce visually satisfactory results, but always unavoidably alters makeup-unrelated content according to the residual images. In contrast, DMT can achieve a better tradeoff between makeup transfer and identity preservation by accurately focusing on the crucial cosmetic components.

5.6 Quantitative Comparison

We conduct the quantitative comparison via human evaluation. Based on the test set, we randomly select several makeup faces for each non-makeup image, obtaining a set of test pairs, and conduct pair-wise makeup transfer on them with both DMT and BG. Volunteers are instructed to choose the better result according to realism, quality of makeup transfer and preservation of unrelated content. As Table 2 shows, our model outperforms BG by winning more votes.

We also compare the reconstruction capability of DMT against BG. For a non-makeup face $x$ and a makeup one $y$, we employ BG to swap the makeup styles twice to obtain the reconstructed images, whereas DMT simply reconstructs $x$ and $y$ by encoding and decoding them as described in Section 3.1. We perform the above operations on the same test pairs with DMT and BG respectively to produce two reconstruction sets. Based on the original images and the reconstructed ones, we leverage three metrics, the Mean Squared Error (MSE), the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) [Wang et al.2004], to evaluate the reconstruction capability of DMT and BG. As Table 2 shows, DMT achieves better performance than BG on all three metrics, which demonstrates that our model faithfully maintains the one-to-one mappings between face images and identity / makeup codes.
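For completeness, a sketch of how these reconstruction metrics can be computed per image pair with NumPy and scikit-image (assuming a recent scikit-image version where the `channel_axis` argument is available):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reconstruction_metrics(original, reconstructed):
    """MSE, PSNR and SSIM between an original face and its reconstruction (uint8 RGB arrays)."""
    a = original.astype(np.float64) / 255.0
    b = reconstructed.astype(np.float64) / 255.0
    mse = float(np.mean((a - b) ** 2))
    psnr = peak_signal_noise_ratio(original, reconstructed, data_range=255)
    ssim = structural_similarity(original, reconstructed, channel_axis=-1, data_range=255)
    return mse, psnr, ssim
```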

        human (↑)    MSE (↓)    PSNR (↑)    SSIM (↑)
BG                   0.00513    24.0        0.924
DMT                  0.00028    36.1        0.992
Table 2: Quantitative comparison of DMT against BG, where ↑ means the higher the better and ↓ means the lower the better.
Figure 11: Visualization of the learned makeup distribution after dimension reduction.

5.7 Additional Results

We provide some additional results of DMT on other makeup transfer tasks, which cannot be achieved by BG or other related methods.

To better understand the learned makeup distribution, we calculate the makeup codes of all the makeup faces in the training set. After dimension reduction with t-SNE, we transform each code into a point in 2-D coordinates for visualization. As Fig. 11 shows, faces with similar makeup styles are mapped to nearby positions. For example, the faces in the green box all have a smoky-eyes makeup style, and those in the red box all wear bright red lipstick, which demonstrates the interpretability of the learned makeup representation.
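A minimal sketch of this visualization step with scikit-learn's t-SNE (hyperparameters are illustrative):

```python
import numpy as np
from sklearn.manifold import TSNE

def project_makeup_codes(codes, seed=0):
    """Reduce a (num_faces, code_dim) array of makeup codes to 2-D points for plotting."""
    return TSNE(n_components=2, random_state=seed).fit_transform(np.asarray(codes))
```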

Figure 12: Interpolated makeup transfer of DMT by controlling the parameter $\alpha$.

We employ DMT to conduct interpolated makeup transfer by controlling the parameter $\alpha$. As Fig. 12 shows, our model produces natural, high-quality results with increasing strength of the makeup style. Based on the disentangled representation, we can also achieve hybrid makeup transfer by blending the makeup codes of multiple reference faces into a non-makeup image, as Fig. 13 shows. Another interesting capability of our model is face interpolation, obtained by jointly combining the makeup codes as well as the identity codes. Fig. 14 and Fig. 15 illustrate the face interpolation results of DMT with and without the attention mask, respectively.

Figure 13: Hybrid makeup transfer of DMT by combining the makeup codes of multiple faces.
Figure 14: Face interpolation of DMT by combining the identity codes and makeup codes of multiple faces.
Figure 15: Face interpolation of DMT without attention mask by combining the identity codes and makeup codes of multiple faces.
Figure 16: Multi-modal makeup transfer of DMT by randomly sampling multiple makeup codes.

Based on a single non-makeup face, we can achieve multi-modal makeup transfer with DMT by sampling multiple makeup codes from the learned distribution. As Fig. 16 shows, we produce abundant makeup styles that differ in the colors of the crucial cosmetic components. Most of them look quite appealing and creative, but some may be rare in real life, such as the purple face in the first row. We also observe an evident boundary between the neck and the upper body when the foundation color changes a lot. In fact, this problem is caused by the parsing mask rather than by our model, as the semantic part labeled neck does not cover all the visible skin of the upper body (see the original face and the corresponding parsing mask in Fig. 16).

Lastly, we try to interpret the implications of the different dimensions of the makeup code. As Fig. 17 shows, we first calculate the normalized makeup code of a makeup face, then adjust the value of each dimension while keeping the others fixed to inspect its influence. We find that different dimensions are correlated with different makeup styles. For example, increasing the value of one particular dimension results in a whiter face and darker eye shadow. It should be noted that the implications of the different dimensions are learned in a totally unsupervised manner. If we provided additional annotations, such as the color of the lipstick or the name of the makeup style, and correlated them with certain dimensions, the learned makeup code would likely become even more interpretable and further disentangled.

Figure 17: Linear interpolation on different dimensions of the makeup code. In each column, the face in the blue box is the one closest to the input.

6 Conclusion

In this paper, we propose DMT (Disentangled Makeup Transfer), a unified and flexible model that achieves different scenarios of makeup transfer. Our model contains an identity encoder, a makeup encoder and a decoder to learn a disentangled representation, and leverages the attention mask to preserve makeup-unrelated content. Extensive experiments demonstrate that our model generates better results than state-of-the-art methods and supports different scenarios of makeup transfer that cannot be achieved by related work.

References

  • [Ba et al.2016] Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016.
  • [Bengio et al.2013] Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828, 2013.
  • [Chen et al.2018] Xinyuan Chen, Chang Xu, Xiaokang Yang, and Dacheng Tao. Attention-gan for object transfiguration in wild images. In ECCV, pages 167–184, 2018.
  • [Choi et al.2017] Yunjey Choi, Min-Je Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. CoRR, abs/1711.09020, 2017.
  • [Gatys et al.2015] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A neural algorithm of artistic style. CoRR, abs/1508.06576, 2015.
  • [Goodfellow et al.2014] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, pages 2672–2680, 2014.
  • [Gulrajani et al.2017] Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of wasserstein gans. In NeurIPS, pages 5769–5779, 2017.
  • [Guo and Sim2009] Dong Guo and Terence Sim. Digital face makeup by example. In CVPR, pages 73–79, 2009.
  • [He et al.2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, pages 1026–1034, 2015.
  • [Huang and Belongie2017] Xun Huang and Serge J. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, pages 1510–1519, 2017.
  • [Huang et al.2018] Xun Huang, Ming-Yu Liu, Serge J. Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In ECCV, pages 179–196, 2018.
  • [Isola et al.2017] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, pages 5967–5976, 2017.
  • [Johnson et al.2016] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, pages 694–711, 2016.
  • [Kim et al.2017] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. In ICML, pages 1857–1865, 2017.
  • [Kingma and Ba2014] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
  • [Ledig et al.2017] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, pages 105–114, 2017.
  • [Lee et al.2018] Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Diverse image-to-image translation via disentangled representations. In ECCV, pages 36–52, 2018.
  • [Lee et al.2019] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation. Technical Report, 2019.
  • [Li et al.2015] Chen Li, Kun Zhou, and Stephen Lin. Simulating makeup through physics-based manipulation of intrinsic image layers. In CVPR, pages 4621–4629, 2015.
  • [Li et al.2018] Tingting Li, Ruihe Qian, Chao Dong, Si Liu, Qiong Yan, Wenwu Zhu, and Liang Lin. Beautygan: Instance-level facial makeup transfer with deep generative adversarial network. In ACM MM, pages 645–653, 2018.
  • [Liao et al.2017] Jing Liao, Yuan Yao, Lu Yuan, Gang Hua, and Sing Bing Kang. Visual attribute transfer through deep image analogy. ACM Trans. Graph., 36(4):120:1–120:15, 2017.
  • [Liu et al.2016] Si Liu, Xinyu Ou, Ruihe Qian, Wei Wang, and Xiaochun Cao. Makeup like a superstar: Deep localized makeup transfer network. In IJCAI, pages 2568–2575, 2016.
  • [Liu et al.2018] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In ECCV, pages 89–105, 2018.
  • [Ma et al.2018] Liqian Ma, Qianru Sun, Stamatios Georgoulis, Luc Van Gool, Bernt Schiele, and Mario Fritz. Disentangled person image generation. In CVPR, pages 99–108, 2018.
  • [Mao et al.2017] Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In ICCV, pages 2813–2821, 2017.
  • [Mejjati et al.2018] Youssef Alami Mejjati, Christian Richardt, James Tompkin, Darren Cosker, and Kwang In Kim. Unsupervised attention-guided image-to-image translation. In NeurIPS, pages 3697–3707, 2018.
  • [Pumarola et al.2018] Albert Pumarola, Antonio Agudo, Aleix M. Martinez, Alberto Sanfeliu, and Francesc Moreno-Noguer. Ganimation: Anatomically-aware facial animation from a single image. In ECCV, pages 835–851, 2018.
  • [Radford et al.2015] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
  • [Russakovsky et al.2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. Imagenet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
  • [Simonyan and Zisserman2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [Smith et al.2013] Brandon M. Smith, Li Zhang, Jonathan Brandt, Zhe Lin, and Jianchao Yang. Exemplar-based face parsing. In CVPR, pages 3484–3491, 2013.
  • [Tong et al.2007] Wai-Shun Tong, Chi-Keung Tang, Michael S. Brown, and Ying-Qing Xu. Example-based cosmetic transfer. In PCCGA, pages 211–218, 2007.
  • [Ulyanov et al.2016] Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Instance normalization: The missing ingredient for fast stylization. CoRR, abs/1607.08022, 2016.
  • [Wang et al.2004] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Processing, 13(4):600–612, 2004.
  • [Yang et al.2018] Chao Yang, Taehwan Kim, Ruizhe Wang, Hao Peng, and C.-C. Jay Kuo. Show, attend and translate: Unsupervised image translation with self-regularization and attention. CoRR, abs/1806.06195, 2018.
  • [Yi et al.2017] Zili Yi, Hao (Richard) Zhang, Ping Tan, and Minglun Gong. Dualgan: Unsupervised dual learning for image-to-image translation. In ICCV, pages 2868–2876, 2017.
  • [Yu et al.2018] Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. Bisenet: Bilateral segmentation network for real-time semantic segmentation. In ECCV, pages 334–349, 2018.
  • [Zhang et al.2017] Han Zhang, Tao Xu, and Hongsheng Li. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, pages 5908–5916, 2017.
  • [Zhang et al.2018] Gang Zhang, Meina Kan, Shiguang Shan, and Xilin Chen. Generative adversarial network with spatial attention for face attribute editing. In ECCV, pages 422–437, 2018.
  • [Zhao et al.2017] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In CVPR, pages 6230–6239, 2017.
  • [Zhu et al.2017] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. CoRR, abs/1703.10593, 2017.