FairFaceGAN: Fairness-aware Facial Image-to-Image Translation

12/01/2020 · by Sunhee Hwang et al.

In this paper, we introduce FairFaceGAN, a fairness-aware facial Image-to-Image translation model that mitigates unwanted translation of protected attributes (e.g., gender, age, race) during facial attribute editing. Unlike existing models, FairFaceGAN learns fair representations with two separate latents: one related to the target attributes to translate, and the other unrelated to them. This strategy enables FairFaceGAN to separate the information about protected attributes from that about target attributes, and prevents unwanted changes to protected attributes while target attributes are edited. To evaluate the degree of fairness, we perform two types of experiments on the CelebA dataset. First, we compare fairness-aware classification performance when data are augmented by existing image translation methods and by FairFaceGAN, respectively. Second, we propose a new fairness metric, the Fréchet Protected Attribute Distance (FPAD), which measures how well protected attributes are preserved. Experimental results demonstrate that FairFaceGAN consistently improves fairness over existing image translation models. We also evaluate image translation performance, where FairFaceGAN achieves results competitive with existing methods.


1 Introduction

Artificial Intelligence (AI) systems have achieved remarkable success in a broad range of research fields such as computer vision, natural language processing, and audio analysis. However, the outputs of AI systems can be biased, since they heavily rely on human-collected datasets that may contain ethically sensitive stereotypes [geiger2020garbage]. Research and press articles have shown that several AI systems yield unfair results with respect to protected attributes such as gender, age, or race [propublica, sr_bias, unfair_od1, unfair_od2, unfair_od3, womanalso, balanced]. This is a critical problem for computer vision systems, which have already been deployed in diverse real-world applications without adjusting for demographic disparities. For example, the PULSE algorithm [pulse], which maps low-resolution faces to high-resolution images, tends to produce racially biased results, i.e., white skin, blue eyes, and brown hair, regardless of the input image [sr_bias]. Accordingly, to resolve this societal bias problem, researchers have directed their attention to developing fair computer vision models [balanced, discoverfair, lnl, womanalso, manalso, fair1, fair2].

Figure 1: Image translation results on the CelebA dataset [celeba]. Panel titles: Attractive (gender changed), Big Nose (race changed), Bald (gender changed), Bags Under Eyes (age changed). For each example, we present four facial images: the input image and the results of StarGAN, FixedPointGAN, and FairFaceGAN (ours), from left to right. (+) and (−) denote adding and removing the attribute of the input image, respectively. Red boxes indicate unwanted translation of protected attributes; green boxes denote preservation of protected attributes. Best viewed in color.

In this paper, we aim to improve fairness in Image-to-Image translation of facial attributes, whose goal is to edit attributes of input images. Even though recent methods based on Generative Adversarial Networks (GANs) [gan] have succeeded in synthesizing realistic facial images with the intended attribute edits, their outputs may contain unintended discriminatory factors. In Figure 1, we present several examples of discriminatory translation results: while translating target attributes, existing facial attribute editing models [star, fp] unintentionally modify protected attributes (e.g., gender, age, race) as well.

To address this problem, we propose a fairness-aware Image-to-Image translation model, namely FairFaceGAN, which maps input images into target domains while preserving protected attributes. Specifically, we introduce a new fair representation learning method that learns two separate latent spaces with different objectives: (i) one maps the target attributes adequately; (ii) the other preserves information about the protected attributes. By employing two decoupled latent spaces, FairFaceGAN successfully prevents unwanted translation while editing target attributes, as shown in the last column of each example in Figure 1. We note that our method can be easily extended to multiple protected attributes, as it separates target attribute-related information from the rest. Another merit of FairFaceGAN is that it does not require protected attribute annotations; instead, it exploits knowledge related to protected attributes from a pre-trained classification model. We believe this will largely benefit applications of our method, especially in circumstances where protected attribute labels cannot be acquired.

To compare FairFaceGAN with existing image translation models in terms of fairness, we design two kinds of experiments. In the first, we measure how fairness-aware classification performance improves when a biased training dataset is augmented by previous translation models and by ours, respectively. For this, we use the standard fairness metrics Equality of Opportunity [eqopp] and Equalized Odds [odds]. In the second, we propose a new fairness metric, the Fréchet Protected Attribute Distance (FPAD), inspired by the Fréchet Inception Distance (FID) [fid], to evaluate how well image translation models preserve protected attributes. In both types of experiments, FairFaceGAN shows consistently fairer results than the existing image translation methods. We also provide comparisons on the standard image translation metrics, FID and Kernel Inception Distance (KID), where FairFaceGAN achieves results comparable to the other models.

Our main contributions can be summarized as follows:

  • We introduce FairFaceGAN, which maps input images into target domains in a fair way with respect to multiple protected attributes.

  • To reduce the correlation between protected and target attributes in the mapping, we propose to learn two separate representations with different objectives: target attribute mapping and protected attribute preservation.

  • To achieve fairness, we present a knowledge transfer technique for fair translation on the target dataset. It enables our model to mitigate bias related to multiple protected attributes even when annotations for protected attributes are unavailable.

  • Through extensive experiments on CelebA, we demonstrate that FairFaceGAN produces the fairest results among existing Image-to-Image translation models in terms of Equality of Opportunity, Equalized Odds, and the proposed FPAD.

2 Related Work

2.1 Fairness in Computer Vision

In recent years, fairness in computer vision has become a popular research topic. Among the various types of fair methods, we briefly introduce two approaches to mitigating bias in visual recognition tasks: (1) reorganizing a biased dataset into a fair dataset (pre-processing), and (2) reducing bias through the model architecture or algorithm (in-processing).

Pre-processing.

Sattigeri et al. [ibmfairnessgan] proposed a GAN-based fair data generation method: the model is trained on a biased dataset and generates new data that are fair in terms of the protected attributes, which are then used to train a fairness-aware face attribute classification model. Quadrianto et al. [discoverfair] introduced a data-to-data translation method that transforms an original biased dataset into a new fair dataset. In this paper, we also address fairness in the image classification task by generating a fair dataset using our FairFaceGAN.

In-processing.

Zheng et al. [cvpr_vae] proposed a disentangling method that splits the feature representation into two subspaces, one relevant to the target labels and one irrelevant to them. Similarly, FFVAE [ffr] aims to separate protected attribute-related information from the rest. Park et al. [readme] proposed a fair disentangling method that represents target information, protected attribute information, and the mutual information of both. Unlike the above, Wang et al. [balanced] proposed an adversarial approach to reduce gender bias in a visual recognition model. However, most existing methods consider only a single binary protected attribute despite the diversity of demographic groups. In contrast, we introduce a fair method that mitigates biases related to multiple protected attributes in computer vision models.

2.2 Image-to-Image Translation

The main goal of the Image-to-Image translation task is to learn how to map images from a source domain into images of a target domain. Methods based on Conditional Generative Adversarial Networks (CGANs) [cgan, pix2pix] have shown great success with pixel-wise paired datasets in super-resolution [sr1], image in-painting [inpainting1], image restoration [derain], and image segmentation [segment1]. In addition, cycle-consistency adversarial networks (CycleGANs) [cycle] were introduced to learn a mapping between unpaired datasets, training Image-to-Image translation models in an unsupervised manner. Moreover, Choi et al. [star] proposed StarGAN, which reduces the computational cost of CycleGAN-based models; this unified, unsupervised Image-to-Image translation model learns a mapping between multiple domains effectively. However, we find that the learned mapping is biased with respect to protected attributes (see Figure 1). Some studies [fp, fair1, fair2] prevent unwanted information translation during mapping. Siddiquee et al. [fp] proposed FixedPointGAN, which generates unchanged images in same-domain translation, but it still produces biased results in cross-domain translation. In addition, fair representation methods based on semantic constraints [fair1] and on disentangling [fair2] have been developed. Inspired by [fair1, fair2], we also aim to train a fairness-aware image translation model by proposing a fair representation learning method.

Figure 2: The proposed Protected Attribute Classifier (PAC).

3 Proposed Method

In this work, we propose two modules: 1) the Protected Attribute Classifier (PAC), which learns high-level features of multiple protected attributes, and 2) FairFaceGAN, a fairness-aware facial Image-to-Image translation network that learns a fair mapping of multiple facial attributes across multiple domains. FairFaceGAN is the main network for fairness-aware Image-to-Image translation, and the PAC module is introduced to train FairFaceGAN without protected attribute annotations. In this section, we explain the two modules in sequence.

3.1 Protected Attribute Classifier (PAC)

As illustrated in Figure 2, PAC consists of two branches: one for predicting the protected attributes (gender $y_g$, age $y_a$, race $y_r$) and the other for predicting the domain label $y_d$. The encoder of PAC, a stack of convolutional layers, is shared by the two branches and followed by task-specific fully connected layers: $C_g$ (gender classifier), $C_a$ (age classifier), $C_r$ (race classifier), and $C_d$ (domain classifier). We define the objective function for PAC as follows:

$\mathcal{L}_{PAC} = \mathcal{L}_{CE}(C_g(f), y_g) + \mathcal{L}_{CE}(C_a(f), y_a) + \mathcal{L}_{CE}(C_r(f), y_r)$  (1)

where $\mathcal{L}_{CE}$ and $f$ respectively denote the cross-entropy loss and the flattened feature of the last layer of the shared encoder.
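For concreteness, a minimal PyTorch sketch of this multi-task objective follows; the encoder stub and the names (enc, C_g, C_a, C_r, pac_loss) are our illustrative notation, not a released implementation.

```python
# A minimal sketch of the PAC objective (Eq. 1), assuming a shared encoder
# followed by per-task linear classifiers; shapes and names are illustrative.
import torch
import torch.nn as nn

class PAC(nn.Module):
    def __init__(self, feat_dim=2048, n_ages=6, n_races=5):
        super().__init__()
        # Shared convolutional encoder (a ResNeXt-50 trunk in the paper); a stub here.
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 3, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(64, feat_dim))
        self.C_g = nn.Linear(feat_dim, 2)        # gender classifier
        self.C_a = nn.Linear(feat_dim, n_ages)   # age classifier
        self.C_r = nn.Linear(feat_dim, n_races)  # race classifier

    def forward(self, x):
        f = self.enc(x)  # flattened shared feature
        return f, self.C_g(f), self.C_a(f), self.C_r(f)

def pac_loss(model, x, y_g, y_a, y_r):
    ce = nn.CrossEntropyLoss()
    _, g, a, r = model(x)
    return ce(g, y_g) + ce(a, y_a) + ce(r, y_r)  # sum of per-task CE terms
```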

In addition, to transfer knowledge related to protected attributes from the learned PAC into FairFaceGAN, we train a domain discriminator so that it fails to distinguish the source domain (UTK dataset [utk]) from the target domain (CelebA dataset [celeba]), using a gradient reversal layer as in DANN [dann], since the representations of PAC and FairFaceGAN are trained on different domains. To do so, we optimize the following domain adversarial loss:

$\mathcal{L}_{DA} = \mathcal{L}_{CE}(C_d(\mathrm{GRL}(f)), y_d)$  (2)

where $\mathrm{GRL}(\cdot)$ denotes the gradient reversal layer.
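The gradient reversal layer takes only a few lines; the sketch below follows the standard DANN recipe, with the scaling factor lambda_ as an assumed hyperparameter.

```python
# A sketch of the gradient reversal layer (GRL) used for Eq. 2, as in DANN.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)  # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient sign so the encoder learns domain-invariant features.
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)

# Usage sketch: logits = C_d(grad_reverse(f)); loss = ce(logits, y_domain)
```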

Optimization.

We use the Adam optimizer with a learning rate of 0.001 and a batch size of 128. PAC converged within ten epochs on a single 1080 Ti GPU.

Figure 3: An overview of the proposed FairFaceGAN framework, which consists of an encoder-decoder generator, a discriminator, and Target Attribute Classifiers (TACs). Given an image $x$ and a target attribute $c$, the model learns, via the Fair Representation Loss (FRL) and the Protected Attribute Distance Loss (PADL), to generate $G(x, c)$ while acting fairly on protected attributes.

3.2 FairFaceGAN

FairFaceGAN aims to map input images into target facial attributes using a unified generator. As shown in Figure 3, it contains four components: one encoder-decoder generator, two target attribute classifiers (TACs), and one discriminator.

Given an input image $x$ and a target attribute vector $c$, we first concatenate them depth-wise. The combined input is fed into the encoder, which produces two latent representations: one for the target attributes ($z_t$) and the other for the remaining information ($z_p$). The two features are then concatenated and used as the input of our decoder to generate a fair image with the target attributes.
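The following schematic forward pass illustrates the depth-wise concatenation and the two decoupled latents; the layer shapes and the use of two separate encoder heads are illustrative assumptions, not the paper's exact architecture.

```python
# A schematic generator forward pass: broadcast the attribute vector to
# spatial maps, concatenate with the image, and split into z_t / z_p.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, n_attrs=5, z_dim=256):
        super().__init__()
        self.enc_t = nn.Sequential(nn.Conv2d(3 + n_attrs, z_dim, 4, 2, 1), nn.ReLU())
        self.enc_p = nn.Sequential(nn.Conv2d(3 + n_attrs, z_dim, 4, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(2 * z_dim, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x, c):
        # Depth-wise concatenation of image and target-attribute maps.
        c_map = c.view(c.size(0), -1, 1, 1).expand(-1, -1, x.size(2), x.size(3))
        h = torch.cat([x, c_map], dim=1)
        z_t = self.enc_t(h)  # target-attribute-related latent
        z_p = self.enc_p(h)  # latent preserving the remaining (protected) information
        return self.dec(torch.cat([z_t, z_p], dim=1))
```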

Auxiliary Classifier Generative Adversarial Network Loss.

We train FairFaceGAN with an adversarial loss so that the generated images are realistic. In addition, we add an auxiliary classification layer on top of the discriminator to predict the target attributes of the input image $x$ and the generated image $G(x, c)$. The adversarial loss with the auxiliary classifier is defined as follows:

$\mathcal{L}_{adv} = \mathbb{E}_{x}[D_{src}(x)] - \mathbb{E}_{x,c}[D_{src}(G(x,c))] + \mathbb{E}_{x,c'}[-\log D_{cls}(c' \mid x)] + \mathbb{E}_{x,c}[-\log D_{cls}(c \mid G(x,c))]$  (3)

where $D_{src}$ and $D_{cls}$ denote the realness and attribute-classification outputs of the discriminator, and $c'$ denotes the original attributes of $x$.
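A sketch of the corresponding discriminator and generator terms is given below, in the WGAN-with-gradient-penalty form reported in the Optimization paragraph; the assumption that D returns a (realness score, attribute logits) pair and the weight lambda_gp = 10 are ours.

```python
# A sketch of the adversarial + auxiliary classification objectives (Eq. 3),
# written in WGAN-GP form; D is assumed to return (score, attribute logits).
import torch
import torch.nn.functional as F

def d_loss(D, G, x, c_real, c_trg, lambda_gp=10.0):
    fake = G(x, c_trg).detach()
    src_real, cls_real = D(x)
    src_fake, _ = D(fake)
    # Critic term (minimized by D) plus real-image attribute classification.
    loss = src_fake.mean() - src_real.mean()
    loss = loss + F.binary_cross_entropy_with_logits(cls_real, c_real)
    # Gradient penalty on random interpolates between real and fake images.
    eps = torch.rand(x.size(0), 1, 1, 1, device=x.device)
    x_hat = (eps * x + (1 - eps) * fake).requires_grad_(True)
    src_hat, _ = D(x_hat)
    grad = torch.autograd.grad(src_hat.sum(), x_hat, create_graph=True)[0]
    gp = ((grad.view(grad.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()
    return loss + lambda_gp * gp

def g_adv_loss(D, G, x, c_trg):
    src_fake, cls_fake = D(G(x, c_trg))
    # Fool the critic and match the target attributes on generated images.
    return -src_fake.mean() + F.binary_cross_entropy_with_logits(cls_fake, c_trg)
```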

Reconstruction Loss.

For reconstruction, we use a cycle consistency loss [cycle], which guarantees the quality of generated images in an unsupervised manner. In addition, inspired by FixedPointGAN [fp], we add an identity loss that keeps the generator from altering unnecessary regions in same-domain translation.

$\mathcal{L}_{rec} = \mathbb{E}_{x,c,c'}[\lVert x - G(G(x,c), c') \rVert_1] + \mathbb{E}_{x,c'}[\lVert x - G(x,c') \rVert_1]$  (4)
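A compact sketch of this term, with the identity-loss weight lambda_id as an assumed hyperparameter:

```python
# Cycle consistency [cycle] plus the FixedPointGAN-style identity term (Eq. 4).
import torch.nn.functional as F

def rec_loss(G, x, c_src, c_trg, lambda_id=1.0):
    cycle = F.l1_loss(G(G(x, c_trg), c_src), x)  # x -> target domain -> back to source
    identity = F.l1_loss(G(x, c_src), x)         # same-domain translation stays fixed
    return cycle + lambda_id * identity
```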

Fair Representation Loss (FRL).

While translating target attributes, the high correlation between target and protected attributes causes unwanted protected attribute translation. To prevent this, we separate the representation into a latent for target attribute translation ($z_t$) and a latent for protected attribute preservation ($z_p$). To this end, we apply a fair representation loss defined as follows:

$\mathcal{L}_{FRL} = \mathbb{E}_{x,c}[\lVert z_p(x) - z_p(G(x,c)) \rVert_1]$  (5)
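A minimal sketch consistent with the separation described above; E_p is our name for the encoder head producing z_p, and the L1 distance is an assumption.

```python
# A sketch of the Fair Representation Loss as reconstructed in Eq. 5: the
# protected-information latent should be unchanged by the translation.
import torch.nn.functional as F

def fair_representation_loss(E_p, G, x, c_trg):
    z_p_real = E_p(x)            # protected-information latent of the input
    z_p_fake = E_p(G(x, c_trg))  # the same latent after translation
    return F.l1_loss(z_p_fake, z_p_real)
```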

Protected Attribute Distance Loss (PADL).

In addition, we propose the Protected Attribute Distance Loss (PADL), which minimizes the protected attribute feature distance between input images $x$ and generated images $G(x, c)$. Since we do not have protected attribute labels for the target dataset, we instead utilize the semantic knowledge of protected attributes from the trained PAC to measure the distance. Together with the Fair Representation Loss (FRL), it explicitly preserves protected attribute information during target attribute translation. The loss is defined as follows:

$\mathcal{L}_{PADL} = \mathbb{E}_{x,c}[\lVert \phi(x) - \phi(G(x,c)) \rVert_2^2]$  (6)

where $\phi(\cdot)$ denotes the protected attribute feature extracted by the trained PAC.
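A sketch of PADL with the PAC encoder frozen; the squared L2 distance is our assumption.

```python
# Match PAC features of input and generated images (Eq. 6); pac_encoder is
# the frozen shared encoder of the trained PAC.
import torch
import torch.nn.functional as F

def padl(pac_encoder, G, x, c_trg):
    with torch.no_grad():
        phi_real = pac_encoder(x)        # protected-attribute feature of x
    phi_fake = pac_encoder(G(x, c_trg))  # feature of the translated image
    return F.mse_loss(phi_fake, phi_real)
```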

Perceptual Loss.

On top of that, the perceptual loss [percept] is used to improve the quality of the outputs. We select the same layers as [percept] to measure both the style loss between input images and reconstructed images and the content loss between input images and generated images.
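A minimal sketch with torchvision's VGG-16 features; the cut-off layer (up to relu3_3) and the L1 distances are illustrative choices, not necessarily the exact layers of [percept].

```python
# Perceptual term sketch: content distance on a VGG feature map, style
# distance via Gram matrices.
import torch
import torchvision

vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    # Gram matrix for the style term.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_loss(x, x_rec, x_gen):
    f_x, f_rec, f_gen = vgg(x), vgg(x_rec), vgg(x_gen)
    content = torch.nn.functional.l1_loss(f_gen, f_x)            # input vs. generated
    style = torch.nn.functional.l1_loss(gram(f_rec), gram(f_x))  # input vs. reconstruction
    return content + style
```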

Optimization.

We use WGAN with gradient penalty [wgp] and Adam, with $\beta_1 = 0.5$ and $\beta_2 = 0.999$, to optimize the parameters of FairFaceGAN. We note that the overall loss function is a weighted sum of all the terms above. The initial learning rate for both the generator and the discriminator is set to 0.0001 and decayed every eight epochs. We obtained the best results within 20 epochs on two 1080 Ti GPUs.
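The reported optimizer settings translate directly to PyTorch; the decay factor (gamma = 0.1) and the tiny stand-in modules below are assumptions made to keep the snippet self-contained.

```python
# Optimizer setup matching the reported settings: Adam, betas (0.5, 0.999),
# initial LR 1e-4, decayed every eight epochs.
import torch
import torch.nn as nn

G, D = nn.Linear(1, 1), nn.Linear(1, 1)  # stand-ins for the generator / discriminator

g_opt = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.999))
g_sched = torch.optim.lr_scheduler.StepLR(g_opt, step_size=8, gamma=0.1)
d_sched = torch.optim.lr_scheduler.StepLR(d_opt, step_size=8, gamma=0.1)
```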

4 Experiments

4.1 Dataset

PAC.

We train PAC on the UTK Face [utk] and CelebA [celeba] datasets. CelebA is used only for domain adversarial training, while UTK Face is used both for protected attribute (gender, race, and age) classification and for domain adversarial training. We randomly select 19,708, 2,000, and 2,000 images of the UTK dataset for training, validation, and test, respectively, and use 200,599 images of CelebA for domain adversarial training. All images are resized to 128 × 128. Classification results, along with the age ranges and race categories used, are shown in Table 1.

FairFaceGAN.

For training FairFaceGAN, we use only the CelebA dataset, without protected attribute annotations. Instead, we utilize protected attribute-related semantic information by transferring knowledge from the PAC pre-trained on the UTK dataset. The training and test sets consist of 200,599 and 2,000 images, respectively. We pre-process all images by random cropping (178 × 178) and resizing to 128 × 128. The five target attributes (attractive, blond hair, bags under eyes, bald, big nose) are selected manually. While we conduct both qualitative and quantitative evaluation for the gender attribute, we conduct only qualitative evaluation for the age and race attributes, since their labels are not included in the CelebA dataset.
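The reported pre-processing maps directly onto torchvision transforms; the normalization to [-1, 1] is our assumption (matching a Tanh generator output).

```python
# CelebA pre-processing sketch: random 178 x 178 crop, then resize to 128 x 128.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.RandomCrop(178),  # random 178 x 178 crop
    transforms.Resize(128),      # resize to 128 x 128
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```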

Figure 4: Image-to-Image translation results compared to StarGAN [star] and FixedPointGAN [fp]. (+) and (−) denote that the target attribute is added or removed, respectively. Red and green boxes indicate discriminative outputs and fairly mapped results, respectively.
Attribute [Labels]                           Source Only   DA     CelebA [celeba]
Gender [Male, Female]                        0.94          0.91   0.92
Race [White, Black, Asian, Indian, Others]   0.87          0.81   N/A
Age [0–9, 10–19, … , 50+]                    0.73          0.65   N/A
Domain Classification [UTK, CelebA]          N/A           0.5    N/A

Table 1: Protected attribute classification accuracy on the UTK dataset [utk] (Source Only and DA). DA denotes results with domain adversarial training.
Method               ACC     FID     KID (× 100)
StarGAN [star]       92.07   10.23   1.94 ± 0.29
FixedPointGAN [fp]   91.01   6.91    2.06 ± 0.41
Ours (f)             90.55   10.66   2.33 ± 0.28
Ours (p)             92.11   6.98    1.47 ± 0.35
Ours (f+p)           89.71   9.98    2.13 ± 0.30
Ours (f+p+P)         90.66   9.80    1.89 ± 0.27

Table 2: Quantitative comparison on the CelebA dataset. f, p, and P indicate the usage of FRL, PADL, and the Perceptual Loss, respectively. ACC, FID, and KID denote the average target attribute classification accuracy, Fréchet Inception Distance [fid], and Kernel Inception Distance (× 100) [kid].
Method               Quality   Fairness
StarGAN [star]       30.78     11.31
FixedPointGAN [fp]   20.97     34.46
Ours                 48.25     54.23

Table 3: User study results (% of votes).
Gender        Attribute    StarGAN   FixedPoint-   Ours     Ours     Ours     Ours
(transform)                [star]    GAN [fp]      (f)      (p)      (f+p)    (f+p+P)
Male (+)      BlondHair    56.32     24.55         31.05    32.54    4.86     5.63
              Bald         11.68     11.90         6.67     14.24    5.19     8.30
              BUE          6.38      2.60          2.18     3.41     1.41     3.44
              BigNose      16.20     7.05          4.62     9.99     1.51     4.94
              Attractive   11.32     3.49          4.84     3.39     2.94     3.79
Male (−)      BlondHair    41.37     21.04         20.11    32.01    9.96     8.91
              Bald         17.79     3.71          3.97     8.51     2.19     9.02
              BUE          21.29     6.87          9.23     13.32    3.13     5.23
              BigNose      2.66      2.02          2.19     3.75     1.11     1.63
              Attractive   7.85      4.43          4.09     13.92    1.35     6.70
Female (+)    BlondHair    135.70    108.71        72.98    104.13   4.75     17.39
              Bald         60.33     131.48        22.18    57.83    21.00    24.79
              BUE          3.25      3.10          1.71     3.08     1.55     4.02
              BigNose      22.42     12.18         4.98     8.97     2.22     3.47
              Attractive   13.85     7.29          6.17     3.05     2.78     5.00
Female (−)    BlondHair    29.80     94.38         35.49    55.39    5.17     5.94
              BUE          6.06      4.19          9.57     4.42     2.29     3.74
              BigNose      5.77      3.10          4.95     4.50     2.18     3.86
              Attractive   22.79     13.36         17.30    19.20    7.12     11.70
Average                    25.94     24.50         13.91    20.82    4.35     7.24

Table 4: Fréchet Protected Attribute Distance (FPAD) of generated images. BUE denotes Bags Under Eyes. (+) denotes translating images without the attribute into images with the attribute, and (−) the reverse.
Training Dataset          TPR (M)   FPR (M)   TPR (F)   FPR (F)   EqOpp   EqOdd
D                         64.10     18.40     86.36     49.00     22.26   26.43
                          79.49     29.45     90.40     53.00     10.92   17.23
D + [star]                64.10     15.34     91.41     43.00     27.31   27.49
D + [fp]                  56.41     19.63     87.88     42.00     31.47   26.92
D + Ours                  74.36     22.70     85.35     45.00     10.99   16.65

Table 5: Fair classification results. TPR, FPR, EqOpp, and EqOdd denote True Positive Rate, False Positive Rate, Equality of Opportunity [eqopp], and Equalized Odds [odds], respectively. D indicates the subset of original images from the test dataset; the last three rows present results of augmenting D with images produced by each translation model's generator.

4.2 Evaluation

Qualitative evaluation.

As shown in Figure 4, FairFaceGAN generates better quality images compared to StarGAN [star] and FixedPointGAN [fp]. Those models tend to change the skin color, add a mustache to female images, apply makeup to male images, or make subjects look older, even though those are not the target attributes. In contrast, FairFaceGAN better prevents the unwanted translation of protected attributes.

Protected Attribute Classification.

Table 1 shows the protected attribute classification accuracy of PAC on the UTK and CelebA datasets. We fine-tune an ImageNet [imgnet] pre-trained ResNeXt-50 [resnext], one of the state-of-the-art image classification networks. The results demonstrate that our PAC encodes representations informative of the protected attributes on both the UTK and CelebA datasets.

Quantitative Comparisons.

To quantitatively compare the images generated by our model and by existing models, we measure the target attribute classification accuracy, Fréchet Inception Distance (FID) [fid], and Kernel Inception Distance (KID) [kid]. In this experiment, we also conduct an ablation study of the proposed loss functions: 1) Fair Representation Loss (FRL) only; 2) FRL and Protected Attribute Distance Loss (PADL); 3) FRL, PADL, and the VGG Perceptual Loss. First, to evaluate target attribute classification accuracy on the generated images, we re-train an ImageNet [imgnet] pre-trained ResNeXt-50 [resnext] to classify the target attributes on the CelebA dataset. As shown in Table 2 (first row), the generated images from our model achieve the best result (92.11%) among all models, while the original test set achieves an accuracy of 88.88%. We also measure FID and KID to evaluate our model with standard metrics. As shown in Table 2 (second and third rows), our model achieves the best KID and a competitive FID. Meanwhile, our final model shows slightly lower accuracy than the others, reflecting the trade-off between fairness and image generation ability [tradeoff1, tradeoff2]. Note that our goal is to improve the fairness of the translation model.

User Study.

We also present a user study comparing the fairness and visual quality of images generated by our model, StarGAN [star], and FixedPointGAN [fp]. We randomly select 24 sets of four images each (the input and the results of StarGAN, FixedPointGAN, and ours) and ask 73 participants to choose the best-produced (Quality) and the best protected attribute-preserving (Fairness) images. As shown in Table 3, our model achieves the best scores for both image quality and fairness.

Fréchet Protected Attribute Distance (FPAD).

To evaluate the fairness of translation models, we propose a new metric, FPAD, inspired by FID [fid]. We leverage our PAC model to extract protected attribute features and measure the feature distance between input images $x$ and generated images $G(x, c)$. We compute the Fréchet distance given $(\mu_x, \Sigma_x)$ and $(\mu_G, \Sigma_G)$, the means and covariances of the protected attribute features of $x$ and $G(x, c)$, respectively. As shown in Table 4, our model achieves the lowest FPAD among the compared models; in other words, our generative model best preserves the protected attributes during the mapping. Although adding the perceptual loss slightly degrades FPAD, it improves the visual quality of the generated images in return.
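FPAD can be computed exactly like FID, only over PAC features; a sketch follows, with feature extraction details left as assumptions.

```python
# FPAD sketch: Frechet distance between PAC-feature statistics of real and
# generated images, computed as in FID [fid].
import numpy as np
from scipy import linalg

def fpad(feats_real, feats_gen):
    """feats_real, feats_gen: (N, d) arrays of PAC features for x and G(x, c)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # numerical noise can create tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2 * covmean))
```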

Fair Classification.

Furthermore, to evaluate our model using standard fairness metrics, we conduct an attractiveness classification task and compare performance when the data are augmented by existing image translation models [fp, star] and by FairFaceGAN, respectively. For the evaluation, we use two fairness metrics, Equality of Opportunity (EqOpp) and Equalized Odds (EqOdd); details are in our supplementary material. We fine-tune an ImageNet pre-trained ResNeXt-50 [resnext] using the test set of FairFaceGAN, divided into 1,200 (D), 300, and 500 images for training, validation, and test, respectively. As shown in Table 5, the generated images of FairFaceGAN help the classification model train more fairly with respect to gender than those of existing image translation models.
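For reference, one common formulation of the two metrics [eqopp, odds], assumed here, reproduces the baseline row of Table 5 from its group-wise rates: EqOpp is the TPR gap between groups, and EqOdd averages the TPR and FPR gaps.

```python
# Fairness metrics from group-wise rates (one common formulation, assumed).
def eq_opportunity(tpr_m: float, tpr_f: float) -> float:
    return abs(tpr_m - tpr_f)

def eq_odds(tpr_m: float, fpr_m: float, tpr_f: float, fpr_f: float) -> float:
    return 0.5 * (abs(tpr_m - tpr_f) + abs(fpr_m - fpr_f))

# This reproduces the first row of Table 5 (rates in %):
print(round(eq_opportunity(64.10, 86.36), 2))         # 22.26
print(round(eq_odds(64.10, 18.40, 86.36, 49.00), 2))  # 26.43
```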

5 Conclusion

In this paper, we introduced a novel fairness-aware facial Image-to-Image translation model that avoids unwanted translation of protected attributes. Through the Fair Representation Loss (FRL) and the Protected Attribute Distance Loss (PADL), our model learns representations that are fair in terms of multiple protected attributes (age, gender, and race). To demonstrate the ability of FairFaceGAN, we conducted an extensive evaluation of image translation and fairness. Overall, our experimental results showed that FairFaceGAN is fairer than existing Image-to-Image translation models in terms of Equality of Opportunity, Equalized Odds, and the proposed FPAD.

Acknowledgements.

This work was supported by Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (Development of framework for analyzing, detecting, mitigating of bias in AI model and training data) under Grant 2019-0-01396 and (Artificial Intelligence Graduate School Program (YONSEI UNIVERSITY)) under Grant 2020-0-01361.

We thank Pilhyeon Lee, Seogkyu Jeon, and Jijoong Kim for the thorough reviews and the constructive feedback.

References