Understanding Unequal Gender Classification Accuracy from Face Images

11/30/2018 · by Vidya Muthukumar, et al.

Recent work shows unequal performance of commercial face classification services in the gender classification task across intersectional groups defined by skin type and gender. Accuracy on dark-skinned females is significantly worse than on any other group. In this paper, we conduct several analyses to try to uncover the reason for this gap. The main finding, perhaps surprisingly, is that skin type is not the driver. This conclusion is reached via stability experiments that vary an image's skin type via color-theoretic methods, namely luminance mode-shift and optimal transport. A second suspect, hair length, is also shown not to be the driver via experiments on face images cropped to exclude the hair. Finally, using contrastive post-hoc explanation techniques for neural networks, we bring forth evidence suggesting that differences in lip, eye and cheek structure across ethnicity lead to the differences. Further, lip and eye makeup are seen as strong predictors for a female face, which is a troubling propagation of a gender stereotype.


1 Introduction

The problem of unequal accuracy rates across groups has recently been highlighted in gender classification from face images. A study by NIST shows that automated gender classification algorithms are more accurate for males than females [24]. Going further, Buolamwini and Gebru created a dataset of parliament members from three European and three African countries, the Pilot Parliaments Benchmark (PPB), balanced across two attributes: gender and Fitzpatrick skin type [14], and evaluated the accuracy of three commercial facial gender classifiers [4]. All three achieved much lower accuracy on dark-skinned females (Fitzpatrick skin types IV–VI) than on light-skinned females, dark-skinned males, and light-skinned males. (Note that gender classification is a distinct task from race classification [15].)

The discrepancy is conjectured to be largely due to imbalanced training datasets and test benchmarks. Commonly used training datasets such as CelebA [21] and IMDb face [29] are made up of celebrities and biased towards light-skinned people. Test benchmarks such as Labeled Faces in the Wild [17] and Adience [11] are also biased [4], so high overall accuracies achieved on these test datasets obfuscate the inequality issue. The IJB-A dataset purports to be geographically diverse [19], but a close examination reveals that only 8 percent of the faces are of African descent, whereas more than 50 percent of the faces are of European descent. The PPB dataset is the first of its kind to be balanced by gender and balanced between African and European descent [4].

These works, however, do not investigate the underlying causes of the unequal misclassification rates in gender classification. In particular, since the partition in [4] is phenotypic, into skin type categories, but the dark-skinned people are predominantly of African descent, it may be that other features, such as hairstyle, facial structure, cosmetics or clothing, are the reason for the disparity, rather than skin type alone [5]. A study of unequal gender classification accuracy, conducted using images with different parts of the face masked out, points to the nose region as important, but does little to disentangle the various aspects of identity [27]. Buolamwini points to several shortcomings of that study and calls for “further scholarship that attends to the impact of phenotypic characteristic on gender classification that extends beyond skin type” [5].

Heeding this call, we rigorously analyze gender classifiers and test the extent to which various features influence the classification outcome by skin type and gender. The contributions of this paper are as follows.

  • Using principles from color theory and the framework of optimal transport, we test stability to skin type by varying the skin type of a face keeping all other features fixed, and statistically show that the effect of skin tone on classification outcome is minimal. Thus, the unequal accuracy observed in [4] likely arises not specifically because of the skin type, but other correlated features of identity [1].

  • Motivated by a visual observation that most misclassified dark-skinned females have short hair, we test the significance of hair patterns in gender classification. We find that ignoring hair information retains both high overall accuracy and the differential performance, suggesting that hair information is not the driver of the disparity either.

  • Finally, we use recently proposed ideas on contrastive explanation [10] to show that neural networks used for gender classification latch on to various facial features like lips, cheeks, and eyes with cosmetics as sufficient explanations for the female gender — suggesting that discrepancies in these features are the root of the inequality observed in intersectionally-defined groups.

2 Setup

2.1 Pilot Parliaments Benchmark dataset

Set Number Female Male
All
Dark
Light
Table 1: Gender and skin type composition of PPB*/PPB dataset.

The PPB dataset is the first benchmark dataset that is balanced across gender and Fitzpatrick skin type; the methodology of its collection is detailed in [4]. The creators intentionally chose countries with majority populations at opposite ends of the skin type scale to make the lighter/darker dichotomy more distinct. The images are uniform in (high) resolution quality, pose, illumination and expression, reducing the possibility of attributing differences in performance to variations in these quantities, all of which are known to be significant technical challenges [32].

We use an approximation of the PPB dataset for the experiments in this paper. This dataset contains images of parliament members from the six countries identified in [4], manually labeled by us into the categories dark-skinned and light-skinned. (The images were accessed in January 2018. We do not work with the PPB dataset directly due to its terms and conditions of use.) Our approximation to the PPB dataset, which we call PPB*, is very similar to PPB and satisfies the relevant characteristics for the study we perform. Table 1 compares the composition of the original PPB dataset and our PPB* approximation according to skin type and gender.

2.2 Classification models

We employ several classifiers in our experiments. The first is the IBM Watson gender classifier service available in August 2018, which achieves high accuracy on several test benchmarks, as well as on the light male, light female and dark male groups of the PPB* dataset. We access the gender classifier through its API, which takes an input image of variable size and returns (in the event that a face is detected) a score that the image is of a male person; scores below a threshold are classified female and scores above it male. Accuracies on the PPB* dataset are presented in Table 2. The accessibility of scores from the IBM Watson API, together with its high level of performance, makes this a good classifier for carrying out stability experiments. (Scores are not available from the other two commercial classifiers studied in [4].)

The second and third classifiers are needed for studying face-only cropped images (i.e., cropped so that no hair is present), because the commercial classifier API does not offer the flexibility to restrict modeling to smaller cropped areas. As a second classifier, we use IBM Watson’s “deep-face-features” API to extract a fixed-dimensional representation for every face-cropped image, and we train a downstream support vector machine (SVM) classifier with radial basis function (RBF) kernel on images from the CelebA dataset, using held-out validation images to choose the RBF kernel parameter. The third classifier is the same as the second in terms of the SVM and its training, but has a more modern feature extractor: a convolutional neural network (CNN) trained on the recently created VGGFace2 dataset [7]. In particular, we use the ResNet-50 network to extract a representation for every face-cropped image (https://github.com/ox-vgg/vgg_face2).
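The downstream classifier described above can be sketched as follows. This is a minimal illustration with synthetic stand-in features (the real Watson and ResNet-50 embeddings and the actual CelebA training/validation splits are not reproduced here); it shows the train-then-validate selection of the RBF kernel parameter.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for deep face features: 16-d random vectors with a toy
# separable label. Real face embeddings are much higher-dimensional.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))
y = (X[:, 0] > 0).astype(int)  # toy binary "gender" label

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM; gamma (the kernel parameter) is chosen on held-out
# validation data, mirroring the validation step described above.
best = max(
    (SVC(kernel="rbf", gamma=g).fit(X_tr, y_tr) for g in [0.01, 0.1, 1.0]),
    key=lambda m: m.score(X_val, y_val),
)
print(round(best.score(X_val, y_val), 3))
```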

The fourth classifier is needed because unfortunately, apart from the scores, we only have black-box access to the Watson API. The details of the model architecture are not available and we cannot inspect any intermediate layers. For interpretability experiments, we create a customized classifier: a simple CNN described in Figure 1.

Figure 1: Customized classifier trained on CelebA.

We train this neural network on the CelebA [21] training images. Before input to the neural network, all images are face-cropped with padding, eye-aligned, and resized to a fixed size. Accuracies on the PPB* dataset for the customized model are presented in Table 2. While they are not as good as the Watson classifier’s, especially on females, they do achieve close to state-of-the-art accuracy on males, and the model can be further interrogated.

Classifier DF DM LF LM
Watson
Customized
Table 2: Accuracy on PPB* for dark females (DF), light females (LF), dark males (DM), and light males (LM).

3 Experiments

The first set of experiments tests the stability of gender classification algorithms to variation in skin type. Next, we test the influence of hair length, by seeing whether the unequal performance persists when we remove all information about this attribute from faces. Finally, we seek sufficient explanations on faces for the classification decisions of female and male respectively.

3.1 Stability experiment: Does skin type alone influence gender classification?

In the first set of experiments, we systematically isolate the skin type and test whether the gender classification outcome changes significantly as the skin type is varied. Isolating a latent facial attribute, and thus changing it, is in general a challenging computer vision task. Likelihood-based generative models [18] and conditional generative adversarial networks (GANs) [8, 28] have made recent progress in varying attributes like hair color and facial expression. However, these tools are themselves trained on biased celebrity datasets. Moreover, these approaches are not effective at varying one attribute in isolation while leaving the others unchanged. We empirically show the existence of an approximately low-dimensional structure in color space that describes the set of human skin types. Leveraging this structure, we provide simple but mathematically grounded rules to change the skin type of a face.

3.1.1 A low-dimensional skin type group in YCrCb space

Recall that image pixels can be represented in a 3-dimensional color space. Multiple bases for the color space, such as the standard RGB [12], HSV [26] and YCrCb [16], have been used to create skin detection rules. More recently, hybrid rules have been proposed that work under complex lighting conditions [25, 22]. We use a skin detection rule based on the YCrCb space [16], where Y stands for luminance and Cr, Cb stand for chrominance values:

(1)

We employ this rule for its simplicity and fairly good performance in skin detection under the favorable lighting conditions of the PPB* dataset.

Figure 2: Example of a light-skinned and dark-skinned image in the PPB* dataset. Observe that the Cr and Cb channels are similar across both images. Practically all variation in the skin type is captured in the Y component.
Figure 3: Frequencies of Cr and Cb values across all skin type pixels across all images.

We also plot histograms of the YCrCb values of the skin pixels detected for each face image and observe that the Cr and Cb values fall into an even narrower range than described in (1). As the illustration in Figure 2 depicts, the chrominance values do not appear different for individuals with light or dark skin type. More rigorously, Figure 3 plots the histograms of Cr and Cb values across all images in PPB*; we observe that the chrominance values are stable. Practically all the variation in skin type is captured by the Y channel alone. (We expect this phenomenon to hold for any face image with high resolution quality and uniform illumination.)
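The observation that skin tone variation concentrates in the Y channel can be checked directly. The sketch below converts RGB to YCrCb using the standard BT.601 full-range formulas (the convention used by common libraries such as OpenCV) and compares per-channel differences between two synthetic skin patches; the RGB triples are illustrative, not taken from the dataset.

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Convert an (H, W, 3) uint8 RGB image to float YCrCb (BT.601, full range)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

# Two synthetic "skin patches": a lighter and a darker tone (illustrative).
light = np.full((4, 4, 3), (224, 172, 138), dtype=np.uint8)
dark = np.full((4, 4, 3), (96, 64, 48), dtype=np.uint8)

yl, crl, cbl = rgb_to_ycrcb(light).reshape(-1, 3).mean(axis=0)
yd, crd, cbd = rgb_to_ycrcb(dark).reshape(-1, 3).mean(axis=0)

# The luminance channel separates the tones far more than chrominance does.
print(abs(yl - yd), abs(crl - crd), abs(cbl - cbd))
```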

3.1.2 Methods to change the skin type

Based on the low-dimensional structure described in the previous subsection, we describe two rules that we employ to change the skin type of a face. Both are carried out in the YCrCb color space, representing each image by its pixel array in RGB space and, equivalently, in YCrCb space.

Figure 4: Examples of light-skinned and dark-skinned faces whose luminance modes are shifted.
Procedure 1 (Luminance mode-shift)

We shift the skin type luminance mode of an image in the following sequence of steps:

  1. Detect the skin pixels and determine the mode of their luminance histogram.

  2. Calculate the mode-shift value as the difference between the target mode and the current mode.

  3. Shift the luminance values of the skin pixels by the mode-shift value.

  4. Clip the shifted luminance values to the valid range.
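The steps above can be sketched as follows; this is a minimal illustration assuming the skin mask comes from a detection rule like (1), with the mode-shift value computed as the difference between a target mode and the current mode.

```python
import numpy as np

def luminance_mode_shift(y_channel, skin_mask, target_mode):
    """Shift the luminance of skin pixels so their mode moves to target_mode.

    y_channel: (H, W) float array of Y values in [0, 255].
    skin_mask: (H, W) boolean array from a skin-detection rule.
    target_mode: desired mode of the skin-pixel luminance histogram.
    """
    skin_y = y_channel[skin_mask].astype(int)
    # Step 1: mode of the skin-pixel luminance histogram.
    mode = np.bincount(skin_y, minlength=256).argmax()
    # Step 2: the shift needed to move that mode to the target.
    delta = target_mode - mode
    # Step 3: shift only the skin pixels.
    shifted = y_channel.astype(np.float64).copy()
    shifted[skin_mask] += delta
    # Step 4: clip back to the valid luminance range.
    return np.clip(shifted, 0, 255)

y = np.array([[180.0, 180.0], [60.0, 200.0]])
mask = np.array([[True, True], [False, True]])
out = luminance_mode_shift(y, mask, target_mode=120)  # mode 180 shifts to 120
```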

Figure 5: Examples of light-skinned and dark-skinned faces that are optimally transported to new skin types, either darkened or lightened.

Procedure 1 is attractive for its simplicity and quick, linear-time computation, but the results of skin type change via luminance mode shift are not always visually realistic, as demonstrated in Figure 4. Perhaps the luminance mode of skin type pixels is not sufficiently descriptive, and we would rather consider a transform between skin type histograms. Motivated by this, we next consider a skin type operation based on optimal transport, which has recently been shown to be effective for color transfer in RGB space [13].

Procedure 2 (Optimal transport [13])

This procedure takes as input a target skin type distribution over luminance values. Treating the skin type distribution of the image’s luminance channel as the source distribution, the optimally transported image is obtained by applying the optimal transport map between the source and target distributions to the luminance values of the skin pixels.

Figure 5 shows that the results of optimally transported skin type are visually more realistic. However, the computational cost of this operation is higher. (In practice it takes seconds to a minute to optimally transport an image, compared to milliseconds for a luminance mode shift. For future work, we could exploit the computational reductions in computing the optimal transport via Sinkhorn regularization [9].)
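In one dimension, optimal transport for a convex cost reduces to monotone quantile (CDF) matching, which is what makes luminance-only transport tractable. The sketch below applies that mapping to toy luminance samples; it illustrates the principle rather than the paper's exact discrete-OT formulation over full histograms.

```python
import numpy as np

def transport_luminance(source_vals, target_vals):
    """Monotone (CDF-matching) transport of 1-D luminance values.

    Each source value is mapped to the target value sitting at the
    same quantile of its distribution, which is the 1-D optimal
    transport map for convex costs.
    """
    src_sorted = np.sort(source_vals)
    tgt_sorted = np.sort(target_vals)
    # Empirical CDF (quantile) of each source value ...
    quantiles = np.searchsorted(src_sorted, source_vals, side="right") / len(src_sorted)
    # ... mapped to the corresponding quantile of the target distribution.
    idx = np.clip((quantiles * len(tgt_sorted)).astype(int) - 1, 0, len(tgt_sorted) - 1)
    return tgt_sorted[idx]

src = np.array([10.0, 20.0, 30.0, 40.0])   # "dark" luminance samples
tgt = np.array([100.0, 110.0, 120.0, 130.0])  # "light" target distribution
print(transport_luminance(src, tgt))
```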

3.1.3 Results

We consider the following ensemble of skin-type changes on the PPB* dataset:

  1. Dark females/dark males: Evaluate the score on the original image. Evaluate the average new score on the set of lightened images.

  2. Light females/light males: Evaluate the score on the original image. Evaluate the average new score on the set of darkened images.

The set of darkened/lightened mode-shifted images represents all luminance mode-shifts with negative/positive mode-shift values. Owing to the computational expense of optimal transport, we pick ten target images at varying ends of the skin type spectrum.

(a) Luminance-mode-shift.
(b) Optimal transport.
Figure 6: Histograms of differences in scores of dark females in PPB* dataset after lightening the skin type.
(a) Luminance-mode-shift.
(b) Optimal transport.
Figure 7: Histograms of differences in scores of light females in PPB* dataset after darkening the skin type.
(a) Scores.

(b) Mode-shift.
(c) OT.
Figure 8: Scatterplots of original prediction vs prediction after lightening for dark females. Shaded region represents dark females correctly classified after lightening.

Figure 6 shows the distribution of differences in prediction upon lightening the set of dark females in the PPB* dataset, using either mode-shift (Figure 6a) or optimal transport (Figure 6b). (The quality of the experiment itself is better with the optimal transport method, as the lightened images are more realistic, but owing to the computational complexity of optimal transport we also have fewer lightened samples to average over. The mode-shift operation generates images that are not as realistic, but the experiment itself is statistically more robust, as we can quickly generate many lightened samples. Observing similar conclusions for the two methods therefore strengthens our result.) We observe that most images’ scores do not change meaningfully after lightening/darkening: for dark females, the vast majority of scores change only marginally on lightening, under both mode-shift and optimal transport, and the same holds for light females on darkening. We conducted one-sample t-tests of the null hypothesis that the mean difference in scores is zero. The results, in terms of confidence intervals, are presented in Table 3.
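The test can be sketched with `scipy.stats.ttest_1samp`; the score differences below are synthetic stand-ins for the paper's measured differences.

```python
import numpy as np
from scipy import stats

# Differences in classifier score before vs. after the skin-type change
# (illustrative values; not the paper's actual measurements).
rng = np.random.default_rng(1)
diffs = rng.normal(loc=0.0, scale=0.01, size=100)

# One-sample t-test of H0: the mean score difference is zero.
t_stat, p_value = stats.ttest_1samp(diffs, popmean=0.0)

# A 95% confidence interval for the mean difference.
mean = diffs.mean()
half_width = stats.t.ppf(0.975, df=len(diffs) - 1) * stats.sem(diffs)
print(mean - half_width, mean + half_width)
```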

(a) Scores.

(b) Mode-shift.
(c) OT.
Figure 9: Scatterplots of original prediction vs prediction after darkening for light females. The shaded region represents light females that would be correctly classified after darkening.
Category confidence interval
DF, mode-shift
DF, OT
LF, mode-shift
LF, OT
Table 3: Results of one-sample t-test on the mean difference in scores after skin type change.

Figures 8 and 9 shed light on the relative difference in predictions, which also matters; in particular, we may care about the fraction of images whose average classification decision would change after lightening/darkening. In the scatterplots of original score vs. score after change in skin type (Figures 8 and 9), we highlight the points that fall in the red-shaded region as representing dark females that are correctly classified only after lightening, or light females that are incorrectly classified only after darkening. Very few images fall into these categories: only a handful of dark females are correctly classified only after lightening, using either mode-shift or optimal transport. The effect of darkening is even less pronounced for light females: after mode-shift and optimal transport, very few become incorrectly classified. Looking at the distribution of original scores of dark and light females (Figures 6 and 7), we see that almost all light females are classified as female with extremely high score, and almost all dark females are classified as either female or male with extremely high score. The dark females that are classified as male with extremely high score do not change significantly in score or classification decision on lightening.

All of these results, together, lead us to conclude that the skin type by itself has a minimal effect on classification decisions.

3.2 The potential influence of hair length

If skin type is not the underrepresented facial feature that matters, then what is? We visually observed that most of the dark females misclassified as male by the Watson classifier had short hair. We manually labeled all the dark and light females in the PPB* dataset as either short- or long-haired, and considered the intersectional performance of the Watson classifier on females across skin type and hair length. (We did not consider a similar split across males, because all males in the PPB* dataset are short-haired.) The results are presented in Table 4. We notice a meaningful split in performance across hair length, especially for dark females: the classification accuracy on dark females with short hair is much lower than on dark females with long hair. While the difference is less pronounced for light females, the accuracy on short-haired light females is also lower. We also observed a relatively higher proportion of short-haired dark females than short-haired light females in the PPB* dataset. (We are not making this claim for the general population of light-skinned and dark-skinned females, only the ones in the PPB* dataset.) Looking at purely misclassified dark females, the large majority were short-haired!

Dark-skinned Light-skinned
Short-haired
Long-haired
Table 4: Accuracy of Watson classifier on females intersected across skin type and hair length.

A hypothesis to explain the unequal performance displayed above is that the neural networks latch on to hair length as a significant predictor for gender. It is well known in human visual perception that certain hairstyles (including male facial hair) are convincingly attributed to respective (binary) genders [3]. Such simplistic explanations can lead not only to racial biases but also to the reinforcement of gender stereotypes.

We do not replicate the stability experiment for hair patterns because it is challenging to develop a methodology that changes a face’s hairstyle while keeping all other attributes unchanged. (While some GANs can do this in theory [8], the visual results are unconvincing; moreover, they would not adapt to hairstyles of underrepresented ethnicities.) We can, however, do a different sort of experiment: we can evaluate state-of-the-art gender classifiers that completely ignore information about the hair. This is achieved by using cropped facial images as input, obtained via standard face detectors in computer vision [34]. By definition, these input images contain only facial information, and no information about hair patterns (Figure 10 contains some example images).
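Hair removal by cropping can be sketched as a padded crop around a detector's bounding box. The box here is hard-coded for illustration; in practice it would come from a face detector such as those in [34].

```python
import numpy as np

def crop_face(img, box, pad=0.1):
    """Crop a detected face box from an image, expanded by `pad` on each side.

    img: (H, W, C) array; box: (x, y, w, h) from any face detector.
    Hair above the detector's tight face box is excluded by construction.
    """
    x, y, w, h = box
    px, py = int(w * pad), int(h * pad)
    x0, y0 = max(x - px, 0), max(y - py, 0)
    x1 = min(x + w + px, img.shape[1])
    y1 = min(y + h + py, img.shape[0])
    return img[y0:y1, x0:x1]

img = np.arange(100 * 100 * 3).reshape(100, 100, 3)
face = crop_face(img, (40, 40, 20, 20), pad=0.1)
print(face.shape)
```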

Figure 10: Examples of images in PPB* dataset, only-face cropped.

Tables 5 and 6 report the intersectional accuracies on the PPB* dataset achieved by SVMs using Watson deep face features and ResNet-50 deep face features (with different kernel parameters) on only-face cropped images. With the Watson features, the accuracies are high for males and relatively high for light females, but notably lower for dark females. Similarly, at a higher overall accuracy level, the ResNet-50 features yield high accuracy for males and light-skinned females but notably lower accuracy for dark-skinned females. Thus, the unequal performance persists even with a state-of-the-art gender classifier that has no information about hair, suggesting that other underrepresented features in the face cause the differential performance. Seeing the same result with different feature extraction methods trained on different datasets only strengthens the conclusion.

Female Male
Dark-skinned
Light-skinned
Table 5: Accuracy of SVM trained on Watson deep face features intersected across skin type and gender.
Female Male
Dark-skinned
Light-skinned
Table 6: Accuracy of SVM (for different RBF kernel parameters) trained on ResNet-50 deep face features intersected across skin type and gender.

3.3 Sufficient facial features

(a) Females.

(b) Males.
Figure 11: Sufficient explanations for sample females and males in PPB* dataset.
(a) Female.
(b) Male.
Figure 12: Average sufficient explanations for all females and males in PPB* dataset.

What do gender classifiers look at, if not skin type and hair length? One reason this sort of question is so challenging to answer is the high dimensionality of facial images; another is the complexity of ML models. State-of-the-art gender classifiers typically use neural networks, whose interpretability is challenging and currently an active area of machine learning research [23]. One class of interpretability methods explains a neural network’s decision on each image [2, 31, 33, 20]. The challenge is that such methods often highlight superfluous features that do not actually contribute (or could even negatively contribute) to the classification decision.

We seek minimal sufficient explanations for a classification decision in an image. In other words, what are the minimal features in an image that, by themselves, would be classified as female/male? We use the recently proposed contrastive explanations method [10] to answer this question, particularly looking at finding pertinent positives.

Procedure 3

The contrastive explanations method takes as input an image, which is classified in some category, and a (possibly neural network) model that maps images to logits on the classification decision. It selects a “pertinent positive explanation” as a solution to the following optimization problem:

(2)

where the parameters are regularization hyperparameters.

Effectively, the contrastive explanations method selects as simple an explanation as possible (where simplicity is measured by the elastic net regularizer [36]), subject to the classification decision of the original image being preserved. This method has been validated on the MNIST hand-written digits dataset as well as an MRI dataset [10], but has not yet been used to provide post hoc explanations in face classification tasks.
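To make the objective concrete, the sketch below implements the elastic net regularizer and a greedy simplification of a pertinent positive for a linear scorer. This is a deliberate simplification: the actual method of [10] optimizes over neural network logits with gradient-based steps, which is not reproduced here.

```python
import numpy as np

def elastic_net(delta, beta=0.1):
    """Elastic-net regularizer used to measure explanation simplicity."""
    return beta * np.abs(delta).sum() + (delta ** 2).sum()

def pertinent_positive(x, w):
    """Greedy sketch of a pertinent positive for a linear scorer w.x > 0.

    Keeps the smallest subset of features of x (zeroing the rest) such
    that the positive class decision w.delta > 0 is preserved -- a toy
    stand-in for the full optimization in the contrastive explanations
    method.
    """
    assert np.dot(w, x) > 0, "x must be classified positive"
    # Rank features by their contribution to the positive decision.
    order = np.argsort(-(w * x))
    delta = np.zeros_like(x)
    for i in order:
        delta[i] = x[i]
        if np.dot(w, delta) > 0:  # decision already preserved
            break
    return delta

x = np.array([2.0, -1.0, 0.5])   # toy feature vector
w = np.array([1.0, 1.0, 1.0])    # toy linear "classifier"
d = pertinent_positive(x, w)
print(d, elastic_net(d))
```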

We fixed some of the regularization parameters and searched over the space of the remaining hyperparameters. We applied the contrastive explanations method to all images in the PPB* dataset, using the customized neural network classifier described in Section 2.2 as the model. Examples of contrastive explanations, i.e., the pertinent positive explanations, for correctly predicted females and males in the PPB* dataset are presented in Figure 11. We can see in Figure 11 that the lips, eyes and cheeks show up very prominently as a sufficient explanation for a female classification; in particular, female lips look pink/red in color, and cheekbones are more prominent. This could be a result of celebrities, who wear more prominent makeup in photographs than the average human population, dominating training datasets. In Figure 11, we also see that the nose and forehead areas are highlighted as sufficient explanations for a male classification. These are clearly consistent patterns across correctly classified females and males. The average sufficient explanation masks for females and males are presented in Figure 12.

Work in visual perception [3, 6] has shown that humans can adeptly classify the gender of a face using certain facial features in isolation. It is interesting to see that ML models are also able to do this. It is particularly interesting that a few of the dark females in Figure 11 have underrepresented skin type and hair pattern, yet are classified correctly, probably largely due to their lip and cheek patterns.

However, the questions of racial and gender bias are still relevant for facial features in isolation. It has been statistically established that the facial structure of European- and African-descent females, including cheekbones and lips, is different [35]. So we cannot expect the performance of ML algorithms trained primarily on European-descent females to generalize to African-descent females, even if such simple facial features form a sufficient explanation. And lip makeup alone constitutes a simplistic explanation of a female face, furthering a gender stereotype.

4 Discussion and Future Work

We rigorously tested the influence of various features on the gender classification task. First, we showed that gender classification is relatively stable to variations in skin type and thus the skin type by itself has a minimal effect on the classification decision. Second, we observed unequal performance on females with varying hair length and tested the performance of a classifier that ignores hair information. We saw that the unequal performance across gender and skin tone persists, suggesting that facial features other than skin type and hair pattern are behind the phenomenon. Finally, using the contrastive explanations method, we identified red/pink lips, cheeks and eyes; and nose and forehead as sufficient facial features for a classification decision to be female or male respectively.

We began this research with the aim of developing invariant or equivariant face classifiers that would ignore skin type completely and thereby have equal accuracy across groups. Such an approach would preclude the need for a high level of diversification in training datasets. However, our mathematically-oriented analysis using the low-dimensional skin type group revealed that high-performing gender classifiers are already invariant to skin type. Moreover, we showed that the classifiers are sensitive to a host of facial features that are not easily considered in isolation. To solve the problem of unequal performance, we require diverse training datasets that represent humanity across many dimensions of identity, starting but not ending with ethnicity.

Many questions remain as to how exactly to go about diversifying training data as even ethnicity does not fully encapsulate an individual’s identity. The contrastive explanations for female images consist of stereotypical attributes like lip makeup, and thus gender stereotypes commonly used by humans are confirmed in machine learning algorithms. This is a parallel issue to the issue of bias in skin type and correlated attributes: while females and males are balanced in training data, they are stereotypical females and males from the celebrity population. Informally speaking, we would expect the appearance of the general population of females and males alike to be quite different. We suggest that a good training dataset should diversify not only across ethnicity, but also across profession, cultural norms, and economic status, to capture a truly global population. Collecting such a dataset while controlling for image quality is a difficult, but necessary task.

As a parallel effort, it would also be interesting to examine the potential of decoupled classification on demographic groups, which, along with task transfer learning, has been shown to mitigate biases in classification of other facial attributes across race and gender [30].

Finally, the perspectives presented here are limited to the problem of binary gender classification from visual data, itself a flawed problem especially when considering various non-binary gendered individuals. The community needs to move beyond the binary gender construct in future work.

Acknowledgments

This work was conducted under the auspices of the IBM Science for Social Good initiative. The authors thank Joy Buolamwini, Pin-Yu Chen, Amit Dhurandhar, Michele Merler, and Karthikeyan Natesan Ramamurthy for comments and assistance.

References

  • [1] G. A. Akerlof and R. Kranton. Identity economics. The Economists’ Voice, 7(2), 2010.
  • [2] S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS One, 10(7):e0130140, 2015.
  • [3] E. Brown and D. I. Perrett. What gives a face its gender? Perception, 22(7):829–840, 1993.
  • [4] J. Buolamwini and T. Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 77–91, New York, USA, Feb. 2018.
  • [5] J. A. Buolamwini. Gender shades: Intersectional phenotypic and demographic evaluation of face datasets and gender classifiers. Master’s thesis, Massachusetts Institute of Technology, Sept. 2017.
  • [6] A. M. Burton, V. Bruce, and N. Dench. What’s the difference between men and women? Evidence from facial measurement. Perception, 22(2):153–176, 1993.
  • [7] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman. VGGFace2: A dataset for recognizing faces across pose and age. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, pages 67–74, Xi’an, China, May 2018.
  • [8] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8789–8797, Salt Lake City, USA, June 2018.
  • [9] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pages 2292–2300, 2013.
  • [10] A. Dhurandhar, P.-Y. Chen, R. Luss, C.-C. Tu, P. Ting, K. Shanmugam, and P. Das. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In Advances in Neural Information Processing Systems, Dec. 2018.
  • [11] E. Eidinger, R. Enbar, and T. Hassner. Age and gender estimation of unfiltered faces. IEEE Transactions on Information Forensics and Security, 9(12):2170–2179, 2014.
  • [12] R. D. Feitosa, L. L. de Oliveira, D. L. Borges, and M. Marcio Filho. A mathematical model for reducing the likely spectrum of human skin tones in the RGB color space. In Proceedings of the IEEE International Conference on Imaging Systems and Techniques, pages 329–334, Santorini, Greece, Oct. 2014.
  • [13] S. Ferradans, N. Papadakis, G. Peyré, and J.-F. Aujol. Regularized discrete optimal transport. SIAM Journal on Imaging Sciences, 7(3):1853–1882, 2014.
  • [14] T. B. Fitzpatrick. The validity and practicality of sun-reactive skin types I through VI. Archives of Dermatology, 124(6):869–871, June 1988.
  • [15] S. Fu, H. He, and Z.-G. Hou. Learning race from face: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(12):2483–2509, Dec. 2014.
  • [16] R.-L. Hsu, M. Abdel-Mottaleb, and A. K. Jain. Face detection in color images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):696–706, 2002.
  • [17] G. B. Huang, M. Mattar, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Proceedings of the Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition, Marseille, France, Oct. 2008.
  • [18] D. P. Kingma and P. Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, Dec. 2018.
  • [19] B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, and A. K. Jain. Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus benchmark A. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1931–1939, Boston, USA, June 2015.
  • [20] T. Lei, R. Barzilay, and T. Jaakkola. Rationalizing neural predictions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 107–117, Austin, USA, Nov. 2016.
  • [21] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738, Santiago, Chile, Dec. 2015.
  • [22] Z. Lu, X. Jiang, and A. Kot. Color space construction by optimizing luminance and chrominance components for face recognition. Pattern Recognition, 83:456–468, Nov. 2018.
  • [23] G. Montavon, W. Samek, and K.-R. Müller. Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73:1–15, Feb. 2018.
  • [24] M. Ngan and P. Grother. Face recognition vendor test (FRVT) performance of automated gender classification algorithms. US Department of Commerce, National Institute of Standards and Technology, 2015.
  • [25] M. M. Oghaz, M. A. Maarof, A. Zainal, M. F. Rohani, and S. H. Yaghoubyan. A hybrid color space for skin detection using genetic algorithm heuristic search and principal component analysis technique. PloS One, 10(8):e0134828, 2015.
  • [26] V. Oliveira and A. Conci. Skin detection using HSV color space. In Workshops of Sibgrapi, pages 1–2, 2009.
  • [27] Ö. Özbudak, M. Kırcı, Y. Çakır, and E. O. Güneş. Effects of the facial and racial features on gender classification. In Proceedings of the Mediterranean Electrotechnical Conference, pages 26–29, Valletta, Malta, Apr. 2010.
  • [28] G. Perarnau, J. van de Weijer, B. Raducanu, and J. M. Álvarez. Invertible conditional GANs for image editing. In NIPS Workshop on Adversarial Training, Dec. 2016.
  • [29] R. Rothe, R. Timofte, and L. Van Gool. Deep expectation of real and apparent age from a single image without facial landmarks. International Journal of Computer Vision, 126(2–4):144–157, Apr. 2018.
  • [30] H. J. Ryu, H. Adam, and M. Mitchell. InclusiveFaceNet: Improving face attribute detection with race and gender diversity. In Proceedings of the Fairness, Accountability, and Transparency in Machine Learning Workshop, Stockholm, Sweden, July 2018.
  • [31] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626, Venice, Italy, Oct. 2017.
  • [32] T. Sim, S. Baker, and M. Bsat. The CMU pose, illumination, and expression (PIE) database. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, pages 53–58, 2002.
  • [33] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
  • [34] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages I–511–I–518, Kauai, USA, Dec. 2001.
  • [35] Z. Zhuang, D. Landsittel, S. Benson, R. Roberge, and R. Shaffer. Facial anthropometric differences among gender, ethnicity, and age groups. Annals of Occupational Hygiene, 54(4):391–402, 2010.
  • [36] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.