Removing Disparate Impact of Differentially Private Stochastic Gradient Descent on Model Accuracy

03/08/2020, by Depeng Xu et al., University of Arkansas

When we enforce differential privacy in machine learning, the utility-privacy trade-off is different w.r.t. each group. Gradient clipping and random noise addition disproportionately affect underrepresented and complex classes and subgroups, which results in inequality in utility loss. In this work, we analyze the inequality in utility loss by differential privacy and propose a modified differentially private stochastic gradient descent (DPSGD), called DPSGD-F, to remove the potential disparate impact of differential privacy on the protected group. DPSGD-F adjusts the contribution of samples in a group depending on the group clipping bias such that differential privacy has no disparate impact on group utility. Our experimental evaluation shows how group sample size and group clipping bias affect the impact of differential privacy in DPSGD, and how adaptive clipping for each group helps to mitigate the disparate impact caused by differential privacy in DPSGD-F.


1 Introduction

Most research on fairness-aware machine learning studies whether the predictive decisions made by a machine learning model are discriminatory against the protected group [Kamishima et al.2011, Zafar et al.2017, Kamiran et al.2010, Hardt et al.2016, Zhang et al.2018, Madras et al.2018]. For example, demographic parity requires that a prediction is independent of the protected attribute. Equality of odds [Hardt et al.2016] requires that a prediction is independent of the protected attribute conditional on the original outcome. These fairness notions focus on achieving non-discrimination within one single model. In addition to within-model fairness, cross-model fairness also arises in differential privacy preserving machine learning models when we compare the accuracy loss incurred by the private model between the majority group and the protected group. Recently, research in [Bagdasaryan et al.2019] shows that the reduction in accuracy incurred by deep private models disproportionately impacts underrepresented subgroups. The unfairness in this cross-model scenario is that the reduction in accuracy due to privacy protection is discriminatory against the protected group.

In this paper, we study the inequality in utility loss due to differential privacy w.r.t. groups, which compares the change in prediction accuracy w.r.t. each group between the private model and the non-private model. Differential privacy guarantees that the query results or the released model cannot be exploited by attackers to derive whether one particular record is present or absent in the underlying dataset [Dwork et al.2006]. When we enforce differential privacy on a regular non-private model, the model trades off some utility for privacy. On one hand, under the impact of differential privacy, the within-model unfairness in the private model may be different from the one in the non-private model [Jagielski et al.2019, Xu et al.2019, Ding et al.2020, Cummings et al.2019]. On the other hand, differential privacy may introduce an additional discriminative effect towards the protected group when we compare the private model with the non-private model. The utility loss between the private and non-private models w.r.t. each group, such as the reduction in group accuracy, may be uneven. The intention of differential privacy should not be to introduce more accuracy loss on the protected group, regardless of the level of within-model unfairness in the non-private model.

There are several empirical studies on the relation between the utility loss due to differential privacy and groups with different represented sample sizes. Research in [Bagdasaryan et al.2019] shows that the accuracy of private models tends to decrease more on classes that already have lower accuracy in the original, non-private model. In their case, the direction of inequality in utility loss due to differential privacy is the same as the existing within-model discrimination against the underrepresented group in the non-private model, i.e., "the poor become poorer". Research in [Du et al.2019] makes a similar observation that the contribution of rare training examples is hidden by random noise in differentially private stochastic gradient descent, and that random noise slows down the convergence of the learning algorithm. Research in [Jaiswal and Provost2019] reports different observations when analyzing whether the performance of emotion recognition models trained to enhance privacy is affected in an imbalanced way. They find that while the performance is affected differently for the subgroups, the effect is not consistent across multiple setups and datasets. In their case, there is no consistent direction of inequality in utility loss by differential privacy against the underrepresented group. Hence, the impact of differential privacy on group accuracy is more complicated than the observation in [Bagdasaryan et al.2019] (see the detailed discussion in Section 4.1). One should therefore be cautious about concluding that differential privacy introduces more utility loss on the underrepresented group. The bottom line is that the objective of differential privacy is to protect individuals' privacy, not to introduce unfairness in the form of inequality in utility loss w.r.t. groups. Even when the privacy metric improves because a model is adversarially trained to enhance privacy, we need to ensure that the performance of the model does not harm one subgroup more than the other.

In this work, we conduct a theoretical analysis of the inequality in utility loss caused by differential privacy and propose a new differentially private mechanism to remove it. We study the cost of privacy w.r.t. each group in comparison with the whole population and explain how group sample size is related to the privacy impact on group accuracy along with other factors (Section 4.2). The difference in group sample sizes leads to a difference in average group gradient norms, which results in different group clipping biases under a uniform clipping bound. It costs less utility to achieve the same level of differential privacy for the group with a larger group sample size and/or a smaller group clipping bias. In other words, the group with a smaller group sample size and/or a larger group clipping bias incurs more utility loss when the algorithm achieves the same level of differential privacy w.r.t. each group. Furthermore, we propose a modified differentially private stochastic gradient descent (DPSGD) algorithm, called DPSGD-F, to remove the potential inequality in utility loss among groups (Section 5.2). DPSGD-F adjusts the contribution of samples in a group depending on the group clipping bias. For the group with a smaller cost of privacy, its contribution is decreased and the achieved privacy w.r.t. that group is stronger; and vice versa. As a result, the final utility loss is the same for each group, i.e., differential privacy has no disparate impact on group utility in DPSGD-F. Our experimental evaluation shows the effectiveness of our removal algorithm in achieving equal utility loss with satisfactory utility (Section 6).

Our contributions are as follows:

  • We provide theoretical analysis on the group level cost of privacy and show the source of disparate impact of differential privacy on each group in the original DPSGD.

  • We propose a modified DPSGD algorithm, called DPSGD-F, to achieve differential privacy with equal utility loss w.r.t. each group. It uses adaptive clipping to adjust the sample contribution of each group, so the privacy level w.r.t. each group is calibrated based on their cost of privacy. As a result, the final group utility loss is the same for each group in DPSGD-F.

  • In our experimental evaluation, we show how group sample size and group clipping bias affect the impact of differential privacy in DPSGD, and how adaptive clipping for each group helps to mitigate the disparate impact caused by differential privacy in DPSGD-F.

2 Related Works

2.1 Differential Privacy

Existing literature on differentially private machine learning targets both convex and nonconvex optimization algorithms and can be divided into three main classes: input perturbation, output perturbation, and inner perturbation. Input perturbation approaches [Duchi et al.2013] add noise to the input data based on the local differential privacy model. Output perturbation approaches [Bassily et al.2018] add noise to the model after the training procedure finishes, i.e., without modifying the training algorithm. Inner perturbation approaches modify the learning algorithm such that the noise is injected during learning. For example, research in [Chaudhuri et al.2011] modifies the objective of the training procedure, and [Abadi et al.2016] adds noise to the gradient output of each step of the training without modifying the objective.

Limiting users to small contributions keeps the noise level low at the cost of introducing bias. Research in [Amin et al.2019] characterizes the trade-off between bias and variance, and shows that (1) a proper bound can be found depending on properties of the dataset and (2) a concrete cost of privacy cannot be avoided simply by collecting more data. Several works study how to adaptively bound the contributions of users and clip the model parameters to improve learning accuracy and robustness. Research in [Pichapati et al.2019] uses coordinate-wise adaptive clipping of the gradient to achieve the same privacy guarantee with much less added noise. In the federated learning setting, the approach proposed in [Thakkar et al.2019] adaptively sets the clipping norm applied to each user's update, based on a differentially private estimate of a target quantile of the distribution of unclipped norms, removing the need for extensive parameter tuning. Beyond adaptive clipping, research in [Phan et al.2017] adaptively injects noise into features based on the contribution of each feature to the output so that the utility of deep neural networks under $\epsilon$-differential privacy is improved; [Lee and Kifer2018] adaptively allocates a per-iteration privacy budget to achieve zCDP in gradient descent.

2.2 Fairness-aware Machine Learning

In the literature, many methods have been proposed to modify the training data to mitigate biases and achieve fairness. These methods include Massaging [Kamiran and Calders2009], Reweighting [Calders et al.2009], Sampling [Kamiran and Calders2011], Disparate Impact Removal [Feldman et al.2015], Causal-based Removal [Zhang et al.2017] and Fair Representation Learning [Edwards and Storkey2016, Xie et al.2017, Madras et al.2018, Zhang et al.2018]. Some studies propose to mitigate discriminative bias in model predictions by adjusting the learning process [Zafar et al.2017] or changing the predicted labels [Hardt et al.2016]. Recent studies [Zhang et al.2018, Madras et al.2018] also use adversarial learning techniques to achieve fairness in classification and representation learning.

Reweighting or sampling changes the importance of training samples according to an estimated probability that they belong to the protected group, so that more importance is placed on sensitive samples [Calders et al.2009, Dwork et al.2012, Kamiran and Calders2011]. Adaptive sensitive reweighting uses an iterative reweighting process to recognize sources of bias and diminish their impact without affecting features or labels [Krasanakis et al.2018]. [Kearns et al.2018] uses agnostic learning to achieve good accuracy and fairness on all subgroups; however, it requires a large number of iterations, thus incurring a very high privacy loss. Other approaches to balance accuracy across classes include oversampling, adversarial training with a loss function that overweights the underrepresented group, cost-sensitive learning, and resampling. These techniques cannot be directly combined with DPSGD because the sensitivity bounds enforced by DPSGD are not valid for oversampled or overweighted inputs, i.e., the information used to find the optimal balancing strategy is highly sensitive with unbounded sensitivity.

2.3 Differential Privacy and Fairness

Recent works study the connection between privacy protection and fairness. Research in [Dwork et al.2012] proposed a notion of fairness that is a generalization of differential privacy. Research in [Hajian et al.2015] developed a pattern sanitization method that achieves k-anonymity and fairness. Most recently, the position paper [Ekstrand et al.2018] argued for integrating recent research on fairness and non-discrimination into socio-technical systems that provide privacy protection. Later on, several works studied how to achieve within-model fairness (demographic parity [Xu et al.2019, Ding et al.2020], equality of odds [Jagielski et al.2019], equality of opportunity [Cummings et al.2019]) in addition to enforcing differential privacy in the private model. Our work in this paper studies how to prevent disparate impact of the private model on model accuracy across different groups.

3 Preliminary

Let $D$ be a dataset with $n$ tuples $(x, a, y)$, where each tuple includes the information of a user on the unprotected attributes $x$, the protected attribute $a$, and the decision $y$. Let $D_k$ denote the subset of $D$ consisting of the tuples with $a = k$. Given a set of examples $D$, the non-private model outputs a classifier $\hat{y} = f(x; \theta)$ with parameter $\theta$ which minimizes the loss function $L(\theta; D) = \frac{1}{n}\sum_{i=1}^{n} l(\theta; x_i, y_i)$. The optimal model parameter is defined as $\theta^* = \arg\min_{\theta} L(\theta; D)$. A differentially private algorithm outputs a classifier $\tilde{f}$ by selecting a parameter $\tilde{\theta}$ in a manner that satisfies differential privacy while keeping it close to the actual optimal $\theta^*$.

3.1 Differential Privacy

Differential privacy guarantees that the output of a query is insensitive to the presence or absence of any one record in a dataset.

Definition 1.

Differential privacy [Dwork et al.2006]. A mechanism $M$ is $(\epsilon, \delta)$-differentially private if, for any pair of datasets $D$ and $D'$ that differ in exactly one record, and for all events $O$ in the output range of $M$, we have
$$\Pr[M(D) \in O] \le e^{\epsilon} \Pr[M(D') \in O] + \delta.$$

The parameter $\epsilon$ denotes the privacy budget, which controls the amount by which the distributions induced by $D$ and $D'$ may differ. The parameter $\delta$ is the probability with which the guarantee is allowed to be broken. Smaller values of $\epsilon$ and $\delta$ indicate a stronger privacy guarantee.

Definition 2.

Global sensitivity [Dwork et al.2006]. Given a query $q: \mathcal{D} \to \mathbb{R}^d$, the global sensitivity $\Delta$ is defined as
$$\Delta = \max_{D, D' \text{ differing in one record}} \| q(D) - q(D') \|.$$

The global sensitivity measures the maximum possible change in $q(D)$ when one record in the dataset changes.

The Gaussian mechanism with parameter $\sigma$ adds zero-mean Gaussian noise with standard deviation $\sigma$ to each component of the model output.

Definition 3.

Gaussian mechanism [Dwork et al.2006]. Let $\epsilon \in (0, 1)$ be arbitrary. For $\sigma \ge \sqrt{2 \ln(1.25/\delta)}\,\Delta / \epsilon$, the Gaussian mechanism with parameter $\sigma$ satisfies $(\epsilon, \delta)$-differential privacy.
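For concreteness, here is a minimal Python sketch of the Gaussian mechanism for a query with known L2 sensitivity; the function and variable names below are ours, not from the paper:

```python
import numpy as np

def gaussian_mechanism(query_output, sensitivity, epsilon, delta, rng=None):
    """Release a query output under (epsilon, delta)-DP by adding Gaussian noise
    calibrated to the query's global sensitivity (Definition 3, epsilon < 1)."""
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    noise = rng.normal(loc=0.0, scale=sigma, size=np.shape(query_output))
    return np.asarray(query_output, dtype=float) + noise

# Example: privately release the mean of 1000 values bounded in [0, 1].
data = np.random.rand(1000)
# Changing one record changes the mean by at most 1/n, so sensitivity = 1/len(data).
private_mean = gaussian_mechanism(data.mean(), sensitivity=1.0 / len(data),
                                  epsilon=0.5, delta=1e-5)
```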

3.2 Differentially Private Stochastic Gradient Descent

The procedure of deep learning model training is to minimize the output of a loss function through numerous stochastic gradient descent (SGD) steps.

[Abadi et al.2016] proposed a differentially private SGD algorithm (DPSGD). DPSGD uses a clipping bound on the norm of individual updates, aggregates the clipped updates, and then adds Gaussian noise to the aggregate. This ensures that the iterates do not overfit to any individual user's update.

The privacy leakage of DPSGD is measured by $(\epsilon, \delta)$, i.e., by computing a bound $\epsilon$ for the privacy loss that holds with probability at least $1 - \delta$. Each iteration of DPSGD can be considered as a privacy mechanism that has the same pattern in terms of sensitive data access. [Abadi et al.2016] further proposed a moments accountant mechanism which calculates the aggregate privacy bound when performing SGD for multiple steps. The moments accountant computes tighter bounds for the privacy loss compared to the standard composition theorems. It is tailored to the Gaussian mechanism and employs the log moments of the privacy loss at each step to derive the bound on the total privacy loss.

1:  for $t = 1, \ldots, T$ do
2:     Randomly sample a batch $B_t$ of $m$ samples from $D$
3:     for each sample $x_i \in B_t$ do
4:        $g_t(x_i) \leftarrow \nabla_\theta l(\theta_t; x_i, y_i)$
5:     end for
6:     for each sample $x_i \in B_t$ do
7:        $\bar{g}_t(x_i) \leftarrow g_t(x_i) / \max\big(1, \|g_t(x_i)\|_2 / C\big)$
8:     end for
9:     $\tilde{g}_t \leftarrow \frac{1}{m}\big(\sum_i \bar{g}_t(x_i) + \mathcal{N}(0, \sigma^2 C^2 \mathbf{I})\big)$
10:     $\theta_{t+1} \leftarrow \theta_t - \eta_t \tilde{g}_t$
11:  end for
12:  Return $\theta_T$ and accumulated privacy cost $(\epsilon, \delta)$
Algorithm 1 DPSGD (Dataset $D$, loss function $l$, learning rate $\eta_t$, batch size $m$, noise scale $\sigma$, clipping bound $C$)

To reduce noise in private training of neural networks, DPSGD [Abadi et al.2016] truncates the gradient of a neural network to control the sensitivity of the sum of gradients. This is because the sensitivity of gradients, and hence the scale of the noise, would otherwise be unbounded. To fix this, a cap $C$ on the maximum size of a user's contribution is adopted (Line 7 in Algorithm 1). This biases the estimated sum but also reduces the amount of added noise, as the sensitivity of the sum is now $C$. One question is how to choose the truncation level for the gradient norm. If set too high, the noise level may be so great that any utility in the result is lost. If set too low, a large fraction of gradients will be clipped. DPSGD simply suggests using the median of observed gradients. [Amin et al.2019] investigated this bias-variance trade-off and showed that the limit we should choose is a particular quantile of the gradient norms themselves, determined by the privacy parameter (see Section 4.2). It does not matter how large or small the gradients are above or below the cutoff, only that a fixed number of values are clipped.
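As a concrete illustration of the per-example clipping and noise addition in Algorithm 1, the following NumPy sketch implements a single DPSGD step; the function name `dp_sgd_step` and the toy gradient computation are our own illustrative choices, not code from the paper:

```python
import numpy as np

def dp_sgd_step(theta, per_example_grads, lr, clip_bound, noise_scale, rng):
    """One DPSGD update: clip each per-example gradient to L2 norm <= clip_bound,
    sum, add Gaussian noise with std noise_scale * clip_bound, then average."""
    m = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g / max(1.0, norm / clip_bound))        # Line 7 of Algorithm 1
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_scale * clip_bound, size=theta.shape)        # Line 9
    return theta - lr * (noisy_sum / m)                          # Line 10

# Toy usage: least-squares per-example gradients for one random batch.
rng = np.random.default_rng(0)
theta = np.zeros(5)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
grads = [2 * (x @ theta - yi) * x for x, yi in zip(X, y)]
theta = dp_sgd_step(theta, grads, lr=0.1, clip_bound=1.0, noise_scale=1.0, rng=rng)
```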

3.3 Within-model Fairness

Consider the classifier $\hat{y} = f(x; \theta)$, which predicts the class label $y$ given the unprotected attributes $x$. Classification fairness requires that the predicted label is unbiased with respect to the protected attribute $a$. The following notions of fairness in classification were defined by [Hardt et al.2016] and refined by [Beutel et al.2017].

Definition 4.

Demographic parity. Given a labeled dataset $D$ and a classifier $\hat{y} = f(x; \theta)$, the property of demographic parity is defined as
$$P(\hat{y} = 1 \mid a = a_i) = P(\hat{y} = 1 \mid a = a_j) \quad \text{for any } a_i, a_j.$$

This means that the predicted labels are independent of the protected attribute.

Definition 5.

Equality of odds. Given a labeled dataset $D$ and a classifier $\hat{y} = f(x; \theta)$, the property of equality of odds is defined as
$$P(\hat{y} = 1 \mid a = a_i, y) = P(\hat{y} = 1 \mid a = a_j, y) \quad \text{for any } a_i, a_j,$$

where $y \in \{0, 1\}$.

Hence, for $y = 1$, equality of odds requires the classifier to have equal true positive rates (TPR) between two subgroups $a_i$ and $a_j$; for $y = 0$, the classifier has equal false positive rates (FPR) between the two subgroups.

Equality of odds promotes that individuals who qualify for a desirable outcome should have an equal chance of being correctly classified for this outcome. It allows for higher accuracy with respect to non-discrimination. It enforces both equal true positive rates and false positive rates in all demographics, punishing models that perform well only on the majority.
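A small illustrative helper for checking equality of odds empirically, computing the per-group TPR and FPR that the definition compares (the function name and signature are ours):

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Return per-group (TPR, FPR); equality of odds asks these pairs to be
    equal across all values of the protected attribute."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        tpr = np.mean(yp[yt == 1] == 1) if np.any(yt == 1) else float("nan")
        fpr = np.mean(yp[yt == 0] == 1) if np.any(yt == 0) else float("nan")
        rates[g] = (tpr, fpr)
    return rates
```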

4 Disparate Impact on Model Accuracy

In this section, we first discuss how differentially private learning, specifically DPSGD, causes inequality in utility loss through our preliminary observations. Then we study the cost of privacy with respect to each group in comparison with the whole population and explain how group sample size is related to the privacy impact on group accuracy along with other factors.

Dataset        | MNIST                        | Adult                        | Dutch
Group          | Total    Class 2   Class 8   | Total    M        F          | Total    M        F
Sample size    | 54649    5958      500       | 45222    30527    14695      | 60420    30273    30147
SGD            | 0.9855   0.9903    0.9292    | 0.8099   0.7610   0.9117     | 0.7879   0.8013   0.7744
DPSGD          | 0.8774   0.9196    0.2485    | 0.7507   0.6870   0.8836     | 0.6878   0.6479   0.7278
DPSGD vs. SGD  | -0.1081  -0.0707   -0.6807   | -0.0592  -0.0740  -0.0281    | -0.1001  -0.1534  -0.0466
Table 1: Model accuracy w.r.t. the total population, the majority group and the minority group for SGD and DPSGD on the unbalanced MNIST, the original Adult, and the original Dutch datasets
Figure 1: The average gradient norm and the average loss w.r.t. class 2 and 8 over epochs for SGD and DPSGD on the balanced and unbalanced MNIST datasets
Figure 2: The average gradient norm and the average loss w.r.t. male and female groups over epochs for SGD and DPSGD on the original Adult and the original Dutch datasets

4.1 Preliminary Observations

To explain why DPSGD has disparate impact on model accuracy w.r.t. each group, [Bagdasaryan et al.2019] constructs an unbalanced MNIST dataset to study the effects of gradient clipping, noise addition, the size of the underrepresented group, batch size, length of training, and other hyperparameters. Training on the data of the underrepresented subgroups produces larger gradients, thus clipping reduces their learning rate and the influence of their data on the model. They also showed that random noise addition has the biggest impact on the underrepresented inputs. However, [Jaiswal and Provost2019] reports inconsistent observations on whether differential privacy discriminates against the underrepresented group in terms of reduction in accuracy. To complement their observations, we use the unbalanced MNIST dataset from [Bagdasaryan et al.2019] to reproduce their result, and we also use two benchmark census datasets (Adult and Dutch) from fair machine learning to study the inequality of utility loss due to differential privacy. We include the setup details in Section 6.1. Table 1 shows the model accuracy w.r.t. the total population, the majority group and the minority group for SGD and DPSGD on the MNIST, Adult and Dutch datasets.

For the unbalanced MNIST dataset, the minority group (class 8) has significantly larger utility loss than the other groups in the private model. DPSGD results in only a modest decrease in accuracy on the well-represented classes, but accuracy on the underrepresented class drops by 0.68 (Table 1), exhibiting a disparate impact on the underrepresented class. Figure 1 shows that the small sample size reduces both the convergence rate and the optimal utility of class 8 in DPSGD in comparison with the non-private SGD. The model is far from converging, yet clipping and noise addition do not let it move closer to the minimum of the loss function. Furthermore, the addition of noise whose magnitude is similar to the update vector prevents the clipped gradients of the underrepresented class from sufficiently updating the relevant parts of the model. Training with more epochs does not reduce this gap while exhausting the privacy budget. Differential privacy also slows down the convergence and degrades the utility for each group. Hence, DPSGD introduces negative discrimination against the minority group (which already has lower accuracy in the non-private SGD model) on the unbalanced MNIST dataset. This matches the observation in [Bagdasaryan et al.2019].

However, on the Adult and Dutch datasets, we have different observations than on MNIST. The Adult dataset is an unbalanced dataset, where the female group is underrepresented. Even though the male group is the majority group, it has lower accuracy in SGD and more utility loss in DPSGD than the female group. The Dutch dataset is a balanced dataset, where the group sample sizes are similar for male and female. However, DPSGD introduces more negative discrimination against the male group, and its direction (the male group loses more accuracy due to DP) is even opposite to the direction of within-model discrimination (the female group has lower accuracy in SGD). Figure 2 shows that the average gradient norm is much higher for the male group in DPSGD on both datasets. The disparate impact is not simply against the group with the smaller sample size or lower accuracy in SGD. Hence, differential privacy does not always introduce more accuracy loss to the minority group, as seen on the Adult and Dutch datasets. This matches the observation in [Jaiswal and Provost2019].

From the preliminary observations, we learn that the disproportionate effect of differential privacy does not always fall on the underrepresented group or the group with "poor" accuracy. Why does differential privacy cause inequality in utility loss w.r.t. each group? It may depend on more than just the represented sample size of each group: the classification model, the mechanism to achieve differential privacy, and the relative complexity of each group's data distribution under the model. One common observation among all settings is that the group that incurs more utility loss has larger gradients and worse convergence. In Figure 1, the underrepresented class 8 has an average gradient norm of over 100 and poor utility in DPSGD. In Figure 2, the male group has a much larger average gradient norm than the female group in DPSGD on both the Adult and Dutch datasets. It is important to address the larger gradients and worse convergence directly in order to mitigate the inequality in utility loss.

4.2 Analysis on Cost of Privacy w.r.t. Each Group

In this section, we conduct analysis on the cost of privacy from the viewpoint of a single batch, where the utility loss is measured by the expected error of the estimated private gradient w.r.t. each group. Our analysis follows [Amin et al.2019], which investigates the bias-variance trade-off due to clipping in DPSGD. Suppose that $B = \{x_1, \ldots, x_m\}$ is a batch of $m$ samples. Each $x_i$ corresponds to a sample and generates the gradient $g_i$. We would like to estimate the average gradient $\frac{1}{m}\sum_{i=1}^{m} g_i$ from $B$ in a differentially private way while minimizing the objective function.

Based on Algorithm 1, we denote by $g_i$ the original gradient before clipping (Line 4), by $\bar{g}_i$ the gradient after clipping but before adding noise (Line 7), and by $\tilde{g}$ the gradient after clipping and adding noise (Line 9). The expected error of the estimate consists of a variance term (due to the noise) and a bias term (due to the contribution limit):
$$\mathbb{E}\Big[\big\|\tilde{g} - \tfrac{1}{m}\textstyle\sum_{i=1}^{m} g_i\big\|\Big] \;\le\; \frac{1}{m}\Big(\frac{C}{\epsilon} + \sum_{i=1}^{m} \max\big(0, \|g_i\| - C\big)\Big).$$

In the above derivation, we use the fact that the mean absolute deviation of a Laplace variable is equal to its scale parameter. We can find the optimal $C$ by noting that the bound is convex in $C$ with sub-derivative $\frac{1}{m}\big(\frac{1}{\epsilon} - |\{i: \|g_i\| > C\}|\big)$, thus the minimum is achieved when $C$ is equal to the $\lceil 1/\epsilon \rceil$-th largest value among the gradient norms.

The expected error bound is tight. In other words, the limit we should choose is just the $\big(1 - \frac{1}{\epsilon m}\big)$-quantile of the gradient norms themselves.
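A minimal sketch of this quantile rule under the bias-variance bound above (the function name and the exact tie-breaking are our own choices): pick the clipping bound so that roughly $1/\epsilon$ of the largest gradient norms get clipped.

```python
import numpy as np

def optimal_clip_bound(grad_norms, epsilon):
    """Clipping bound balancing noise variance (~C/epsilon) against clipping bias
    (sum of excesses over C): allow roughly 1/epsilon gradients to be clipped."""
    m = len(grad_norms)
    k = int(np.ceil(1.0 / epsilon))          # number of gradients allowed to be clipped
    k = min(max(k, 1), m)
    # The k-th largest norm, i.e. approximately the (1 - 1/(epsilon*m))-quantile.
    return np.sort(grad_norms)[m - k]

norms = np.abs(np.random.default_rng(0).normal(1.0, 0.5, size=256))
C = optimal_clip_bound(norms, epsilon=0.1)   # with eps=0.1, the 10 largest norms lie at or above C
```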

For the same batch of samples, we derive the cost of privacy w.r.t. each group. Suppose the batch of $m$ samples comes from $K$ groups, and group $k$ has sample size $m_k$. We have $B = \bigcup_{k=1}^{K} B_k$ and $\sum_{k=1}^{K} m_k = m$.

DPSGD bounds the sensitivity of the gradient by clipping each sample's gradient with a clipping bound $C$.

Then, DPSGD adds Gaussian noise to the sum of clipped gradients.

The expected error of the estimated group gradient also consists of a variance term (due to the noise) and a bias term (due to the contribution limit):

$$\mathbb{E}\Big[\big\|\tilde{g}_k - \tfrac{1}{m_k}\textstyle\sum_{i: a_i = k} g_i\big\|\Big] \;\le\; \frac{1}{m_k}\Big(\frac{C}{\epsilon} + \sum_{i: a_i = k} \max\big(0, \|g_i\| - C\big)\Big), \qquad (1)$$

where the bias term is non-zero only for the examples that get clipped in group $k$. Similarly, the tight bound w.r.t. each group $k$ is attained when $C$ equals the $\big(1 - \frac{1}{\epsilon m_k}\big)$-quantile of the gradient norms within group $k$.

From Equation 1, we know the utility loss of group $k$, measured by the expected error of the estimated private gradient, is bounded by two terms: the bias due to the contribution limit (depending on the size of the gradients and the size of the clipping bound) and the variance of the noise (depending on the scale of the noise). Next, we discuss their separate impacts in DPSGD.

Given the clipping bound $C$, the bias due to clipping w.r.t. the group with large gradients is larger than the one w.r.t. the group with small gradients. Before clipping, the group with large gradients has a large contribution to the total gradient in SGD, but this is not the case in DPSGD. The direction of the total gradient after clipping, $\sum_i \bar{g}_i$, is closer to the direction of the gradient of the group with small bias (small gradients) than the direction of the total gradient before clipping, $\sum_i g_i$. Due to clipping, the contribution and convergence of the group with large gradients are reduced.

The added noise increases the variance of the model gradient as it tries to hide the influence of a single record on the model, which slows down the convergence rate of the model. Because the noise scale and the sensitivity of the clipped gradients are the same for all groups, the noisy gradients of all groups achieve the same level of $(\epsilon, \delta)$-differential privacy. The direction of the noise is random, i.e., it does not favor a particular group in expectation.

Overall in DPSGD, the group with large gradients has a larger cost of privacy, i.e., it incurs more utility loss to achieve the same $(\epsilon, \delta)$ level of differential privacy under the same clipping bound $C$.

We can also consider the optimal choice of $C$, which is the $\big(1 - \frac{1}{\epsilon m}\big)$-quantile of the gradient norms for the whole batch. For each group $k$, the optimal choice $C_k$ is the corresponding quantile for group $k$. The distance between $C$ and $C_k$ is not the same for all groups, and $C$ is closer to the optimal choice for the group with small bias (small gradients).

Now we look back on the preliminary observations in Section 4.1. On MNIST, the group sample size affects the convergence rate for each group. The group with a large sample size (the majority group, class 2) has a larger contribution to the total gradient than the group with a small sample size (the minority group, class 8), and therefore it converges relatively faster and better. As a result, the gradients of the minority group are larger than the gradients of the majority group later in training. In this case, the small sample size is the main cause of the large gradient norm and the large utility loss for class 8. On Adult and Dutch, the average bias due to clipping for each group is different because the distributions of gradients are quite different. The average gradient norm of the male group is larger than that of the female group, even though the male group is not underrepresented. As a result, the male group's contribution is limited due to clipping and it has a larger utility loss in DPSGD. In this case, the group sample size is not the only factor that causes the difference in average gradient norm, and the other factors (e.g., the relative complexity of each group's data distribution under the model) outweigh sample size, so the well-represented male group has the larger utility loss.

This gives us an insight into the relation between differential privacy and the inequality in utility loss w.r.t. each group. The direct cause of the inequality is the large cost of privacy due to a large average gradient norm (which can be caused by a small group sample size along with other factors). In DPSGD, the clipping bound $C$ is selected uniformly for all groups without consideration of the difference in clipping biases. As a result, the noise addition to achieve $(\epsilon, \delta)$-differential privacy on the learning model results in a different utility-privacy trade-off for each group, where the underrepresented or more complex group incurs a larger utility loss. After all, DPSGD is designed to protect individuals' privacy, with nice properties, but without consideration of its different impact on each group. In order to avoid disparate utility loss among groups, we need to modify DPSGD such that each group achieves a different level of privacy to counter the difference in costs of privacy.

5 Removing Disparate Impact

Our objective is to build a learning algorithm that outputs a neural network classifier with parameter $\tilde{\theta}$ that achieves differential privacy and equality of utility loss with satisfactory utility. Based on our preliminary observations and analysis of the cost of privacy in DPSGD, we propose a heuristic removal algorithm, called DPSGD-F, to achieve equal utility loss w.r.t. each group.

5.1 Equality of Impact of Differential Privacy

In within-model fairness, equality of odds results in the equality of accuracy for different groups. Note that equal accuracy does not result in equal odds. As a trade-off for privacy, differential privacy results in accuracy loss on the model. However, different groups may incur different levels of accuracy loss. We use the reduction in accuracy w.r.t. group $k$ to measure the utility loss between the private model $\tilde{f}$ and the non-private model $f^*$, denoted by $acc(f^*, D_k) - acc(\tilde{f}, D_k)$. We define a new fairness notion for differentially private learning, called equality of privacy impact, which requires that the group utility loss due to differential privacy is the same for all groups.

Definition 6.

Equality of privacy impact. Given a labeled dataset $D$, a classifier $f^*$ and a differentially private classifier $\tilde{f}$, a differentially private mechanism satisfies equality of privacy impact if
$$acc(f^*, D_{a_i}) - acc(\tilde{f}, D_{a_i}) \;=\; acc(f^*, D_{a_j}) - acc(\tilde{f}, D_{a_j}),$$

where $a_i, a_j$ are any two values of the protected attribute $a$.
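An illustrative helper for this notion (the name and signature are ours, not code from the paper): it returns the per-group accuracy reduction that Definition 6 requires to be equal.

```python
import numpy as np

def privacy_impact_per_group(y_true, y_pred_nonprivate, y_pred_private, group):
    """Per-group reduction in accuracy when moving from the non-private model
    to the private one (Definition 6 asks these reductions to be equal)."""
    impact = {}
    for g in np.unique(group):
        mask = group == g
        acc_np = np.mean(y_pred_nonprivate[mask] == y_true[mask])
        acc_dp = np.mean(y_pred_private[mask] == y_true[mask])
        impact[g] = acc_np - acc_dp
    return impact
```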

5.2 Removal Algorithm

1:  for $t = 1, \ldots, T$ do
2:     Randomly sample a batch $B_t$ of $m$ samples from $D$
3:     for each sample $x_i \in B_t$ do
4:        $g_t(x_i) \leftarrow \nabla_\theta l(\theta_t; x_i, y_i)$
5:     end for
6:     For each group $k$, compute the group size $m_k$ and the count $o_k$ of samples in $B_t$ with $a_i = k$ and $\|g_t(x_i)\|_2 > C_0$
7:     for each group $k$ do
8:        Compute the group clipping bias indicator from $o_k$ and $m_k$
9:        Set the adaptive clipping bound $C_k$ from the base bound $C_0$ and the indicator
10:     end for
11:     for each sample $x_i \in B_t$ do
12:        $\bar{g}_t(x_i) \leftarrow g_t(x_i) / \max\big(1, \|g_t(x_i)\|_2 / C_{a_i}\big)$
13:     end for
14:     $C \leftarrow \max_k C_k$
15:     $\tilde{g}_t \leftarrow \frac{1}{m}\big(\sum_i \bar{g}_t(x_i) + \mathcal{N}(0, \sigma^2 C^2 \mathbf{I})\big)$
16:     $\theta_{t+1} \leftarrow \theta_t - \eta_t \tilde{g}_t$
17:  end for
18:  Return $\theta_T$ and accumulated privacy cost $(\epsilon, \delta)$
Algorithm 2 DPSGD-F (Dataset $D$, loss function $l$, learning rate $\eta_t$, batch size $m$, noise scale $\sigma$, base clipping bound $C_0$)

We propose a heuristic approach for differentially private SGD that removes disparate impact across different groups. The intuition of our heuristic approach is to balance the level of privacy w.r.t. each group based on their utility-privacy trade-off. Algorithm 2 shows the framework of our approach. Instead of uniformly clipping the gradients for all groups, we propose adaptive sensitive clipping, where each group $k$ gets its own clipping bound $C_k$. For the group with a larger clipping bias (due to large gradients), we choose a larger clipping bound to balance its higher cost of privacy. The large gradients may be due to group sample size or other factors.

Based on our observation and analysis in the previous section, to balance the difference in costs of privacy for each group, we need to adjust the clipping bound such that the contribution of each group is proportional to the size of their average gradient. Ideally, we would like to adjust the clipping bound based on the private estimate of the average gradient norm. However, the original gradient before clipping has unbounded sensitivity. It would not be practical to get its private estimate. We need to construct a good approximate estimate of the relative size of the average gradient w.r.t. each group and it needs to have a small sensitivity for private estimation.

In our algorithm, we choose the adaptive clipping bound $C_k$ based on $o_k$, where $o_k = |\{i : a_i = k, \|g_t(x_i)\|_2 > C_0\}|$ is the number of samples in group $k$ whose gradient norm exceeds the base clipping bound $C_0$. To avoid the influence of group sample size, we use the fraction $o_k / m_k$, which represents the fraction of samples in the group with gradients larger than $C_0$. The relative ratio of $o_k$ and $m_k$ can approximately represent the relative size of the average gradient. Both $o_k$ and $m_k$ have a sensitivity of 1, which is much smaller than the sensitivity of the actual gradients. Note that in Algorithm 2 we do not attempt to choose the clipping bound for group $k$ in a differentially private way. We can easily use a small privacy budget to estimate $o_k$ and $m_k$ in a differentially private way.
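The exact rule for deriving $C_k$ from these counts is only summarized qualitatively above, so the NumPy sketch below illustrates the general idea under an assumed scaling rule: release the counts with a little noise (they have sensitivity 1) and give groups with a larger clipped fraction a proportionally larger bound. The function name and the specific scaling formula are ours, not the paper's.

```python
import numpy as np

def adaptive_group_clip_bounds(grad_norms, groups, base_bound, noise_scale, rng):
    """Illustrative per-group clipping bounds: groups whose gradients exceed the
    base bound more often (larger clipping bias) receive a larger bound."""
    fractions = {}
    for k in np.unique(groups):
        mask = groups == k
        # Noisy count of samples in group k with gradient norm above the base bound.
        o_k = np.sum(grad_norms[mask] > base_bound) + rng.normal(0, noise_scale)
        m_k = np.sum(mask) + rng.normal(0, noise_scale)   # noisy group size
        fractions[k] = max(o_k, 0.0) / max(m_k, 1.0)
    avg_fraction = np.mean(list(fractions.values()))
    bounds = {}
    for k, frac in fractions.items():
        # Assumed scaling rule (not from the paper): a larger clipped fraction
        # relative to the average gives a proportionally larger clipping bound.
        bounds[k] = base_bound * (1.0 + frac) / (1.0 + avg_fraction)
    return bounds
```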

After the adaptive clipping, the sensitivity of the clipped gradient of group $k$ is $C_k$. The sensitivity of the clipped gradient of the total population would be $\max_k C_k$, as the worst case in the total population needs to be considered.

For the total population, Algorithm 2 still satisfies $(\epsilon, \delta)$-differential privacy as it accounts for the worst-case clipping bound $\max_k C_k$. At the group level, each group achieves a different level of privacy depending on its utility-privacy trade-off.

In the case of [Bagdasaryan et al.2019], the difference in gradient norms is primarily decided by group sample size. Consider a majority group $k_1$ and a minority group $k_2$. In Algorithm 1, each group achieves the same level of privacy, but the underrepresented group has a higher privacy cost (utility loss). In Algorithm 2, we choose a higher clipping bound $C_{k_2}$ for the underrepresented group. Because the noise scale is calibrated to $\max_k C_k$ while the sensitivity of the clipped gradients for the underrepresented group is $C_{k_2}$, the noisy gradient w.r.t. the underrepresented group achieves $(\epsilon_{k_2}, \delta)$-differential privacy. The well-represented group has a smaller cost of privacy, so we choose a lower clipping bound $C_{k_1}$. Because the noise scale is the same while the sensitivity of the clipped gradients for the well-represented group is $C_{k_1}$, the noisy gradient w.r.t. the well-represented group achieves $(\epsilon_{k_1}, \delta)$-differential privacy. The two groups have different clipping bounds and the same noise addition (the same noise magnitude but different relative scales w.r.t. their group sensitivities). Hence, when we enforce the same level of utility loss for groups with different sample sizes, the well-represented group achieves stronger privacy ($\epsilon_{k_1}$ smaller than $\epsilon_{k_2}$) than the underrepresented group.
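To see why a fixed noise magnitude combined with a smaller group sensitivity yields a smaller privacy parameter, here is a per-iteration sketch based on the Gaussian-mechanism bound from Definition 3, assuming the noise standard deviation is $\sigma \cdot \max_k C_k$ as in Algorithm 2 (the moments accountant used for the overall budget gives tighter aggregate bounds):

```latex
% Sketch: per-group privacy level under a fixed noise magnitude.
% The noise standard deviation is calibrated to the worst-case sensitivity,
%   \sigma_{\mathrm{noise}} = \sigma \cdot \max_k C_k .
% For group k, whose clipped gradients have sensitivity C_k, the bound in
% Definition 3 is satisfied with the per-group privacy parameter
\epsilon_k \;=\; \frac{\sqrt{2\ln(1.25/\delta)}}{\sigma}\cdot\frac{C_k}{\max_k C_k},
% so a group with a smaller clipping bound C_k (smaller sensitivity) is
% accounted with a proportionally smaller \epsilon_k, i.e. stronger privacy.
```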

In the case of Adult and Dutch, the male group has larger gradients regardless of the sample size. The group with smaller gradients, determined by the model and data distribution, has a smaller cost of privacy. Algorithm 2 can adjust the clipping bound for each group accordingly. As a result, the group with smaller gradients achieves a stronger level of privacy. Eventually, the groups can have similar clipping biases, in contrast to Algorithm 1.

5.3 Baseline

1:  for $t = 1, \ldots, T$ do
2:     Randomly sample a batch $B_t$ of $m$ samples from $D$
3:     for each sample $x_i \in B_t$ do
4:        $g_t(x_i) \leftarrow \nabla_\theta l(\theta_t; x_i, y_i)$
5:     end for
6:     for each group $k$ do
7:        Compute the group weight $w_k \le 1$ that down-weights the contribution of groups with a larger sample size
8:     end for
9:     for each sample $x_i \in B_t$ do
10:        $\bar{g}_t(x_i) \leftarrow w_{a_i} \cdot g_t(x_i) / \max\big(1, \|g_t(x_i)\|_2 / C\big)$
11:     end for
12:     $C' \leftarrow \max_k w_k C$
13:     $\tilde{g}_t \leftarrow \frac{1}{m}\big(\sum_i \bar{g}_t(x_i) + \mathcal{N}(0, \sigma^2 C'^2 \mathbf{I})\big)$
14:     $\theta_{t+1} \leftarrow \theta_t - \eta_t \tilde{g}_t$
15:  end for
16:  Return $\theta_T$ and accumulated privacy cost $(\epsilon, \delta)$
Algorithm 3 Naïve (Dataset $D$, loss function $l$, learning rate $\eta_t$, batch size $m$, noise scale $\sigma$, base clipping bound $C$)

There is no previous work on how to achieve equal utility loss in DPSGD. For experimental evaluation, we also present a naïve baseline algorithm based on reweighting (Algorithm 3) in this section. The naïve algorithm considers group sample size as the main cause of the disproportionate impact in DPSGD and adjusts the sample contribution of each group to mitigate the impact of sample size.

For the group with a larger group sample size, we reweight the sample contribution with a weight $w_k \le 1$ instead of using a uniform weight of 1 for all groups. (Note that the clipping bound $C$ in Algorithm 1 is estimated based on a uniform weight for each sample regardless of group membership.) The sensitivity for group $k$ is $w_k C$. The sensitivity for the total population would be $\max_k w_k C$. The result also matches the idea that we limit the sample contribution of the group with a smaller cost of privacy to achieve a stronger privacy level w.r.t. that group. However, Naïve only considers the group sample size. As we know, the factors that affect the gradient norm and the clipping bias are more complex than just the group sample size. We will compare with this Naïve approach as a baseline in our experiments.

6 Experiments

6.1 Experiment Setup

6.1.1 Datasets

We use the MNIST dataset and replicate the setting in [Bagdasaryan et al.2019]. The original MNIST dataset is a balanced dataset with 60,000 training samples, and each class has about 6,000 samples. Class 8 has the most false negatives, hence we choose it as the artificially underrepresented group (reducing the number of training samples from 5,851 to 500) in the unbalanced MNIST dataset. We compare the underrepresented class 8 with the well-represented class 2, which shares the fewest false negatives with class 8 and therefore can be considered independent of it. The testing dataset has 10,000 testing samples with about 1,000 for each class.

We also use two census datasets, Adult and Dutch. For both datasets, we consider "Sex" as the protected attribute and "Income" as the decision. For the unprotected attributes, we convert categorical attributes to one-hot vectors and normalize numerical attributes to a common range. After preprocessing, we have 40 unprotected attributes for Adult and 35 unprotected attributes for Dutch. The original Adult dataset has 45,222 samples (30,527 males and 14,695 females). We sample a balanced Adult dataset with 14,000 males and 14,000 females. The original Dutch dataset is close to balanced, with 30,273 males and 30,147 females. We sample an unbalanced Dutch dataset with 30,000 males and 10,000 females. In all settings, we split the census datasets into 80% training data and 20% testing data.

6.1.2 Model

For the MNIST dataset, we use a neural network with 2 convolutional layers and 2 linear layers with 431K parameters in total. The learning rate, batch size, and number of training epochs are fixed across all compared methods.

For the census datasets, we use a logistic regression model with regularization parameter 0.01, again with the learning rate, batch size, and number of training epochs fixed across all compared methods.

6.1.3 Baseline

We compare our proposed method DPSGD-F (Algorithm 2) with the original DPSGD (Algorithm 1) and the Naïve approach (Algorithm 3). For each setting, the learning parameters are the same, and the base clipping bound in DPSGD-F and Naïve is set equal to the clipping bound in DPSGD. The noise scale and clipping bound are set separately for the MNIST dataset and for the census datasets. The accumulated privacy budget for each setting is computed using the moments accountant method [Abadi et al.2016]. All DP models are compared with the non-private SGD when we measure the utility loss due to differential privacy.

6.1.4 Metric

We use the test data to measure model utility and fairness. Based on Definition 6, we use the reduction in model accuracy for each group between the private SGD and the non-private SGD, $acc(f^*, D_k) - acc(\tilde{f}, D_k)$, as the metric to measure the impact of differential privacy w.r.t. each group. The difference between the impacts on any two groups measures the level of inequality in utility loss due to differential privacy. If the impact is the same for all groups, i.e., independent of the protected attribute, we consider the private SGD to have equal reduction in model accuracy w.r.t. each group, i.e., the private SGD achieves equality of impact of differential privacy. We also report the average loss and average gradient norm to show the convergence w.r.t. each group during training.
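Continuing the illustrative helper from Section 5.1, the level of inequality can be summarized as the largest gap between the per-group accuracy reductions (again, the function name is ours):

```python
def inequality_of_impact(impact_per_group):
    """Largest pairwise gap between per-group accuracy reductions, as returned by
    privacy_impact_per_group above; 0 means equal impact across all groups."""
    values = list(impact_per_group.values())
    return max(values) - min(values)
```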

Dataset          | Balanced                      | Unbalanced
Group            | Total    Class 2   Class 8    | Total    Class 2   Class 8
Sample size      | 60000    5958      5851       | 54649    5958      500
SGD              | 0.9892   0.9932    0.9917     | 0.9855   0.9903    0.9292
DPSGD vs. SGD    | -0.0494  -0.0853   -0.0719    | -0.1081  -0.0707   -0.6807
Naïve vs. SGD    | -0.0484  -0.0823   -0.0657    | -0.1268  -0.1932   -0.1292
DPSGD-F vs. SGD  | -0.0267  -0.0387   -0.0401    | -0.0320  -0.0456   -0.0432
Table 2: Model accuracy w.r.t. class 2 and 8 on the balanced and unbalanced MNIST datasets
Figure 3: Model accuracy w.r.t. each class for SGD, DPSGD, Naïve and DPSGD-F on the balanced and unbalanced MNIST datasets
Figure 4: The average gradient norm and the average loss w.r.t. class 2 and 8 over epochs for SGD, DPSGD, Naïve and DPSGD-F on the unbalanced MNIST dataset
Figure 5: The clipping bound w.r.t. each class over epochs for DPSGD-F on the unbalanced MNIST dataset
Group            | Total    Class 2   Class 8
Sample size      | 54649    5958      500
SGD              | 0.9855   0.9903    0.9292
DPSGD vs. SGD    | -0.1081  -0.0707   -0.6807
DPSGD vs. SGD    | -0.0587  -0.0426   -0.3286
DPSGD vs. SGD    | -0.0390  -0.0232   -0.2013
DPSGD vs. SGD    | -0.0286  -0.0194   -0.1376
DPSGD vs. SGD    | -0.0240  -0.0145   -0.1099
DPSGD-F vs. SGD  | -0.0320  -0.0456   -0.0432
Table 3: Model accuracy w.r.t. class 2 and 8 for DPSGD with increasing uniform clipping bounds (rows ordered from the smallest to the largest bound) vs. the adaptive clipping bound in DPSGD-F on the unbalanced MNIST dataset

6.2 MNIST Dataset

Table 2 shows the model accuracy w.r.t. class 2 and 8 on the balanced and unbalanced MNIST datasets. Figure 3 shows the model accuracy w.r.t. all classes on the MNIST dataset. On the balanced dataset, each private or non-private model achieves similar accuracy across all groups. When we artificially reduce the sample size of class 8, class 8 becomes the minority group in the unbalanced dataset. The non-private SGD model converges to 0.9292 accuracy on class 8 vs. 0.9903 accuracy on class 2. The DPSGD model causes 0.6807 accuracy loss on class 8 vs. 0.0707 on class 2, which exhibits a significant disparate impact on the underrepresented class. The Naïve approach achieves 0.1292 accuracy loss on class 8 vs. 0.1932 on class 2, which over-corrects the disparate impact on the underrepresented class. Our DPSGD-F algorithm has 0.0432 accuracy loss on class 8 vs. 0.0456 on class 2, which achieves equal impact. The total model accuracy also drops less for DPSGD-F (0.0320) than for the original DPSGD (0.1081).

Figure 4 shows the average gradient norm and the average loss w.r.t. class 2 and 8 for SGD and the different DP models. In DPSGD, the average gradient norm for class 8 is over 100 and the average loss for class 8 is 2.16. In DPSGD-F, by contrast, the average gradient norm for class 8 is only 2.62 and the average loss for class 8 is only 0.42. The convergence loss and the gradient norm for class 8 are much closer to the ones for class 2 in DPSGD-F. This shows that our adjusted clipping bound helps to achieve the same group utility loss regardless of the group sample size.

Figure 5 shows how our adaptive clipping bound changes over epochs in DPSGD-F. Because class 8 has a larger clipping bias due to its underrepresented group sample size, DPSGD-F gives class 8 a higher clipping bound to increase its sample contribution to the total gradient. The maximal adaptive clipping bound is close to 3. To show that the fair performance of DPSGD-F is not caused by increasing the clipping bound uniformly, we run the original DPSGD with a sequence of increasing uniform clipping bounds. Table 3 shows the level of inequality in utility loss for different clipping bounds in DPSGD vs. the adaptive clipping bound in DPSGD-F. Even though increasing the clipping bound in DPSGD can improve the accuracy on class 8, there is still a significant gap between the accuracy loss on class 8 (0.1099 under the largest bound) and the accuracy loss on class 2 (0.0145 under the largest bound). This is because the utility-privacy trade-offs are different for the minority group and the majority group under the same clipping bound. So the inequality in utility loss cannot be removed by simply increasing the clipping bound in DPSGD. In contrast, DPSGD-F achieves equal impact on model accuracy by adjusting the clipping bound for each group according to its utility-privacy trade-off. The group with a smaller cost of privacy achieves a stronger level of privacy as a result of adaptive clipping.

Dataset          | Balanced Adult            | Unbalanced Adult          | Balanced Dutch            | Unbalanced Dutch
Group            | Total   M       F         | Total   M       F         | Total   M       F         | Total   M       F
Sample size      | 28000   14000   14000     | 45222   30527   14695     | 60420   30273   30147     | 40000   30000   10000
SGD              | 0.824   0.748   0.899     | 0.809   0.761   0.911     | 0.787   0.801   0.774     | 0.802   0.834   0.706
DPSGD vs. SGD    | -0.036  -0.054  -0.019    | -0.059  -0.074  -0.028    | -0.100  -0.153  -0.046    | -0.124  -0.086  -0.240
Naïve vs. SGD    | -0.036  -0.054  -0.019    | -0.059  -0.073  -0.028    | -0.101  -0.155  -0.046    | -0.093  -0.129  0.016
DPSGD-F vs. SGD  | -0.005  -0.010  -0.001    | -0.010  -0.009  -0.013    | -0.004  -0.005  -0.003    | -0.017  -0.017  -0.020
Table 4: Model accuracy w.r.t. the total population and each group on the Adult and Dutch datasets (Balanced Adult: sampled; Unbalanced Adult: original; Balanced Dutch: original; Unbalanced Dutch: sampled)
Figure 6: The average gradient norm and the average loss w.r.t. each group over epochs for SGD, DPSGD, Naïve and DPSGD-F on the unbalanced Adult dataset
Figure 7: The average gradient norm and the average loss w.r.t. each group over epochs for SGD, DPSGD, Naïve and DPSGD-F on the unbalanced Dutch dataset

6.3 Adult and Dutch Datasets

Table 4 shows the model accuracy w.r.t. male and female on the balanced and unbalanced Adult and Dutch datasets. The clipping biases for both census datasets are not primarily decided by group sample size. We observe a disparate impact of DPSGD (relative to SGD) against the male group, even though the male group is not underrepresented. The Naïve approach does not work at all in this case, as group sample size matters much less than it does for the MNIST dataset; there are still other factors that affect the gradient norm and the clipping bias w.r.t. each group. DPSGD-F achieves similar accuracy loss for male and female in all four settings, which shows the effectiveness of our approach.

Figure 6 shows the average gradient norm and the average loss w.r.t. male and female for SGD and the different DP models on the unbalanced Adult dataset. In DPSGD, the average gradient norm for the male group is 5 times the one in SGD, and the average loss for the male group is 50% higher than the one in SGD. In DPSGD-F, by contrast, the average gradient norm and the average loss for the male group are much closer to the ones in SGD. Figure 7 shows the average gradient norm and the average loss w.r.t. male and female in SGD and the different DP models on the unbalanced Dutch dataset. Similar to the result for the Adult dataset, in DPSGD-F, the average gradient norm and the average loss for the male group are much closer to the ones in SGD. This shows that our adjusted clipping bound helps to achieve the same group utility loss regardless of the sample size.

7 Conclusion and Future Work

Gradient clipping and random noise addition, which are the core techniques in differentially private SGD, disproportionately affect underrepresented and complex classes and subgroups. As a consequence, DPSGD has disparate impact: the accuracy of a model trained using DPSGD tends to decrease more on these classes and subgroups vs. the original, non-private model. If the original model is unfair in the sense that its accuracy is not the same across all subgroups, DPSGD exacerbates this unfairness. In this work, we propose DPSGD-F to remove the potential disparate impact of differential privacy on the protected group. DPSGD-F adjusts the contribution of samples in a group depending on the group clipping bias such that differential privacy has no disparate impact on group utility. Our experimental evaluation shows how group sample size and group clipping bias affect the impact of differential privacy in DPSGD, and how adaptive clipping for each group helps to mitigate the disparate impact caused by differential privacy in DPSGD-F. Gradient clipping in the non-private context may improve the model robustness against outliers. However, examples in the minority group are not outliers. They should not be ignored by the (private) learning model. In future work, we can further improve our adaptive clipping method from group-wise adaptive clipping to element-wise (from user and/or parameter perspectives) adaptive clipping, so the model can be fair even to the unseen minority class.

8 Acknowledgments

This work was supported in part by NSF 1502273, 1920920, 1937010.

References

  • [Abadi et al.2016] Martin Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, October 24-28, 2016, pages 308–318, 2016.
  • [Amin et al.2019] Kareem Amin, Alex Kulesza, Andres Muñoz Medina, and Sergei Vassilvitskii. Bounding user contributions: A bias-variance trade-off in differential privacy. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 263–271, 2019.
  • [Bagdasaryan et al.2019] Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. Differential privacy has disparate impact on model accuracy. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 15453–15462, 2019.
  • [Bassily et al.2018] Raef Bassily, Abhradeep Guha Thakurta, and Om Dipakbhai Thakkar. Model-agnostic private learning. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pages 7102–7112, 2018.
  • [Beutel et al.2017] Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H. Chi. Data decisions and theoretical implications when adversarially learning fair representations. In FAT/ML, 2017.
  • [Calders et al.2009] T. Calders, F. Kamiran, and M. Pechenizkiy. Building classifiers with independency constraints. In 2009 IEEE International Conference on Data Mining Workshops, pages 13–18, 2009.
  • [Chaudhuri et al.2011] Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. J. Mach. Learn. Res., 12:1069–1109, 2011.
  • [Cummings et al.2019] Rachel Cummings, Varun Gupta, Dhamma Kimpara, and Jamie Morgenstern. On the compatibility of privacy and fairness. In Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, UMAP 2019, Larnaca, Cyprus, June 09-12, 2019, pages 309–315, 2019.
  • [Ding et al.2020] Jiahao Ding, Xinyue Zhang, Xiaohuan Li, Junyi Wang, Rong Yu, and Miao Pan. Differentially private and fair classification via calibrated functional mechanism. In AAAI, 2020.
  • [Du et al.2019] Min Du, Ruoxi Jia, and Dawn Song. Robust anomaly detection and backdoor attack detection via differential privacy. CoRR, abs/1911.07116, 2019.
  • [Duchi et al.2013] John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Local privacy and statistical minimax rates. In 54th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2013, 26-29 October, 2013, Berkeley, CA, USA, pages 429–438, 2013.
  • [Dwork et al.2006] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, Third Theory of Cryptography Conference, TCC 2006, pages 265–284, 2006.
  • [Dwork et al.2012] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard S. Zemel. Fairness through awareness. In Innovations in Theoretical Computer Science, pages 214–226, 2012.
  • [Edwards and Storkey2016] Harrison Edwards and Amos J. Storkey. Censoring representations with an adversary. In Yoshua Bengio and Yann LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
  • [Ekstrand et al.2018] Michael D. Ekstrand, Rezvan Joshaghani, and Hoda Mehrpouyan. Privacy for all: Ensuring fair and equitable privacy protections. In Conference on Fairness, Accountability and Transparency, pages 35–47, 2018.
  • [Feldman et al.2015] Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and Removing Disparate Impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’15, New York, NY, USA, 2015.
  • [Hajian et al.2015] Sara Hajian, Josep Domingo-Ferrer, Anna Monreale, Dino Pedreschi, and Fosca Giannotti. Discrimination- and privacy-aware patterns. Data Min. Knowl. Discov., 29(6):1733–1782, 2015.
  • [Hardt et al.2016] Moritz Hardt, Eric Price, and Nathan Srebro. Equality of opportunity in supervised learning. In NeurIPS, 2016.
  • [Jagielski et al.2019] Matthew Jagielski, Michael J. Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, and Jonathan Ullman. Differentially private fair learning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 3000–3008, 2019.
  • [Jaiswal and Provost2019] Mimansa Jaiswal and Emily Mower Provost. Privacy enhanced multimodal neural representations for emotion recognition. CoRR, abs/1910.13212, 2019.
  • [Kamiran and Calders2009] Faisal Kamiran and Toon Calders. Classifying without discriminating. In 2009 2nd International Conference on Computer, Control and Communication, pages 1–6. IEEE, February 2009.
  • [Kamiran and Calders2011] Faisal Kamiran and Toon Calders. Data preprocessing techniques for classification without discrimination. Knowl. Inf. Syst., 33(1):1–33, 2011.
  • [Kamiran et al.2010] F. Kamiran, T. Calders, and M. Pechenizkiy. Discrimination aware decision tree learning. In 2010 IEEE International Conference on Data Mining, pages 869–874, 2010.
  • [Kamishima et al.2011] T. Kamishima, S. Akaho, and J. Sakuma. Fairness-aware learning through regularization approach. In 2011 IEEE 11th International Conference on Data Mining Workshops, pages 643–650, 2011.
  • [Kearns et al.2018] Michael J. Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pages 2569–2577, 2018.
  • [Krasanakis et al.2018] Emmanouil Krasanakis, Eleftherios Spyromitros Xioufis, Symeon Papadopoulos, and Yiannis Kompatsiaris. Adaptive sensitive reweighting to mitigate bias in fairness-aware classification. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23-27, 2018, pages 853–862, 2018.
  • [Lee and Kifer2018] Jaewoo Lee and Daniel Kifer. Concentrated differentially private gradient descent with adaptive per-iteration privacy budget. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pages 1656–1665, 2018.
  • [Madras et al.2018] David Madras, Elliot Creager, Toniann Pitassi, and Richard S. Zemel. Learning adversarially fair and transferable representations. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 3381–3390. PMLR, 2018.
  • [Phan et al.2017] NhatHai Phan, Xintao Wu, Han Hu, and Dejing Dou. Adaptive laplace mechanism: Differential privacy preservation in deep learning. In 2017 IEEE International Conference on Data Mining, ICDM 2017, New Orleans, LA, USA, November 18-21, 2017, pages 385–394, 2017.
  • [Pichapati et al.2019] Venkatadheeraj Pichapati, Ananda Theertha Suresh, Felix X. Yu, Sashank J. Reddi, and Sanjiv Kumar. Adaclip: Adaptive clipping for private SGD. CoRR, abs/1908.07643, 2019.
  • [Thakkar et al.2019] Om Thakkar, Galen Andrew, and H. Brendan McMahan. Differentially private learning with adaptive clipping. CoRR, abs/1905.03871, 2019.
  • [Xie et al.2017] Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. Controllable Invariance through Adversarial Feature Learning. Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017.
  • [Xu et al.2019] Depeng Xu, Shuhan Yuan, and Xintao Wu. Achieving differential privacy and fairness in logistic regression. In Companion of The 2019 World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 594–599, 2019.
  • [Zafar et al.2017] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. Fairness constraints: Mechanisms for fair classification. In AISTATS, 2017.
  • [Zhang et al.2017] Lu Zhang, Yongkai Wu, and Xintao Wu. A causal framework for discovering and removing direct and indirect discrimination. IJCAI’17, pages 3929–3935, 2017.
  • [Zhang et al.2018] Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In AAAI Conference on AI, Ethics and Society, 2018.