1 Introduction
In recent years, the ML community has witnessed an onslaught of charges that real-world machine learning algorithms have produced “biased” outcomes. The examples come from diverse and impactful domains. Google Photos labeled African Americans as gorillas
[Twitter, 2015; Simonite, 2018], image search queries for CEOs returned results that were overwhelmingly male and white [Kay et al., 2015], searches for African American names caused the display of arrest record advertisements with higher frequency than searches for white names [Sweeney, 2013], facial recognition has wildly different accuracy for white men than for dark-skinned women
[Buolamwini and Gebru, 2018], and recidivism prediction software has labeled low-risk African Americans as high-risk at higher rates than low-risk white people [Angwin et al., 2018]. The community’s work to explain these observations has roughly fallen into either “biased data” or “biased algorithm” bins. In some cases, the training data might underrepresent (or overrepresent) some group, or have noisier labels for one population than another, or use an imperfect proxy for the prediction label (e.g., using arrest records in lieu of whether a crime was committed). Separately, issues of imbalance and bias might occur due to an algorithm’s behavior, such as focusing on accuracy across the entire distribution rather than guaranteeing similar false positive rates across populations, or by improperly accounting for confirmation bias and feedback loops in data collection. If an algorithm fails to distribute loans or bail to a deserving population, it won’t receive additional data showing those people would have paid back the loan, but it will continue to receive more data about the populations it (correctly) believed should receive loans or bail.
Many of the proposed solutions to “biased data” problems amount to reweighting the training set or adding noise to some of the labels; for “biased algorithms”, most work has focused on maximizing accuracy subject to a constraint forbidding (or penalizing) an unfair model. Both of these concerns and approaches have significant merit, but form an incomplete picture of the ML pipeline and where unfairness might be introduced therein. Our work takes another step in fleshing out this picture by analyzing when dimensionality reduction
might inadvertently introduce bias. We focus on principal component analysis (henceforth PCA), perhaps the most fundamental dimensionality reduction technique in the sciences
[Pearson, 1901; Hotelling, 1933; Jolliffe, 1986]. We show several real-world data sets for which PCA incurs much higher average reconstruction error for one population than another, even when the populations are of similar sizes. Figure 1 shows that PCA on the Labeled Faces in the Wild (LFW) data set has higher reconstruction error for women than for men, even when male and female faces are sampled with equal weight. This work underlines the importance of considering fairness and bias at every stage of data science, not only in gathering and documenting a data set
[Gebru et al., 2018] and in training a model, but also in any interim data processing steps. Many scientific disciplines have adopted PCA as a default preprocessing step, both to avoid the curse of dimensionality and to do exploratory/explanatory data analysis (projecting the data into a number of dimensions that humans can more easily visualize). The study of human biology, disease, and the development of health interventions all face both aforementioned difficulties, as do numerous economic and financial analyses. In such high-stakes settings, where statistical tools will help in making decisions that affect a diverse set of people, we must take particular care to ensure that we share the benefits of data science with a diverse community.
We also emphasize this work has implications for representational rather than just allocative harms, a distinction drawn by Crawford [2017] between how people are represented and what goods or opportunities they receive. Showing primates in search results for African Americans is repugnant primarily due to its representing and reaffirming a racist painting of African Americans, not because it directly reduces any one person’s access to a resource. If the default template for a data set begins with running PCA, and PCA does a better job representing men than women, or white people over minorities, the new representation of the data set itself may rightly be considered an unacceptable sketch of the world it aims to describe.
Our work proposes a different linear dimensionality reduction, one which aims to represent two populations A and B with similar fidelity, which we formalize in terms of reconstruction error. Given an n-dimensional data set and its d-dimensional approximation, the reconstruction error of the data with respect to its low-dimensional approximation is the sum of squared distances between the original data points and their approximations in the d-dimensional subspace. To eliminate the effect of a population’s size, we focus on average reconstruction error over a population. One possible objective for our goal would find a d-dimensional approximation of the data which minimizes the maximum average reconstruction error over the two populations. However, this objective doesn’t grapple with the fact that population A may embed almost perfectly into d dimensions, whereas B might require many more dimensions to have low reconstruction error. In such cases, this objective would not necessarily favor a solution that is nearly optimal for both populations in d dimensions over one that needlessly increases A’s error while doing nothing to reduce B’s.
This motivates our focus on finding a projection which minimizes the maximum additional, or marginal, reconstruction error each population incurs above the optimal d-dimensional projection for that population alone. This quantity captures how much a population’s reconstruction error increases when another population is included in the dimensionality reduction optimization. Despite this computational problem appearing more difficult than solving “vanilla” PCA, we introduce a polynomial-time algorithm which finds an embedding into d + 1 dimensions whose objective value is no worse than that of any d-dimensional embedding. Furthermore, we show that optimal solutions have equal additional average error for populations A and B.
Summary of our results
We show PCA can overemphasize the reconstruction error of one population over another (equally sized) population, and we should therefore think carefully about dimensionality reduction in domains where we care about fair treatment of different populations. We propose a new dimensionality reduction problem which focuses on representing A and B with similar additional error over projecting A or B individually. We give a polynomial-time algorithm which finds near-optimal solutions to this problem. Our algorithm relies on solving a semidefinite program (SDP), which can be prohibitively slow for practical applications. We note that it is possible to (approximately) solve an SDP with a much faster multiplicative-weights-style algorithm, whose running time in practice is equivalent to solving standard PCA at most 10-15 times. The details of the algorithm are given in the full version of this work. We then evaluate the empirical performance of this algorithm on several human-centric data sets.
2 Related work
This work contributes to the area of fairness for machine learning models, algorithms, and data representations. One interpretation of our work is that we suggest using Fair PCA, rather than PCA, when creating a lower-dimensional representation of a data set for further analysis. The two pieces of work most relevant to ours both take the posture of explicitly trying to reduce the correlation between a sensitive attribute (such as race or gender) and the new representation of the data. The first is a broad line of work [Zemel et al., 2013; Beutel et al., 2017; Calmon et al., 2017; Madras et al., 2018; Zhang et al., 2018] that aims to design representations which are conditionally independent of the protected attribute, while retaining as much information as possible (and particularly task-relevant information for some fixed classification task). The second is the work of Olfat and Aswani [2018], who also design PCA-like maps which reduce the projected data’s dependence on a sensitive attribute. Our work has a qualitatively different goal: we aim not to hide a sensitive attribute, but instead to maintain as much information as possible about each population after projecting the data. In other words, we look for a representation with similar richness for population A as for population B, rather than making A and B indistinguishable.
Other work has developed techniques to obfuscate a sensitive attribute directly [Pedreshi et al., 2008; Kamiran et al., 2010; Calders and Verwer, 2010; Kamiran and Calders, 2011; Luong et al., 2011; Kamiran et al., 2012; Kamishima et al., 2012; Hajian and Domingo-Ferrer, 2013; Feldman et al., 2015; Zafar et al., 2015; Fish et al., 2016; Adler et al., 2016]. This line of work diverges from ours in two ways. First, these works focus on representations which obfuscate the sensitive attribute rather than a representation with high fidelity regardless of the sensitive attribute. Second, most of these works do not give formal guarantees on how much an objective will degrade after their transformations. Our work directly minimizes the amount by which each group’s marginal reconstruction error increases.
Much of the other work on fairness for learning algorithms focuses on fairness in classification or scoring [Dwork et al., 2012; Hardt et al., 2016; Kleinberg et al., 2016; Chouldechova, 2017], or online learning settings [Joseph et al., 2016; Kannan et al., 2017; Ensign et al., 2017a,b]. These works focus on either statistical parity of the decision rule, or equality of false positives or negatives, or an algorithm with a fair decision rule. All of these notions are driven by a single learning task rather than a generic transformation of a data set, while our work focuses on a ubiquitous, task-agnostic preprocessing step.
3 Notation and vanilla PCA
We are given m data points in n dimensions, represented as the rows of a matrix M. We will refer to the set of points and its matrix representation interchangeably. The data consists of two subpopulations A and B corresponding to two groups with different values of a binary sensitive attribute (e.g., males and females). We denote by [M1; M2] the concatenation of two matrices by row. We refer to the i-th row of M as M_i, the j-th column of M as M^j, and the (i, j) element of M as M_{ij}. We denote the Frobenius norm of a matrix M by ||M||_F and the l2 norm of a vector v by ||v||. For a natural number n, we write [n] = {1, ..., n}. |S| denotes the size of a set S. Given two matrices M and N of the same size, the Frobenius inner product of these matrices is defined as ⟨M, N⟩ = Σ_{i,j} M_{ij} N_{ij} = Tr(M^T N).

3.1 PCA
This section recalls useful facts about PCA that we use in later sections. We begin with a reminder of the definition of the PCA problem in terms of minimizing the reconstruction error of a data set.
Definition 3.1.
(PCA problem) Given a matrix M, find a matrix M̂ of rank at most d that minimizes ||M − M̂||_F^2.
We will refer to M̂ as an optimal rank-d approximation of M. The following well-known fact characterizes the solutions to this classic problem [see, e.g., Shalev-Shwartz and Ben-David, 2014].
Fact 3.1.
If M̂ is a solution to the PCA problem, then M̂ = MVV^T for a matrix V with V^T V = I. The columns of V are eigenvectors corresponding to the top d eigenvalues of M^T M. The matrix P = VV^T is called a projection matrix.
4 Fair PCA
Given n-dimensional data M with two subgroups A and B, let Â and B̂ be optimal rank-d PCA approximations of A and B, respectively. We introduce our approach to fair dimensionality reduction by giving two compelling examples of settings where dimensionality reduction inherently makes a trade-off between groups A and B. Figure 2(a) shows a setting where projecting onto any single dimension either favors A or B (or incurs significant reconstruction error for both), while either group separately would have a high-fidelity embedding into a single dimension. This example suggests any projection will necessarily trade off error on A against error on B.
Our second example (shown in Figure 2(b)) exhibits a setting where A and B suffer very different reconstruction error when projected onto one dimension: B has high reconstruction error for every projection, while A has a perfect representation in the horizontal direction. Thus, asking for a projection which minimizes the maximum reconstruction error over groups A and B might require incurring additional error for A while not improving the error for B. So, minimizing the maximum reconstruction error over A and B fails to account for the fact that two populations might have wildly different representation error when embedded into d dimensions. Optimal solutions to such an objective might behave in a counterintuitive way, preferring to exactly optimize for the group with larger inherent representation error rather than approximately optimizing for both groups simultaneously. We find this behavior undesirable: it sacrifices quality for one group with no improvement for the other.
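The first example's trade-off is easy to reproduce synthetically. In the sketch below (assuming numpy; the toy data is our own construction), each group embeds perfectly into one dimension on its own, yet any single shared direction leaves at least one group with substantial error:

```python
import numpy as np

rng = np.random.default_rng(1)
# Group A lies along the x-axis, group B along the y-axis: each group on its
# own has a perfect one-dimensional representation.
A = np.column_stack([rng.normal(size=100), np.zeros(100)])
B = np.column_stack([np.zeros(100), rng.normal(size=100)])
M = np.vstack([A, B])

def avg_err(X, v):
    """Average reconstruction error of the rows of X projected onto unit vector v."""
    proj = np.outer(X @ v, v)
    return float(np.mean(np.sum((X - proj) ** 2, axis=1)))

# Vanilla PCA direction for the pooled data: top eigenvector of M^T M.
_, vecs = np.linalg.eigh(M.T @ M)
v = vecs[:, -1]

assert avg_err(A, np.array([1.0, 0.0])) < 1e-12   # A alone: perfect in 1-D
assert avg_err(B, np.array([0.0, 1.0])) < 1e-12   # B alone: perfect in 1-D
# The single shared direction necessarily sacrifices one of the groups.
assert max(avg_err(A, v), avg_err(B, v)) > 0.1
```

Because the pooled covariance is diagonal here, PCA picks one of the two axes and the other group pays the full price.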
Remark 4.1.
We focus on the setting where we ask for a single projection into d dimensions, rather than two separate projections, because using two distinct projections (or, more generally, two models) for different populations raises legal and ethical concerns.^1 Learning two different projections also faces no inherent trade-off in representing A or B with those projections.

^1 Lipton et al. [2017] ask whether equal treatment requires different models for two groups.
We therefore turn to finding a projection which minimizes the maximum deviation of each group from its optimal projection. This optimization asks that A and B suffer a similar loss from being projected together into d dimensions compared to their individually optimal projections. We now introduce our notation for measuring a group’s loss when projected by a given map rather than to its optimal d-dimensional representation:
Definition 4.2 (Reconstruction error).
Given two matrices M and M̂ of the same size, the reconstruction error of M with respect to M̂ is defined as re(M, M̂) = ||M − M̂||_F^2.
Definition 4.3 (Reconstruction loss).
Given a matrix M, let M̂ be the optimal rank-d approximation of M. For a matrix X with rank at most d, we define loss(M, X) = re(M, X) − re(M, M̂) = ||M − X||_F^2 − ||M − M̂||_F^2.
Then, the optimization that we study asks to minimize the maximum loss suffered by either group. This captures the idea that, fixing a feasible solution, the objective will only improve if it improves the loss for the group whose current representation is worse. Furthermore, considering the reconstruction loss and not the reconstruction error prevents the optimization from incurring error for one subpopulation without improving the error for the other, as described in Figure 2(b).
Definition 4.4 (Fair PCA).
Given m data points in n dimensions with subgroups A and B, we define the problem of finding a fair PCA projection into d dimensions as optimizing

    min_{U : rank(U) ≤ d}  max { (1/|A|) loss(A, U_A), (1/|B|) loss(B, U_B) }        (1)

where U_A and U_B are the matrices with rows corresponding to the rows of U for groups A and B, respectively.
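For a concrete view of the objective in Definition 4.4, the following sketch (assuming numpy; the helper names and random data are our own) evaluates each group's average loss, using the projection form of the approximation (Fact 3.1), for a candidate solution: the vanilla PCA projection of the pooled data.

```python
import numpy as np

def opt_err(X, d):
    """Optimal rank-d reconstruction error of X (vanilla PCA, via singular values)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[d:] ** 2))

def avg_loss(X, P, d):
    """Average reconstruction loss of group X under projection matrix P (Def. 4.3)."""
    err = np.linalg.norm(X - X @ P, "fro") ** 2
    return (err - opt_err(X, d)) / len(X)

def fair_objective(A, B, P, d):
    """The fair PCA objective: the worse of the two groups' average losses."""
    return max(avg_loss(A, P, d), avg_loss(B, P, d))

rng = np.random.default_rng(2)
A = rng.normal(size=(80, 5))
B = rng.normal(size=(120, 5))
d = 2

# Candidate solution: the vanilla PCA projection of the pooled data.
M = np.vstack([A, B])
_, vecs = np.linalg.eigh(M.T @ M)
V = vecs[:, -d:]
P = V @ V.T

# Each group's loss is nonnegative, by optimality of its own rank-d PCA.
assert avg_loss(A, P, d) >= 0 and avg_loss(B, P, d) >= 0
```

Minimizing `fair_objective` over projections P is the optimization problem studied in the rest of the paper; vanilla PCA on the pooled data is merely one feasible point.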
This definition does not appear to admit a closed-form solution (unlike vanilla PCA; see Fact 3.1). As a step toward characterizing solutions to this optimization, Theorem 4.5 states that a fair PCA low-dimensional approximation of the data results in the same loss for both groups.
Theorem 4.5.
Let U* be a solution to the fair PCA problem (1). Then (1/|A|) loss(A, U*_A) = (1/|B|) loss(B, U*_B).
Before proving Theorem 4.5, we state the building blocks of the proof, Lemmas 4.6, 4.7, and 4.8. For the proofs of the lemmas, please refer to Appendix B.
Lemma 4.6.
Given a matrix U with rank(U) ≤ d, let V be a matrix whose columns form an orthonormal basis of the row space of U, and let P = VV^T. Then, for any matrix M with the same number of columns, re(M, MP) ≤ re(M, U).
The next lemma presents some equalities that we will use frequently in the proofs.
Lemma 4.7.
Given a matrix V with orthonormal columns, we have:

    ||M − MVV^T||_F^2 = ||M||_F^2 − ||MV||_F^2 = ||M||_F^2 − ⟨M^T M, VV^T⟩.
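Lemma 4.7 is what later lets the fair PCA objective be written linearly in the projection matrix. A quick numerical check of the identity ||M − MVV^T||_F^2 = ||M||_F^2 − ||MV||_F^2 = ||M||_F^2 − ⟨M^T M, VV^T⟩ (assuming numpy; the matrices are random toy data):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(30, 7))
d = 3
# A matrix V with orthonormal columns: QR factorization of a Gaussian matrix.
V, _ = np.linalg.qr(rng.normal(size=(7, d)))

lhs = np.linalg.norm(M - M @ V @ V.T, "fro") ** 2
mid = np.linalg.norm(M, "fro") ** 2 - np.linalg.norm(M @ V, "fro") ** 2
# Frobenius inner product <M^T M, V V^T> = Tr(M^T M V V^T)
rhs = np.linalg.norm(M, "fro") ** 2 - np.trace(M.T @ M @ V @ V.T)
assert np.isclose(lhs, mid) and np.isclose(lhs, rhs)
```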
Let the function f_M measure the reconstruction error of a fixed matrix M with respect to its orthogonal projection onto the input subspace. The next lemma shows that the value of this function is the same at every local minimum.
Lemma 4.8.
Given a matrix M and a d-dimensional subspace V, let the function f_M denote the reconstruction error of M with respect to its orthogonal projection onto the subspace, that is, f_M(V) = ||M − MVV^T||_F^2, where by abuse of notation we use V inside the norm to denote a matrix whose columns form an orthonormal basis of the subspace V. The value of the function f_M at any local minimum is the same.
Proof of Theorem 4.5:
Consider the functions f_A and f_B defined in Lemma 4.8. It follows from Lemma 4.6 and Lemma 4.7 that for any U with rank(U) ≤ d, taking V to be the row space of U, we have

    (1/|A|) loss(A, U_A) ≥ (1/|A|) (f_A(V) − re(A, Â))  and  (1/|B|) loss(B, U_B) ≥ (1/|B|) (f_B(V) − re(B, B̂)),    (2)

with equality when U is the orthogonal projection of M onto V. Therefore, the fair PCA problem is equivalent to

    min_{V : dim(V) ≤ d}  max { g_A(V), g_B(V) },  where g_A(V) = (1/|A|)(f_A(V) − re(A, Â)) and g_B(V) = (1/|B|)(f_B(V) − re(B, B̂)).

We proceed to prove the claim by contradiction. Let V* be a global minimum of max{g_A, g_B} and assume, without loss of generality, that

    g_A(V*) > g_B(V*).    (3)

Hence, since g_B is continuous, for any subspace V in a small enough neighborhood of V*, max{g_A(V), g_B(V)} = g_A(V). Since V* is a global minimum of max{g_A, g_B}, it is a local minimum of g_A, or equivalently a local minimum of f_A (g_A is f_A up to a positive scaling and an additive constant), because of (2).

Let v_1, ..., v_d be an orthonormal basis of eigenvectors of A^T A corresponding to its top d eigenvalues, and let V_A be the subspace spanned by v_1, ..., v_d. Note that f_A(V_A) = re(A, Â) is the global minimum value of f_A. Since the loss is always nonnegative for both A and B, (3) implies that g_A(V*) > 0. Therefore f_A(V*) > re(A, Â) = f_A(V_A). By Lemma 4.8, this is in contradiction with V_A being a global minimum and V* being a local minimum of f_A, since all local minima of f_A attain the same value.
5 Algorithm and analysis
In this section, we present a polynomial-time algorithm for solving the fair PCA problem. Our algorithm outputs a matrix of rank at most d + 1 whose fair PCA objective value is at most the optimal d-dimensional fair PCA value. The algorithm has two steps: first, relax fair PCA to a semidefinite optimization problem and solve the SDP; second, solve an LP designed to reduce the rank of that solution. We argue using properties of extreme point solutions that the solution must satisfy a number of constraints of the LP with equality, and argue directly that this implies the solution must lie in d + 1 or fewer dimensions. We refer the reader to Lau et al. [2011] for the basics and applications of this technique in approximation algorithms.
Theorem 5.1.
There is a polynomial-time algorithm that outputs an approximation matrix of the data such that either it is of rank d and is an optimal solution to the fair PCA problem, or it is of rank d + 1, has equal losses for the two populations, and achieves the optimal fair PCA objective value for dimension d.
Algorithm 1: Compute optimal rank-d approximations Â and B̂ of A and B (e.g., by singular value decomposition). Let (P*, z*) be a solution to the SDP:

    min   z                                                     (4)
    s.t.  (1/|A|) (||Â||_F^2 − ⟨A^T A, P⟩) ≤ z                  (5)
          (1/|B|) (||B̂||_F^2 − ⟨B^T B, P⟩) ≤ z                  (6)
          Tr(P) ≤ d                                             (7)
          P ⪯ I                                                 (8)
          P ⪰ 0                                                 (9)

Write P* = Σ_i λ_i v_i v_i^T in an eigendecomposition. Fixing the eigenvectors v_1, ..., v_n, find an extreme point solution (z, λ_1, ..., λ_n) of the linear program given by constraints (5)–(9), and output P = Σ_i λ_i v_i v_i^T.
Proof of Theorem 5.1: The algorithm proving Theorem 5.1 is presented in Algorithm 1. Using Lemma 4.7, we can write the semidefinite relaxation of the fair PCA objective (Definition 4.4) as SDP (4): by Lemma 4.7, the average loss of group A under a projection P is (1/|A|)(||Â||_F^2 − ⟨A^T A, P⟩), and similarly for B. This semidefinite program can be solved in polynomial time. The system of constraints (5)–(9) is a linear program in the variables z, λ_1, ..., λ_n (with the eigenvectors v_i fixed). Therefore, an extreme point solution is defined by n + 1 tight constraints, at most three of which can be the loss and trace constraints (5)–(7); the rest (at least n − 2 of them) must be of the form λ_i = 0 or λ_i = 1. Given the upper bound of d on the sum of the λ_i's, this implies that at most two of the λ_i's are fractional, and they add up to 1.

Case 1. All the eigenvalues λ_i are integral. Then exactly d of them equal 1. This results in an orthogonal projection onto d dimensions.

Case 2. n − 2 of the eigenvalues are in {0, 1} and two eigenvalues λ_i, λ_j ∈ (0, 1) with λ_i + λ_j = 1, so the rank of P is d + 1. Since we have n + 1 tight constraints, both of the loss constraints (5) and (6) must be tight. Therefore the two groups' losses under P are equal:

    (1/|A|)(||Â||_F^2 − ⟨A^T A, P⟩) = (1/|B|)(||B̂||_F^2 − ⟨B^T B, P⟩) = z ≤ z*_d,

where z*_d denotes the optimal fair PCA objective value in dimension d, and the inequality holds by observing that the projection matrix of any optimal d-dimensional fair PCA solution is a feasible solution to the SDP. The embedding corresponds to the affine projection of any point (row) of M defined by the solution P.

In both cases, the objective value is at most the optimal value of the original d-dimensional fairness objective.
The result of Theorem 5.1 for two groups generalizes to more than two groups as follows. Given m data points in n dimensions with subgroups A_1, ..., A_k and a desired number d of dimensions for the projected space, we generalize Definition 4.4 of the fair PCA problem as optimizing

    min_{U : rank(U) ≤ d}  max_{1 ≤ i ≤ k}  (1/|A_i|) loss(A_i, U_{A_i})        (10)

where U_{A_i} is the matrix whose rows correspond to the rows of U for group A_i.
Theorem 5.2.
There is a polynomial-time algorithm to find a projection of dimension at most d + k − 1 that achieves the optimal fairness objective value for dimension d.
In contrast to the case of two groups, when there are more than two groups in the data, it is possible that no optimal solution to fair PCA assigns the same loss to all groups. However, with k − 1 extra dimensions, we can ensure that the loss of each group remains at most the optimal fairness objective in d dimensions. The result of Theorem 5.2 follows by extending the algorithm of Theorem 5.1, adding one linear constraint to the SDP and LP for each extra group. An extreme point solution of the resulting LP contains at most k of the λ_i's that are strictly between 0 and 1. Therefore, the final projection matrix has rank at most d + k − 1.
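For k groups, the counting behind the d + k − 1 rank bound can be made explicit. The derivation below is our own rendering of the extreme-point argument, with z and λ_1, ..., λ_n the LP variables and the k loss constraints plus the trace constraint the only non-box constraints:

```latex
\begin{align*}
\#\{\text{tight constraints at an extreme point}\} &\ge n+1
   && \text{(one per LP variable)}\\
\#\{\text{tight loss and trace constraints}\} &\le k+1\\
\Rightarrow\quad \#\{i : \lambda_i \in \{0,1\}\} &\ge (n+1)-(k+1) = n-k\\
\Rightarrow\quad \#\{i : \lambda_i \in (0,1)\} &\le k.
\end{align*}
If $f \le k$ eigenvalues are fractional, their sum
$s = d - \#\{i : \lambda_i = 1\}$ is a positive integer, so
\[
\operatorname{rank}(P) \;=\; \#\{i : \lambda_i = 1\} + f \;=\; (d - s) + f \;\le\; d + k - 1.
\]
```

Setting k = 2 recovers the d + 1 bound of Theorem 5.1.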
Runtime
We now analyze the runtime of Algorithm 1, which consists of solving SDP (4) and finding an extreme point solution to the LP (5)–(9). The SDP and LP can each be solved up to an additive error of ε in the objective value in time polynomial in n and log(1/ε) [Ben-Tal and Nemirovski, 2001; Schrijver, 1998]. The running time of the SDP dominates the algorithm both in theory and in practice, and is too slow for practical use even for moderate n.
We propose another algorithm for solving the SDP using the multiplicative weights (MW) update method. In theory, our MW method takes a number of iterations polynomial in 1/ε, each of which amounts to solving a standard PCA; whether the resulting total runtime beats the generic SDP solver depends on the desired accuracy and the dimension. In practice, however, we observe that after appropriately tuning one parameter of MW, the algorithm achieves good accuracy within tens of iterations, and it is therefore the method used to obtain the experimental results in this paper. Our MW method can handle data of dimension up to a thousand with a running time of less than a minute. The details of the implementation and the analysis of the MW method are in Appendix A.
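The MW idea described above can be sketched as follows (assuming numpy; the step size `eta`, iteration count, loss-based update rule, and averaging of iterates are illustrative choices of ours, not the tuned implementation from Appendix A): each iteration re-weights the two groups and solves one standard weighted PCA, up-weighting whichever group currently suffers the larger loss.

```python
import numpy as np

def top_d_projection(C, d):
    """Projection matrix onto the top-d eigenvectors of a symmetric PSD matrix C."""
    _, vecs = np.linalg.eigh(C)
    V = vecs[:, -d:]
    return V @ V.T

def avg_loss(X, P, d):
    """Average marginal reconstruction loss of group X under P (Def. 4.3)."""
    s = np.linalg.svd(X, compute_uv=False)
    opt = np.sum(s[d:] ** 2)
    return (np.linalg.norm(X - X @ P, "fro") ** 2 - opt) / len(X)

def fair_pca_mw(A, B, d, eta=1.0, iters=50):
    """Multiplicative-weights sketch: one weighted PCA solve per iteration."""
    w = np.array([0.5, 0.5])
    P_avg = np.zeros((A.shape[1], A.shape[1]))
    for _ in range(iters):
        C = w[0] * A.T @ A / len(A) + w[1] * B.T @ B / len(B)
        P = top_d_projection(C, d)
        P_avg += P / iters
        losses = np.array([avg_loss(A, P, d), avg_loss(B, P, d)])
        w = w * np.exp(eta * losses)   # up-weight the worse-off group
        w /= w.sum()
    return P_avg                        # averaged, possibly fractional, solution

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 5))
B = rng.normal(size=(90, 5)) @ np.diag([3.0, 1.0, 1.0, 1.0, 1.0])
P = fair_pca_mw(A, B, d=2)

# The averaged iterate behaves like a fractional projection: trace d, 0 <= P <= I.
ev = np.linalg.eigvalsh(P)
assert np.isclose(np.trace(P), 2.0)
assert ev.min() > -1e-9 and ev.max() < 1 + 1e-9
```

Each iteration costs one eigendecomposition, which is why in practice the method behaves like running standard PCA a few dozen times.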
6 Experiments
We use two common human-centric data sets for our experiments. The first is Labeled Faces in the Wild (LFW) [Huang et al., 2007]; the second is the Default Credit data set [Yeh and Lien, 2009]. We preprocess all data to have its mean at the origin. For the LFW data, we rescaled each pixel value by a constant factor. The gender information for LFW was taken from Afifi and Abdelhamed [2017], who manually verified the correctness of these labels. For the Credit data, since different attributes are measurements in incomparable units, we normalized the variance of each attribute to be equal to 1. The code for all experiments is publicly available at https://github.com/samirasamadi/FairPCA.

Results
We focus on projections into relatively few dimensions, as those are used ubiquitously in early phases of data exploration. As we already saw in Figure 1 (left), at lower dimensions there is a noticeable gap between PCA's average reconstruction error for men and women on the LFW data set. This gap is on the scale of up to 10% of the total reconstruction error when we project to 20 dimensions. This still holds when we subsample male and female faces with equal probability from the data set, so that men and women have equal weight in the objective function of PCA (Figure 1, right). Figure 3 shows the average reconstruction error of each population (male/female, higher/lower education) resulting from running vanilla PCA and Fair PCA on the LFW and Credit data. As expected, as the number of dimensions increases, the average reconstruction error of every population decreases. For LFW, the original data lies in 1764 dimensions (42×42 images); therefore, at 20 dimensions we still see considerable reconstruction error. For the Credit data, the average reconstruction error of both populations reaches 0 at 21 dimensions, as this data originally lies in 21 dimensions. To see how fair each of these methods is, we need to zoom in further and look at the average loss of the populations.
Figure 4 shows the average loss of each population resulting from applying vanilla PCA and Fair PCA to both data sets. Note that at the optimal solution of Fair PCA, the average losses of the two populations are the same; therefore we have a single line for “Fair loss”. We observe that PCA suffers much higher average loss for female faces than for male faces. After running Fair PCA, the average loss sits roughly in the middle of PCA's average losses for men and women: the improvement in the female average loss comes at some cost in the male average loss. A similar observation holds for the Credit data set. In this context, it appears that optimizing for the less well represented population carries some cost for the better-represented population.
7 Future work
This work is far from a complete study of when and how dimensionality reduction might help or hurt the fair treatment of different populations. Several concrete theoretical questions remain within our framework. What is the complexity of optimizing the fairness objective? Is it NP-hard, even in simple special cases? Our work naturally extends to k predefined subgroups rather than just two, where the number of additional dimensions our algorithm uses is k − 1. Are these additional dimensions necessary for computational efficiency?
In a broader sense, this work aims to point out another way in which standard ML techniques might introduce unfair treatment of some subpopulation. Further work in this vein will likely prove very enlightening.
Acknowledgements
This work was supported in part by NSF awards CCF1563838, CCF1717349, and CCF1717947.
References
 Adler et al. [2016] Philip Adler, Casey Falk, Sorelle Friedler, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, and Suresh Venkatasubramanian. Auditing black-box models for indirect influence. In Proceedings of the 16th International Conference on Data Mining, pages 1–10, 2016.
 Afifi and Abdelhamed [2017] Mahmoud Afifi and Abdelrahman Abdelhamed. AFIF4: Deep gender classification based on AdaBoost-based fusion of isolated facial features and foggy faces. arXiv preprint arXiv:1706.04277, 2017.
 Angwin et al. [2018] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, 2018.
 Arora et al. [2012] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a metaalgorithm and applications. Theory of Computing, 8(1):121–164, 2012.
 Ben-Tal and Nemirovski [2001] Ahron Ben-Tal and Arkadi Nemirovski. Lectures on modern convex optimization: analysis, algorithms, and engineering applications, volume 2. SIAM, 2001.
 Beutel et al. [2017] Alex Beutel, Jilin Chen, Zhe Zhao, and Ed Huai-hsin Chi. Data decisions and theoretical implications when adversarially learning fair representations. CoRR, abs/1707.00075, 2017.
 Buolamwini and Gebru [2018] Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, pages 77–91, 2018.

 Calders and Verwer [2010] Toon Calders and Sicco Verwer. Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery, 21(2):277–292, 2010.
 Calmon et al. [2017] Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized preprocessing for discrimination prevention. In Advances in Neural Information Processing Systems, pages 3992–4001, 2017.
 Chouldechova [2017] Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2):153–163, 2017.
 Crawford [2017] Kate Crawford. The trouble with bias, 2017. URL http://blog.revolutionanalytics.com/2017/12/the-trouble-with-bias-by-kate-crawford.html. Invited talk by Kate Crawford at NIPS 2017, Long Beach, CA.
 Dwork et al. [2012] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pages 214–226. ACM, 2012.
 Ensign et al. [2017a] Danielle Ensign, Sorelle A Friedler, Scott Neville, Carlos Scheidegger, and Suresh Venkatasubramanian. Runaway feedback loops in predictive policing. arXiv preprint arXiv:1706.09847, 2017a.
 Ensign et al. [2017b] Danielle Ensign, Sorelle A. Friedler, Scott Neville, Carlos Eduardo Scheidegger, and Suresh Venkatasubramanian. Runaway feedback loops in predictive policing. Workshop on Fairness, Accountability, and Transparency in Machine Learning, 2017b.
 Feldman et al. [2015] Michael Feldman, Sorelle Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268, 2015.
 Fish et al. [2016] Benjamin Fish, Jeremy Kun, and Ádám Dániel Lelkes. A confidencebased approach for balancing fairness and accuracy. In Proceedings of the 16th SIAM International Conference on Data Mining, pages 144–152, 2016.
 Gebru et al. [2018] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets. arXiv preprint arXiv:1803.09010, 2018.
 Hajian and Domingo-Ferrer [2013] Sara Hajian and Josep Domingo-Ferrer. A methodology for direct and indirect discrimination prevention in data mining. IEEE Transactions on Knowledge and Data Engineering, 25(7):1445–1459, 2013.

 Hardt et al. [2016] Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pages 3315–3323, 2016.
 Hotelling [1933] Harold Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6):417, 1933.
 Huang et al. [2007] Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik LearnedMiller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 0749, University of Massachusetts, Amherst, October 2007.
 Jolliffe [1986] Ian T Jolliffe. Principal component analysis and factor analysis. In Principal component analysis, pages 115–128. Springer, 1986.
 Joseph et al. [2016] Matthew Joseph, Michael Kearns, Jamie H Morgenstern, and Aaron Roth. Fairness in learning: Classic and contextual bandits. In Advances in Neural Information Processing Systems, pages 325–333, 2016.
 Kamiran and Calders [2011] Faisal Kamiran and Toon Calders. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1):1–33, 2011.

 Kamiran et al. [2010] Faisal Kamiran, Toon Calders, and Mykola Pechenizkiy. Discrimination aware decision tree learning. In Proceedings of the 10th IEEE International Conference on Data Mining, pages 869–874, 2010.
 Kamiran et al. [2012] Faisal Kamiran, Asim Karim, and Xiangliang Zhang. Decision theory for discrimination-aware classification. In Proceedings of the 12th IEEE International Conference on Data Mining, pages 924–929, 2012.

 Kamishima et al. [2012] Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. Fairness-aware classifier with prejudice remover regularizer. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, pages 35–50, 2012.
 Kannan et al. [2017] Sampath Kannan, Michael Kearns, Jamie Morgenstern, Mallesh M. Pai, Aaron Roth, Rakesh V. Vohra, and Zhiwei Steven Wu. Fairness incentives for myopic agents. In Proceedings of the 2017 ACM Conference on Economics and Computation, pages 369–386, 2017.
 Kay et al. [2015] Matthew Kay, Cynthia Matuszek, and Sean A Munson. Unequal representation and gender stereotypes in image search results for occupations. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 3819–3828. ACM, 2015.
 Kleinberg et al. [2016] Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807, 2016.

Lau et al. [2011] Lap Chi Lau, Ramamoorthi Ravi, and Mohit Singh. Iterative methods in combinatorial optimization, volume 46. Cambridge University Press, 2011.
Lipton et al. [2017] Zachary C. Lipton, Alexandra Chouldechova, and Julian McAuley. Does mitigating ML’s disparate impact require disparate treatment? arXiv preprint arXiv:1711.07076, 2017.
 Luong et al. [2011] Binh Thanh Luong, Salvatore Ruggieri, and Franco Turini. k-NN as an implementation of situation testing for discrimination discovery and prevention. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 502–510. ACM, 2011.
 Madras et al. [2018] David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In Proceedings of the 35th International Conference on Machine Learning, pages 3384–3393, 2018.
 Olfat and Aswani [2018] Matt Olfat and Anil Aswani. Convex formulations for fair principal component analysis. arXiv preprint arXiv:1802.03765, 2018.
 Pearson [1901] Karl Pearson. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559–572, 1901.
 Pedreshi et al. [2008] Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. Discriminationaware data mining. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 560–568. ACM, 2008.
 Schrijver [1998] Alexander Schrijver. Theory of linear and integer programming. John Wiley & Sons, 1998.
 Shalev-Shwartz and Ben-David [2014] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.
 Simonite [2018] Tom Simonite. When it comes to gorillas, Google Photos remains blind. https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/, Jan 2018.
 Sweeney [2013] Latanya Sweeney. Discrimination in online ad delivery. Communications of the ACM, 56(5):44–54, 2013.
 Twitter [2015] Twitter. Jacky lives: Google photos, y’all fucked up. My friend’s not a gorilla. https://twitter.com/jackyalcine/status/615329515909156865, June 2015.
 Yeh and Lien [2009] I-Cheng Yeh and Che-hui Lien. The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications, 36(2):2473–2480, 2009.
 Zafar et al. [2015] Muhammad Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna Gummadi. Fairness constraints: A mechanism for fair classification. CoRR, abs/1507.05259, 2015.
 Zemel et al. [2013] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In International Conference on Machine Learning, pages 325–333, 2013.
 Zhang et al. [2018] Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. arXiv preprint arXiv:1801.07593, 2018.
Appendix A Improved runtime of semidefinite relaxation by multiplicative weight update method
In this section, we present the multiplicative weight (MW) algorithm and its runtime analysis for solving the fair PCA relaxation for two groups up to an additive error, where each iteration amounts to solving a standard PCA, e.g., by Singular Value Decomposition (SVD). Because each SVD runs in polynomial time, the SDP relaxation (4) for two groups can be solved efficiently as well. Compared to the runtime of an SDP solver, commonly implemented with the interior point method [Ben-Tal and Nemirovski, 2001], our algorithm may be faster or slower depending on the problem parameters. In practice, however, we tune the parameters of the MW algorithm much more aggressively than the theory dictates, and we often take the last iterate of MW rather than the average when the last iterate performs better, which gives a much faster convergence rate. Our runs show that MW converges in at most 10–20 iterations. Therefore, we use MW to implement our fair PCA algorithm. We note at the conclusion of this section that the algorithm and analysis extend to fair PCA with more than two groups, again up to an additive error, with the number of iterations growing with the number of groups.
Technically, the number of iterations for $k$ groups depends on the width of the problem, as defined in Arora et al. [2012]. The width can usually be bounded by the magnitude of the input or by the optimal objective value. For our purposes, the width is bounded in terms of the total variance of the input data over all dimensions. For simplicity, we assume this variance is normalized (e.g., in a preprocessing step), hence obtaining the claimed bound on the number of iterations.
We first present an algorithmic framework and the corresponding analysis in the next two subsections, and later apply those results to our specific setting of solving the SDP (4) arising from the fair PCA problem. The previous work by Arora et al. [2012] shows how to solve a feasibility LP using the MW technique. Our main theoretical contribution is to propose and analyze the optimization counterpart of the feasibility problem, together with the MW algorithm needed to solve it. The MW algorithm we develop fits more seamlessly into our fair PCA setting and simplifies the algorithm to be implemented for solving the SDP (4).
A.1 Problem setup and oracle access
We first formulate the feasibility problem and its optimization counterpart in this section. The previous and new MW algorithms and their analysis are presented in the following Section A.2.
A.1.1 Previous work: multiplicative weight on feasibility problem
Problem
As in Arora et al. [2012], we are given an $m \times n$ real matrix $A$, a vector $b \in \mathbb{R}^m$, and a convex set $P \subseteq \mathbb{R}^n$, and the goal is to check the feasibility problem
$\exists?\; x \in P \ :\ Ax \ge b$   (11)
by giving a feasible $x$ or correctly deciding that no such $x$ exists.
Oracle Access
We assume the existence of an oracle that, given any probability vector $p$ over the $m$ constraints of (11), correctly answers the single-constraint problem
$\exists?\; x \in P \ :\ p^\top A x \ge p^\top b$   (12)
by giving a feasible $x$ or correctly deciding that no such $x$ exists. We may think of (12) as a weighted version of (11), with weight $p_i$ placed on the $i$-th constraint.
As (12) consists of only one constraint, it is much easier to solve than (11) in many problem settings. For example, in our PCA setting, solving (4) directly is nontrivial, but the weighted version (12) is a standard PCA problem: we weight each group according to $p$ and then apply a PCA algorithm (Singular Value Decomposition) to the sum of the two weighted groups. The top singular vectors give an optimal solution to (12). More details on the application to the fair PCA setting are in Section A.3.
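To make this reduction concrete, here is a minimal sketch of such a weighted single-constraint oracle for two groups, assuming the groups are given as row-data matrices and using an eigendecomposition of the weighted covariance sum in place of a full SVD (the function and variable names are illustrative, not taken from the paper's pseudocode):

```python
import numpy as np

def weighted_pca_oracle(A1, A2, p, d):
    """Single-constraint oracle for two groups: standard PCA on the
    p-weighted sum of the groups' (uncentered) covariance matrices.

    Returns the rank-d projector P = V V^T spanned by the top-d
    eigenvectors of p[0]*A1^T A1 + p[1]*A2^T A2.
    """
    M = p[0] * A1.T @ A1 + p[1] * A2.T @ A2
    # eigh returns eigenvalues of a symmetric matrix in ascending order
    _, vecs = np.linalg.eigh(M)
    V = vecs[:, -d:]          # top-d eigenvectors
    return V @ V.T            # rank-d orthogonal projector
```

Each oracle call is thus a single symmetric eigendecomposition, which is the dominant cost per MW iteration.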
A.1.2 New setting: multiplicative weight on optimization problem
Problem
The previous work gives an MW framework for the feasibility question. Here we propose an optimization framework, which asks for the best $x$ rather than the mere existence of a feasible $x$. Formally, given an $m \times n$ real matrix $A$, a vector $b \in \mathbb{R}^m$, and a convex set $P \subseteq \mathbb{R}^n$, we solve
$\max_{x \in P}\ \lambda \quad \text{s.t.} \quad Ax \ge b + \lambda \mathbf{1}$   (13)
where $\mathbf{1}$ denotes the all-ones vector. Denote by $\lambda^*$ the optimum of (13).
With the same type of oracle access, we could solve (13) by repeatedly running the feasibility procedure for (11) inside a binary search for the optimum $\lambda^*$, up to an additive error. However, our main contribution is to modify the previous multiplicative weight algorithm and the definition of the oracle so as to solve (13) directly, without guessing the optimum $\lambda^*$. This improves the runtime slightly (removing the binary-search factor) and simplifies the algorithm.
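For contrast, the binary-search route can be sketched as follows; here `feasible` stands in for one full run of the feasibility MW procedure for (11) at a guessed optimum, and the number of such runs is exactly the overhead the direct method avoids:

```python
def binary_search_optimum(feasible, lo, hi, eps):
    """Binary search for the largest lam in [lo, hi] with feasible(lam)
    True, to additive error eps, assuming feasibility is monotone in lam.
    Each probe costs one full run of the feasibility procedure."""
    calls = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        calls += 1
        if feasible(mid):
            lo = mid   # mid is achievable: the optimum lies above it
        else:
            hi = mid   # mid is too ambitious: the optimum lies below it
    return lo, calls
```

The number of probes grows logarithmically in $(\mathrm{hi} - \mathrm{lo})/\varepsilon$, which is the multiplicative factor removed by solving (13) directly.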
Feasibility Oracle Access
We assume the existence of an oracle that, given any probability vector $p$ over the constraints of (13), correctly answers the single-constraint problem
$\exists?\; x \in P \ :\ p^\top A x \ge p^\top b + \lambda^*$   (14)
Optimization Oracle Access
We define the oracle that, given a probability vector $p$ over the constraints of (13), correctly returns a maximizer of
$\max_{x \in P}\ p^\top (Ax - b)$   (15)
which is stronger than, and sufficient to solve, (14). This is because the optimal solution of (13) is feasible for (15), so the optimum of (15) is at least $\lambda^*$; therefore, a maximizer of (15) is a feasible solution to (14). In many settings, because (14) involves only one constraint, it is possible to solve the optimization version (15) instead. For example, in our fair PCA setting with two groups, we can solve (15) by standard PCA on the union of the two groups after an appropriate weighting of each group. More details on the application to the fair PCA setting are in Section A.3.
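As a toy illustration of this relationship (not the fair PCA oracle itself), take $P$ to be the probability simplex: a linear objective is maximized at a vertex, so the optimization oracle (15) is a coordinate argmax, and its optimal value immediately decides the feasibility question (14):

```python
import numpy as np

def optimization_oracle_simplex(A, b, p):
    """Oracle of type (15) when P is the probability simplex: the
    objective p^T (A x - b) is linear in x, so it is maximized at a
    vertex e_j, namely the coordinate with the largest score."""
    scores = p @ A                      # score of vertex e_j is (p^T A)_j
    j = int(np.argmax(scores))
    x = np.zeros(A.shape[1])
    x[j] = 1.0
    return x, float(scores[j] - p @ b)  # maximizer and optimal value

# Deciding (14): the weighted constraint p^T A x >= p^T b + lam has a
# solution in the simplex iff the oracle's optimal value is >= lam.
```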
A.2 Algorithm and Analysis
The line of proof follows Arora et al. [2012]. We first state the technical property that the oracle satisfies in our optimization framework, and then show how to use that property to bound the number of iterations. We fix an $m \times n$ real matrix $A$, a vector $b \in \mathbb{R}^m$, and a convex set $P \subseteq \mathbb{R}^n$.
Definition A.1.
Note that even though we do not know $\lambda^*$, if we know the range of the objective $p^\top(Ax - b)$ over all feasible $x$, we can bound the range of $\lambda^*$. Therefore, we can still find useful parameters that an oracle satisfies.
Now we are ready to state the main result of this section: that we may solve the optimization version by multiplicative update as quickly as solving the feasibility version of the problem.
Theorem A.2.
Proof.
The proof follows similarly to Theorem 3.3 in Arora et al. [2012], but we include the details here for completeness. The algorithm is a multiplicative weight update in nature, as in equation (2.1) of Arora et al. [2012]. It starts with a uniform weight vector over the constraints. In each step, the algorithm queries the oracle with the current weights and receives a response $x$. We use the resulting loss vector to update the weights for the next step with a fixed learning rate. After a sufficient number of iterations (specified later), the algorithm outputs the average of the iterates.
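A simplified sketch of this loop, under the assumption that the loss vector at each step is the per-constraint slack $Ax - b$ and that `oracle` maximizes the weighted objective as in (15) (names and the exponential form of the update are illustrative):

```python
import numpy as np

def mw_solve(A, b, oracle, eta, T):
    """Generic multiplicative-weights loop: maintain weights p over the
    m constraints, query the oracle, and shrink the weight of constraints
    with large slack so that violated constraints gain influence."""
    m = A.shape[0]
    p = np.full(m, 1.0 / m)            # start uniform over constraints
    iterates = []
    for _ in range(T):
        x = oracle(p)                  # maximizer of p^T (A x - b)
        loss = A @ x - b               # per-constraint slack
        p = p * np.exp(-eta * loss)    # penalize well-satisfied constraints
        p /= p.sum()                   # renormalize the weight vector
        iterates.append(x)
    return np.mean(iterates, axis=0)   # average iterate, as in the analysis
```

The renormalization step is also what makes shifting the loss by a constant irrelevant to the algorithm's behavior.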
Note that using either loss behaves the same algorithmically, due to the renormalization step on the weight vector. Therefore, purely for the analysis, we use a hypothetical shifted loss to update the weights (this loss cannot be used algorithmically, since we do not know $\lambda^*$). By Theorem 2.1 in Arora et al. [2012], for each constraint and all iterations,
(19) 
By property (14) of the oracle,
(20) 
We now split into two cases. If , then (19) and (20) imply
Multiplying the last inequality by and rearranging terms, we have
(21) 
Multiplying inequality by and rearranging terms, we have
(22) 
To use (21) and (22) to show that is close to 0 simultaneously for two cases, pick (note that by requiring , so we may apply Theorem 2.1 in Arora et al. [2012]). Then for all , we have
(23) 
Hence, (21) implies
(24) 
and (22) implies
(25) 
using the fact that . ∎
A.3 Application of multiplicative update method to the fair PCA problem
In this section, we apply the MW results for solving LPs to solve the SDP relaxation (4) of the fair PCA problem.
LP formulation of fair PCA relaxation
Oracle Access
First, we present the oracle in Algorithm 2, which is of the form (15) and can therefore be used to solve (14). As defined in (15), the optimization oracle, given a weight vector $p$, should solve the LP with the single weighted constraint obtained by combining the two constraints (27) and (28) with weights $p$. Because both constraints involve only dot products of the same variable with constant matrices, i.e., linear functions, the weighted constraint involves the dot product of that same variable with the correspondingly weighted sum of those constant matrices.
MW Algorithm
Our multiplicative weight update algorithm for solving the fair PCA relaxation (26)–(28) is presented in Algorithm 3. The algorithm follows exactly the construction in Theorem A.2, and the runtime analysis of Algorithm 3 follows directly from the same theorem.
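A rough sketch of the resulting driver for two groups, under the assumption that each group's objective is the variance it retains under the projection, normalized by group size; the learning rate and iteration count are illustrative placeholders, in line with the aggressive tuning noted at the start of this appendix:

```python
import numpy as np

def fair_pca_mw(A1, A2, d, eta=0.5, T=20):
    """MW driver for the two-group fair PCA relaxation: each iteration
    is one standard PCA on a weighted sum of the group covariances."""
    covs = [A1.T @ A1 / len(A1), A2.T @ A2 / len(A2)]
    p = np.array([0.5, 0.5])                 # uniform weights over the two groups
    P_avg = np.zeros((A1.shape[1], A1.shape[1]))
    for _ in range(T):
        M = p[0] * covs[0] + p[1] * covs[1]  # weighted single-constraint instance
        _, vecs = np.linalg.eigh(M)
        V = vecs[:, -d:]                     # top-d eigenvectors
        P = V @ V.T                          # oracle answer: rank-d projector
        # loss of group i = variance it retains under P; the worse-off
        # group's weight grows, so the next PCA attends to it more
        loss = np.array([np.trace(C @ P) for C in covs])
        p = p * np.exp(-eta * loss)
        p /= p.sum()
        P_avg += P / T
    return P_avg                             # average iterate (last iterate also usable)
```

Returning the last projector instead of the average mirrors the practical variant described above.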
Corollary A.3.
Proof.
We first check that the oracle presented in Algorithm 2 satisfies boundedness and determine the corresponding parameters. We may normalize the data so that the variances of both groups are bounded by 1. Therefore, for any PSD matrix in the feasible region, the relevant dot products are bounded as well. In addition, in the application to the fair PCA setting, any feasible solution is bounded by the definition of the feasible region (recall Definition 3.1). Therefore,