Fair Clustering with Multiple Colors

02/18/2020 · Matteo Böhm et al.

A fair clustering instance is given a data set A in which every point is assigned some color. Colors correspond to various protected attributes such as sex, ethnicity, or age. A fair clustering is one in which membership of points in a cluster is uncorrelated with the coloring of the points. Of particular interest is the case where all colors are equally represented. If we have exactly two colors, Chierichetti, Kumar, Lattanzi and Vassilvitskii (NIPS 2017) showed that various k-clustering objectives admit a constant factor approximation. Since then, a number of follow-up works have attempted to extend this result to the multi-color case. So far, however, the known results either yield no constant factor approximation, apply only to special clustering objectives such as k-center, give bicriteria approximations, or require k to be constant. In this paper, we present a simple reduction from unconstrained k-clustering to fair k-clustering for a large range of clustering objectives including k-median, k-means, and k-center. The reduction loses only a constant factor in the approximation guarantee, marking the first true constant factor approximation for many of these problems.


1 Introduction

Clustering is one of the fundamental building blocks of data analysis. Due to the enormous amount of attention it has received in research, classic optimization problems such as k-means or k-median are well (if not completely) understood both in theory and practice. Nevertheless, there exist a number of open questions. In recent years, researchers have started to address clustering with cardinality constraints. If we require clusters to have a maximum or minimum size, these problems tend to become far less tractable.

Cardinality constraints arise naturally in a number of settings including but not limited to privacy preserving clustering, capacitated clustering, and fairness. In this paper we focus on the latter. Given two distinct populations $A$ and $B$ consisting of $n$ points each, a clustering is considered to be fair if $A$ and $B$ are equally represented in every cluster. Informally speaking, the separation of the point set into two subsets can be a way of modeling specific binary attributes (e.g., sex, citizenship) against which an algorithm (or indeed a clustering) should not discriminate. Our aim is to find a good clustering that obeys such a fairness constraint.

This (comparatively recent) line of research was initiated by [18] and further developed in subsequent works [8, 11, 12, 41, 44]. [18] affirmatively answered the question of whether a high quality fair clustering can be obtained efficiently for two distinct populations. Most follow-up work (see e.g. [11, 12, 30, 41, 44]) has since considered the question of whether these guarantees can be extended to multiple populations. Even though some progress has been made for specific clustering problems such as k-center, or by allowing small fairness violations, the general problem remains open despite considerable effort by the research community.

1.1 Our Contribution

We settle the problem of designing algorithms with good approximations for fair clustering with multiple populations in the affirmative, by showing that constant factor approximations are achievable for all center-based clustering objectives (including k-means, k-median, and k-center) and all metrics. Our main result is as follows.

Theorem 1 (Informal, see Theorem 2).

Given an $\alpha$-approximation for an unconstrained center-based k-clustering problem, there exists an $O(\alpha)$-approximation for the k-clustering problem with fairness constraints.

Given the large number of good approximation algorithms known for k-clustering problems, our result can be widely applied in most practical settings of interest for fairness applications.

Previous results either only applied to k-center [12, 41], yielded bicriteria approximations [11, 12], or required k to be constant [44]. The general algorithm we propose is quite simple, in contrast to earlier work requiring geometric decompositions [44, 30] or rounding linear programs [11, 12].

The caveat of our approach is that it requires solving multiple instances of a transportation problem. We remedy this by giving algorithms for k-center and k-median that run in (nearly) linear time. Specifically, we show that for k-center, there exists a simple greedy heuristic that induces a constant factor approximation. For k-median, we extend the linear time approximation by [8] to multiple attributes.

Lastly, we also consider hardness of approximation for these problems. If the number of centers $k$ is constant and the points lie in Euclidean space, we show that a polynomial time approximation scheme exists. This is complemented with the following hardness proof. Given three point sets consisting of exactly $n$ points each, finding a fair k-median or k-center clustering is APX-hard. This already shows that considering fair clustering with at least 3 populations is harder than the same problem with only 2 populations.

1.2 Related work

Algorithmic Fairness and Fair Clustering

Fairness in algorithms has recently received considerable attention, see [29, 46, 47, 49] and references therein. The idea of clustering using balancing constraints is derived from the notion of disparate impact, which was first proposed by [26]. Despite some impossibility results in certain settings [20, 31], it has been successfully used for classification [28, 40, 48], ranking [15, 16], regression [3], graph embeddings [14] and indeed clustering. Its application to clustering was initiated by [18]. They showed that for two protected classes, fair clustering for various objectives such as k-median, k-center, and (implicitly, though unstated) k-means can be approximated as well as the unconstrained variants of the problems (up to constant factors). Building upon their work, [8], [44], and [30] considered this problem for large data sets. The main open problem left in their work is whether the approximability can be extended to multiple color classes. Here, the k-center problem has received the most attention [11, 12, 41], with the current state of the art being a constant factor approximation or a bicriteria approximation that violates the fairness constraint by a small amount. Conversely, prior to our work, for k-means with multiple protected classes, only a PTAS for constant $k$ [44, 30] in Euclidean spaces of constant dimension and bicriteria approximation algorithms [11, 12] were known.

We note that there exist other models combining fairness and clustering objectives. Disparate impact for spectral clustering has been studied by [33]. Further spectral algorithms with fairness considerations appear in [39, 43, 45]. [32] considered k-center under a fairness constraint on the set of centers, rather than on the composition of the clusters.

Cardinality Constrained Clustering

As mentioned above, fair clustering is a special case of clustering with cardinality constraints. When the cardinalities are bounded from above, this is known as capacitated clustering. For k-center, constant factor approximation algorithms are known [5, 21]. For k-median, no constant factor approximation is known, though there are several bicriteria approximations [2, 22, 36]. [4] and [41] consider a variant of k-center where the cluster sizes are lower bounded, which models privacy preserving clustering. For this problem, they obtain a constant factor approximation. A similar notion by [17] phrases fairness in terms of proportionality of a given solution. [35] also considered a k-center problem with a diversity constraint. Here, we are given $\ell$ color classes and every cluster must contain at least one point from every class. For this problem, they achieved a constant factor approximation. For arbitrary constraints on cluster sizes, [13, 23, 24] obtain polynomial time approximation schemes, given constant $k$.

1.3 Preliminaries and Problem Definition

Throughout this paper, let $[n]$ be the set of natural numbers from $1$ to $n$. A k-clustering of an $n$-point set $A$ is a disjoint partition of $A$ into $k$ subsets called clusters. Further, we are given a coloring $c : A \to [\ell]$. A set $S \subseteq A$ is called balanced if $|\{x \in S \mid c(x) = i\}| = |\{x \in S \mid c(x) = j\}|$ for any $i, j \in [\ell]$. A clustering is called balanced or fair if every cluster is balanced. We view our input point set as a matrix $A$, where the rows of $A$ correspond to points and the columns denote features. If the input point set is balanced, we say that $A$ is an $(\ell \cdot n) \times d$ matrix and use a coloring $c : A \to [\ell]$. We further use $A_j$ to denote the set of points of color $j$.

The $\ell_p$ norm of a $d$-dimensional vector $x$ is denoted by $\|x\|_p = (\sum_{i=1}^{d} |x_i|^p)^{1/p}$. Taking the limit of $p \to \infty$, we define $\|x\|_\infty = \max_i |x_i|$. For a matrix $A$ with rows $a_1, \dots, a_n$, define the cascaded norm $\|A\|_{p,q} = (\sum_{i=1}^{n} \|a_i\|_p^q)^{1/q}$, that is, we first compute the $\ell_p$-norms of the rows of $A$, and then compute the $\ell_q$ norm of the resulting vector. It is perhaps instructive to note that $\|A\|_{2,2}$ is identical to the Frobenius norm of $A$.
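As a quick illustration, the following is a minimal sketch of the cascaded norm (our own illustration, not taken from the paper; NumPy assumed):

```python
# Minimal sketch (our illustration): the cascaded (p, q)-norm of a matrix.
import numpy as np

def cascaded_norm(A: np.ndarray, p: float, q: float) -> float:
    """||A||_{p,q}: l_p norm of every row, then the l_q norm of that vector."""
    row_norms = np.linalg.norm(A, ord=p, axis=1)
    return float(np.linalg.norm(row_norms, ord=q))

A = np.random.rand(5, 3)
# ||A||_{2,2} coincides with the Frobenius norm, as noted above.
assert np.isclose(cascaded_norm(A, 2, 2), np.linalg.norm(A, "fro"))
```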

The $(p,q)$-clustering problem consists of computing an $n \times d$ matrix $C$ with at most $k$ distinct rows minimizing $\|A - C\|_{p,q}$. The cluster induced by a row $c$ of $C$ consists of the points $\{a_i \mid C_i = c\}$. For example, $(1,1)$-clustering is the k-median problem in Hamming space, and $(2,\infty)$-clustering is Euclidean k-center. The fair $(p,q)$-clustering problem further requires that every cluster induced by $C$ is balanced.

Finally, given two point sets $A$ and $B$, a matching is a bijection $\pi : A \to B$. Given some matching $\pi$, we say that the $q$-cost is $(\sum_{a \in A} \|a - \pi(a)\|_p^q)^{1/q}$. The optimal matching with respect to the $q$-cost is called the min-cost perfect $q$-matching, or simply min-cost perfect matching if $p$ and $q$ are clear from the context. In the literature, this is sometimes referred to as the Earth-Mover's distance between $A$ and $B$, for which we use the shorthand $\mathrm{EMD}(A, B)$. The time required to compute an optimal min-cost perfect matching on $n$-point sets is denoted by $T(n)$. (There exist algorithms that run faster in special cases, such as matching in low-dimensional Euclidean space. For a single algorithm that solves the problem for all $p$ and $q$, we refer the reader as an example to the Hungarian algorithm [34].)
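For concreteness, here is a minimal sketch of how such a min-cost perfect $q$-matching can be computed with a Hungarian-style solver; SciPy and the function names below are our own illustration, not part of the paper:

```python
# Minimal sketch: the q-cost of an optimal bijection between two equal-size
# point sets (rows of A and B), using SciPy's Hungarian-style solver.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def emd(A: np.ndarray, B: np.ndarray, p: float = 2.0, q: float = 1.0) -> float:
    # Pairwise l_p distances raised to the q-th power; minimizing their sum
    # also minimizes the q-cost, since x -> x^(1/q) is monotone.
    D = cdist(A, B, metric="minkowski", p=p) ** q
    rows, cols = linear_sum_assignment(D)
    return float(D[rows, cols].sum() ** (1.0 / q))
```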

2 Approximate Fair $(p,q)$-Clustering

We start with our main result for $(p,q)$-clustering objectives such as k-median and k-center.

Theorem 2.

Let $A$ be an $(\ell \cdot n) \times d$ matrix, let $c$ be a balanced coloring of $A$, and let $k$ be an integer. Given an $\alpha$-approximation of the unconstrained $(p,q)$-clustering problem running in time $T_{\mathrm{cluster}}(n)$, there exists an $O(\alpha)$-approximation for the fair $(p,q)$-clustering problem with $\ell$ colors running in time $O(\ell^2 \cdot T(n) + \ell \cdot T_{\mathrm{cluster}}(n))$.

Input: Balanced point set $A$ with coloring $c : A \to [\ell]$, number of clusters $k$, an unconstrained k-clustering algorithm
  for every color $j \in [\ell]$ do
     for every color $j' \in [\ell] \setminus \{j\}$ do
        compute a min-cost perfect matching $\pi_{j,j'}$ between $A_j$ and $A_{j'}$
     end for
     Run the unconstrained k-clustering algorithm on $A_j$ with resulting centers $C_j$
     for every color $j' \in [\ell] \setminus \{j\}$ do
        for every point $x \in A_j$ do
           Assign $\pi_{j,j'}(x)$ to the cluster of $x$
        end for
     end for
  end for
  Output the best fair clustering among all colors $j$
Algorithm 1: Fair to Unfair Reduction for $(p,q)$-Clustering

We give in Algorithm 1 the pseudocode of the algorithm used to prove this theorem. The high level ideas are as follows. Our algorithm is based on the computation of a solution for fair k-clustering initially proposed by [18]. We show that an $O(1)$-approximate fair k-clustering can always be obtained by selecting the input color $t$ such that the aggregated matching cost to the remaining colors is minimized. Running an algorithm for unconstrained k-clustering on $A_t$ then allows us to recover a fair k-clustering of $A$.

What is left to show for the analysis is that clustering (multiplicities of) $A_t$ can be related to clustering the entire point set. The total cost of the final solution is bounded by a constant times the cost of the optimal fair k-clustering plus the cost of an $\alpha$-approximation algorithm for k-clustering. This allows us to recover an $O(\alpha)$-approximate fair clustering. To obtain this approximation, additional care is needed. Instead of separately bounding the matching cost and the cost of clustering $A_t$, we show that there always exists a color such that the sum of both is small. While the proof is non-constructive, we can recover an appropriate solution by running the approximation algorithm on all colors, choosing the assignment by pairwise minimization of the matching costs, and finally selecting the cheapest resulting fair clustering.
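To make the reduction concrete, the following is a minimal executable sketch of Algorithm 1 (our illustration; `emd`-style matchings as in Section 1.3, while `cluster` and `cost` stand for any unconstrained k-clustering routine and the chosen $(p,q)$ objective):

```python
# Minimal sketch of Algorithm 1. Assumptions (ours): points[j] is the n x d
# array of color j; cluster(X, k) is any unconstrained (approximate)
# k-clustering routine returning one label per row; cost(points, assignment)
# evaluates the fair (p, q) objective of a labeling.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def matching(A, B):
    """cols[i] = index of the row of B matched to row i of A at min total cost."""
    _, cols = linear_sum_assignment(cdist(A, B))
    return cols

def fair_reduction(points, k, cluster, cost):
    best, best_cost = None, float("inf")
    for t in range(len(points)):                # try every color as the pivot
        labels_t = cluster(points[t], k)        # unconstrained clustering of color t
        assignment = {t: labels_t}
        for j in range(len(points)):
            if j == t:
                continue
            m = matching(points[t], points[j])  # min-cost perfect matching
            lab = np.empty(len(points[j]), dtype=int)
            lab[m] = labels_t                   # matched partner inherits the cluster
            assignment[j] = lab
        c = cost(points, assignment)
        if c < best_cost:
            best, best_cost = assignment, c
    return best
```

Each cluster receives exactly one point of every color per pivot point, so the output is balanced by construction.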

Proof.

We denote by $O_1, \dots, O_k$ the clusters of an optimal balanced k-clustering and by $c_1, \dots, c_k$ the associated centers. We denote by $\mathrm{OPT}$ the cost of an optimal balanced k-clustering. We also observe that the existence of a balanced point set always implies the existence of a balanced k-clustering.

For the analysis of the algorithm, it is useful to rearrange the rows of $A$ in a way that they are sorted by cluster membership in the optimal balanced k-clustering. First, we partition $A$ into $\ell$ blocks consisting of $n$ rows, where each block represents a color. In each block, the first $|O_1|/\ell$ rows are the points in cluster $O_1$ (in arbitrary order), the next $|O_2|/\ell$ rows are the points in cluster $O_2$, and so on.

We now define an auxiliary matrix containing the information on the centers of the optimal balanced k-clustering. Let $C$ denote the $n \times d$ matrix such that the $i$-th row contains the center of the cluster of the $i$-th point within a block. Finally, let $C^{(\ell)}$ be the $(\ell \cdot n) \times d$ matrix obtained by the union of $\ell$ copies of $C$, that is, $C^{(\ell)} = (C^T, \dots, C^T)^T$.

We then have $\|A - C^{(\ell)}\|_{p,q} = \mathrm{OPT}$. Next, denote by $A_j$ the $n \times d$ matrix whose rows are all points of color $j$ (i.e., $A_j$ corresponds to the $j$-th block of rows of $A$). Now consider the clustering of minimum cost obtained by using points of the same color, that is, the color $t$ minimizing the expression

$$\min_t \|A - A_t^{(\ell)}\|_{p,q} \;\leq\; \min_t \left( \|A - C^{(\ell)}\|_{p,q} + \|C^{(\ell)} - A_t^{(\ell)}\|_{p,q} \right),$$

where $A_t^{(\ell)}$ denotes the matrix consisting of $\ell$ copies of $A_t$, and the inequality follows by an application of the triangle inequality, for every possible value of $t$. Note that the first summand is always equal to $\mathrm{OPT}$ and therefore independent of the minimization with respect to $t$. For the second summand, we observe that, since $\sum_t \|C - A_t\|_{p,q}^q = \|C^{(\ell)} - A\|_{p,q}^q = \mathrm{OPT}^q$ and $\|C^{(\ell)} - A_t^{(\ell)}\|_{p,q} = \ell^{1/q} \cdot \|C - A_t\|_{p,q}$, an averaging argument shows that the cheapest color always satisfies

$$\min_t \|C^{(\ell)} - A_t^{(\ell)}\|_{p,q} \leq \mathrm{OPT}. \qquad (1)$$

Therefore, the minimizer $t$ (which is not necessarily equal to the minimizer of Equation 1) with respect to the above expression has cost at most $2 \cdot \mathrm{OPT}$. Moreover, the very same definition of the optimal $t$ (left hand side of the inequality above) suggests that we can compute $t$ by simply evaluating a min-cost perfect $q$-matching between all pairs $A_j$ and $A_{j'}$ for $j \neq j'$ and aggregating these costs by taking the $\ell_q$-norm of the pairwise Earth-Mover's distances. To see this, observe that the matrix $A_t^{(\ell)}$ induces a matching between $A_t$ and $A_{j'}$, for all $j'$, using the bijection that maps the $i$-th row of the copy of $A_t$ in block $j'$ to the $i$-th row of $A_{j'}$. Then

$$\|A - A_t^{(\ell)}\|_{p,q} \geq \Big( \sum_{j'} \mathrm{EMD}(A_t, A_{j'})^q \Big)^{1/q}.$$

Thus we can conclude that the clustering associated with the color $t$ minimizing the aggregated cost of Earth-Mover's distances satisfies

$$\Big( \sum_{j'} \mathrm{EMD}(A_t, A_{j'})^q \Big)^{1/q} \leq 2 \cdot \mathrm{OPT}. \qquad (2)$$

Now, assuming that $t$ satisfies Equations 1 and 2 (an assumption we will remove in a moment), we run an $\alpha$-approximate algorithm for unconstrained k-clustering on $A_t$. Let $C_t$ be the set of centers computed by the approximation.

The fair assignment is obtained by mapping the points of the remaining colors to the center to which their matching partner in $A_t$ was assigned, that is, $\pi_{t,j'}(x)$ gets mapped to the center of $x$, for every $x \in A_t$ and every color $j'$. Denote the resulting center matrix by $\tilde{C}$, that is, the rows of $\tilde{C}$ contain the center to which the corresponding rows of $A$ were assigned. We then have, after suitably rearranging the rows of $A$,

$$\|A - \tilde{C}\|_{p,q} \;\leq\; \|A - A_t^{(\ell)}\|_{p,q} + \|A_t^{(\ell)} - \tilde{C}\|_{p,q} \;\leq\; 2 \cdot \mathrm{OPT} + \alpha \cdot \mathrm{OPT}, \qquad (3)$$

where the first inequality follows from the triangle inequality and the second inequality holds by the assumption that $t$ satisfies Equations 1 and 2 (for the second summand, $\|A_t^{(\ell)} - \tilde{C}\|_{p,q} = \ell^{1/q} \cdot \|A_t - \tilde{C}_t\|_{p,q} \leq \ell^{1/q} \cdot \alpha \cdot \|A_t - C\|_{p,q} \leq \alpha \cdot \mathrm{OPT}$, where $\tilde{C}_t$ denotes the rows of $\tilde{C}$ corresponding to $A_t$).

To remove the assumption on $t$, we simply run the $\alpha$-approximation for all colors and compute the assignment in the aforementioned way, outputting the cheapest resulting fair clustering at the end. The cheapest one is guaranteed to satisfy Equation 3. ∎

Remark 1.

A minor modification to this proof is necessary when considering the k-center objective, i.e., when the objective is to compute a $(p,\infty)$-clustering. In this case, one has to manipulate the norms with some additional care, as some operations such as taking the $q$-th power are no longer well-defined when taking the limit $q \to \infty$. However, in many ways the proof becomes easier, while retaining the same line of reasoning. Since previous papers already published proofs for k-center clustering with similar approximation ratios, and we present in the next section a substantially simpler and faster algorithm, we omit the details.

We also remark that the analysis extends to fair k-means in Euclidean spaces, which corresponds to finding a matrix $C$ with $k$ distinct rows such that $\|A - C\|_{2,2}^2$ is minimized. Using standard manipulations found in [10, 25], one can derive an approximation ratio of $O(\alpha)$ with the analysis of Theorem 2.

While Theorem 2, as stated, only applies to clustering in an $\ell_p$ space, it immediately generalizes to arbitrary metrics as well, with minor modifications. This follows from the fact that any finite $n$-point metric can be isometrically embedded into $\ell_\infty$ with $n$ dimensions; see, for instance, Matoušek [38]. In other words, the problem of computing a fair k-clustering in an arbitrary metric space can always be reduced to computing a fair k-clustering in $\ell_\infty$ space, i.e., fair $(\infty, q)$-clustering. We summarize this in the following corollary.
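The embedding in question is the classical Fréchet-style map sending each point to its vector of distances to all points; the following minimal sketch (our illustration) sanity-checks the isometry:

```python
# Minimal sketch: isometric embedding of a finite metric (given as an n x n
# distance matrix D) into l_infinity^n. Point i is mapped to row i of D;
# max_y |d(x_i, y) - d(x_j, y)| = d(x_i, x_j) by the triangle inequality,
# with equality attained at y = x_j.
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((6, 2))
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)  # a finite metric
for i in range(len(D)):
    for j in range(len(D)):
        assert np.isclose(np.max(np.abs(D[i] - D[j])), D[i, j])
```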

Corollary 1.

Suppose we are given a balanced point set in some finite metric space. Then there exists a polynomial time constant factor approximation algorithm for fair k-clustering.

3 Faster Algorithms for Fair k-Median and k-Center

Fair k-Median

For fair k-median, we obtain a constant factor approximation as in Theorem 2, albeit with a substantially faster running time. As mentioned in [8], computing min-cost perfect matchings is expensive and tends to dominate the running time of fair clustering. In their paper, they proposed an algorithm that computes an approximate fairlet decomposition for fair k-median in nearly linear time. (The dependency on the dimension may be further reduced using dimension reduction techniques from [37].) The result from Theorem 2 can be combined with this approach, yielding a corresponding approximation in nearly linear time. In this paper we briefly illustrate how to obtain a linear time randomized algorithm (i.e., running in time linear in the size of the input).

Recall that a fairlet, as defined by [18], is a cluster of a fair clustering with possibly more than $k$ clusters, for which a single point is used as a representative. Clustering the representatives and merging the fairlets then results in a fair clustering, for any value of $k$. Note that the existence of a fair k-clustering always implies the existence of a fairlet decomposition, for any number of colors. [18] show that computing an optimal fairlet decomposition for k-median is possible in polynomial time if we are given only two colors. While the same problem is APX-hard for three colors (see Proposition 2), the following theorem establishes that a randomly sampled color is always an $O(1)$-approximate fair 1-median in expectation. Repeating the sampling process allows us to find a good fairlet decomposition for k-median with high probability. The pseudocode is given in Algorithm 2.

Input: Balanced point set $A$ with coloring $c : A \to [\ell]$, number of trials $T$
  for $i = 1, \dots, T$ do
     Sample a color $R_i \in [\ell]$ uniformly at random
     $\mathrm{cost}_i \leftarrow 0$
     for every color $j \in [\ell]$ do
        Compute (approximately) $\mathrm{EMD}(A_j, A_{R_i})$ and add it to $\mathrm{cost}_i$
     end for
  end for
  Output the sampled color $R_i$ with minimal $\mathrm{cost}_i$
Algorithm 2: Fast Randomized Fair k-Median Clustering
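A minimal executable sketch of Algorithm 2 follows (our illustration; the `emd` routine is passed in and may be exact or approximate):

```python
# Minimal sketch of Algorithm 2: sample candidate colors uniformly at random
# and keep the one whose summed (approximate) EMD to all colors is smallest.
import numpy as np

def sample_fair_median_color(points, trials, emd, rng=None):
    rng = rng or np.random.default_rng()
    best, best_cost = None, float("inf")
    for _ in range(trials):
        r = int(rng.integers(len(points)))             # color sampled u.a.r.
        c = sum(emd(points[j], points[r]) for j in range(len(points)))
        if c < best_cost:
            best, best_cost = r, c
    return best
```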
Theorem 3.

Let $A$ be an $(\ell \cdot n) \times d$ matrix and let $c$ be a balanced coloring of $A$. Given an algorithm that computes a $\gamma$-approximate fair 1-median clustering with $2$ colors in time $T(n)$, there exists a randomized algorithm that computes an $O(\gamma)$-approximate fair 1-median clustering with $\ell$ colors. The algorithm runs in time $O(t \cdot \ell \cdot T(n))$ when performing $t$ sampling trials and succeeds with probability at least $1 - 2^{-t}$.

Proof.

We will start by recalling the following fact that establishes the metric properties of the Earth Mover’s distance.

Fact 1 (Rubner et al. [42], Appendix A).

Let $(X, d)$ be a metric space with point set $X$ and distance function $d$. Then the Earth Mover's distance on (weighted) point sets of equal size (or total weight) using $d$ as a ground distance is a metric.

Given $\ell$ $n$-point sets $A_1, \dots, A_\ell$ lying in some metric space, the fair 1-median problem consists of finding an $n$-point set $F$ such that $\sum_{j=1}^{\ell} \mathrm{EMD}(A_j, F)$ is minimized.

We now sample a point set $A_R$ with $R \in [\ell]$ uniformly at random. Then, denoting by $F$ an optimal fair 1-median with cost $\mathrm{OPT} = \sum_j \mathrm{EMD}(A_j, F)$,

$$\mathbb{E}_R\Big[\sum_j \mathrm{EMD}(A_j, A_R)\Big] \leq \mathbb{E}_R\Big[\sum_j \big(\mathrm{EMD}(A_j, F) + \mathrm{EMD}(F, A_R)\big)\Big] = \mathrm{OPT} + \ell \cdot \mathbb{E}_R[\mathrm{EMD}(F, A_R)] = 2 \cdot \mathrm{OPT},$$

where we use Fact 1 in the inequality. Hence, by Markov's inequality, a random point set is a good candidate solution for an approximate fair 1-median clustering, costing at most $4 \cdot \mathrm{OPT}$ with probability at least $1/2$. Repeating the sampling process $t$ times and picking the best one yields an $O(1)$-approximation with probability $1 - 2^{-t}$.

We now run the $\gamma$-approximate computation of fair 1-median with respect to every sampled color $R$, pairing $A_R$ with each color $A_j$. Let $\pi_j$ be the matching computed by this algorithm, for every $j$. The total cost of the induced fairlet decomposition is then at most $\sum_j \gamma \cdot \mathrm{EMD}(A_j, A_R) \leq O(\gamma) \cdot \mathrm{OPT}$ for the best sampled color, which concludes the proof. ∎

Fair k-Center

In the full version of the paper we show that for the special case of k-center in finite metrics, we can compute a set of $k$ centers that induces a constant factor approximate fair k-clustering. Moreover, this algorithm runs in nearly linear time. The algorithm is essentially the farthest first traversal that is well known to produce an optimal 2-approximation for unconstrained metric k-center [27]. This result, which improves over Theorem 2 for fair k-center, is based on the following theorem.

Theorem 4.

Let $A$ be a set of points in a finite metric, let $c$ be a balanced coloring of $A$, and let $k$ be an integer. There exists an $O(nk)$ time algorithm that computes a set $S$ of $k$ points such that there exists an $O(1)$-approximate fair clustering using $S$ as centers.

Proof.

We argue why the final set $S$ of $k$ points computed by the farthest first heuristic fulfills the desired criteria.

First, consider the case that every point of $S$ is in a different optimal cluster. In this case, we may upper bound the cost of clustering $A$ to $S$ by $2 \cdot \mathrm{OPT}$ via the triangle inequality. If $S$ does not hit all clusters of the optimal clustering, there must be some cluster that is hit at least twice. Let $i$ be the first iteration in which this occurs and denote by $S_{i-1}$ the points collected so far and by $s_i$ the added point. It then holds that $d(s_i, S_{i-1}) \leq 2 \cdot \mathrm{OPT}$.

By definition of $s_i$, we know that for any cluster with center $c_j$ not hit by $S_{i-1}$, we have $d(c_j, S_{i-1}) \leq d(s_i, S_{i-1}) \leq 2 \cdot \mathrm{OPT}$. Since the distance of any point to its cluster center is at most $\mathrm{OPT}$, we therefore have $d(x, S) \leq 3 \cdot \mathrm{OPT}$ for every point $x$.

Finally, we argue why there exists a valid fair clustering within this bound. The union of two disjoint balanced clusters is a balanced cluster. Let $s \in S \cap O_i$, for any cluster $O_i$ hit by $S$. We assign all the points of $O_i$ to $s$. For any cluster $O_j$ not hit by $S$, we assign the points of $O_j$ to the center $s \in S$ minimizing $d(c_j, s)$. Since whole optimal clusters are reassigned, every resulting cluster is a union of balanced clusters and hence balanced, and every point lies within distance $3 \cdot \mathrm{OPT}$ of its center. ∎
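For reference, a minimal sketch (our illustration) of the farthest first traversal on a precomputed distance matrix:

```python
# Minimal sketch of the farthest first traversal (Gonzalez's heuristic) on an
# n x n distance matrix D; runs in O(nk) once D is available.
import numpy as np

def farthest_first(D: np.ndarray, k: int) -> list:
    centers = [0]                      # an arbitrary first center
    dist = D[0].copy()                 # distance of every point to the chosen set
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))     # the farthest point joins the set
        centers.append(nxt)
        dist = np.minimum(dist, D[nxt])
    return centers
```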

However, we remark that while we can guarantee the existence of a good clustering using $S$ as centers, it seems hard to recover it while ensuring fairness. This stands in contrast to unconstrained clustering, where one can simply assign every point to its closest center. For the special case $\ell = 2$, a fair clustering may be recovered using flow-based techniques. For $\ell \geq 3$, deciding whether there exists a clustering of some given cost, given a candidate set of centers, is a hard problem. The proof is a simple reduction from the 3-dimensional matching problem. Since the reduction is similar to that of Proposition 2, we omit the details.

Proposition 1.

Let $A$ be a set of points in some finite metric with a fair coloring $c$ using $\ell \geq 3$ colors, let $S$ be a (possibly optimal) set of centers and let $r$ be a parameter. Then deciding whether there exists a fair k-center clustering using $S$ as centers with cost at most $r$ is NP-hard.

4 A PTAS for Fair Clustering in Euclidean Spaces with Constant $k$

Lastly, we briefly show how to derive a $(1+\varepsilon)$-approximation for fair k-clustering in Euclidean spaces if the number of centers $k$ is constant. This shows that a separation between the hardness of unconstrained clustering and fair clustering has to consider large values of $k$.

Theorem 5.

Let $A$ be a set of $n$ points in Euclidean space and let $k$ be a constant. Then there exists an algorithm that computes, in time polynomial in $n$ for constant $k$ and $\varepsilon$, a $(1+\varepsilon)$-approximation for fair k-median, fair k-means, and fair k-center.

Proof.

The high level idea is similar to early polynomial time approximation schemes for unconstrained k-clustering [1, 9], with a few modifications to account for fairness. Assume we are given an oracle that (i) returns a set of centers such that these centers form a $(1+\varepsilon)$-approximation and (ii) returns the sizes of the clusters associated to these centers. If we have access to both, we can recover a clustering with the same approximation ratio by solving the following minimum cost transportation problem. For every color, we construct an assignment as follows. Every input point $p$ corresponds to a node $u_p$ in a flow network. Every center $c_i$ corresponds to a node $v_i$. These nodes are connected by unit capacity edges. Furthermore, we have unit capacity edges from the source node to each $u_p$, as well as edges from the nodes $v_i$ to the target node. These latter edges have capacities that are exactly the target sizes of the clusters. We now find a feasible flow such that the connection cost $\sum_{p,i} w(u_p, v_i) \cdot f(u_p, v_i)$ is minimized, where $w(u_p, v_i)$ corresponds to the Euclidean distance between point $p$ and center $c_i$. (For k-means, we would have to use squared Euclidean distances. For k-center, we would use a threshold network that only connects nodes to centers within a given distance and find an arbitrary feasible flow.) Finding a min-cost feasible flow can be done in polynomial time; moreover, such a flow is integral, i.e., guaranteed to induce a fair assignment.
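A minimal sketch of this transportation step (our illustration, assuming the networkx library; costs are scaled to integers since networkx's min-cost flow routines expect integral weights):

```python
# Minimal sketch: assign points to fixed centers with prescribed cluster
# sizes via an integral min-cost flow (one such network per color).
import networkx as nx
import numpy as np

def fair_assign(points, centers, sizes, scale=10**6):
    G = nx.DiGraph()
    for i, p in enumerate(points):
        G.add_edge("s", ("p", i), capacity=1)          # one unit per point
        for j, c in enumerate(centers):
            w = int(scale * np.linalg.norm(np.asarray(p) - np.asarray(c)))
            G.add_edge(("p", i), ("c", j), capacity=1, weight=w)
    for j, size in enumerate(sizes):
        G.add_edge(("c", j), "t", capacity=size)       # target cluster sizes
    flow = nx.max_flow_min_cost(G, "s", "t")           # integral by construction
    return {i: j for i in range(len(points))
            for j in range(len(centers)) if flow[("p", i)][("c", j)] > 0}
```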

To remove the oracle, we do the following. For (ii), we observe that there are at most $n^k$ different ways of selecting the sizes of the clusters, given a ground set of $n$ points. For (i), it is well known that for all of the considered objectives, there exist weak coresets for a single center whose size is independent of $n$ (depending only on $\varepsilon$), see [9] and [1]. Weak coresets essentially satisfy the following property: given a point set $P$, a weak coreset with respect to some objective is a subset $W$ of $P$ such that a $(1+\varepsilon)$-approximation computed on $W$ is a $(1+O(\varepsilon))$-approximation computed on $P$.

Hence, we can find a suitable set of points from which to compute candidate centers by enumerating all $k$-tuples of such candidate subsets, which takes time polynomial in $n$ for constant $k$ and $\varepsilon$. ∎

We complement this result by showing that computing a fairlet decomposition is APX-hard for $\ell \geq 3$ colors. In particular, we also show that computing a fairlet decomposition for k-center with approximation factor better than some explicit constant is NP-hard for $\ell \geq 3$. Hence, the analysis of Theorem 2 is tight in the sense that the constant factor loss cannot be avoided. If a better approximation algorithm for fair clustering exists, it will have to rely on a different technique. Note that this stands in contrast to the computability of an optimal fairlet decomposition for 2 colors proposed by [18].

Proposition 2.

Let $A$ be a set of points in a finite metric with $\ell \geq 3$, and let $c$ be a balanced coloring of $A$. Then there exist constants $c_1, c_2 > 1$ such that approximating fair k-center within a factor better than $c_1$ and approximating fair k-median within a factor better than $c_2$ is NP-hard.

Proof.

We give a reduction from 3-dimensional matching to fair k-center with three colors (a generalization from $c$-dimensional matching to $c$ colors is straightforward). Given a hypergraph $G = (V, E)$, with disjoint node sets $X$, $Y$, $Z$ of size $n$ each and hyperedges $E \subseteq X \times Y \times Z$, 3-dimensional matching consists of deciding whether there exists a perfect hypermatching, i.e., a collection of $n$ pairwise disjoint hyperedges.

We construct an instance of fair k-center as follows. Each hyperedge $e$ will be mapped to some point $p_e$ and also every node $v$ will be mapped to some point $p_v$. The points corresponding to hyperedges will be our candidate set of centers $S$. We now define the distances between our points as follows. For nodes $v$ and hyperedges $e$ with $v \in e$, we set $d(p_v, p_e) = 1$. The remaining distances are set to the induced shortest path distances. This trivially results in a metric.

Now, assume that a perfect hypermatching exists. Then the fair k-center clustering cost is precisely $1$. If, however, no perfect hypermatching exists, the cost is strictly larger by a constant factor. Distinguishing between these two cases is NP-hard, hence approximating fair k-center beyond the corresponding constant factor is also hard.

Similarly, if a perfect hypermatching exists, the cost of a fair k-median clustering is precisely $3n$. If the size of the largest hypermatching is bounded away from $n$, then a constant fraction of the points have to pay strictly more than $1$ each, i.e., the total cost is at least $(1 + \Omega(1)) \cdot 3n$. Since distinguishing between a perfect hypermatching and a hypermatching of size at most $(1 - \varepsilon) n$, for some constant $\varepsilon > 0$, is NP-hard [19], this implies a constant inapproximability gap and therefore APX-hardness of fair k-median. ∎

5 Experimental Analysis

5.1 Dataset Description

The datasets used for the experiments are taken from the previous literature [11, 18, 8]. As our interest is in the multiple-color scenario, we ran our experiments considering 8 colors; for completeness, we also consider similar experiments with 4 colors. Each color represents a protected class, characterized by some particular combination of values of the chosen protected attributes. We selected 3 binary protected attributes to obtain 8 classes in total, and we also subsampled the original records to obtain the same number of records for each class. In total, we used six data sets. We report averages computed over 100 samples of 1000 distinct points. Each sample is a perfectly balanced set of points with respect to the eight colors described in the following.

Adults

This dataset (https://archive.ics.uci.edu/ml/datasets/Adult) contains "1994 US census" records about registered individuals including age, education, marital status, occupation, ethnicity, sex, hours worked per week, native country, and others. Following [11] and [18], the numerical attributes chosen to represent points in the Euclidean space are age, fnlwgt, education-num, capital-gain, and hours-per-week. The protected attributes chosen to define the classes are Sex, Ethnicity, and Income, where each of them takes only 2 possible values. For the experiments, we used 100 balanced subsamples of 1000 distinct records.

Athletes

This dataset (www.kaggle.com/heesoo37/120-years-of-olympic-history-athletes-and-results) contains bio data on Olympic athletes and medal results from Athens 1896 to Rio 2016. The selected features are Age, Height, and Weight. The protected attributes are Sex, Sport, and Medal (two sports were selected - gymnastics and basketball - and two types of athletes were considered for the third attribute - athletes who won at least one medal and athletes who did not). For the experiments, we used 100 balanced subsamples of 1000 distinct records.

Bank

This dataset (https://archive.ics.uci.edu/ml/datasets/Bank+Marketing) stems from the direct marketing campaigns, based on phone calls, of a Portuguese banking institution. As in [11] and [18], the features selected to represent the points in the space are age, balance, and duration. The protected attributes are marital status (married or not), education (secondary or tertiary), and housing. For the experiments, we used 100 balanced subsamples of 1000 distinct records.

Diabetes

This dataset (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008), used for experiments in [18], represents 10 years (1999-2008) of clinical care at 130 US hospitals and integrated delivery networks. It includes over 50 features representing patient and hospital outcomes; of these, 4 were chosen to represent the points in the space: time_in_hospital, num_lab_procedures, num_medications, and number_diagnoses. The protected attributes are sex, ethnicity (Caucasian or AfricanAmerican), and age (dichotomized into two classes: people less than and more than 50 years old). For the experiments, we used 100 balanced subsamples of 1000 distinct records.

Credit cards

This dataset (https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients) contains information on credit card holders of a credit card issuer in Taiwan. Here, the same 14 features chosen by [11] were selected, while the protected attributes are sex, education (Graduate School or University), and marriage (married or not). For the experiments, we used 100 balanced subsamples of 1000 distinct records.

CensusII

This dataset contains records extracted from the USCensus1990raw (https://archive.ics.uci.edu/ml/datasets/US+Census+Data+%281990%29) data set (also used in [8]), containing 2458285 records composed of 68 attributes. Among all of these attributes, 9 have been chosen to represent the points in the Euclidean space: AGE, AVAIL, CITIZEN, CLASS, DEPART, HOUR89, HOURS, PWGT1, and TRAVTIME. For this dataset, the selected protected attributes are SEX (Female, Male), RACE (dichotomized as White, notWhite) and MARITAL (dichotomized as NowMarried, NowNotMarried). For the experiments, we used 100 balanced subsamples of 1000 distinct records, as well as subsamples of 450000 distinct records.

5.2 Setup and Algorithms

We solved the fair k-median problem by implementing Algorithms 1 and 2, as well as two further variants, Q and Excellent. Q is similar to Algorithm 1, except that we only run the clustering step on the color with minimum aggregated perfect matching cost. This algorithm is guaranteed to return a constant factor approximation and does so slightly faster than Algorithm 1. Excellent is a further variant of Algorithm 1 that computes a good clustering for each color and subsequently performs a fair assignment. The approximation factor is theoretically equal to that of Algorithm 1, but sometimes improves empirically.

We ran the algorithms for several values of k; for k = 1, any solution is naturally fair. Since there was already little to no difference between the cost of a fair clustering at the largest tested values of k and the cost of the fairlet decomposition (reported as k=n in the tables below), we did not consider larger values of k.

We compared these algorithms with the implementation of [11]. For the largest data set (USCensus1990raw), consisting of 8 colors with a total of 450000 points, the code by [11] did not terminate. On this dataset, we showcased the modularity of our approach by combining it with the fast fairlet algorithm by [8].

Since our algorithm requires a solver for the unconstrained k-median problem, for all 1000-point datasets we used the single-swap local search heuristic, which yields a 5-approximation in the worst case [7].

For the 450000-point USCensus1990raw dataset, local search is infeasible to run. Instead, we used a simple heuristic that essentially mimics the k-means++ algorithm [6]: first, we sample k centers by iteratively picking the next center with probability proportionate to its distance to the previously chosen centers, and then we run the k-medoids algorithm to further refine the solution. For the experiments we used an Intel Xeon 2.4GHz machine with 24GB of RAM running Ubuntu 18.04 LTS.
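A minimal sketch (our illustration) of this seeding step; for the k-median objective we sample proportionately to the distances themselves rather than their squares:

```python
# Minimal sketch of the k-means++-style seeding used as a k-median heuristic:
# each new center is drawn with probability proportional to its distance to
# the closest center chosen so far; the result is then refined with k-medoids.
import numpy as np

def seed_centers(X: np.ndarray, k: int, rng=None):
    rng = rng or np.random.default_rng()
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d / d.sum())])
    return np.array(centers)
```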

5.3 Results

The left plot of each of the following figures reports the aggregated average cost of all tested methods; the right plot reports running times. In addition to the fair clustering algorithms, we also report the cost of the fairlet decomposition (the row k=n in the tables), which provides a lower bound for any fairlet-based algorithm, as well as the cost of an unconstrained solution (k-median++).

For the most part, the algorithm by [11] has comparable cost to our algorithms. Furthermore, we empirically observe that [11] almost always computes a balanced solution, as opposed to the bicriteria result stated in their paper. Specifically, less than 0.8% of instances for Diabetes, less than 0.6% of instances for the Credit cards dataset, less than 0.3% of instances for the USCensus1990raw dataset, less than 0.2% of instances for the Athletes dataset, and less than 0.05% of instances for Bank yielded an unfair solution. Our algorithms, of course, always guarantee fairness. All of our algorithms perform slightly better than [11] on data sets in which the fairlets (i.e., the fair clusterings) are very cheap compared to the cost of a k-clustering, see Figure 11. On data sets where the fairlets are more expensive, there is little difference in cost, see Figure 7.

In terms of running time, all of our algorithms run substantially faster than [11], by roughly factors of 100 or more. Algorithms 2 and Q have average running times of 176 msec and 258 msec, respectively. This is also significantly faster than Algorithms 1 and Excellent (1111 msec and 1044 msec on average, respectively), while having a roughly comparable cost.

Figure 1: Average cost and execution time of the fair-k-median methods on Adults dataset: 8 colors, 100 subsamples of 1000 distinct points each.
Figure 2: Average cost and execution time of the fair-k-median methods on Adults dataset: 4 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 15981152.5 ± 1968761.8 | 15981152.5 ± 1968761.8 | 15981152.5 ± 1968761.8
k-median++ | 35093729.4 ± 10482181.7 | 16518716.6 ± 2704863.2 | 9390692.2 ± 1632344.3
Excellent | 37334601.9 ± 9646426.1 | 22164330.5 ± 2448591.7 | 18136890.1 ± 2030020.3
Algorithm 1 | 38601051.2 ± 9319399.3 | 23480401.1 ± 2606059.3 | 18863910.7 ± 2110809.0
Q | 39150199.6 ± 9445281.9 | 23786205.0 ± 2731965.7 | 19069140.1 ± 2222137.3
Algorithm 2 | 40372180.0 ± 9277749.3 | 25507540.8 ± 3548581.9 | 20864471.0 ± 3425165.9
[11] | 38101545.1 ± 9528290.7 | 22713743.0 ± 2587922.8 | 18488295.4 ± 2018105.9

Table 1: Average and standard deviation of the cost of the fair-k-median methods on Adults dataset: 8 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 11839345.7 ± 2070002.0 | 11839345.7 ± 2070002.0 | 11839345.7 ± 2070002.0
k-median++ | 35093729.4 ± 10482181.7 | 16518716.6 ± 2704863.2 | 9390692.2 ± 1632344.3
Excellent | 36625310.1 ± 9921526.0 | 20339741.6 ± 2570564.3 | 15488437.9 ± 2061712.3
Algorithm 1 | 37621394.0 ± 9712253.2 | 21512025.2 ± 2700895.7 | 16314405.3 ± 2192492.3
Q | 37958840.6 ± 9713117.1 | 21687331.5 ± 2756245.2 | 16417451.2 ± 2213958.7
Algorithm 2 | 38834788.7 ± 9553619.9 | 23278062.9 ± 3622493.8 | 18138272.2 ± 3406247.1
[11] | 37450212.1 ± 9797625.9 | 21096203.8 ± 2680446.0 | 16013188.3 ± 2054989.7

Table 2: Average and standard deviation of the cost of the fair-k-median methods on Adults dataset: 4 colors, 100 subsamples of 1000 distinct points each.
Figure 3: Average cost and execution time of the fair-k-median methods on Athletes dataset: 8 colors, 100 subsamples of 1000 distinct points each.
Figure 4: Average cost and execution time of the fair-k-median methods on Athletes dataset: 4 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 20278.4 ± 280.9 | 20278.4 ± 280.9 | 20278.4 ± 280.9
k-median++ | 9970.3 ± 2171.1 | 6327.1 ± 439.9 | 5079.1 ± 327.3
Excellent | 21107.9 ± 300.0 | 20769.6 ± 276.7 | 20585.8 ± 273.5
Algorithm 1 | 21299.8 ± 297.6 | 20948.8 ± 276.4 | 20728.2 ± 275.6
Q | 21340.9 ± 302.0 | 20965.2 ± 278.8 | 20736.5 ± 276.4
Algorithm 2 | 21629.2 ± 391.5 | 21333.2 ± 418.7 | 21092.3 ± 411.6
[11] | 21959.2 ± 796.3 | 21098.5 ± 295.8 | 20793.4 ± 288.7

Table 3: Average and standard deviation of the cost of the fair-k-median methods on Athletes dataset: 8 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 20157.3 ± 279.6 | 20157.3 ± 279.6 | 20157.3 ± 279.6
k-median++ | 9970.3 ± 2171.1 | 6327.1 ± 439.9 | 5079.1 ± 327.3
Excellent | 21105.5 ± 302.2 | 20763.7 ± 275.0 | 20564.7 ± 275.7
Algorithm 1 | 21311.2 ± 298.7 | 20959.8 ± 278.5 | 20731.3 ± 277.7
Q | 21349.7 ± 299.4 | 20999.8 ± 289.8 | 20758.7 ± 285.3
Algorithm 2 | 21694.3 ± 384.7 | 21410.3 ± 416.6 | 21171.7 ± 405.5
[11] | 21927.4 ± 790.9 | 21058.0 ± 300.0 | 20744.4 ± 288.3

Table 4: Average and standard deviation of the cost of the fair-k-median methods on Athletes dataset: 4 colors, 100 subsamples of 1000 distinct points each.
Figure 5: Average cost and execution time of the fair-k-median methods on Bank dataset: 8 colors, 100 subsamples of 1000 distinct points each.
Figure 6: Average cost and execution time of the fair-k-median methods on Bank dataset: 4 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 434892.4 ± 63810.1 | 434892.4 ± 63810.1 | 434892.4 ± 63810.1
k-median++ | 699773.5 ± 178243.2 | 375005.8 ± 53842.9 | 240197.8 ± 33496.6
Excellent | 782666.7 ± 148502.7 | 566505.9 ± 66761.8 | 501728.7 ± 63811.2
Algorithm 1 | 799852.1 ± 147306.8 | 581807.0 ± 68002.9 | 512378.1 ± 64506.9
Q | 813091.6 ± 151845.3 | 585861.4 ± 68185.5 | 514900.4 ± 64694.5
Algorithm 2 | 840799.3 ± 158718.3 | 622778.7 ± 89740.2 | 551295.3 ± 86282.3
[11] | 806863.1 ± 146858.0 | 595020.8 ± 70544.2 | 522864.6 ± 65312.6

Table 5: Average and standard deviation of the cost of the fair-k-median methods on Bank dataset: 8 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 348806.2 ± 68568.3 | 348806.2 ± 68568.3 | 348806.2 ± 68568.3
k-median++ | 699773.5 ± 178243.2 | 375005.8 ± 53842.9 | 240197.8 ± 33496.6
Excellent | 757508.6 ± 157020.1 | 515590.9 ± 67641.0 | 439276.2 ± 65541.3
Algorithm 1 | 776267.5 ± 154638.7 | 533173.4 ± 69012.5 | 451817.3 ± 65885.7
Q | 783853.3 ± 154205.1 | 536940.7 ± 68463.6 | 453614.1 ± 65842.6
Algorithm 2 | 799761.9 ± 154970.7 | 559700.2 ± 73057.8 | 477443.6 ± 72098.2
[11] | 779977.8 ± 153222.8 | 542883.4 ± 72216.1 | 461341.9 ± 66716.7

Table 6: Average and standard deviation of the cost of the fair-k-median methods on Bank dataset: 4 colors, 100 subsamples of 1000 distinct points each.
Figure 7: Average cost and execution time of the fair-k-median methods on CensusII dataset: 8 colors, 100 subsamples of 1000 distinct points each.
Figure 8: Average cost and execution time of the fair-k-median methods on CensusII dataset: 4 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 143647.8 ± 11160.9 | 143647.8 ± 11160.9 | 143647.8 ± 11160.9
k-median++ | 95258.4 ± 25778.1 | 56917.8 ± 5417.5 | 41405.2 ± 3832.5
Excellent | 184695.5 ± 20692.4 | 160658.8 ± 10799.8 | 153238.9 ± 10856.3
Algorithm 1 | 187589.5 ± 20660.7 | 163074.4 ± 10881.9 | 155139.9 ± 10832.0
Q | 189288.9 ± 21659.0 | 163719.4 ± 11099.0 | 155544.5 ± 10952.7
Algorithm 2 | 205788.6 ± 33060.7 | 173844.1 ± 14035.4 | 165075.1 ± 13902.7
[11] | 186506.1 ± 21502.2 | 162025.1 ± 10575.3 | 154067.1 ± 10496.4

Table 7: Average and standard deviation of the cost of the fair-k-median methods on CensusII dataset: 8 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 57322.8 ± 8683.0 | 57322.8 ± 8683.0 | 57322.8 ± 8683.0
k-median++ | 95258.4 ± 25778.1 | 56917.8 ± 5417.5 | 41405.2 ± 3832.5
Excellent | 118812.9 ± 22916.8 | 87243.8 ± 8554.2 | 75646.0 ± 8366.8
Algorithm 1 | 119795.5 ± 22706.1 | 88760.4 ± 8645.0 | 77237.1 ± 8464.1
Q | 121837.5 ± 23860.1 | 89385.4 ± 8779.7 | 77455.6 ± 8511.2
Algorithm 2 | 129809.6 ± 26354.5 | 95423.7 ± 11718.4 | 82617.0 ± 10875.8
[11] | 121847.6 ± 22859.6 | 91267.5 ± 8347.0 | 79891.8 ± 8120.9

Table 8: Average and standard deviation of the cost of the fair-k-median methods on CensusII dataset: 4 colors, 100 subsamples of 1000 distinct points each.
Figure 9: Average cost and execution time of the fair-k-median methods on Credit cards dataset: 8 colors, 100 subsamples of 1000 distinct points each.
Figure 10: Average cost and execution time of the fair-k-median methods on Credit cards dataset: 4 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 72339739.1 ± 3121117.5 | 72339739.1 ± 3121117.5 | 72339739.1 ± 3121117.5
k-median++ | 107302571.2 ± 21435714.7 | 71975930.7 ± 5299320.0 | 57207897.4 ± 4066524.0
Excellent | 119312415.8 ± 16746700.6 | 93053938.9 ± 4189509.1 | 84871284.9 ± 3599454.6
Algorithm 1 | 122659466.9 ± 16198184.0 | 96614545.3 ± 4341943.3 | 87722007.7 ± 3818697.4
Q | 124251499.5 ± 16883999.8 | 97391340.5 ± 4602652.1 | 88173991.3 ± 3940251.9
Algorithm 2 | 126706176.1 ± 16066799.1 | 101750820.7 ± 7338227.3 | 92743080.9 ± 7061486.0
[11] | 123705922.5 ± 17036610.1 | 97835628.9 ± 4640655.8 | 89211162.0 ± 3889060.7

Table 9: Average and standard deviation of the cost of the fair-k-median methods on Credit cards dataset: 8 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 60896014.3 ± 3371582.0 | 60896014.3 ± 3371582.0 | 60896014.3 ± 3371582.0
k-median++ | 107302571.2 ± 21435714.7 | 71975930.7 ± 5299320.0 | 57207897.4 ± 4066524.0
Excellent | 117011650.1 ± 17505918.1 | 88851550.1 ± 4828231.4 | 79239484.5 ± 3885547.9
Algorithm 1 | 120774309.1 ± 16788025.7 | 93023954.1 ± 4925886.7 | 82960786.4 ± 4133732.4
Q | 121293644.5 ± 17044979.9 | 93318777.1 ± 4971420.4 | 83129910.2 ± 4191054.8
Algorithm 2 | 122939880.2 ± 16637229.6 | 95576130.1 ± 5432173.1 | 85525305.9 ± 4762009.5
[11] | 120760156.9 ± 17782849.1 | 93085262.4 ± 4758334.7 | 83669285.3 ± 4118306.9

Table 10: Average and standard deviation of the cost of the fair-k-median methods on Credit cards dataset: 4 colors, 100 subsamples of 1000 distinct points each.
Figure 11: Average cost and execution time of the fair-k-median methods on Diabetes dataset: 8 colors, 100 subsamples of 1000 distinct points each.
Figure 12: Average cost and execution time of the fair-k-median methods on Diabetes dataset: 4 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 4789.9 ± 229.5 | 4789.9 ± 229.5 | 4789.9 ± 229.5
k-median++ | 10354.9 ± 1691.9 | 7218.8 ± 485.1 | 5710.4 ± 396.4
Excellent | 10689.6 ± 1541.6 | 7927.3 ± 431.1 | 6749.5 ± 343.6
Algorithm 1 | 10855.6 ± 1488.8 | 8188.5 ± 435.3 | 7016.6 ± 354.3
Q | 10975.4 ± 1499.6 | 8288.9 ± 449.5 | 7089.5 ± 367.7
Algorithm 2 | 11082.9 ± 1468.3 | 8423.7 ± 490.0 | 7219.0 ± 394.0
[11] | 11148.2 ± 1454.4 | 8653.1 ± 444.2 | 7466.4 ± 392.0

Table 11: Average and standard deviation of the cost of the fair-k-median methods on Diabetes dataset: 8 colors, 100 subsamples of 1000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1: k=n | 3505.7 ± 222.2 | 3505.7 ± 222.2 | 3505.7 ± 222.2
k-median++ | 10354.9 ± 1691.9 | 7218.8 ± 485.1 | 5710.4 ± 396.4
Excellent | 10583.8 ± 1590.7 | 7691.6 ± 458.9 | 6380.3 ± 364.3
Algorithm 1 | 10713.0 ± 1546.6 | 7905.0 ± 446.0 | 6622.5 ± 368.3
Q | 10793.2 ± 1537.9 | 7958.1 ± 461.2 | 6661.0 ± 371.4
Algorithm 2 | 10842.1 ± 1536.2 | 8039.4 ± 470.0 | 6732.3 ± 390.5
[11] | 11078.4 ± 1490.2 | 8428.6 ± 463.1 | 7164.5 ± 392.5

Table 12: Average and standard deviation of the cost of the fair-k-median methods on Diabetes dataset: 4 colors, 100 subsamples of 1000 distinct points each.
Figure 13: Average cost and execution time of the fair-k-median methods combined with fast fairlets decomposition [8] on the CensusII dataset: 8 colors, 100 subsamples of 450000 distinct points each.

Method | cost (mean ± std) at three increasing values of k
Algorithm 1 + FF: k=n | 108369855.9 ± 3054639.9 | 108369855.9 ± 3054639.9 | 108369855.9 ± 3054639.9
k-median++ | 57863762.6 ± 19309933.8 | 32275620.7 ± 3110506.0 | 24170463.4 ± 1923303.3
Excellent + FF | 171581102.9 ± 8929810.2 | 174418606.9 ± 6230287.9 | 171759787.4 ± 5951334.0
Algorithm 1 + FF | 126373453.8 ± 3582880.6 | 124704090.1 ± 3391359.4 | 124237402.6 ± 3429534.5
Q + FF | 126885878.7 ± 4115222.4 | 124681188.1 ± 3454367.0 | 124220881.3 ± 3424855.2
Algorithm 2 + FF | 135025164.1 ± 7359895.1 | 134129796.4 ± 7632188.8 | 133834734.3 ± 7742148.8

Table 13: Average and standard deviation of the cost of the fair-k-median methods combined with fast fairlets decomposition [8] on the CensusII dataset: 8 colors, 100 subsamples of 450000 distinct points each.

For the larger CensusII data set of 450000 points, we used the fast fairlet decomposition by [8] to ensure scalability. Unfortunately, the implementation by [11] could not benefit from this preprocessing step, and the implementation itself was not able to process data sets at this scale. Cost and running time are given in Figure 13. Relative running times are similar to those on the small data sets. The most notable difference is that computing an approximate fair assignment after optimization, as done by Algorithm Excellent, negatively affects the approximation.

6 Conclusion and Future Work

In this paper, we studied the fair clustering problem in which we are given points from distinct protected groups and wish to cluster these points such that every group is equally represented in each cluster. We have presented a generic reduction from fair clustering with multiple protected classes to unconstrained clustering which retains the approximation factor up to constant factors. This result holds for any center-based k-clustering objective, including k-median, k-means, and k-center. Moreover, our reduction is robust to approximation and can be easily combined with methods designed to make clustering in general and fair clustering in particular more scalable [8, 30, 44].

A number of problems are left open by this work. The most challenging one is to determine whether the constant factor loss in the approximation is necessary or not. In other words, does there exist a result showing that fair clustering is strictly harder than unconstrained clustering, for any objective? Since this question is rather general and might be hard to answer, we propose a few simpler problems. First, we showed that in general metrics, fair k-center is APX-hard if the number of colors is at least 3. Does this result also hold for the Euclidean plane? Moreover, what can we say about computing a fair k-median? We also showed that a PTAS for fair clustering exists, provided that k is constant. k-median and k-means in constant dimension admit a PTAS. A natural question is whether such a PTAS also exists for the fair variants of these problems. This problem is open, even in the case of two protected attributes.

Lastly, the balancing constraint we considered in this paper assumes that all protected classes are disjoint and of equal cardinality. If either of these assumptions does not hold, the bicriteria results by [11] and [12] are still the state of the art. Therefore, a further open question is: under which circumstances is a constant factor approximation to these generalizations obtainable?

References

  • [1] Marcel R. Ackermann, Johannes Blömer, and Christian Sohler. Clustering for metric and non-metric distance measures. In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2008, San Francisco, California, USA, January 20-22, 2008, pages 799–808, 2008.
  • [2] Marek Adamczyk, Jaroslaw Byrka, Jan Marcinkowski, Syed M. Meesum, and Michal Wlodarczyk. Constant-factor FPT approximation for capacitated k-median. In 27th Annual European Symposium on Algorithms, ESA 2019, September 9-11, 2019, Munich/Garching, Germany., pages 1:1–1:14, 2019.
  • [3] Alekh Agarwal, Miroslav Dudík, and Zhiwei Steven Wu. Fair regression: Quantitative definitions and reduction-based algorithms. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 120–129, 2019.
  • [4] Gagan Aggarwal, Rina Panigrahy, Tomás Feder, Dilys Thomas, Krishnaram Kenthapadi, Samir Khuller, and An Zhu. Achieving anonymity via clustering. ACM Trans. Algorithms, 6(3):49:1–49:19, 2010.
  • [5] Hyung-Chan An, Aditya Bhaskara, Chandra Chekuri, Shalmoli Gupta, Vivek Madan, and Ola Svensson. Centrality of trees for capacitated k-center. Math. Program., 154(1-2):29–53, 2015.
  • [6] David Arthur and Sergei Vassilvitskii. k-means++: the advantages of careful seeding. In Proceedings of the 18th ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1027 – 1035, 2007.
  • [7] Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala, and Vinayaka Pandit. Local search heuristics for k-median and facility location problems. SIAM J. Comput., 33(3):544–562, 2004.
  • [8] Arturs Backurs, Piotr Indyk, Krzysztof Onak, Baruch Schieber, Ali Vakilian, and Tal Wagner. Scalable fair clustering. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 405–413, 2019.
  • [9] Mihai Badoiu, Sariel Har-Peled, and Piotr Indyk. Approximate clustering via core-sets. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, May 19-21, 2002, Montréal, Québec, Canada, pages 250–257, 2002.
  • [10] Luca Becchetti, Marc Bury, Vincent Cohen-Addad, Fabrizio Grandoni, and Chris Schwiegelshohn. Oblivious dimension reduction for k-means: beyond subspaces and the johnson-lindenstrauss lemma. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019., pages 1039–1050, 2019.
  • [11] Suman Kalyan Bera, Deeparnab Chakrabarty, Nicolas Flores, and Maryam Negahbani. Fair algorithms for clustering. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 4955–4966, 2019.
  • [12] Ioana Oriana Bercea, Martin Groß, Samir Khuller, Aounon Kumar, Clemens Rösner, Daniel R. Schmidt, and Melanie Schmidt. On the cost of essentially fair clusterings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019, September 20-22, 2019, Massachusetts Institute of Technology, Cambridge, MA, USA, pages 18:1–18:22, 2019.
  • [13] Anup Bhattacharya, Ragesh Jaiswal, and Amit Kumar. Faster algorithms for the constrained k-means problem. Theory Comput. Syst., 62(1):93–115, 2018.
  • [14] Avishek Joey Bose and William L. Hamilton. Compositional fairness constraints for graph embeddings. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 715–724, 2019.
  • [15] L. Elisa Celis, Lingxiao Huang, and Nisheeth K. Vishnoi. Multiwinner voting with fairness constraints. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 144–151, 2018.
  • [16] L. Elisa Celis, Damian Straszak, and Nisheeth K. Vishnoi. Ranking with fairness constraints. In 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9-13, 2018, Prague, Czech Republic, pages 28:1–28:15, 2018.
  • [17] Xingyu Chen, Brandon Fain, Liang Lyu, and Kamesh Munagala. Proportionally fair clustering. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 1032–1041, 2019.
  • [18] Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. Fair clustering through fairlets. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS), pages 5036–5044, 2017.
  • [19] Miroslav Chlebík and Janka Chlebíková. Inapproximability results for bounded variants of optimization problems. In Fundamentals of Computation Theory, 14th International Symposium, FCT 2003, Malmö, Sweden, August 12-15, 2003, Proceedings, pages 27–38, 2003.
  • [20] Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2):153–163, 2017.
  • [21] Marek Cygan, MohammadTaghi Hajiaghayi, and Samir Khuller. LP rounding for k-centers with non-uniform hard capacities. In 53rd Annual IEEE Symposium on Foundations of Computer Science, FOCS 2012, New Brunswick, NJ, USA, October 20-23, 2012, pages 273–282, 2012.
  • [22] H. Gökalp Demirci and Shi Li. Constant approximation for capacitated k-median with (1+epsilon)-capacity violation. In 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy, pages 73:1–73:14, 2016.
  • [23] Hu Ding and Manni Liu. On geometric prototype and applications. In 26th Annual European Symposium on Algorithms, ESA 2018, August 20-22, 2018, Helsinki, Finland, pages 23:1–23:15, 2018.
  • [24] Hu Ding and Jinhui Xu. A unified framework for clustering constrained data without locality property. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015, pages 1471–1490, 2015.
  • [25] Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans, Louisiana, USA, January 6-8, 2013, pages 1434–1453, 2013.
  • [26] Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268, 2015.
  • [27] Teofilo F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theor. Comput. Sci., 38:293–306, 1985.
  • [28] Paula Gordaliza, Eustasio del Barrio, Fabrice Gamboa, and Jean-Michel Loubes. Obtaining fairness using optimal transport theory. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 2357–2365, 2019.
  • [29] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3315–3323, 2016.
  • [30] Lingxiao Huang, Shaofeng H.-C. Jiang, and Nisheeth K. Vishnoi. Coresets for clustering with fairness constraints. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 7587–7598, 2019.
  • [31] Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. In 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA, pages 43:1–43:23, 2017.
  • [32] Matthäus Kleindessner, Pranjal Awasthi, and Jamie Morgenstern. Fair k-center clustering for data summarization. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 3448–3457, 2019.
  • [33] Matthäus Kleindessner, Samira Samadi, Pranjal Awasthi, and Jamie Morgenstern. Guarantees for spectral clustering with fairness constraints. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 3458–3467, 2019.
  • [34] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2:83–97, 1955.
  • [35] Jian Li, Ke Yi, and Qin Zhang. Clustering with diversity. In Automata, Languages and Programming, 37th International Colloquium, ICALP 2010, Bordeaux, France, July 6-10, 2010, Proceedings, Part I, pages 188–200, 2010.
  • [36] Shi Li. On uniform capacitated k-median beyond the natural LP relaxation. ACM Trans. Algorithms, 13(2):22:1–22:18, 2017.
  • [37] Konstantin Makarychev, Yury Makarychev, and Ilya P. Razenshteyn. Performance of johnson-lindenstrauss transform for k-means and k-medians clustering. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019., pages 1027–1038, 2019.
  • [38] Jiří Matoušek. On the distortion required for embedding finite metric spaces into normed spaces. Israel Journal of Mathematics, 93:333–344, 1996.
  • [39] Jamie Morgenstern, Samira Samadi, Mohit Singh, Uthaipon Tao Tantipongpipat, and Santosh Vempala. Fair dimensionality reduction and iterative rounding for sdps. CoRR, abs/1902.11281, 2019.
  • [40] Alejandro Noriega-Campero, Michiel Bakker, Bernardo Garcia-Bulle, and Alex Pentland. Active fairness in algorithmic decision making. CoRR, abs/1810.00031, 2018.
  • [41] Clemens Rösner and Melanie Schmidt. Privacy preserving clustering with constraints. In 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9-13, 2018, Prague, Czech Republic, pages 96:1–96:14, 2018.
  • [42] Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas. The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99–121, 2000.
  • [43] Samira Samadi, Uthaipon Tao Tantipongpipat, Jamie H. Morgenstern, Mohit Singh, and Santosh Vempala. The price of fair PCA: one extra dimension. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada., pages 10999–11010, 2018.
  • [44] Melanie Schmidt, Chris Schwiegelshohn, and Christian Sohler. Fair coresets and streaming algorithms for fair k-means. In Approximation and Online Algorithms - 17th International Workshop, WAOA 2019, Munich, Germany, September 12-13, 2019, Revised Selected Papers, pages 232–251, 2019.
  • [45] Uthaipon Tantipongpipat, Samira Samadi, Mohit Singh, Jamie H. Morgenstern, and Santosh S. Vempala. Multi-criteria dimensionality reduction with applications to fairness. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 15135–15145, 2019.
  • [46] Binh Luong Thanh, Salvatore Ruggieri, and Franco Turini. k-nn as an implementation of situation testing for discrimination discovery and prevention. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 502–510, 2011.
  • [47] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web (WWW), pages 1171–1180, 2017.
  • [48] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P. Gummadi. Fairness constraints: Mechanisms for fair classification. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, pages 962–970, 2017.
  • [49] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, Krishna P. Gummadi, and Adrian Weller. From parity to preference-based notions of fairness in classification. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 228–238, 2017.