1 Introduction
The data points in machine learning are often real human beings. There is legitimate concern that traditional machine learning algorithms that are blind to this fact may inadvertently exacerbate problems of bias and injustice in society [25]. Motivated by concerns ranging from the granting of bail in the legal system to the quality of recommender systems, researchers have devoted considerable effort to developing fair algorithms for the canonical supervised learning tasks of classification and regression [13, 28, 20, 27, 34, 11, 30, 35, 26, 18, 21]. We extend this work to a canonical problem in unsupervised learning: centroid clustering. In centroid clustering, we want to partition data into k
clusters by choosing k “centers” and then matching each point to one of the centers. This is a classic context for clustering work [19, 33, 8, 2], and is perhaps best known as the setting for the celebrated k-means heuristic (independently discovered many times; see [22] for a brief history). We provide a novel group-based notion of fairness as proportionality, inspired by recent related work on the fair allocation of public resources [3, 10, 15, 17]. We suppose that data points represent the individuals to whom we wish to be fair, and that these agents prefer to be clustered accurately (that is, they prefer their cluster center to be representative of their features). A solution is fair if it respects the entitlements of groups of agents, where we assume that a subset of agents is entitled to choose a center for themselves if they constitute a sufficiently large fraction of the population with respect to the total number of clusters (e.g., 1/k of the population, if we are clustering into k groups). The guarantee must hold for all subsets of sufficient size, and therefore does not hinge on any particular a priori knowledge about which points should be protected. This is in line with other recent observations that information about which individuals should be protected may not be available in practice [21].

Consider a motivating example where proportional clustering might be preferable to more standard clusterings that minimize the k-means or k-median objective. Suppose there are 3 spherical clusters in the data: A, B, and C, and we are computing a 3-clustering. A, B, and C each contain one third of the total data. The radius of A is very large compared to the radii of B and C, and A is very far away from B and C compared to the radius of A. The radii of B and C are very small, and B and C are close together relative to the radius of A. More simply, A is a large sphere very far away from two small spheres B and C, which are close together.
Simply placing centers at the middle of A, B, and C is proportional. However, the global k-means or k-median minimizer places 1 center for B and C to share, and uses the remaining 2 centers to cover A. Such a solution is arbitrarily far from proportional as the radii of B and C become arbitrarily small. Essentially, the global optimum forces B and C to share a center in order to pay for the high variance in A.
To interpret this example, suppose we are clustering home locations to decide where to build public parks. B and C are dense urban centers, and A is a suburb. Minimizing total distance seems reasonable, but the global optimum builds 2 parks for A, and only 1 that B and C must share. Alternatively, A, B, and C might represent clusters of patients in a medical study. Both solutions distinguish A from B and C, but the global optimum obscures the secondary difference between B and C. In both instances, B or C could represent a protected group (e.g., home location may be racially divided, and race or sex could cause differences in medical data), in which case proportionality provides a guarantee even if we do not have access to this information.
1.1 Preliminaries and Definition of Proportionality
We have a set N of individuals or data points with |N| = n, and a set M of feasible cluster centers with |M| = m. We will sometimes consider the important special case where N = M (i.e., where one is only given a single set of points as input), but most of our results are for the general case where we make no assumption about the relationship between N and M. For all i, j ∈ N ∪ M, we have a distance d(i, j) satisfying the triangle inequality. Our task is centroid clustering as treated in the classic k-median, k-means, and k-center problems. We wish to open a set X ⊆ M of k centers (assume k ≤ m), and then match all points in N to their closest center in X. For a particular solution X and agent i ∈ N, let D(i) := min_{x ∈ X} d(i, x). In general, a good clustering solution will have small values of D(i), although the aforementioned objectives differ slightly in how they measure this. In particular, the k-median objective is Σ_{i ∈ N} D(i), the k-means objective is Σ_{i ∈ N} D(i)², and the k-center objective is max_{i ∈ N} D(i).
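To make the setup concrete, the three objectives can be computed as follows. This is a minimal sketch of our own (the function and variable names are illustrative, not from the paper):

```python
def objectives(N, X, d):
    """k-median, k-means, and k-center objectives of matching each
    point in N to its closest open center in X under metric d."""
    D = [min(d(i, x) for x in X) for i in N]   # D(i) for each point
    return sum(D), sum(v * v for v in D), max(D)

# Toy instance on the real line with the absolute-value metric.
N = [0.0, 1.0, 2.0, 10.0]
X = [1.0, 10.0]
d = lambda i, x: abs(i - x)
med, means, center = objectives(N, X, d)   # here D = [1, 0, 1, 0]
```

All three objectives aggregate the same per-point quantities D(i); they differ only in the aggregation (sum, sum of squares, maximum).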
To define proportional clustering, we assume that individuals prefer to be closer to their center in terms of the distance d (i.e., a closer center is more representative of the point). Any subset of at least ⌈n/k⌉ individuals is entitled to choose a center. We call a solution proportional if there does not exist any such sufficiently large set of individuals who, using the number of centers to which they are entitled, could produce a clustering among themselves that is to their mutual benefit in the sense of Pareto dominance. More formally, one might allow a blocking coalition to consist of a set S of at least ⌈ℓ · n/k⌉ points and a set Y of at most ℓ centers such that min_{y ∈ Y} d(i, y) < D(i) for all i ∈ S. It is easy to see that because each agent only cares about its closest center, this is functionally equivalent to Definition 1; a larger blocking coalition necessarily implies a blocking coalition with a single center.
Definition 1.
Let X ⊆ M with |X| = k. S ⊆ N is a blocking coalition against X if |S| ≥ ⌈n/k⌉ and there exists y ∈ M such that for all i ∈ S, d(i, y) < D(i). X is proportional if there is no blocking coalition against X.
Equivalently, X is proportional if for all S ⊆ N with |S| ≥ ⌈n/k⌉ and for all y ∈ M, there exists i ∈ S with d(i, y) ≥ D(i). It is important to note that this quantification is over all subsets of sufficient size. Hence, in attempting to satisfy the guarantee for a particular subset S, one cannot simply consider S and ignore all of the other points, as N ∖ S may itself contain a subset to which the guarantee applies.
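Although the definition quantifies over all large-enough subsets, checking it reduces to a per-center test: a blocking coalition exists if and only if some single candidate center attracts at least ⌈n/k⌉ points that would strictly gain. The following sketch (ours, with illustrative names) audits a solution this way:

```python
import math

def find_blocking_center(N, M, X, d, rho=1.0):
    """Return a center y in M witnessing that X is not (rho-)proportional,
    or None if no blocking coalition exists. Key observation: a blocking
    coalition exists iff some single y attracts >= ceil(n/k) strict gainers."""
    need = math.ceil(len(N) / len(X))
    D = [min(d(i, x) for x in X) for i in N]   # distance to closest open center
    for y in M:
        gainers = sum(1 for i, Di in zip(N, D) if rho * d(i, y) < Di)
        if gainers >= need:
            return y
    return None

# Twelve points on a line, k = 2, so any 6 points may demand a center.
N = M = [0, 1, 2, 5, 5, 5, 6, 6, 6, 9, 10, 11]
d = lambda a, b: abs(a - b)
bad = find_blocking_center(N, M, [0, 11], d)   # middle points can all deviate
ok = find_blocking_center(N, M, [5, 10], d)    # no blocking center exists
```

The optional `rho` parameter anticipates the approximate version of the definition given later: with `rho > 1`, only deviations that improve an agent's distance by more than a factor of `rho` count.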
It is instructive to briefly consider an example. In Figure 1, k = 2, N = M, and there are n = 12 individuals, represented by the embedded points. Suppose we want to minimize the k-center objective in the pursuit of fairness: we would then choose the red points. However, this is not a proportional solution, because the middle six points constitute half of the points, and would all prefer to be matched to the central blue point. Furthermore, choosing the blue point (and any other center) is a proportional solution, because for any group of six points and any newly proposed center, at least one of the six points will be closer to the blue point than to the proposed center.
Proportionality has many advantages as a notion of fairness in clustering, beyond the intuitive appeal of groups being entitled to a proportional share of centers. We name a few of these advantages explicitly.

Proportionality implies (weak) Pareto optimality: namely, for any proportional solution X, there does not exist another solution X′ such that D_{X′}(i) < D_X(i) for all i ∈ N.

Proportionality is oblivious in the sense that it does not depend on the definition of sensitive attributes or protected subgroups.

Proportionality is robust to outliers in the data, since only groups of points of sufficient size are entitled to their own center.

Proportionality is scale invariant in the sense that a multiplicative scaling of all distances does not affect the set of proportional solutions.

Proportionality can be efficiently audited, in the sense that one does not need to compute the entire pairwise distance matrix in order to check for violations of proportionality, as we show in Section 4.
In the worst case, proportionality is incompatible with all three of the classic k-center, k-means, and k-median objectives; i.e., there exist instances for which any proportional solution has an arbitrarily bad approximation on all three objectives. We present such an instance in Example 1, and show in Section 5 that this behavior also arises in real-world datasets.
Example 1.
There exist problems for which any proportional clustering has an unbounded approximation to the optimal k-center, k-means, and k-median objectives, and conversely, any clustering with a bounded approximation to the optimum on these objectives is not proportional.
Proof.
The simplest example to see this has k = 3, N = M, and n = 6 (i.e., we want to choose 3 centers from six individuals, all of which are possible cluster centers). There are two points at position a, two at position b, and one each at positions c and d. The pairwise distances are given in the following matrix, where L is an arbitrarily large distance.

     a  b  c  d
a    0  1  L  L
b    1  0  L  L
c    L  L  0  L
d    L  L  L  0

Because n = 6 and k = 3, any ⌈n/k⌉ = 2 colocated points are entitled to a center at their own position, so any proportional solution must include a and b. Therefore, one of the points at c or d will have an arbitrarily large value D(i) = L. The optimal solution on any of the three objectives is to instead choose c, d, and one of a or b. ∎
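Under our reconstruction of this instance (a and b at distance 1 from each other; all other distances equal to a large constant L), the incompatibility can be verified by brute force over all choices of 3 centers:

```python
from itertools import combinations
import math

L = 1000.0  # stands in for an arbitrarily large distance
points = ['a', 'a', 'b', 'b', 'c', 'd']        # six individuals, N = M
positions = ['a', 'b', 'c', 'd']
dist = lambda p, q: 0.0 if p == q else (1.0 if {p, q} == {'a', 'b'} else L)

n, k = len(points), 3
need = math.ceil(n / k)  # two colocated points are entitled to a center

def is_proportional(X):
    D = [min(dist(p, x) for x in X) for p in points]
    # proportional iff no center y attracts >= need strictly-gaining points
    return all(sum(1 for p, Dp in zip(points, D) if dist(p, y) < Dp) < need
               for y in positions)

def k_center(X):
    return max(min(dist(p, x) for x in X) for p in points)

results = {X: (is_proportional(X), k_center(X))
           for X in combinations(positions, k)}
# Proportional solutions must include a and b, leaving c or d at distance L;
# solutions with bounded objective are not proportional.
```

Since the k-center objective lower-bounds neither the k-median nor the k-means cost by more than a factor of n, an unbounded k-center value here implies all three objectives are unbounded as L grows.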
Furthermore, as we show in Section 2 and observe empirically in Section 5, proportional solutions may not always exist. We therefore consider the natural approximate notion of proportionality that relaxes the Pareto dominance condition by a multiplicative factor.
Definition 2.
Let ρ ≥ 1. X ⊆ M with |X| = k is ρ-approximately proportional (hereafter ρ-proportional) if for all S ⊆ N with |S| ≥ ⌈n/k⌉ and for all y ∈ M, there exists i ∈ S with ρ · d(i, y) ≥ D(i).
To parse the definition, again consider Figure 1. Although choosing the red points is not a proportional solution, it is an approximately proportional solution. To see this, suppose the middle six agents wish to deviate to the blue point as before. The green agent would decrease its distance to a center by deviating, but not by more than a constant factor, say 3, so the red points would constitute a 3-proportional solution. Note that even with this relaxed notion, it remains true in the worst case that any approximately proportional clustering is incompatible with any approximately optimal clustering on the k-center, k-means, and k-median objectives.
1.2 Results and Outline
In Section 2 we show that proportional solutions may not always exist. In fact, one cannot guarantee better than a 2-proportional solution in the worst case. In contrast, we give a greedy algorithm (Algorithm 1) and prove Theorem 1: the algorithm yields a (1 + √2)-proportional solution (≈ 2.414-proportional) in the worst case.
In Section 3, we treat proportionality as a constraint and seek to optimize the k-median objective subject to that constraint. We show how to write approximate proportionality as a small set of linear constraints. Incorporating these into the standard linear programming relaxation of the k-median problem, we show how to use the rounding from [8] to find an O(1)-proportional solution that is an 8-approximation to the k-median objective of the optimal proportional solution.

In Section 4, we show that proportionality is approximately preserved if we take a random sample of the data points of size Õ(k), where the Õ hides low order terms, including the dependence on the approximation tolerance. This immediately implies that for constant tolerance, we can check whether a given clustering is proportional, as well as compute approximately proportional solutions, in near linear time, comparable to the time taken to run the classic k-means heuristic.
In Section 5, we provide a local search heuristic that efficiently searches for a proportional clustering. Our heuristic is able to consistently find nearly proportional solutions in practice. We test our heuristic and Algorithm 1 empirically against the celebrated k-means heuristic in order to understand the tradeoff between proportionality and the k-means objective. We find that the tradeoff is highly data dependent: though these objectives are compatible on some datasets, there exist others on which they are in conflict.
1.3 Related Work
Unsupervised Learning. Metric clustering is a well-studied problem. There are constant approximation polynomial time algorithms for both the k-median [24, 8, 2, 29, 6] and k-center [19, 33] objectives. Proportionality is a constraint on the centers as opposed to the data points; this makes it difficult to adapt standard algorithmic approaches for k-median and k-means such as local search [2], primal-dual [24], and greedy dual fitting [23]. For instance, our greedy algorithm in Section 2 grows balls around potential centers, which is very different from how balls are grown in the primal-dual schema [24, 29]. Somewhat surprisingly, in Section 3 we show that for the problem of minimizing the k-median objective subject to proportionality as a constraint, we can extend the linear program rounding technique of [8] to get a constant approximation algorithm. However, the additional constraints we add in the linear program formulation render the primal-dual and other methods inapplicable.
In [9] and subsequent generalizations [31, 4], the authors consider fair clustering in terms of balance: there are red and blue points, and a balanced solution has roughly the same ratio of blue to red points in every cluster as in the overall population. The authors are motivated to extract features that cannot be used to discriminate between membership in different groups. This ensures that subsequent regression or classification on these features will be fair between these groups. In contrast, we assume that our data points prefer to be accurately clustered, and that an unfair solution provides accurate clusters for some groups while giving other large groups low quality clusters. Finally, we note that there is a line of work in fair unsupervised learning concerned with constructing word embeddings that avoid bias [5, 7], but these problems seem orthogonal to our concerns in clustering.
Supervised Learning. The standard model in fair supervised learning [13, 28, 27, 35, 34] has a set of protected agents given as input to an algorithm which must classify agents into a positive and a negative group. Most of these notions of fairness do not apply in any natural way to unsupervised learning problems. Our work further differs from the supervised learning literature in that we do not assume information about which agents are to be protected. Instead, we provide a fairness guarantee to arbitrary groups of agents, including protected groups even if we do not know their identity, similar to the ideas considered in [26] and [21].

Fair Resource Allocation. Our notion of proportionality is derived from the notion of the core in economics [32, 16]. The core has been adapted as a natural generalization to groups of the idea of fairness as proportionality [14, 15], similar to other group fairness concepts for public goods that explicitly consider shared resources [10, 3]. In clustering, the public goods are the centers themselves, and the “agents” are the data points, which share the centers. The fair clustering problem differs in that it is framed in terms of costs instead of positive utility, and agents only care about their most preferred good. That is, an agent’s cost for a clustering solution is just the distance to the closest center, as opposed to much of the previous resource allocation literature, where agents have additive utility across the allocated goods. One can interpret our work as computing the core of a resource allocation problem in which agents have a min-cost function with respect to allocations.
2 Existence and Computation of Proportional Solutions
We begin with a negative result: in the worst case, there may not be an exactly proportional solution. The impossibility remains even in the special case when N = M. The latter construction is slightly more involved; we begin with the setting where N and M are arbitrary. The basic idea behind both constructions is to create groups of points very far away from one another, with fewer centers available than groups demand, ensuring that some group will be served by only one center.
Claim 1.
For all ρ < 2, a ρ-proportional solution is not guaranteed to exist.
Proof.
Consider the following instance with n = 6 data points N = {n1, …, n6}, m = 6 feasible centers M = {m1, …, m6}, and k = 3. The points and centers are split into two identical areas that are arbitrarily far apart: {n1, n2, n3} with centers {m1, m2, m3}, and {n4, n5, n6} with centers {m4, m5, m6}. Within each area, distances between points and centers are specified in the following table.

         m1 (m4)  m2 (m5)  m3 (m6)
n1 (n4)     4        1        2
n2 (n5)     2        4        1
n3 (n6)     1        2        4

Notice that the data is separated into two areas. Since k = 3, in a feasible solution we open at most one center in one of these two areas (opening zero centers in an area is only worse). Without loss of generality, suppose that we open exactly one center among {m1, m2, m3}. The instance is symmetric, so again suppose without loss of generality that we open m1. Then consider the individuals n1 and n2. This coalition is of size 2 = ⌈n/k⌉, and both individuals would reduce their distance by a factor of 2 by switching to m3 (n1 from 4 to 2, and n2 from 2 to 1). Thus, any solution is at best 2-proportional. ∎
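Our reading of this instance, two far-apart copies of a 3×3 circulant point–center gadget, can be checked exhaustively: every choice of 3 centers admits a coalition of 2 points that each improve by a factor of 2 by deviating. The sketch below is ours (names are illustrative):

```python
from itertools import combinations

L = 1000.0                      # distance between the two areas
gadget = [(4, 1, 2), (2, 4, 1), (1, 2, 4)]
# d[i][j]: distance from point n_i to center m_j (6 points, 6 centers)
d = [[float(gadget[i % 3][j % 3]) if i // 3 == j // 3 else L
      for j in range(6)] for i in range(6)]

k, need = 3, 2  # ceil(6/3) = 2 points may deviate together

def blocking_factor(X):
    """Largest rho such that some coalition of `need` points improves
    by (at least) a factor rho by deviating to a single center."""
    D = [min(d[i][j] for j in X) for i in range(6)]
    best = 0.0
    for y in range(6):
        gains = sorted((D[i] / d[i][y] for i in range(6)), reverse=True)
        best = max(best, gains[need - 1])  # need-th largest improvement
    return best

worst = min(blocking_factor(X) for X in combinations(range(6), k))
# every solution can be blocked by a factor-2 deviation, and no worse
```

The minimum over all solutions equals exactly 2, matching the claim that no ρ-proportional solution exists for ρ < 2, while 2-proportional solutions do exist here.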
Claim 2.
In the special case where N = M, for all ρ < 1.5, a ρ-proportional solution is not guaranteed to exist.
Proof.
Let N = M and k = 5. There are three identical clusters of 303 points each (so n = 909 and ⌈n/k⌉ = 182). The pairwise distance between any two points in different clusters is arbitrarily large. Construct each cluster as follows. There are six types of points: x1, x2, x3, y1, y2, y3 (all feasible centers, since N = M). There is exactly one point of type x1, one point of type x2, and one point of type x3. There are 100 points each of types y1, y2, and y3 (that is, there are 100 points colocated at each of these positions). The distance between a point of type x_i and a point of type y_j is specified in the following table. The pairwise distance between any two points of distinct types in {x1, x2, x3}, and likewise between any two points of distinct types in {y1, y2, y3}, is equal to 3, which follows from the shortest path distances on a weighted bipartite graph with weights defined by the table.

      y1  y2  y3
x1     4   1   2
x2     2   4   1
x3     1   2   4

Since k = 5 and there are three clusters, in a feasible solution there is a cluster in which we choose at most one center. In that cluster, suppose first that we choose a center of type x1, x2, or x3. Since the instance is symmetric with respect to this choice, suppose without loss of generality that we choose x1. Then the 200 points of types y1 and y3 could decrease their distance by a factor of 2 by switching to x2 (y1 from 4 to 2, and y3 from 2 to 1). Since any ⌈n/k⌉ = 182 points are entitled to deviate, such a choice of center is not ρ-proportional for ρ < 2.

Instead, suppose that we choose a center of type y1, y2, or y3. The instance is again symmetric with respect to this choice, so suppose without loss of generality that we choose y1. Then the 200 points of types y2 and y3 could decrease their distance by a factor of at least 1.5 by switching to x1 (y2 from 3 to 1, and y3 from 3 to 2). Thus, in either case, the solution is not ρ-proportional for any ρ < 1.5. ∎
2.1 Computing a (1 + √2)-Proportional Clustering
Claim 1 establishes that we should focus our attention on designing an efficient approximation algorithm. We give a simple and efficient algorithm that achieves a (1 + √2)-proportional solution, very close to the existential lower bound of 2. For notational ease, let B(y, δ) := {i ∈ N : d(i, y) ≤ δ}. That is, B(y, δ) is the ball (defined on N) of radius δ about center y. For simplicity of exposition, we present Algorithm 1 as a continuous algorithm in which a radius parameter δ is smoothly increasing. The algorithm can easily be discretized using priority queues.
Algorithm 1 runs in Õ(mn) time.¹ In essence, the algorithm grows balls continuously around the centers, and when the ball around a center has “captured” ⌈n/k⌉ unmatched points, we greedily open that center and disregard all of the captured points. Open centers continue to greedily capture points as their balls continue to expand. Though [24, 29] similarly expand balls about points to compute approximately optimal solutions to the k-median problem, there is a crucial difference: they grow balls around data points rather than centers.

¹ To state running times simply, we use the convention that f is Õ(g) if f is O(g) up to polylogarithmic factors.
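The ball-growing process can be sketched as a discrete sweep over the radii at which some ball gains a point. This is our own rendering of the idea (not the authors' priority-queue implementation, and asymptotically slower than the Õ(mn) bound):

```python
import math

def greedy_capture(N, M, k, d):
    """Discretized Greedy Capture sketch: returns the opened centers."""
    n = len(N)
    need = math.ceil(n / k)
    unmatched = set(range(n))
    opened = []
    # Only radii at which some ball gains a point matter.
    radii = sorted({d(N[i], y) for i in range(n) for y in M})
    for delta in radii:
        # Already-open centers keep absorbing points as their balls grow.
        for x in opened:
            unmatched -= {i for i in unmatched if d(N[i], x) <= delta}
        # A closed center opens once `need` unmatched points fall in its ball.
        for y in M:
            if len(opened) < k and y not in opened:
                ball = {i for i in unmatched if d(N[i], y) <= delta}
                if len(ball) >= need:
                    opened.append(y)
                    unmatched -= ball
        if not unmatched:
            break
    return opened

# Two well-separated groups, k = 2: each group captures its own center.
N = M = [0, 1, 2, 100, 101, 102]
d = lambda a, b: abs(a - b)
centers = greedy_capture(N, M, 2, d)
```

In the example, the balls around the middle of each group are the first to capture ⌈6/2⌉ = 3 unmatched points, so one center opens in each group.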
Theorem 1.
Algorithm 1 yields a (1 + √2)-proportional clustering, and there exists an instance for which this bound is tight.
Proof.
Let X be the solution computed by Algorithm 1. First note that X uses at most k centers, since it only opens a center when ⌈n/k⌉ unmatched points are absorbed by the ball around that center, and this can happen at most k times. Now, suppose for a contradiction that X is not a (1 + √2)-proportional clustering. Then there exist S ⊆ N with |S| ≥ ⌈n/k⌉ and y ∈ M such that

(1 + √2) · d(i, y) < D(i) for all i ∈ S.    (1)

Let δ be the distance of the farthest agent from y in S, that is, δ := max_{i ∈ S} d(i, y), and call this agent i*. There are two cases. In the first case, d(i, x) > δ for all i ∈ S and x ∈ X. This immediately yields a contradiction, because it implies that Algorithm 1 would have opened y. In particular, note that S ⊆ B(y, δ), so if d(i, x) > δ for all i ∈ S and x ∈ X, then B(y, δ) would have had at least ⌈n/k⌉ unmatched points when the balls reached radius δ.

In the second case, there exist i ∈ S and x ∈ X such that d(i, x) ≤ δ. This case is drawn below in Figure 2. By the triangle inequality, d(i*, x) ≤ d(i*, y) + d(y, i) + d(i, x) ≤ 2δ + d(i, y). Therefore, D(i*) ≤ 2δ + d(i, y). Also, D(i) ≤ d(i, x) ≤ δ, since x ∈ X. Consider the minimum multiplicative improvement of i and i*:

min{ D(i)/d(i, y), D(i*)/d(i*, y) } ≤ min{ δ/d(i, y), (2δ + d(i, y))/δ } ≤ 1 + √2,

where the final inequality holds because the first term is decreasing and the second increasing in d(i, y), and the two are equal to 1 + √2 at d(i, y) = δ/(1 + √2); this violates Equation (1).
It is not hard to show that there exists an instance for which Algorithm 1 yields exactly this bound. Consider the following instance with N = {p1, p2, q1, q2}, M = {x, y}, and k = 2, so that ⌈n/k⌉ = 2. Distances between points and centers are specified in the following table, where ε > 0 is some small constant; all remaining distances are the induced shortest path distances, which satisfy the triangle inequality.

            x            y
p1, p2      0          √2 − ε
q1        1 − ε        √2 − 1
q2      1 + √2 − ε       1

Note that Algorithm 1 opens x immediately (its ball captures p1 and p2 at radius 0), then absorbs q1 at radius 1 − ε and q2 at radius 1 + √2 − ε; the ball around y never contains two unmatched points, so y is never opened. The coalition {q1, q2} can each reduce their distance by a multiplicative factor approaching 1 + √2 as ε → 0 by deviating to y. ∎
2.2 Local Capture Heuristic
We observe that while our Greedy Capture algorithm (Algorithm 1) always produces an approximately proportional solution, it may not produce an exactly proportional solution in practice, even on instances where such solutions exist (see Figure 3(a) and Figure 3(b)). We therefore introduce a Local Capture heuristic for searching for more proportional clusterings. Algorithm 2 takes a target value of ρ as a parameter, and proceeds by iteratively finding a center that violates ρ-proportionality and swapping it for the center in the current solution that is least demanded.
Every iteration of Algorithm 2 (the entire inner for loop) runs in O(nm) time. There is no guarantee of convergence (for a given input ρ, there may not even exist a ρ-proportional solution), but if Algorithm 2 terminates, then it returns a ρ-proportional solution. In our experiments (see Section 5), we search for the minimum ρ for which the algorithm terminates in a small number of iterations via binary search over possible input values of ρ. In [2], the authors also evaluate a local search swapping procedure for the k-median problem, but their swap condition is based on the relative k-median objective of two solutions, whereas our swap condition is based on violations of proportionality.
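The loop can be sketched as follows. This is our own rendering; the initialization, tie-breaking, and iteration cap are our assumptions, not specified details of Algorithm 2:

```python
import math

def local_capture(N, M, k, d, rho, max_iters=100):
    """Violation-driven local search for a rho-proportional clustering.
    Returns a center set with no rho-violation, or None on non-convergence."""
    need = math.ceil(len(N) / k)
    X = list(M[:k])  # arbitrary initialization
    for _ in range(max_iters):
        D = [min(d(i, x) for x in X) for i in N]
        # a center violating rho-proportionality, if any
        violator = next(
            (y for y in M if y not in X and
             sum(1 for i, Di in zip(N, D) if rho * d(i, y) < Di) >= need),
            None)
        if violator is None:
            return X
        # Swap in the violator for the least-demanded current center.
        demand = {x: 0 for x in X}
        for i in N:
            demand[min(X, key=lambda x: d(i, x))] += 1
        X.remove(min(X, key=lambda x: demand[x]))
        X.append(violator)
    return None

N = M = [0, 1, 2, 100, 101, 102]
d = lambda a, b: abs(a - b)
X = local_capture(N, M, 2, d, rho=1.0)
```

Starting from two centers in the same group, the heuristic here swaps its way to a solution with one center per group, at which point no candidate center attracts ⌈n/k⌉ = 3 strict gainers. Wrapping this call in a binary search over ρ recovers the experimental procedure described above.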
3 Proportionality as a Constraint
One concern with the previous algorithms is that they may find a proportional clustering with poor global objective (e.g., k-median), even when exactly proportional clusterings with good global objectives exist. For example, suppose k = 2 and there are two easily separated clusters, containing 51% and 49% of the data respectively. It is possible that Algorithm 1 will open both centers inside of the larger cluster. This is proportional, but undesirable from an optimization perspective (note that the “correct” clustering of such an example is still proportional). Here, we show how to address this concern by optimizing the k-median objective subject to proportionality as a constraint. Later, in Section 5, we empirically study the tradeoff between the k-means objective and proportionality on real data.
We consider the k-median and k-means objectives to be reasonable measures of the global quality of a solution. We see minimizing the k-center objective more as a competing notion of fairness, and so we focus on optimizing the k-median objective subject to proportionality.² Minimizing the k-median objective without proportionality is a well studied problem in approximation algorithms, and several constant approximations are known [8, 2, 29]. Most of this work is in the model where N = M, and we follow suit in this section. We show the following.

² A constant approximation algorithm for minimizing the k-median objective immediately implies a constant approximation algorithm for minimizing the k-means objective by running the algorithm on the squared distances [29].
Theorem 2.
Suppose there is a ρ-proportional clustering with k-median objective c. In time polynomial in n and m, we can compute an O(ρ)-proportional clustering with k-median objective at most 8c.
In particular, we can compute a constant-approximately proportional clustering with k-median objective at most eight times that of the minimum k-median objective proportional clustering. Note that the exact running time will depend on the algorithm used to solve the linear program. In the remainder of this section, we sketch the proof of Theorem 2. We begin with the standard linear programming relaxation of the k-median minimization problem, and then add a constraint to encode proportionality. The final linear program is shown in Figure 3. Recall that B(y, δ) = {i ∈ N : d(i, y) ≤ δ}.
Minimize    Σ_{i ∈ N} Σ_{j ∈ M} d(i, j) · x_{ij}                          (2)
Subject to  Σ_{j ∈ M} x_{ij} = 1                  for all i ∈ N           (3)
            x_{ij} ≤ y_j                          for all i ∈ N, j ∈ M    (4)
            Σ_{j ∈ M} y_j ≤ k                                             (5)
            Σ_{z ∈ M : d(z, y) ≤ 2ρδ_y} y_z ≥ 1   for all y ∈ M           (6)
            0 ≤ x_{ij}, y_j ≤ 1                   for all i ∈ N, j ∈ M    (7)
In the LP, x_{ij} is an indicator variable equal to 1 if point i is matched to center j. y_j is an indicator variable equal to 1 if j ∈ X, i.e., if we want to use center j in our clustering. Objective (2) is the k-median objective. Constraint (3) requires that every point be matched, and constraint (4) only allows a point to be matched to an open center. Constraint (5) allows at most k centers to be opened, and constraint (7) relaxes the indicator variables to real values between 0 and 1.
Constraint (6) is the new constraint that we introduce. Our crucial lemma argues that constraint (6) approximately encodes proportionality. Let δ_y be the minimum value such that |B(y, δ_y)| ≥ ⌈n/k⌉. In other words, δ_y is the distance from y to the farthest of the ⌈n/k⌉ points in N closest to y.
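Under our reading, constraint (6) asks for one unit of fractional center mass within distance 2ρδ_y of each candidate center y. Both δ_y and these balls are cheap to precompute; a sketch with names of our choosing:

```python
import math

def constraint6_balls(N, M, k, d, rho):
    """For each candidate center y, compute delta_y (distance to its
    ceil(n/k)-th closest data point) and the set of centers within
    2 * rho * delta_y of y. Constraint (6) requires one unit of
    fractional center mass inside each such set."""
    need = math.ceil(len(N) / k)
    balls = {}
    for y in M:
        delta_y = sorted(d(i, y) for i in N)[need - 1]
        balls[y] = [z for z in M if d(y, z) <= 2 * rho * delta_y]
    return balls

# On a line with N = M: each y's ball always contains y itself.
N = M = [0, 1, 2, 10, 11, 12]
d = lambda a, b: abs(a - b)
balls = constraint6_balls(N, M, 3, d, rho=1.0)
```

Each ball yields one row of constraint (6); together with the standard k-median rows, the resulting LP can then be handed to any off-the-shelf solver.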
Lemma 1.
Let X be a clustering, and let ρ ≥ 1. If for every y ∈ M there exists some z ∈ X such that d(y, z) ≤ 2ρδ_y, then X is (1 + 2ρ)-proportional. If X is ρ-proportional, then for every y ∈ M there exists some z ∈ X such that d(y, z) ≤ 2ρδ_y.
Proof.
Suppose that for every y ∈ M there exists some z ∈ X such that d(y, z) ≤ 2ρδ_y. Suppose for a contradiction that X is not (1 + 2ρ)-proportional. Then there exist S ⊆ N with |S| ≥ ⌈n/k⌉ and y ∈ M such that (1 + 2ρ) · d(i, y) < D(i) for all i ∈ S. By assumption, there is z ∈ X such that d(y, z) ≤ 2ρδ_y, so by the triangle inequality, d(i, z) ≤ d(i, y) + 2ρδ_y for all i ∈ S. Therefore, D(i) ≤ d(i, y) + 2ρδ_y for all i ∈ S. However, by the definition of δ_y, since |S| ≥ ⌈n/k⌉, there must exist some i ∈ S such that d(i, y) ≥ δ_y; for this i, D(i) ≤ d(i, y) + 2ρ · d(i, y) = (1 + 2ρ) · d(i, y), a contradiction.

Now suppose that X is ρ-proportional. Let y ∈ M, and consider the set S of the ⌈n/k⌉ closest points in N to y. By the definition of ρ-proportionality, there exists i ∈ S such that ρ · d(i, y) ≥ D(i); let z ∈ X be the center closest to i. Therefore, by the triangle inequality, d(y, z) ≤ d(y, i) + d(i, z) ≤ d(i, y) + ρ · d(i, y) = (1 + ρ) · d(i, y). By the definition of δ_y, d(i, y) ≤ δ_y, so d(y, z) ≤ (1 + ρ)δ_y ≤ 2ρδ_y. ∎
Now, suppose there is a ρ-proportional clustering X* with k-median objective c. Then we write the linear program shown in Figure 3 with radius 2ρδ_y in constraint (6). Lemma 1 guarantees that X* is feasible for the resulting linear program, so the optimal fractional solution has k-median objective at most c. We then round the resulting fractional solution. In [8], the authors give a rounding algorithm for the linear program in Figure 3 without constraint (6). We show that a slight modification to this rounding algorithm also preserves constraint (6) up to a constant factor.
Lemma 2.
Let (x, y) be an optimal fractional solution to the linear program in Figure 3, and let X̄ be the integral solution produced by the modified rounding described in Section 3.1. Then for all y ∈ M, there exists z ∈ X̄ with d(y, z) = O(ρδ_y).
The proof of Lemma 2 is in parts and involves the technical details of [8]. We provide the proof in Section 3.1, but first we complete the overall proof of Theorem 2. Given Lemma 2, applying Lemma 1 again (with the constant from Lemma 2 absorbed into ρ) implies that the result of the rounding is O(ρ)-proportional. Since the k-median objective of the fractional solution is at most c, the fact that the k-median objective of the rounded solution is at most 8c follows directly from the proof in [8]. Tracking the constants yields a proportionality factor of 27ρ, and the constant 27 can be improved to 13 in the special case where N = M. Interestingly, the ostensibly similar primal-dual approach of [24] does not appear amenable to the added constraint of proportionality (in particular, the reduction to facility location from [24] is no longer straightforward).
3.1 Proof of Lemma 2
First, we give a brief overview of the method from [8]. Note that our goal in this argument is to show that the new constraint we added (constraint (6)) is approximately satisfied after the rounding. The authors work in a setting where points can have demands; in our setting, this just corresponds to points in N having a demand of 1, and moving or consolidating demand can be thought of as changing the instance by moving points in N. Note that the original linear program requires N = M, and we follow suit in this proof.
Given a fractional solution to the linear program in Figure 3 as (x, y), let C̄_j := Σ_{z ∈ M} d(j, z) · x_{jz}. That is, C̄_j is the contribution of point j to the k-median objective in the fractional optimum.

Step 1: Consolidate all demands to obtain modified demands d′ such that for all j, j′ with d′_j, d′_{j′} > 0, the locations are sufficiently far apart: d(j, j′) > 4 · max(C̄_j, C̄_{j′}). Let N′ be the set of locations with positive demand after this step, i.e., N′ = {j : d′_j > 0}.

Step 2a: Consolidate open centers by moving each fractionally open center not in N′ to the nearest location in N′. This gives a new solution (x̄, ȳ) with ȳ_j = 0 if j ∉ N′ and ȳ_j ≥ 1/2 if j ∈ N′. We call this a restricted solution.

Step 2b: Modify the solution further to obtain (x̂, ŷ) with ŷ_j = 0 if j ∉ N′ and ŷ_j ∈ {1/2, 1} if j ∈ N′. We call this a half-integral solution.

Step 3: Round (x̂, ŷ) to obtain an integral solution (x̃, ỹ).
We introduce constraint (6) into our linear program, and make a small modification to Step 2b, as described in the subsections below.
3.1.1 Step 1: Consolidating Demands
Our first observation is that during the demand consolidation, it cannot be the case that all of the demand within a given ball B(y, δ_y) from constraint (6) is moved arbitrarily far away.
Lemma 3.
Fix y ∈ M. For each j ∈ B(y, δ_y) that had its demand moved to a location j′ in Step 1, d(y, j′) ≤ (5 + 8ρ)δ_y = O(ρδ_y).
Proof.
Let j ∈ B(y, δ_y), and let j′ be the location to which the demand for j was moved in Step 1. Note that Step 1 is designed so that if demand at j is moved to another point j′, then d(j, j′) ≤ 4C̄_j, and C̄_{j′} ≤ C̄_j. By constraint (6), we know that the demand of j could be completely satisfied by centers fractionally opened within distance 2ρδ_y of y, so C̄_j ≤ d(j, y) + 2ρδ_y ≤ (1 + 2ρ)δ_y. Since d(y, j′) ≤ d(y, j) + d(j, j′), it follows that d(y, j′) ≤ δ_y + 4(1 + 2ρ)δ_y = (5 + 8ρ)δ_y. ∎
3.1.2 Step 2: Consolidating Centers
Next, we argue that the consolidation of fractional centers in Step 2 approximately preserves constraint 6.
Lemma 4.
After Step 2a, for all y ∈ M, at least one unit of fractional center mass remains within distance O(ρδ_y) of y: Σ_{z ∈ N′ : d(y, z) = O(ρδ_y)} ȳ_z ≥ 1.
Proof.
By constraint (6), at least one unit of fractional center mass lies within distance 2ρδ_y of y. Consider any fractionally open center z in this ball. Since N = M, z is also a location carrying demand before consolidation, and by the argument of Lemma 3 the nearest location of N′ to z is within distance 4C̄_z ≤ 4(d(z, y) + 2ρδ_y) = O(ρδ_y). Step 2a moves the center at z to exactly this nearest location, so the mass remains within distance O(ρδ_y) of y. ∎

For our algorithm, we slightly change Step 2b to the following. Let N′₁ = {j ∈ N′ : ȳ_j = 1} and N′₂ = N′ ∖ N′₁. Sort the locations j ∈ N′₂ in decreasing order of d′_j · d(j, s(j)), where s(j) is the location in N′ closest to j (other than j itself). Set ŷ_j = 1 for the first 2k − |N′| − |N′₁| locations in N′₂ (or for all of N′₂ if it has at most that many locations), and ŷ_j = 1/2 otherwise. In other words, the only difference from the original Step 2b in [8] is that points with integral ȳ_j do not participate in the sorting; we simply set ŷ_j = 1 for such points, and then perform the standard rounding on N′₂. [8] use the following statement in their proof, and we show that it still holds after our modification to Step 2b.
Lemma 5.
For any restricted solution (x̄, ȳ), the modified Step 2b gives a half-integral solution (x̂, ŷ) with no greater cost.
Proof.
From Lemma 7 in [8], the cost of the restricted solution (x̄, ȳ) is at most

Σ_{j ∈ N′} d′_j · d(j, s(j)) · (1 − ȳ_j),

where the bound uses ȳ_j ≥ 1/2 for all j ∈ N′. Our algorithm in Step 2b maximizes Σ_{j ∈ N′} d′_j · d(j, s(j)) · ŷ_j for the given budget on Σ_j ŷ_j, hence achieves a cost at most that of (x̄, ȳ). ∎
Lemma 6.
After Step 2b, for all y ∈ M, there is either at least one z with ŷ_z = 1 or at least two z with ŷ_z = 1/2 within distance O(ρδ_y) of y.
Proof.
Given Lemma 4 and the constraints on ȳ after Step 2a, there must be at least one location z ∈ N′ within distance O(ρδ_y) of y with positive demand. If there is exactly one such z, Lemma 4 is equivalent to the statement that ȳ_z ≥ 1, and Step 2b will ensure ŷ_z = 1. If there are at least two such z, all of them will have ŷ_z ≥ 1/2 after Step 2b. ∎
3.1.3 Step 3: Rounding an Integer Solution
[8] gives two rounding schemes, and we use the first one, which at most doubles the cost. The important observation is that any center z with ŷ_z = 1 will be opened in the integral solution (that is, if ŷ_z = 1 then ỹ_z = 1), and for any center z with ŷ_z = 1/2, either z itself or the center in N′ closest to z will be opened. This allows us to complete our argument.
Lemma 7.
For all y ∈ M, min_{z ∈ X̄} d(y, z) = O(ρδ_y), where X̄ is the set of centers opened by the integral solution.
Proof.
By Lemma 6, there are two cases: either there is some z with ŷ_z = 1, or there are at least two z with ŷ_z = 1/2, in either case within distance O(ρδ_y) of y. In the first case, z is opened in the integral solution, so the lemma statement clearly holds. In the second case, we are guaranteed that for each such point z, either we open z itself (in which case the lemma holds) or we open the closest other at least partially open center. But since in this case there are two partially open points within distance O(ρδ_y) of y, their pairwise distance is also O(ρδ_y), so the opened center is within distance O(ρδ_y) of y. ∎
This concludes the proof of Lemma 2. The fact that the rounded solution is an 8-approximation of the k-median objective follows immediately from the proof in [8] given Lemma 5, as all other constraints are still satisfied. Finally, we note that the constant factor of 27 can be tightened to 13 in the special case where N = M. The argument is essentially the same; the crucial improvement comes from the guarantee that for every y ∈ M, there is demand at the center of the ball B(y, δ_y) (namely at y itself). Tracking this demand throughout the rounding leads to the tightened result.
4 Sampling for Linear-Time Implementations and Auditing
In this section, we study proportionality under uniform random sampling (i.e., drawing individuals i.i.d. from the uniform distribution on N). In particular, we show that proportionality is well preserved under random sampling. This allows us to design efficient implementations of Algorithm 1 and Algorithm 2, and to introduce an efficient algorithm for auditing proportionality. We first present the general property and then demonstrate its various applications.

4.1 Proportionality Under Random Sampling
For any solution X ⊆ M of size k and center y ∈ M, define Δ(X, y) as the fraction of points in N that are strictly closer to y than to their nearest center in X. Note that a solution X is not proportional with respect to N if and only if there is some y ∈ M such that Δ(X, y) ≥ 1/k. A random sample approximately preserves this fraction for all solutions X and deviating centers y. The important idea in the proof is that we take a union bound over all possible solutions and deviations, of which there are only |M|^(k+1) combinations.
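As a concrete sketch of this definition (our own illustration, not taken from the text), the deviating fraction Δ(X, y) and the resulting proportionality check can be computed directly; the function names and the Euclidean metric are illustrative assumptions:

```python
import numpy as np

def deviation_fraction(points, centers, y):
    """Delta(X, y): fraction of points strictly closer to candidate center y
    than to their nearest center in the current solution `centers`."""
    # Distance from each point to its nearest center in X.
    d_X = np.min(np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2), axis=1)
    d_y = np.linalg.norm(points - y, axis=1)
    return np.mean(d_y < d_X)

def is_proportional(points, centers, candidates, k):
    """X is proportional iff no candidate center attracts a 1/k fraction."""
    return all(deviation_fraction(points, centers, y) < 1.0 / k
               for y in candidates)
```

Here `candidates` plays the role of M; in the worst case it is every feasible center, which is exactly what makes the exact check quadratic.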
Theorem 3.
Given N, M, and parameters ε, δ ∈ (0, 1), let N′ ⊆ N of size m = O((1/ε²)(k ln |M| + ln(1/δ))) be chosen uniformly at random. Then, with probability at least 1 − δ, the following holds for all X ⊆ M with |X| = k and all y ∈ M: the deviating fraction measured on the sample N′ is within ε of the deviating fraction Δ(X, y) measured on N.

Proof.
Recall that N′ is a random sample of N. Hoeffding’s inequality implies that for any fixed X and y, a sample of size m = O((1/ε²) ln(1/δ′)) is sufficient to bring the deviating fraction on the sample within ε of the deviating fraction on N with probability at least 1 − δ′. Note that there are |M|^(k+1) possible choices of (X, y) over which we take the union bound. Setting δ′ = δ/|M|^(k+1), which gives m = O((1/ε²)(k ln |M| + ln(1/δ))), is sufficient for the union bound to yield the theorem statement. ∎
In order to apply the above theorem, we say that a solution X is proportional to (1 + ε)-deviations if for all y ∈ M and for all S ⊆ N with |S| ≥ (1 + ε)·(n/k), there exists some i ∈ S such that d(i, X) ≤ d(i, y). Note that if X is proportional to 1-deviations, it is simply proportional. We immediately have the following:
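The relaxed condition can be checked the same way as exact proportionality: a minimal sketch (our own illustration, assuming Euclidean distance and illustrative names) that tests proportionality to (1 + ε)-deviations by comparing each candidate's deviating fraction against the inflated threshold (1 + ε)/k:

```python
import numpy as np

def proportional_to_deviations(points, centers, candidates, k, eps):
    """True iff no candidate center y attracts a (1 + eps)/k fraction of
    points strictly closer to y than to their nearest center in `centers`."""
    d_X = np.min(np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2), axis=1)
    for y in candidates:
        d_y = np.linalg.norm(points - y, axis=1)
        if np.mean(d_y < d_X) >= (1.0 + eps) / k:
            return False
    return True
```

With eps = 0 this reduces to the plain proportionality check.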
Corollary 1.
Let N′ ⊆ N be a uniform random sample of size m as in Theorem 3, applied with accuracy parameter ε/k. Suppose X with |X| = k is proportional with respect to N′. Then with probability at least 1 − δ, X is proportional to (1 + ε)-deviations with respect to N.
4.2 Linear Time Implementation
We now consider how to take advantage of Theorem 3 to optimize Algorithm 1 and Algorithm 2. First, note that Algorithm 1 takes time proportional to n·|M|, which is quadratic in the input size. A corollary of Theorem 3 is that we can approximately implement Algorithm 1 in nearly linear time, comparable to the running time of the standard k-means heuristic.
Corollary 2.
Algorithm 1, when run on M and a random sample N′ ⊆ N of size m as in Theorem 3, provides a solution that is proportional to (1 + ε)-deviations with high probability, in time nearly linear in the input size.
We also get a substantial speedup for our Local Capture algorithm. Recall that Local Capture (Algorithm 2) is an iterative algorithm that takes a target value of ρ as a parameter, and if it converges, returns a ρ-proportional clustering. Without sampling, each iteration of Algorithm 2 takes time proportional to n·|M|. Another corollary of Theorem 3 is that it is sufficient to run Local Capture on a random sample of m out of the n points in N in order to search for a clustering that is proportional with respect to (1 + ε)-deviations.
4.3 Efficient Auditing
Alternatively, one might still want to run a non-proportional clustering algorithm, and ask whether the solution produced happens to be proportional. We call this the Audit Problem: given N, M, and X ⊆ M with |X| = k, find the minimum value of ρ such that X is ρ-proportional. It is not too hard to see that one can solve the Audit Problem exactly in time proportional to n·|M| by computing, for each y ∈ M, the quantity ρ(y): the ⌈n/k⌉-th largest value of d(i, X)/d(i, y) over points i ∈ N. We subsequently find the y that maximizes ρ(y). Again, this takes quadratic time, which can be worse than the time taken to find the clustering itself.
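The exact audit described above can be sketched as follows (our own illustration, assuming Euclidean distance; `candidates` plays the role of M, and a point already coincident with a center is treated as never deviating):

```python
import numpy as np

def audit_rho(points, centers, candidates, k):
    """Exact audit: minimum rho such that `centers` is rho-proportional.
    For each candidate deviating center y, rho(y) is the ceil(n/k)-th
    largest ratio d(i, X) / d(i, y); the answer is the maximum over y."""
    n = len(points)
    t = -(-n // k)  # ceil(n/k), the size a blocking coalition must reach
    d_X = np.min(np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2), axis=1)
    best = 0.0
    for y in candidates:
        d_y = np.linalg.norm(points - y, axis=1)
        with np.errstate(divide="ignore", invalid="ignore"):
            ratios = d_X / d_y
        ratios = np.where(d_X == 0, 0.0, ratios)  # a perfectly served point never deviates
        best = max(best, float(np.partition(ratios, n - t)[n - t]))
    return best
```

Each candidate costs O(n) after the one-time nearest-center computation, giving the quadratic n·|M| total mentioned above.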
Consider a slightly relaxed Audit Problem in which we are asked to find the minimum value of ρ such that X is ρ-proportional to (1 + ε)-deviations with probability at least 1 − δ. This problem can be efficiently solved by using a random sample of points to conduct the audit.
Corollary 3.
The relaxed Audit Problem can be solved in time nearly linear in the input size by conducting the audit on a random sample of size m as in Theorem 3.
5 Implementations and Empirical Results
In this section, we study proportionality on real data taken from the UCI Machine Learning Repository [12]. We consider three qualitatively different data sets used for clustering: Iris, Diabetes, and KDD. For each data set, we only have a single set of points given as input, so we take N = M to be the set of all points in the data set. We use the standard Euclidean (L2) distance.
Iris. This data set contains information about the petal dimensions of three different species of iris flowers. There are 50 samples of each species.
Diabetes. The Pima Indians Diabetes data set contains information about 768 diabetes patients, recording features like glucose, blood pressure, age and skin thickness.
KDD. The KDD Cup 1999 data set contains information about sequences of TCP packets. Each packet is classified as normal or as one of twenty-two types of intrusions. Of these 23 classes, normal, “neptune”, and “smurf” account for 98.3% of the data. The data set contains 18 million samples; we work with a subsample of 100,000 points.³

³We run k-means++ on this entire 100,000 point sample. For efficiency, we run our Local Capture algorithm by further sampling 5,000 points uniformly at random to treat as N, and sampling 400 points via the k-means++ initialization to treat as M. For the sake of a fair comparison, we generate a different sample of 400 centers using the k-means++ initialization, which we use to determine the value of ρ we report for both Local Capture and the k-means++ algorithm. The k-means objective is measured on the original 100,000 points for both algorithms.
5.1 Proportionality and k-means Objective Tradeoff
We compare Greedy Capture (Algorithm 1) and Local Capture (Algorithm 2) with the k-means++ algorithm (Lloyd’s algorithm for k-means minimization with the k-means++ initialization [1]) for a range of values of k. For the Iris data set, Local Capture and k-means++ always find an exactly proportional solution (Figure 3(a)), and have comparable k-means objectives (Figure 4(a)). The Iris data set is very simple, with three natural clusters, and validates the intuition that proportionality and the k-means objective are not always opposed.
The Diabetes data set is larger and more complex. As shown in Figure 3(b), k-means++ no longer always finds an exactly proportional solution, while Local Capture always finds a better than 1.01-proportional solution. As shown in Figure 4(b), the k-means objectives of the solutions are separated, although generally of the same order of magnitude.
For the KDD data set, proportionality and the k-means objective appear to be in conflict. Greedy Capture’s performance is comparable to that of Local Capture on KDD, so we omit it for clarity. In Figures 3(c) and 4(c), note that the gap in ρ and in the k-means objective between the k-means++ and Local Capture algorithms is between three and four orders of magnitude. We suspect this is due to the presence of significant outliers in the KDD data set. This is in keeping with the theoretical impossibility of simultaneously approximating the optima of both objectives, and demonstrates that this tension arises in practice as well as in theory.
5.2 Proportionality and Low k-means Objective
Note that if one is allowed to use 2k centers when k is given as input, one can trivially achieve both the proportionality of Local Capture and the k-means objective of the k-means++ algorithm by taking the union of the two solutions. Thinking in this way leads to a different way of quantifying the tradeoff between proportionality and the k-means objective: given an approximately proportional solution, how many extra centers are necessary to achieve a k-means objective comparable to that of the k-means++ algorithm? For a given data set, the answer is a value between 0 and k, where larger numbers indicate more incompatibility and lower numbers indicate less.
To answer this question, we compute the union of the centers found by Local Capture and the k-means++ algorithm. We then greedily remove centers as long as doing so does not increase the minimum ρ such that the solution is ρ-proportional (defined on N, not the sample N′) by more than a multiplicative factor α, and does not increase the k-means objective by more than a multiplicative factor β.
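A minimal sketch of this pruning heuristic, under our own simplifying assumptions (Euclidean k-means cost, the audit run against all points as candidate deviating centers, and ρ floored at 1 so that an already-proportional union does not block removals):

```python
import numpy as np

def kmeans_cost(points, centers):
    """Sum of squared distances to the nearest center."""
    d = np.min(np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2), axis=1)
    return float(np.sum(d ** 2))

def min_rho(points, centers, k):
    """Smallest rho making `centers` rho-proportional, auditing against all
    points as candidate deviating centers (the exact audit with M = N)."""
    n = len(points)
    t = -(-n // k)  # ceil(n/k)
    d_X = np.min(np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2), axis=1)
    best = 0.0
    for y in points:
        d_y = np.linalg.norm(points - y, axis=1)
        with np.errstate(divide="ignore", invalid="ignore"):
            r = d_X / d_y
        r = np.where(d_X == 0, 0.0, r)  # a perfectly served point never deviates
        best = max(best, float(np.partition(r, n - t)[n - t]))
    return best

def prune_union(points, union_centers, k, alpha, beta):
    """Greedily drop centers from the union while the proportionality audit
    stays within factor alpha and the k-means cost within factor beta."""
    centers = list(map(tuple, union_centers))
    base_rho = max(min_rho(points, np.array(centers), k), 1.0)
    base_cost = kmeans_cost(points, np.array(centers))
    improved = True
    while improved and len(centers) > k:
        improved = False
        for c in list(centers):
            trial = np.array([x for x in centers if x != c])
            if (min_rho(points, trial, k) <= alpha * base_rho
                    and kmeans_cost(points, trial) <= beta * base_cost):
                centers.remove(c)
                improved = True
                break
    return np.array(centers)
```

On two well-separated clusters with duplicated centers, the heuristic prunes the union back down to one center per cluster.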
On the KDD data set, we set α = 1.2 and β = 1.5, so the proportionality of the result is within a factor 1.2 of Local Capture in Figure 3(c), and the k-means objective is within a factor 1.5 of k-means++ in Figure 4(c). We observe that this heuristic uses only a small number of extra centers for any k. So while there is real tension between proportionality and the k-means objective, this tension is still not maximal: in the worst case, one might need to add k centers to a proportional solution to compete with the k-means objective of the k-means++ algorithm, but in practice we find that far fewer suffice.
6 Conclusion and Open Directions
We have introduced proportionality as a fair solution concept for centroid clustering. Although exact proportional solutions may not exist, we gave efficient algorithms for computing approximately proportional solutions, and considered constrained optimization and sampling for further applications. Finally, we studied proportionality on real data and observed a data-dependent tradeoff between proportionality and the k-means objective. While this tradeoff is in some sense a negative result, it also demonstrates that proportionality as a fairness guarantee matters, in the sense that it meaningfully constrains the space of solutions.
We have shown that exact proportional solutions need not exist, while (1 + √2)-proportional solutions always exist. Closing this approximability gap is one outstanding question. Another is whether there is a more efficient and easily interpretable algorithm for optimizing total cost subject to proportionality, as our approach in Section 3 requires solving a linear program on the entire data set; we would ideally like a primal-dual or local search algorithm. More generally, what other fair solution concepts for clustering should be considered alongside proportionality, and can we characterize their relative advantages and disadvantages? Finally, can the idea of proportionality as a group fairness concept be adapted for supervised learning tasks like classification and regression?
Acknowledgements
Brandon Fain is supported by NSF grants CCF-1408784 and IIS-1447554. Kamesh Munagala is supported by NSF grants CCF-1408784, CCF-1637397, and IIS-1447554, and by research awards from Adobe and Facebook.
References
 [1] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1027–1035, 2007.
 [2] V. Arya, N. Garg, R. Khandekar, A. Meyerson, K. Munagala, and V. Pandit. Local search heuristics for k-median and facility location problems. SIAM Journal on Computing, 33(3):544–562, 2004.
 [3] H. Aziz, M. Brill, V. Conitzer, E. Elkind, R. Freeman, and T. Walsh. Justified representation in approval-based committee voting. Social Choice and Welfare, 48(2):461–485, 2017.
 [4] S. K. Bera, D. Chakrabarty, and M. Negahbani. Fair algorithms for clustering. arXiv e-prints, arXiv:1901.02393, Jan 2019.
 [5] T. Bolukbasi, K.-W. Chang, J. Y. Zou, V. Saligrama, and A. T. Kalai. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (NIPS), pages 4349–4357, 2016.
 [6] J. Byrka, T. Pensyl, B. Rybicki, A. Srinivasan, and K. Trinh. An improved approximation for k-median and positive correlation in budgeted optimization. ACM Transactions on Algorithms (TALG), 13(2):23:1–23:31, 2017.
 [7] A. Caliskan, J. J. Bryson, and A. Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017.
 [8] M. Charikar, S. Guha, É. Tardos, and D. B. Shmoys. A constant-factor approximation algorithm for the k-median problem. Journal of Computer and System Sciences, 65(1):129–149, 2002.
 [9] F. Chierichetti, R. Kumar, S. Lattanzi, and S. Vassilvitskii. Fair clustering through fairlets. In Advances in Neural Information Processing Systems (NIPS), pages 5029–5037. 2017.
 [10] V. Conitzer, R. Freeman, and N. Shah. Fair public decision making. In Proceedings of the 2017 ACM Conference on Economics and Computation (EC), pages 629–646, 2017.
 [11] S. CorbettDavies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 797–806, 2017.
 [12] D. Dheeru and E. Karra Taniskidou. UCI machine learning repository, 2017.
 [13] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS), pages 214–226, 2012.
 [14] B. Fain, A. Goel, and K. Munagala. The core of the participatory budgeting problem. In Proceedings of the 12th International Conference on Web and Internet Economics (WINE), pages 384–399, 2016.
 [15] B. Fain, K. Munagala, and N. Shah. Fair allocation of indivisible public goods. In Proceedings of the 2018 ACM Conference on Economics and Computation (EC), pages 575–592, 2018.
 [16] D. K. Foley. Lindahl’s solution and the core of an economy with public goods. Econometrica, 38(1):66–72, 1970.
 [17] N. Garg, A. Goel, and B. Plaut. Markets for public decision-making. arXiv e-prints, arXiv:1807.10836, July 2018.
 [18] N. Goel, M. Yaghini, and B. Faltings. Non-discriminatory machine learning through convex fairness criteria. In Proceedings of the 2018 AAAI Conference on Artificial Intelligence (AAAI), pages 3029–3036, 2018.
 [19] T. F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293–306, 1985.
 [20] M. Hardt, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems (NIPS), pages 3315–3323, 2016.
 [21] T. Hashimoto, M. Srivastava, H. Namkoong, and P. Liang. Fairness without demographics in repeated loss minimization. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 1929–1938, 2018.
 [22] A. K. Jain. Data clustering: 50 years beyond k-means. Pattern Recognition Letters, 31(8):651–666, 2010.
 [23] K. Jain, M. Mahdian, and A. Saberi. A new greedy approach for facility location problems. In Proceedings of the Thirty-fourth Annual ACM Symposium on Theory of Computing (STOC), pages 731–740, 2002.
 [24] K. Jain and V. V. Vazirani. Primal-dual approximation algorithms for metric facility location and k-median problems. In 40th Annual Symposium on Foundations of Computer Science (FOCS), pages 2–13, 1999.
 [25] J. Angwin, J. Larson, S. Mattu, and L. Kirchner. Machine bias. ProPublica, 2016.
 [26] M. Kearns, S. Neel, A. Roth, and Z. S. Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 2569–2577, 2018.
 [27] J. Kleinberg, H. Lakkaraju, J. Leskovec, J. Ludwig, and S. Mullainathan. Human decisions and machine predictions. Working Paper 23180, National Bureau of Economic Research, 2017.
 [28] J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent trade-offs in the fair determination of risk scores. arXiv e-prints, 2016.
 [29] R. R. Mettu and C. G. Plaxton. Optimal time bounds for approximate clustering. Machine Learning, 56(1–3):35–60, 2004.
 [30] G. Pleiss, M. Raghavan, F. Wu, J. Kleinberg, and K. Q. Weinberger. On fairness and calibration. In Advances in Neural Information Processing Systems (NIPS), pages 5680–5689. 2017.
 [31] C. Rösner and M. Schmidt. Privacy Preserving Clustering with Constraints. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), pages 96:1–96:14, 2018.
 [32] H. E. Scarf. The core of an n-person game. Econometrica, 35(1):50–69, 1967.
 [33] D. B. Shmoys, E. Tardos, and K. Aardal. Approximation algorithms for facility location problems. In Proceedings of the Twentyninth Annual ACM Symposium on Theory of Computing (STOC), pages 265–274, 1997.
 [34] M. B. Zafar, I. Valera, M. Gomez Rodriguez, and K. P. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web (WWW), pages 1171–1180, 2017.
 [35] M. B. Zafar, I. Valera, M. Rodriguez, K. Gummadi, and A. Weller. From parity to preferencebased notions of fairness in classification. In Advances in Neural Information Processing Systems (NIPS), pages 229–239. 2017.