1 Introduction
The study of fairness in machine learning is driven by an abundance of examples where learning algorithms were perceived as discriminating against protected groups (Sweeney, 2013; Datta, Tschantz, and Datta, 2015). Addressing this problem requires a conceptual — perhaps even philosophical — understanding of what fairness means in this context. In other words, the million-dollar question is (arguably; recent work takes a somewhat different view (Kilbertus et al., 2017)) this: What are the formal constraints that fairness imposes on learning algorithms? On a very high level, most of the answers proposed so far (Luong, Ruggieri, and Turini, 2011; Dwork et al., 2012; Zemel et al., 2013; Hardt, Price, and Srebro, 2016; Joseph et al., 2016; Zafar et al., 2017) fall into two (partially overlapping) categories: individual fairness notions, and group fairness notions.
In the former category, the best known example is the influential fair classification model of Dwork et al. (2012). The model involves a set of individuals and a set of outcomes. It is instructive to think of financially motivated settings where the outcomes are, say, credit card offerings or displayed advertisements, and a loss function represents the benefit (e.g., in terms of revenue) of mapping a given individual to a given outcome. The centerpiece of the model is a similarity metric on the space of individuals; it is specific to the classification task at hand, and ideally captures the ethical ground truth about relevant attributes. For example, a man and a woman who are similar in every other way should be considered similar for the purpose of credit card offerings, but perhaps not for lingerie advertisements. Assuming such a metric is available, fairness can be naturally formalized as a Lipschitz constraint, which requires that individuals who are close according to the similarity metric be mapped to distributions over outcomes that are close according to some standard metric (such as total variation). The algorithmic problem is then to find a classifier that minimizes loss, subject to the Lipschitz constraint.

As attractive as this model is, it has one clear weakness from a practical viewpoint: the availability of a similarity metric. Dwork et al. (2012) are well aware of this issue; they write that justifying this assumption is “one of the most challenging aspects” of their approach. They add that “in reality the metric used will most likely only be society’s current best approximation to the truth.” But, despite recent progress on automating ethical decisions in certain domains (Noothigattu et al., 2018; Freedman et al., 2018), the task-specific nature of the similarity metric makes even a credible approximation thereof seem unrealistic. In particular, if one wanted to learn a similarity metric, it is unclear what type of examples a relevant dataset would consist of.
An alternative notion of individual fairness, therefore, is called for. Our proposal draws on an extensive body of work on rigorous approaches to fairness, which — modulo one possible exception (see Section 1.2) — has not been tapped by machine learning researchers: the literature on fair division (Brams and Taylor, 1996; Moulin, 2003). The most prominent notion is that of envy-freeness (Foley, 1967; Varian, 1974), which, in the context of the allocation of goods, requires that the utility of each individual for his allocation be at least as high as his utility for the allocation of any other individual; this is the gold standard of fairness for problems such as cake cutting (Robertson and Webb, 1998; Procaccia, 2013) and rent division (Su, 1999; Gal et al., 2017).
Similarly, in the classification setting, envy-freeness would simply mean that the utility of each individual for his distribution over outcomes is at least as high as his utility for the distribution over outcomes assigned to any other individual. For example, it may well be the case that Bob is offered a worse credit card than that offered to Alice (in terms of, say, annual fees), but this outcome is not unfair if Bob is genuinely more interested in the card offered to him because he does not qualify for Alice’s card, or because its specific rewards program better fits his needs. Such rich utility functions are also evident in the context of job advertisements (Datta, Tschantz, and Datta, 2015): people generally want higher paying jobs, but would presumably have higher utility for seeing advertisements for jobs that better fit their qualifications and interests.
Of course, as before, envy-freeness requires access to individuals’ utility functions, but — in stark contrast to the similarity metric of Dwork et al. (2012) — we do not view this assumption as a barrier to implementation. Indeed, there are a variety of techniques for learning utility functions (Chajewska, Koller, and Ormoneit, 2001; Nielsen and Jensen, 2004; Balcan et al., 2012). Moreover, in our running example of advertising, one can even think of standard measures like expected click-through rate (CTR) as an excellent proxy for utility.
It is worth noting that the classification setting is different from classic fair division problems in that the “goods” (outcomes) are non-excludable. In fact, one envy-free solution simply assigns each individual to his favorite outcome; but when the loss function disagrees with the utility functions, it may be possible to achieve smaller loss without violating the envy-freeness constraint.
In summary, we view envy-freeness as a compelling, well-established, and, importantly, practicable notion of individual fairness for classification tasks. Our goal is to understand its learning-theoretic properties.
1.1 Our Results
The technical challenge we face is that the space of individuals is potentially huge, yet we seek to provide universal envy-freeness guarantees. To this end, we are given a sample consisting of individuals drawn from an unknown distribution. We are interested in learning algorithms that minimize loss, subject to satisfying the envy-freeness constraint, on the sample. Our primary technical question is that of generalizability, that is, given a classifier that is envy-free on a sample, is it approximately envy-free on the underlying distribution? Surprisingly, Dwork et al. (2012) do not study generalizability in their model, and we are aware of only one subsequent paper that takes a learning-theoretic viewpoint on individual fairness and gives theoretical guarantees (see Section 1.2).
In Section 3, we do not constrain the classifier in question. Therefore, we need some strategy to extend a classifier that is defined on a sample; assigning an individual the same outcome as his nearest neighbor in the sample is a popular choice. However, we show that any strategy for extending a classifier from a sample, on which it is envy-free, to the entire set of individuals is unlikely to be approximately envy-free on the underlying distribution, unless the sample is exponentially large.
For this reason, in Section 4, we focus on structured families of classifiers. On a high level, our goal is to relate the combinatorial richness of the family to generalization guarantees. One obstacle is that standard notions of dimension do not extend to the analysis of randomized classifiers, whose range is distributions
over outcomes (equivalently, real vectors). We circumvent this obstacle by considering mixtures of
deterministic classifiers that belong to a family of bounded Natarajan dimension (an extension of the well-known VC dimension to multiclass classification). Our main technical result asserts that, under this assumption, envy-freeness on a sample does generalize to the underlying distribution, even if the sample is relatively small (its size grows almost linearly in the Natarajan dimension). Finally, we discuss the implications of this result in Section 5.
1.2 Related Work
Conceptually, our work is most closely related to work by Zafar et al. (2017). They are interested in group notions of fairness, and advocate preference-based notions instead of parity-based notions. In particular, they assume that each group has a utility function for classifiers, and define the preferred treatment property, which requires that the utility of each group for its own classifier be at least its utility for the classifier assigned to any other group. Their model and results focus on the case of binary classification where there is a desirable outcome and an undesirable outcome, so the utility of a group for a classifier is simply the fraction of its members that are mapped to the desirable outcome. Although, at first glance, this notion seems similar to envy-freeness, it is actually fundamentally different. (On a philosophical level, the fair division literature deals exclusively with individual notions of fairness. In fact, even in group-based extensions of envy-freeness (Manurangsi and Suksompong, 2017), the allocation is shared by groups, but individuals must not be envious. We subscribe to the view that group-oriented notions, such as statistical parity, are objectionable, because the outcome can be patently unfair to individuals.) Our paper is also completely different from that of Zafar et al. in terms of technical results; theirs are purely empirical in nature, and focus on the increase in accuracy obtained when parity-based notions of fairness are replaced with preference-based ones.
Very recent, concurrent work by Rothblum and Yona (2018) provides generalization guarantees for the metric notion of individual fairness introduced by Dwork et al. (2012), or, more precisely, for an approximate version thereof. There are two main differences compared to our work: first, we propose envy-freeness as an alternative notion of fairness that circumvents the need for a similarity metric. Second, they focus on randomized binary classification, which amounts to learning a real-valued function, and so are able to make use of standard Rademacher complexity results to show generalization. By contrast, standard tools do not directly apply in our setting. It is worth noting that several other papers provide generalization guarantees for notions of group fairness, but these are more distantly related to our work (Zemel et al., 2013; Woodworth et al., 2017; Donini et al., 2018; Kearns et al., 2018; Hébert-Johnson et al., 2018).
2 The Model
We assume that there is a space X of individuals, a finite space Y of outcomes, and a utility function u : X × Y → [0, 1] encoding the preferences of each individual for the outcomes in Y. In the advertising example, individuals are users, outcomes are advertisements, and the utility function reflects the benefit an individual derives from being shown a particular advertisement. For any distribution p ∈ Δ(Y) (where Δ(Y) is the set of distributions over Y) we let u(x, p) = E_{y∼p}[u(x, y)] denote individual x’s expected utility for an outcome sampled from p. We refer to a function h : X → Δ(Y) as a classifier, even though it can return a distribution over outcomes.
2.1 Envy-Freeness
Roughly speaking, a classifier is envy-free if no individual prefers the outcome distribution of someone else over his own.
Definition 1.
A classifier h : X → Δ(Y) is envy-free (EF) on a set S of individuals if u(x, h(x)) ≥ u(x, h(x′)) for all x, x′ ∈ S. Similarly, h is (α, β)-EF with respect to a distribution P on X if Pr_{x,x′∼P}[u(x, h(x)) < u(x, h(x′)) − β] ≤ α.
Finally, h is (α, β)-pairwise EF on a set of pairs of individuals S = {(x_i, y_i)}_{i=1}^n if (1/n) |{i ∈ [n] : u(x_i, h(x_i)) < u(x_i, h(y_i)) − β}| ≤ α.
Any classifier that is EF on a sample S of individuals is also pairwise EF on any pairing of the individuals in S, for any α and β. The weaker pairwise EF condition is all that is required for our generalization guarantees to hold.
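To make the definitions concrete, here is a small sketch (our own; the names `is_envy_free`, `U`, and `H` are illustrative, not from the paper) that checks envy-freeness on a sample, with utilities and assignment probabilities stored as matrices:

```python
import numpy as np

def is_envy_free(U, H, beta=0.0):
    """Check envy-freeness on a sample.

    U[i, k]: utility of individual i for outcome k.
    H[i, k]: probability that the classifier assigns outcome k to individual i.
    Individual i envies j if i's expected utility for j's outcome distribution
    exceeds i's expected utility for his own by more than beta.
    """
    own = (U * H).sum(axis=1)   # expected utility of each i for his own distribution
    cross = U @ H.T             # cross[i, j] = expected utility of i for j's distribution
    return bool((cross <= own[:, None] + beta).all())

# Three individuals, two outcomes: everyone receives his favorite outcome,
# which is always envy-free.
U = np.array([[1.0, 0.2], [0.3, 0.9], [0.8, 0.1]])
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
```

Swapping rows of `H` so that some individual receives an outcome he values less than a neighbor's makes the check fail, matching the definition above.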
2.2 Optimization and Learning
Our formal learning problem can be stated as follows. Given sample access to an unknown distribution P over individuals and their utility functions, and a known loss function ℓ : X × Y → [0, 1], find a classifier h that is (α, β)-EF with respect to P minimizing the expected loss E_{x∼P}[ℓ(x, h(x))], where, for x ∈ X and p ∈ Δ(Y), ℓ(x, p) = E_{y∼p}[ℓ(x, y)].
We follow the empirical risk minimization (ERM) learning approach, i.e., we collect a sample of individuals drawn i.i.d. from the distribution and find an EF classifier with low loss on the sample. Formally, given a sample of individuals and their utility functions, we are interested in a classifier that minimizes empirical loss among all classifiers that are EF on the sample. The algorithmic problem itself is beyond the scope of the current paper; see Section 5 for further discussion.
Recall that we consider randomized classifiers that can assign a distribution over outcomes to each of the individuals. However, one might wonder whether the EF classifier that minimizes loss on a sample happens to always be deterministic. Or, at least, the optimal deterministic classifier on the sample might incur a loss that is very close to that of the optimal randomized classifier. If this were true, we could restrict ourselves to deterministic classifiers that map each individual to a single outcome, which would be much easier to analyze. Unfortunately, it turns out that this is not the case. In fact, there could be an arbitrary (multiplicative) gap between the optimal randomized EF classifier and the optimal deterministic EF classifier. The intuition behind this is as follows. A deterministic classifier that has very low loss on the sample, but is not EF, would be completely discarded in the deterministic setting. On the other hand, a randomized classifier could take this loss-minimizing deterministic classifier and mix it with a classifier with high “negative envy”, so that the mixture ends up being EF and at the same time has low loss. This is made concrete in Example 1 in the appendix.
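The mixing intuition can be illustrated with a small sketch (our own construction; the names and the closed-form step below are illustrative, not from the paper). Given a non-EF deterministic classifier G and the "favorites" classifier F, whose envy terms are nonpositive, the linearity of the envy constraints yields the smallest weight on F that makes the mixture EF:

```python
import numpy as np

def min_mixing_weight(U, G, F):
    """Smallest lam such that lam*F + (1-lam)*G is envy-free.

    U[i, k]: utility of individual i for outcome k.
    G, F: rows are outcome distributions; F gives each individual his
    favorite outcome, so envy under F is nonpositive ("negative envy").
    Envy of i toward j under the mixture is linear in lam, so each
    violated constraint pins down a threshold a / (a - b).
    """
    n = U.shape[0]
    lam = 0.0
    for i in range(n):
        for j in range(n):
            a = U[i] @ (G[j] - G[i])   # envy of i toward j under G
            b = U[i] @ (F[j] - F[i])   # envy under F (assumed <= 0)
            if a > 0:
                lam = max(lam, a / (a - b))
    return lam
```

For two individuals whose favorites are swapped by G, the computation shows that an even mixture already removes all envy while retaining half of G's loss profile.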
3 Arbitrary Classifiers
An important (and typical) aspect of our learning problem is that the classifier needs to provide an outcome distribution for every individual, not just those in the sample. For example, if the classifier chooses advertisements for visitors of a website, it should still apply when a new visitor arrives. Moreover, when we use the classifier for new individuals, it must continue to be EF. In this section, we consider two-stage approaches that first choose outcome distributions for the individuals in the sample, and then extend those decisions to the rest of the space.
In more detail, we are given a sample of individuals and a classifier assigning outcome distributions to each individual in the sample. Our goal is to extend these assignments to a classifier that can be applied to new individuals as well. For example, the given classifier could be the loss-minimizing EF classifier on the sample.
For this section, we assume that the space of individuals is equipped with a distance metric d. Moreover, we assume in this section that the utility functions are L-Lipschitz with respect to d; that is, for every outcome and for all pairs of individuals x and x′, we have |u(x, y) − u(x′, y)| ≤ L · d(x, x′).
Under the foregoing assumptions, one natural way to extend the classifier from the sample to the whole space is to assign each new individual the same outcome distribution as his nearest neighbor in the sample, with respect to the metric (breaking ties arbitrarily). The following simple result (whose proof is relegated to Appendix B) establishes that this approach preserves envy-freeness in cases where the sample is exponentially large.
Theorem 1.
Let be a metric on , be a distribution on , and be an Lipschitz utility function. Let be a set of individuals such that there exists with and . Then for any classifier that is EF on , the extension given by is EF on .
The conditions of Theorem 1 require that the set of individuals is a net for at least a fraction of the mass of on . In several natural situations, an exponentially large sample guarantees that this occurs with high probability. For example, if is a subset of , , and has diameter at most , then for any distribution on , if is an i.i.d. sample of size , it will satisfy the conditions of Theorem 1 with probability at least . This sampling result is folklore, but, for the sake of completeness, we prove it in Lemma 5 of Appendix B.
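The nearest neighbor extension itself is straightforward to implement; a minimal sketch (ours, assuming a Euclidean metric, with ties broken by lowest index):

```python
import numpy as np

def nearest_neighbor_extension(sample_X, sample_H):
    """Extend a classifier defined on a sample to the whole space.

    sample_X[i]: feature vector of the i-th sampled individual.
    sample_H[i]: outcome distribution assigned to that individual.
    A new individual receives the outcome distribution of its nearest
    neighbor in the sample (Euclidean metric; argmin breaks ties by index).
    """
    def h(x):
        dists = np.linalg.norm(sample_X - x, axis=1)
        return sample_H[int(np.argmin(dists))]
    return h
```

Theorem 1 then says that this extension inherits approximate envy-freeness whenever the sample is a sufficiently fine net of the space, which is exactly where the exponential sample size enters.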
However, the exponential upper bound given by the nearest neighbor strategy is as far as we can go in terms of generalizing envy-freeness from a sample (without further assumptions). Specifically, our next result establishes that any algorithm — even randomized — for extending classifiers from the sample to the entire space requires an exponentially large sample of individuals to ensure envy-freeness on the distribution. The proof of Theorem 2 can be found in Appendix B.
Theorem 2.
There exists a space of individuals , and a distribution over such that, for every randomized algorithm that extends classifiers on a sample to , there exists an Lipschitz utility function such that, when a sample of individuals of size is drawn from without replacement, there exists an EF classifier on for which, with probability at least jointly over the randomness of and , its extension by is not EF with respect to for any and .
We remark that a similar result would hold even if we sampled with replacement; we sample here without replacement purely for ease of exposition.
Proof of Theorem 2.
Let the space of individuals be and the outcomes be . We partition the space into cubes of side length . So, the total number of cubes is . Let these cubes be denoted by , and let their centers be denoted by . Next, let
be the uniform distribution over the centers
. For brevity, whenever we say “utility function” in the rest of the proof, we mean “Lipschitz utility function.”

To prove the theorem, we use Yao’s minimax principle (Yao, 1977). Specifically, consider the following two-player zero-sum game. Player 1 chooses a deterministic algorithm that extends classifiers on a sample to the whole space, and player 2 chooses a utility function. For any subset of individuals, define the classifier that assigns each individual in the subset to his favorite outcome with respect to the utility function, breaking ties lexicographically. Define the cost of playing an algorithm against a utility function as the probability over the sample (drawn from the distribution without replacement) that the resulting extension is not EF with respect to the distribution. Yao’s minimax principle implies that for any randomized algorithm, its expected cost with respect to the worst-case utility function is at least as high as the expected cost of any distribution over utility functions that is played against the best deterministic algorithm (which is tailored for that distribution). Therefore, we establish the desired lower bound by choosing a specific distribution over utility functions, and showing that the best deterministic algorithm against it incurs the claimed expected cost.
To define this distribution over utility functions, we first sample outcomes i.i.d. from Bernoulli(). Then, we associate each cube center with the outcome , and refer to this outcome as the favorite of . For brevity, let denote the outcome other than , i.e. . For any , we define the utility function as follows. Letting be the cube that belongs to,
(1) 
See Figure 1 for an illustration.
We claim that the utility function of Equation (1) is indeed Lipschitz with respect to any norm. This is because for any cube , and for any , we have
Moreover, for the other outcome, we have . It follows that is Lipschitz within every cube. At the boundary of the cubes, the utility for any outcome is , and hence is also continuous throughout . Because it is piecewise Lipschitz and continuous, must be Lipschitz throughout , with respect to any norm.
Next, let be an arbitrary deterministic algorithm that extends classifiers on a sample to . We draw the sample of size from without replacement. Consider the distribution over favorites of individuals in . Each individual in has a favorite that is sampled independently from Bernoulli. Hence, by Hoeffding’s inequality, the fraction of individuals in with a favorite of is between and with probability at least . The same holds simultaneously for the fraction of individuals with favorite .
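The Hoeffding step can be made concrete; the following sketch (our own, with illustrative parameters) computes the two-sided bound 2·exp(−2nt²) on the probability that the empirical fraction of favorites deviates from its mean by more than t:

```python
import math

def hoeffding_bound(n, t):
    """Two-sided Hoeffding bound: for n i.i.d. Bernoulli draws,
    Pr[|empirical mean - true mean| >= t] <= 2 * exp(-2 * n * t**2)."""
    return 2.0 * math.exp(-2.0 * n * t * t)
```

As the proof uses, the bound decays exponentially in the sample size, so for any fixed deviation t the failure probability is negligible once the sample of cube centers is large.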
Given the sample and the utility function on the sample (defined by the instantiation of their favorites), consider the classifier , which maps each individual in the sample to his favorite . This classifier is clearly EF on the sample. Consider the extension of to the whole of as defined by algorithm . Define two sets and by letting , and let denote an outcome that is assigned to at least half of the out-of-sample centers, i.e., an outcome for which . Furthermore, let denote the fraction of out-of-sample centers assigned to . Note that, since , the number of out-of-sample centers is also exactly . This gives us , where .
Consider the distribution of favorites in (these are independent from the ones in the sample since is disjoint from ). Each individual in this set has a favorite sampled independently from Bernoulli. Hence, by Hoeffding’s inequality, the fraction of individuals in whose favorite is is at least with probability at least . We conclude that with probability at least , the sample and favorites (which define the utility function ) are such that: (i) the fraction of individuals in whose favorite is is between and , and (ii) the fraction of individuals in whose favorite is is at least .
We now show that for such a sample and utility function , cannot be EF with respect to for any and . To this end, sample and from . One scenario in which envy occurs is when (i) the favorite of is , (ii) is assigned to , and (iii) is assigned to . Conditions (i) and (ii) are satisfied when is in and his favorite is . We know that at least a fraction of the individuals in have the favorite . Hence, the probability that conditions (i) and (ii) are satisfied by is at least . Condition (iii) is satisfied when is in and has favorite (and hence is assigned ), or if is in . We know that at least a fraction of the individuals in have the favorite . Moreover, the size of is . So, the probability that condition (iii) is satisfied by is at least
Since and are sampled independently, the probability that all three conditions are satisfied is at least
This expression is a quadratic function in , that attains its minimum at irrespective of the value of . Hence, irrespective of , this probability is at least . For concreteness, let us choose to be (although it can be set to be much smaller). On doing so, we have that the three conditions are satisfied with probability at least . And when these conditions are satisfied, we have and , i.e., envies by . This shows that, when and are sampled from , with probability at least , envies by . We conclude that with probability at least jointly over the selection of the utility function and the sample , the extension of by is not EF with respect to for any and .
To convert the joint probability into expected cost in the game, note that for two discrete, independent random variables
and , and for a Boolean function , it holds that

(2)
Given sample and utility function , let be the Boolean function that equals if and only if the extension of by is not EF with respect to for any and . From Equation (2), is equal to . The latter term is exactly the expected value of the cost, where the expectation is taken over the randomness of . It follows that the expected cost of (any) with respect to the chosen distribution over utilities is at least . ∎
4 LowComplexity Families of Classifiers
In this section we show that (despite Theorem 2) generalization for envy-freeness is possible using much smaller samples of individuals, as long as we restrict ourselves to choosing a classifier from a family of relatively low complexity.
In more detail, two classic complexity measures are the VC dimension (Vapnik and Chervonenkis, 1971) for binary classifiers, and the Natarajan dimension (Natarajan, 1989) for multiclass classifiers. However, to the best of our knowledge, there is no suitable dimension directly applicable to functions ranging over distributions, which in our case can be seen as real vectors. One possibility would be to restrict ourselves to deterministic classifiers. However, we have seen in Section 2 that envy-freeness is a very strong constraint on deterministic classifiers. Instead, we will consider a family consisting of randomized mixtures of deterministic classifiers belonging to a family of low Natarajan dimension. This allows us to adapt Natarajan-dimension-based generalization results to our setting while still working with randomized classifiers.
4.1 Natarajan Dimension Primer
Before presenting our main result, we briefly summarize the definition and relevant properties of the Natarajan dimension. For more details, we refer the reader to Shalev-Shwartz and Ben-David (2014).
We say that a family F of functions multiclass shatters a set of points x_1, …, x_k if there exist labels y_1, …, y_k and y′_1, …, y′_k such that y_i ≠ y′_i for every i, and for any subset T ⊆ [k] there exists f ∈ F such that f(x_i) = y_i if i ∈ T and f(x_i) = y′_i otherwise. The Natarajan dimension of a family F is the cardinality of the largest set of points that can be multiclass shattered by F.
For example, suppose we have a feature map that maps each individual-outcome pair to a dimensional feature vector, and consider the family of functions that can be written as for weight vectors . This family has Natarajan dimension at most .
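A minimal instantiation of such a family (a sketch with our own simplifying assumption that the feature map returns the individual's own feature vector for every outcome, so there is one weight vector per outcome):

```python
import numpy as np

def linear_multiclass(W):
    """Deterministic multiclass classifier f(x) = argmax over outcomes y
    of <w_y, phi(x, y)>. Here we simplify by taking phi(x, y) = x for
    every y, so W stacks one weight vector per outcome."""
    def f(x):
        return int(np.argmax(W @ x))
    return f
```

Each function in this family is determined by the weight matrix, and families of this linear form are the prototypical example of bounded Natarajan dimension.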
For a set of points, we let denote the restriction of to , which is any subset of of minimal size such that for every there exists such that for all . The size of is the number of different labelings of the sample achievable by functions in . The following lemma is the analogue of Sauer’s lemma for binary classification.
Lemma 1 (Natarajan).
For a family of Natarajan dimension and any subset , we have .
Classes of low Natarajan dimension also enjoy the following uniform convergence guarantee.
Lemma 2.
Let have Natarajan dimension and fix a loss function . For any distribution over , if is an i.i.d. sample drawn from of size , then with probability at least we have
4.2 Main Result
We consider the family of classifiers that can be expressed as a randomized mixture of deterministic classifiers selected from a family . Our generalization guarantees will depend on the complexity of the family , measured in terms of its Natarajan dimension, and the number of functions we are mixing. More formally, let be a vector of functions in and be a distribution over them, drawn from the probability simplex of mixing weights. Then consider the randomized classifier induced by this mixture: intuitively, for a given individual , it chooses one of the constituent functions randomly according to the mixing weights, and outputs that function’s prediction for . Let
be the family of classifiers that can be written this way. Our main technical result shows that envy-freeness generalizes for this class.
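Operationally, a mixture in this family induces, for each individual, an outcome distribution obtained by weighting the predictions of the constituent deterministic classifiers; a sketch (names are ours):

```python
import numpy as np

def mixture_classifier(fs, eta, num_outcomes):
    """Randomized mixture of deterministic classifiers.

    fs: list of deterministic classifiers (each maps x to an outcome index).
    eta: mixing weights over fs (nonnegative, summing to 1).
    Returns a function giving, for input x, the induced distribution over
    outcomes: outcome k gets the total weight of the f_j with f_j(x) = k.
    """
    def h_dist(x):
        p = np.zeros(num_outcomes)
        for f, w in zip(fs, eta):
            p[f(x)] += w
        return p
    return h_dist
```

Sampling an index according to `eta` and outputting that classifier's prediction is an equivalent view; the distribution form above is the one used in the envy-freeness definitions.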
Theorem 3.
Suppose is a family of deterministic classifiers of Natarajan dimension , and let for . For any distribution over , , and , if is an i.i.d. sample of pairs drawn from of size
then with probability at least , every classifier that is pairwise EF on is also EF on .
Theorem 3 is only effective insofar as families of classifiers of low Natarajan dimension are useful. And, indeed, several prominent families have low Natarajan dimension (Daniely, Sabato, and Shalev-Shwartz, 2012), including one-vs-all (which is a special case of the example given in Section 4.1), multiclass SVM, tree-based classifiers, and error correcting output codes.
We now turn to the theorem’s proof, which consists of two steps. First, we show that envy-freeness generalizes for finite classes. Second, we show that can be approximated by a finite subset.
Lemma 3.
Let be a finite family of classifiers. For any , , and , if is an i.i.d. sample of pairs from of size , then with probability at least , every that is pairwise EF on (for any ) is also EF on .
Proof.
Let be the indicator that is envious of by at least under classifier . Then is a Bernoulli random variable with success probability . Applying Hoeffding’s inequality to any fixed hypothesis guarantees that . Therefore, if is EF on , then it is also EF on with probability at least . Applying the union bound over all and using the lower bound on completes the proof. ∎
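The union-bound calculation in the proof translates directly into a sample-size formula; the following sketch uses illustrative constants (a plain Hoeffding-plus-union-bound rate, not the lemma's exact expression):

```python
import math

def finite_class_sample_size(num_classifiers, gamma, delta):
    """Number of sampled pairs sufficient, via Hoeffding and a union bound
    over a finite family, for every classifier's empirical envy rate to be
    within gamma of its true rate with probability at least 1 - delta.
    Illustrative constants: n >= log(2 * |H| / delta) / (2 * gamma^2)."""
    return math.ceil(math.log(2 * num_classifiers / delta) / (2 * gamma ** 2))
```

Note that the dependence on the family size is only logarithmic, which is why covering the mixture family by a finite (if exponentially large) subset suffices in the next step.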
Next, we show that can be covered by a finite subset. Since each classifier in is determined by the choice of functions from and mixing weights , we will construct finite covers of and . Our covers and will guarantee that for every , there exists such that . Similarly, for any mixing weights , there exists such that . If is the mixture of with weights , we let be the mixture of with weights . This approximation has two sources of error: first, for a random individual , there is probability up to that at least one will disagree with , in which case and may assign completely different outcome distributions. Second, even in the high-probability event that for all , the mixing weights are not identical, resulting in a small perturbation of the outcome distribution assigned to .
Lemma 4.
Let be a family of deterministic classifiers with Natarajan dimension , and let for some . For any , there exists a subset of size such that for every there exists satisfying:

1.

2. If is an i.i.d. sample of individuals of size , then with probability , we have for all but a fraction of .
Proof.
As described above, we begin by constructing finite covers of and . First, let be the set of distributions over where each coordinate is a multiple of . Then we have and for every , there exists such that .
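The cover of the mixing weights can be enumerated explicitly; a sketch (ours) that lists all distributions whose coordinates are multiples of 1/k:

```python
from itertools import product
from fractions import Fraction

def simplex_grid(m, k):
    """All distributions over m outcomes whose coordinates are multiples
    of 1/k: a finite cover of the probability simplex with granularity 1/k.
    Enumerates integer compositions of k into m nonnegative parts."""
    return [tuple(Fraction(c, k) for c in comp)
            for comp in product(range(k + 1), repeat=m)
            if sum(comp) == k]
```

The grid has size polynomial in k for fixed m (it is a binomial coefficient), and every point of the simplex is within 1/k of some grid point coordinate-wise, which is the property the cover argument needs.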
In order to find a small cover of , we use the fact that it has low Natarajan dimension. This implies that the number of effective functions in when restricted to a sample grows only polynomially in the size of . At the same time, if two functions in agree on a large sample, they will also agree with high probability on the distribution.
Formally, let be an i.i.d. sample drawn from of size , and let be any minimal subset of that realizes all possible labelings of by functions in . We now argue that with probability 0.99, for every there exists such that . For any pair of functions , let be the function given by , and let . The Natarajan dimension of is at most (see Lemma 6 in Appendix C). Moreover, consider the loss given by . Applying Lemma 2 with the chosen size of ensures that with probability at least every pair satisfies
By the definition of , for every , there exists for which for all , which implies that .
Using Lemma 1 to bound the size of , we have that
Since this construction succeeds with nonzero probability, we are guaranteed that such a set exists. Finally, by an identical uniform convergence argument, it follows that if is a fresh i.i.d. sample of the size given in Item 2 of the lemma’s statement, then, with probability at least , every and will disagree on at most a fraction of , since they disagree with probability at most on .
Next, let be the same family as , except restricted to choosing functions from and mixing weights from . Using the size bounds above and the fact that , we have that
Suppose that is the mixture of with weights . Let be the approximation to for each , let be such that , and let be the random mixture of with weights . For an individual drawn from , we have with probability at most , and therefore they all agree with probability at least . When this event occurs, we have .
The second part of the claim follows by similar reasoning, using the fact that for the given sample size , with probability at least , every disagrees with its approximation on at most a fraction of . This means that for all on at least a fraction of the individuals in . For these individuals, . ∎
Combining the generalization guarantee for finite families given in Lemma 3 with the finite approximation given in Lemma 4, we are able to show that envy-freeness also generalizes for .
Proof of Theorem 3.
Let be the finite approximation to constructed in Lemma 4. If the sample is of size , we can apply Lemma 3 to this finite family, which implies that for any , with probability at least every that is pairwise EF on (for any ) is also EF on . We apply this lemma with . Moreover, from Lemma 4, we know that if , then with probability at least , for every , there exists satisfying for all but a fraction of the individuals in . This implies that on all but at most a fraction of the pairs in , and satisfy this inequality for both individuals in the pair. Assume these high-probability events occur. Finally, from Item 1 of the lemma we have that .
Now let be any classifier that is pairwise-EF on . Since the utilities are in and for all but a fraction of the pairs in , we know that is pairwise-EF on . Applying the envy-freeness generalization guarantee (Lemma 3) for , it follows that is also EF on . Finally, using the fact that
it follows that is EF on . ∎
It is worth noting that the (exponentially large) approximation is only used in the generalization analysis; importantly, an ERM algorithm need not construct it.
5 Discussion
We believe that envy-freeness gives a new, useful perspective on individual fairness in classification — when individuals have rich utility functions, which, as we have argued in detail in Section 1, is the case in advertising. However, in some domains there are only two possible outcomes, one of which is ‘good’ and the other ‘bad’; examples include predicting whether an individual would default on a loan, and whether an offender would recidivate. In these degenerate cases envy-freeness would require that the classifier assign each and every individual the exact same probability of obtaining the ‘good’ outcome, which, clearly, is not a reasonable constraint.
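To make the two-outcome degeneracy concrete, here is a minimal sketch; the function name and the probability vectors are ours, purely for illustration:

```python
# With two outcomes, 'good' (utility 1) and 'bad' (utility 0) for everyone,
# a randomized classifier is just a vector p, where p[i] is the probability
# that individual i receives the 'good' outcome. Individual i's expected
# utility for j's assignment is then exactly p[j], so envy-freeness reads
# p[i] >= p[j] for every ordered pair -- forcing all entries of p to be equal.

def is_envy_free(p, tol=1e-9):
    """EF check for the two-outcome case described above."""
    return all(pi >= pj - tol for pi in p for pj in p)

assert is_envy_free([0.4, 0.4, 0.4])      # equal probabilities: EF
assert not is_envy_free([0.4, 0.5, 0.4])  # any gap creates envy
```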
It is also worth noting that we have not directly addressed the problem of computing the loss-minimizing envy-free classifier from a given family on a given sample of individuals. Just like in the work of Dwork et al. (2012), when the classifier is arbitrary, this problem can be written as a linear program of polynomial size in the number of outcomes, because envy-freeness amounts to a set of linear constraints. In both settings, though, one needs to restrict the family of classifiers to obtain good sample complexity, and, moreover, the naïve formulation would be intractable when dealing with a combinatorial space of outcomes. Nevertheless, the linearity of envy-freeness may enable practical mixed-integer linear programming formulations with respect to certain families. More generally, given the wealth of powerful optimization tools at the community’s disposal, we do not view computational complexity as a long-term obstacle to implementing our approach.
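The LP formulation for arbitrary classifiers can be sketched as follows. The instance numbers, variable names, and the use of `scipy.optimize.linprog` are our own illustrative choices, not an implementation from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance (our own numbers): 3 individuals, 2 outcomes.
loss = np.array([[0.0, 1.0],
                 [0.2, 0.5],
                 [0.9, 0.1]])   # loss[i, o]: cost of giving outcome o to i
util = np.array([[1.0, 0.3],
                 [0.4, 0.8],
                 [0.6, 0.6]])   # util[i, o]: i's utility for outcome o
n, k = loss.shape

# Decision variables: p[i, o] = probability that individual i receives
# outcome o, flattened row-major. Objective: total expected loss.
c = loss.flatten()

# Envy-freeness, one linear constraint per ordered pair (i, j):
#   sum_o util[i, o] * (p[j, o] - p[i, o]) <= 0.
A_ub, b_ub = [], []
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(n * k)
            row[j * k:(j + 1) * k] += util[i]
            row[i * k:(i + 1) * k] -= util[i]
            A_ub.append(row)
            b_ub.append(0.0)

# Each individual's outcome probabilities must sum to one.
A_eq = np.zeros((n, n * k))
for i in range(n):
    A_eq[i, i * k:(i + 1) * k] = 1.0
b_eq = np.ones(n)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
p = res.x.reshape(n, k)  # a loss-minimizing envy-free randomized classifier
```

The LP is always feasible — assigning every individual the same distribution over outcomes trivially satisfies every envy constraint — but, as noted above, this direct formulation has one variable per individual-outcome pair and so does not scale to combinatorial outcome spaces.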
References
 Balcan et al. (2012) Balcan, M.-F.; Constantin, F.; Iwata, S.; and Wang, L. 2012. Learning valuation functions. In Proc. of 25th COLT, 4.1–4.24.
 Brams and Taylor (1996) Brams, S. J., and Taylor, A. D. 1996. Fair Division: From Cake-Cutting to Dispute Resolution. Cambridge University Press.
 Chajewska, Koller, and Ormoneit (2001) Chajewska, U.; Koller, D.; and Ormoneit, D. 2001. Learning an agent’s utility function by observing behavior. In Proc. of 18th ICML, 35–42.
 Daniely, Sabato, and Shalev-Shwartz (2012) Daniely, A.; Sabato, S.; and Shalev-Shwartz, S. 2012. Multiclass learning approaches: A theoretical comparison with implications. In Proc. of 25th NIPS, 485–493.
 Datta, Tschantz, and Datta (2015) Datta, A.; Tschantz, M. C.; and Datta, A. 2015. Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. In Proc. of 15th PETS, 92–112.
 Donini et al. (2018) Donini, M.; Oneto, L.; Ben-David, S.; Shawe-Taylor, J.; and Pontil, M. 2018. Empirical Risk Minimization under Fairness Constraints. arXiv:1802.08626.
 Dwork et al. (2012) Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. S. 2012. Fairness through awareness. In Proc. of 3rd ITCS, 214–226.
 Foley (1967) Foley, D. 1967. Resource allocation and the public sector. Yale Economics Essays 7:45–98.
 Freedman et al. (2018) Freedman, R.; Schaich Borg, J.; Sinnott-Armstrong, W.; Dickerson, J. P.; and Conitzer, V. 2018. Adapting a kidney exchange algorithm to align with human values. In Proc. of 32nd AAAI, 1636–1645.
 Gal et al. (2017) Gal, Y.; Mash, M.; Procaccia, A. D.; and Zick, Y. 2017. Which is the fairest (rent division) of them all? Journal of the ACM 64(6): article 39.
 Hardt, Price, and Srebro (2016) Hardt, M.; Price, E.; and Srebro, N. 2016. Equality of opportunity in supervised learning. In Proc. of 30th NIPS, 3315–3323.
 Hébert-Johnson et al. (2018) Hébert-Johnson, U.; Kim, M. P.; Reingold, O.; and Rothblum, G. N. 2018. Calibration for the (computationally-identifiable) masses. In Proc. of 35th ICML. Forthcoming.
 Joseph et al. (2016) Joseph, M.; Kearns, M.; Morgenstern, J.; and Roth, A. 2016. Fairness in learning: Classic and contextual bandits. In Proc. of 30th NIPS, 325–333.
 Kearns et al. (2018) Kearns, M.; Neel, S.; Roth, A.; and Wu, S. 2018. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proc. of 35th ICML.
 Kilbertus et al. (2017) Kilbertus, N.; Rojas-Carulla, M.; Parascandolo, G.; Hardt, M.; Janzing, D.; and Schölkopf, B. 2017. Avoiding discrimination through causal reasoning. In Proc. of 31st NIPS, 656–666.
 Luong, Ruggieri, and Turini (2011) Luong, B. T.; Ruggieri, S.; and Turini, F. 2011. k-NN as an implementation of situation testing for discrimination discovery and prevention. In Proc. of 17th KDD, 502–510.
 Manurangsi and Suksompong (2017) Manurangsi, P., and Suksompong, W. 2017. Asymptotic existence of fair divisions for groups. Mathematical Social Sciences 89:100–108.
 Moulin (2003) Moulin, H. 2003. Fair Division and Collective Welfare. MIT Press.
 Natarajan (1989) Natarajan, B. K. 1989. On learning sets and functions. Machine Learning 4(1):67–97.
 Nielsen and Jensen (2004) Nielsen, T. D., and Jensen, F. V. 2004. Learning a decision maker’s utility function from (possibly) inconsistent behavior. Artificial Intelligence 160(1–2):53–78.
 Noothigattu et al. (2018) Noothigattu, R.; Gaikwad, S. S.; Awad, E.; Dsouza, S.; Rahwan, I.; Ravikumar, P.; and Procaccia, A. D. 2018. A votingbased system for ethical decision making. In Proc. of 32nd AAAI, 1587–1594.
 Procaccia (2013) Procaccia, A. D. 2013. Cake cutting: Not just child’s play. Communications of the ACM 56(7):78–87.
 Robertson and Webb (1998) Robertson, J. M., and Webb, W. A. 1998. Cake Cutting Algorithms: Be Fair If You Can. A. K. Peters.
 Rothblum and Yona (2018) Rothblum, G. N., and Yona, G. 2018. Probably approximately metricfair learning. arXiv:1803.03242.
 Shalev-Shwartz and Ben-David (2014) Shalev-Shwartz, S., and Ben-David, S. 2014. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.
 Su (1999) Su, F. E. 1999. Rental harmony: Sperner’s lemma in fair division. American Mathematical Monthly 106(10):930–942.
 Sweeney (2013) Sweeney, L. 2013. Discrimination in online ad delivery. Communications of the ACM 56(5):44–54.
 Vapnik and Chervonenkis (1971) Vapnik, V., and Chervonenkis, A. 1971. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications 16(2):264–280.
 Varian (1974) Varian, H. 1974. Equity, envy and efficiency. Journal of Economic Theory 9:63–91.
 Woodworth et al. (2017) Woodworth, B.; Gunasekar, S.; Ohannessian, M. I.; and Srebro, N. 2017. Learning nondiscriminatory predictors. In Proc. of 30th COLT, 1920–1953.
 Yao (1977) Yao, A. C. 1977. Probabilistic computations: Towards a unified measure of complexity. In Proc. of 17th FOCS, 222–227.
 Zafar et al. (2017) Zafar, M. B.; Valera, I.; GomezRodriguez, M.; Gummadi, K. P.; and Weller, A. 2017. From parity to preferencebased notions of fairness in classification. In Proc. of 31st NIPS, 228–238.
 Zemel et al. (2013) Zemel, R.; Wu, Y.; Swersky, K.; Pitassi, T.; and Dwork, C. 2013. Learning fair representations. In Proc. of 30th ICML, 325–333.
Appendix A Appendix for Section 2
Example 1.
Let and . Let the loss function be such that
And let the utility function be such that
where . Now, the only deterministic classifier with a loss of is such that and . But, this is not EF, since . Furthermore, every other deterministic classifier has a total loss of at least , causing the optimal deterministic EF classifier to have loss of at least .
To show that randomized classifiers can do much better, consider the randomized classifier such that and . This classifier can be seen as a mixture of the classifier of loss, and the deterministic classifier , where and , which has high “negative envy”. One can observe that this classifier is EF, and has a loss of just . Hence, the loss of the optimal randomized EF classifier is times smaller than the loss of the optimal deterministic one, for any .
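The gap in Example 1 can be checked mechanically. The following sketch uses a small hypothetical instance of our own (two individuals, three outcomes), not the example's exact parameters, but it exhibits the same phenomenon: the best deterministic EF classifier is a constant factor worse than a randomized EF mixture.

```python
import itertools

# Hypothetical instance (our own numbers, not the example's): two
# individuals x1, x2 and three outcomes a, b, c.
util = {1: {'a': 1.0, 'b': 0.0, 'c': 0.0},
        2: {'a': 1.0, 'b': 0.5, 'c': 2.0}}
loss = {1: {'a': 0.0, 'b': 1.0, 'c': 1.0},
        2: {'a': 1.0, 'b': 0.0, 'c': 1.0}}
outcomes = ['a', 'b', 'c']

def expected_util(i, dist):
    """Individual i's expected utility for a distribution over outcomes."""
    return sum(pr * util[i][o] for o, pr in dist.items())

def is_ef(assignment):
    """assignment maps each individual to a distribution over outcomes."""
    return all(expected_util(i, assignment[i])
               >= expected_util(i, assignment[j]) - 1e-9
               for i in assignment for j in assignment)

def total_loss(assignment):
    return sum(pr * loss[i][o]
               for i in assignment for o, pr in assignment[i].items())

# Best deterministic EF classifier, by brute force over all 9 assignments.
best_det = min(total_loss({1: {o1: 1.0}, 2: {o2: 1.0}})
               for o1, o2 in itertools.product(outcomes, repeat=2)
               if is_ef({1: {o1: 1.0}, 2: {o2: 1.0}}))

# Randomized classifier: mix the zero-loss classifier (x1 -> a, x2 -> b)
# with (x1 -> a, x2 -> c), which gives x2 "negative envy", at weight 1/3.
rand = {1: {'a': 1.0}, 2: {'b': 2 / 3, 'c': 1 / 3}}
assert is_ef(rand)
assert total_loss(rand) < best_det  # 1/3 versus 1.0
```

Brute force suffices here because the instance is tiny; the mixture weight 1/3 is exactly the point at which x2's "negative envy" toward the third outcome offsets its envy of x1's assignment.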
Appendix B Appendix for Section 3
Theorem 1. Let be a metric on , be a distribution on , and be an Lipschitz utility function. Let be a set of individuals such that there exists with and . Then for any classifier that is EF on , the extension given by is EF on .
Proof.
Let be any EF classifier on and be the nearest neighbor extension. Sample and