A continuous framework for fairness

12/21/2017
by Philipp Hacker, et al.

Increasingly, discrimination by algorithms is perceived as a societal and legal problem. In response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. This paper proposes the Continuous Fairness Algorithm (CFAθ), which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to vary continuously between concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" (WAE) and "what you see is what you get" (WYSIWYG) proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., multi-dimensional discrimination against certain groups on the grounds of several protected criteria. We discuss three main examples (college admissions, credit applications, insurance contracts) and map out the policy implications of our approach. The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence.
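
The interpolation the abstract describes can be illustrated concretely. Below is a minimal Python sketch of a barycenter-based post-processing repair for one-dimensional scores, in the spirit of CFAθ: for 1-D distributions, the Wasserstein-2 barycenter is obtained by averaging the groups' quantile functions, and a parameter theta in [0, 1] interpolates between leaving scores untouched (the WYSIWYG end) and mapping every group onto the common barycenter (the WAE end). The function name `continuous_fairness_repair`, the quantile grid, and the linear interpolation step are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def continuous_fairness_repair(scores, groups, theta):
    """Partially repair group score distributions toward their common
    Wasserstein barycenter. theta=0: no repair; theta=1: full repair.
    (Illustrative sketch; not the paper's reference implementation.)"""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    qs = np.linspace(0.0, 1.0, 101)  # common quantile grid

    # In one dimension, the Wasserstein-2 barycenter of the group
    # distributions is the average of their quantile functions.
    group_q = {g: np.quantile(scores[groups == g], qs)
               for g in np.unique(groups)}
    barycenter_q = np.mean(list(group_q.values()), axis=0)

    repaired = np.empty_like(scores)
    for g, gq in group_q.items():
        mask = groups == g
        # Within-group rank of each score (empirical CDF value); assumes
        # scores are continuous enough that ties are negligible.
        ranks = np.interp(scores[mask], gq, qs)
        # The score an individual at the same rank would receive under
        # the barycenter distribution.
        target = np.interp(ranks, qs, barycenter_q)
        # Interpolate between "no repair" and "full repair".
        repaired[mask] = (1.0 - theta) * scores[mask] + theta * target
    return repaired

# Illustrative usage with synthetic scores for two groups.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.4, 0.1, 500),
                         rng.normal(0.6, 0.1, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)
half_repaired = continuous_fairness_repair(scores, groups, theta=0.5)
```

With theta = 0.5, an applicant's repaired score lies halfway between their raw score and the barycenter quantile matching their within-group rank; at theta = 1, any two applicants with the same within-group rank receive identical scores regardless of group membership.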
