Fair Algorithms for Learning in Allocation Problems

08/30/2018
by Hadi Elzayn et al.

Settings such as lending and policing can be modeled by a centralized agent allocating a resource (loans or police officers) among several groups in order to maximize some objective (loans that are repaid or criminals that are apprehended). Fairness is often also a concern in such problems. A natural notion of fairness, based on general principles of equality of opportunity, asks that, conditional on an individual being a candidate for the resource, the probability of actually receiving it be approximately independent of the individual's group. In lending, this means that equally creditworthy individuals in different racial groups have roughly equal chances of receiving a loan. In policing, it means that two individuals committing the same crime in different districts have roughly equal chances of being arrested. We formalize this fairness notion for allocation problems and investigate its algorithmic consequences. Our main technical results include an efficient learning algorithm that converges to an optimal fair allocation even when the frequency of candidates (creditworthy individuals or criminals) in each group is unknown. The algorithm operates in a censored feedback model, in which only the number of candidates who received the resource under a given allocation can be observed, rather than the true number of candidates. This models the fact that we do not learn the creditworthiness of individuals we do not give loans to, nor do we learn about crimes committed in districts where the police presence is low. As an application of our framework, we consider the predictive policing problem, in which the learning algorithm is trained on arrest data gathered from its own deployments on previous days, creating a potential feedback loop that our algorithm provably overcomes. We empirically investigate the performance of our algorithm on the Philadelphia Crime Incidents dataset.
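
To make the setting concrete, the following is a minimal Python sketch of the censored feedback model and the approximate fairness condition described above. It relies on toy assumptions that are not taken from the paper: candidate counts in each group are Poisson with unknown rates, and a group allocated v units reaches min(candidates, v) of its candidates. All names here (TRUE_RATES, observe, discovery_probability, is_approximately_fair, ALPHA) are hypothetical, and the snippet illustrates only the feedback structure and the fairness check, not the paper's learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumptions for illustration (not the paper's model parameters):
# each group has an unknown Poisson rate of "candidates" (creditworthy
# applicants or crimes), and giving a group v units of the resource
# reaches min(candidates, v) of them.
TRUE_RATES = [3.0, 7.0]   # hidden from the learner
ALPHA = 0.15              # allowed gap in per-candidate discovery probability


def observe(allocation):
    """Censored feedback: only the number of candidates who received the
    resource is observed, never the true number of candidates per group."""
    candidates = rng.poisson(TRUE_RATES)
    return np.minimum(candidates, allocation)


def discovery_probability(v, rate, n_samples=50_000):
    """Monte Carlo estimate of the chance that a random candidate in a group
    with the given rate receives the resource when the group gets v units."""
    c = rng.poisson(rate, size=n_samples)
    c = c[c > 0]                      # condition on being a candidate
    return np.mean(np.minimum(c, v) / c)


def is_approximately_fair(allocation, rates, alpha=ALPHA):
    """Equality-of-opportunity style condition: conditional on being a
    candidate, the probability of receiving the resource differs across
    groups by at most alpha."""
    probs = [discovery_probability(v, r) for v, r in zip(allocation, rates)]
    return max(probs) - min(probs) <= alpha


print(observe([2, 6]))                            # one round of censored counts
print(is_approximately_fair([2, 6], TRUE_RATES))  # roughly balanced: True here
print(is_approximately_fair([8, 0], TRUE_RATES))  # everything to one group: False
```

In this toy instance, splitting the budget roughly in line with the hidden rates satisfies the fairness condition, while concentrating everything on one group does not; the paper's contribution is learning such allocations when the candidate frequencies must be inferred from the censored observations alone.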

Related research

Fairness through Equality of Effort (11/11/2019)
Online Learning with an Unknown Fairness Metric (02/20/2018)
Maxmin-Fair Ranking: Individual Fairness under Group-Fairness Constraints (06/16/2021)
For One and All: Individual and Group Fairness in the Allocation of Indivisible Goods (02/14/2023)
Fair Rank Aggregation (08/21/2023)
Fairness and Utilization in Allocating Resources with Uncertain Demand (06/21/2019)
Evolution of collective fairness in complex networks through degree-based role assignment (02/26/2021)
