Algorithmic and Economic Perspectives on Fairness
Algorithmic systems have been used to inform consequential decisions for at least a century. Recidivism prediction dates back to the 1920s, and automated credit scoring began in the middle of the last century, but the last decade has witnessed an acceleration in the adoption of prediction algorithms. They are deployed to screen job applicants, to recommend products, people, and content, and in medicine (diagnostics and decision aids), criminal justice, facial recognition, lending and insurance, and the allocation of public services. The prominence of algorithmic methods has led to concerns that they may be systematically unfair in their treatment of those whose behavior they predict. These concerns have found their way into the popular imagination through news accounts and general interest books. Even when these algorithms are deployed in domains subject to regulation, it appears that existing regulation is poorly equipped to deal with this issue. The word 'fairness' in this context is a placeholder for three related equity concerns. The first is that such algorithms may systematically discriminate against individuals with a common ethnicity, religion, or gender, irrespective of whether the relevant group enjoys legal protections. The second is that these algorithms fail to treat people as individuals. The third concerns who gets to decide how algorithms are designed and deployed. These concerns are present even when humans, unaided, make predictions.