A Simulation Based Dynamic Evaluation Framework for System-wide Algorithmic Fairness

03/21/2019
by   Efrén Cruz Cortés, et al.

We propose the use of Agent-Based Models (ABMs) within a reinforcement learning framework to better understand the relationship between automated decision-making tools, fairness-inspired statistical constraints, and the social phenomena giving rise to discrimination against sensitive groups. There have been many instances of discrimination arising from the application of algorithmic tools by public and private institutions, and until recently these practices have mostly gone unchecked. Given the large-scale transformations these new technologies elicit, a joint effort between the social sciences and machine learning research is necessary. Much of the existing work focuses on the statistical properties of such algorithms and of the data they are trained on; we aim to complement that approach by studying the social dynamics in which these algorithms are deployed. We show how bias can accumulate and be reinforced through automated decision making, and how a fairness-inducing policy may be found. We focus on the case of recidivism risk assessment by considering simplified models of arrest. We find that if we restrict attention to what is observed and manipulated by these algorithmic tools, we may judge some blatantly unfair practices to be fair, which illustrates the advantage of analyzing this otherwise elusive property with a system-wide model. We expect that introducing agent-based simulation techniques will strengthen collaboration with social scientists, yield a better understanding of the social systems affected by technology, and lead to concrete policy proposals that can be presented to policymakers for true systemic transformation.
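
The sketch below is a minimal toy illustration of the kind of feedback loop the abstract describes, not the paper's actual simulation: the group labels, population sizes, rates, initial arrest records, and the greedy "patrol the group scored as riskier" rule are all illustrative assumptions. It shows how a record-driven allocation policy can make recorded arrests diverge between two groups with identical true offense rates, while a metric restricted to what the tool observes (the per-stop hit rate) still looks unobjectionable.

```python
import numpy as np

# Toy agent-based feedback loop (illustrative assumptions, not the paper's model):
# two groups with identical true offense rates, but group B starts with a
# slightly larger recorded arrest history. A naive "risk tool" that allocates
# all enforcement to whichever group currently has the higher recorded
# per-capita arrest rate keeps reinforcing that initial gap.

rng = np.random.default_rng(seed=0)

TRUE_RATE = 0.05                      # identical true offense probability per stop
PATROLS = 1_000                       # patrols dispatched each round
ROUNDS = 30
POP = {"A": 50_000, "B": 50_000}      # equal population sizes
recorded = {"A": 100, "B": 120}       # group B starts with a slightly larger record
stops = {"A": 0, "B": 0}              # cumulative stops per group
hits = {"A": 0, "B": 0}               # arrests resulting from those stops

for _ in range(ROUNDS):
    # the "risk tool" only sees the recorded per-capita arrest rate
    risk = {g: recorded[g] / POP[g] for g in POP}
    # naive policy: send every patrol to the group currently scored as riskier
    target = max(risk, key=risk.get)
    stops[target] += PATROLS
    new_arrests = rng.binomial(PATROLS, TRUE_RATE)
    recorded[target] += new_arrests
    hits[target] += new_arrests

print("recorded arrests:", recorded)
print("per-stop hit rate:",
      {g: (round(hits[g] / stops[g], 3) if stops[g] else None) for g in POP})
```

In this toy run the recorded history of group B grows round after round while group A's stays frozen, even though both groups offend at the same rate; meanwhile the hit rate among those stopped matches the true rate, so an audit limited to the tool's own observations would see nothing amiss. That gap between tool-level metrics and system-wide outcomes is the point the abstract makes about needing a system-wide model.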
