Runtime Monitoring of Dynamic Fairness Properties

05/08/2023
by Thomas A. Henzinger, et al.

A machine-learned system that is fair in static decision-making tasks may have biased societal impacts in the long run. This may happen when the system interacts with humans and feedback patterns emerge, reinforcing old biases in the system and creating new biases. While existing works try to identify and mitigate long-run biases through smart system design, we introduce techniques for monitoring fairness in real time. Our goal is to build and deploy a monitor that continuously observes a long sequence of events generated by the system in the wild and outputs, with each event, a verdict on how fair the system is at the current point in time. The advantages of monitoring are twofold. Firstly, fairness is evaluated at run-time, which is important because unfair behaviors may not be eliminated a priori, at design-time, due to partial knowledge about the system and the environment, as well as uncertainties and dynamic changes in the system and the environment, such as the unpredictability of human behavior. Secondly, monitors are by design oblivious to how the monitored system is constructed, which makes them suitable for use as trusted third-party fairness watchdogs. They function as computationally lightweight statistical estimators, and their correctness proofs rely on the rigorous analysis of the stochastic process that models the assumptions about the underlying dynamics of the system. We show, both in theory and experiments, how monitors can warn us (1) if a bank's credit policy over time has created an unfair distribution of credit scores among the population, and (2) if a resource allocator's allocation policy over time has made unfair allocations. Our experiments demonstrate that the monitors introduce very low overhead. We believe that runtime monitoring is an important and mathematically rigorous new addition to the fairness toolbox.
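To make the idea of a monitor as a lightweight statistical estimator concrete, the sketch below shows a minimal event-stream monitor in Python. It is an illustrative assumption, not the construction from the paper: it tracks per-group rates of favourable outcomes over a stream of decisions and, after each event, reports a point estimate of the gap between two groups together with a Hoeffding-style confidence radius. The class name, the event format, and the choice of a demographic-parity-style gap are all hypothetical.

```python
import math
from dataclasses import dataclass, field

@dataclass
class FairnessMonitor:
    """Streaming fairness monitor (illustrative sketch, not the paper's construction).

    Observes one decision event at a time and estimates the gap between the
    per-group rates of favourable outcomes, with a Hoeffding-style error radius.
    """
    delta: float = 0.05                              # confidence parameter (roughly 1 - delta coverage per group)
    counts: dict = field(default_factory=dict)       # group -> number of events seen
    positives: dict = field(default_factory=dict)    # group -> number of favourable outcomes

    def observe(self, group: str, favourable: bool):
        """Consume one event and return the current verdict."""
        self.counts[group] = self.counts.get(group, 0) + 1
        self.positives[group] = self.positives.get(group, 0) + int(favourable)
        return self.verdict()

    def _rate_and_radius(self, group: str):
        n = self.counts.get(group, 0)
        if n == 0:
            return None, float("inf")
        rate = self.positives[group] / n
        # Hoeffding bound on the deviation of the empirical rate from its mean
        radius = math.sqrt(math.log(2.0 / self.delta) / (2.0 * n))
        return rate, radius

    def verdict(self):
        """Return (gap estimate, error radius) once two groups have been observed."""
        groups = sorted(self.counts)
        if len(groups) < 2:
            return None
        (rate_a, rad_a), (rate_b, rad_b) = (self._rate_and_radius(g) for g in groups[:2])
        return rate_a - rate_b, rad_a + rad_b


# Hypothetical usage: events arrive one by one from the deployed system.
monitor = FairnessMonitor(delta=0.05)
stream = [("group_A", True), ("group_B", False), ("group_A", True), ("group_B", True)]
for group, outcome in stream:
    print(monitor.observe(group, outcome))
```

This assumes independent events with stationary per-group rates; the paper's monitors instead reason about the stochastic process modelling the system's dynamics, so the sketch only conveys the interface (one verdict per observed event) and the low per-event cost.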


