Group-based Fair Learning Leads to Counter-intuitive Predictions

10/04/2019
by Ofir Nachum, et al.

A number of machine learning (ML) methods have recently been proposed to maximize model predictive accuracy while enforcing notions of group parity or fairness across sub-populations. We propose a desirable property for these procedures, slack-consistency: for any individual, the predictions of the model should be monotonic with respect to the allowed slack (i.e., the maximum allowed group-parity violation). Such monotonicity can be useful for individuals to understand the impact of enforcing fairness on their predictions. Surprisingly, we find that standard ML methods for enforcing fairness violate this basic property. Moreover, this undesirable behavior arises regardless of the complexity of the underlying model or the use of approximate optimization, suggesting that the simple act of incorporating a constraint can lead to drastically unintended behavior in ML. We present a simple theoretical method for enforcing slack-consistency, while encouraging further discussion of the unintended behaviors that enforcing group-based parity can induce.
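
To make the slack-consistency property concrete, the sketch below is one illustrative way to test it empirically. It is not the paper's method: the penalty formulation, the helper fit_fair_logreg, the synthetic data, and the slack values are all assumptions made for illustration. The idea is to train a demographic-parity-penalized logistic model at several allowed-slack levels and check whether a single individual's predicted score changes monotonically as the slack is relaxed.

```python
# Illustrative sketch (not the paper's method): check slack-consistency of a
# penalty-based fair classifier for one individual on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, binary labels y, binary group indicator g.
n, d = 500, 5
X = rng.normal(size=(n, d))
g = rng.random(n) < 0.5
true_w = rng.normal(size=d)
p_true = 1.0 / (1.0 + np.exp(-(X @ true_w + 1.0 * g)))  # group-correlated labels
y = (rng.random(n) < p_true).astype(float)

def fit_fair_logreg(X, y, g, slack, lam=10.0, lr=0.1, steps=2000):
    """Logistic regression trained by gradient descent, with a penalty that
    activates whenever the demographic-parity gap (difference in mean predicted
    score between the two groups) exceeds the allowed slack."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y)          # logistic-loss gradient
        gap = p[g].mean() - p[~g].mean()       # parity violation
        if abs(gap) > slack:                   # penalize only the excess violation
            s = p * (1.0 - p)                  # d sigmoid / d logit
            dgap = (X[g] * s[g][:, None]).mean(axis=0) \
                 - (X[~g] * s[~g][:, None]).mean(axis=0)
            grad += lam * np.sign(gap) * dgap
        w -= lr * grad
    return w

# Train at increasing slack levels and track one individual's predicted score.
slacks = [0.0, 0.05, 0.1, 0.2, 0.4]
x0 = X[0]
scores = []
for s in slacks:
    w = fit_fair_logreg(X, y, g, slack=s)
    scores.append(float(1.0 / (1.0 + np.exp(-(x0 @ w)))))

diffs = np.diff(scores)
is_consistent = bool(np.all(diffs >= -1e-6) or np.all(diffs <= 1e-6))
print("score vs. slack:", np.round(scores, 3))
print("slack-consistent for this individual:", is_consistent)
```

Consistent with the abstract's observation, nothing in this kind of penalty-based training guarantees that the check passes; repeating it for other individuals or random seeds can surface non-monotone behavior.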

Related research

03/13/2021 · OmniFair: A Declarative System for Model-Agnostic Group Fairness in Machine Learning
Machine learning (ML) is increasingly being used to make decisions in ou...

06/05/2022 · Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency
Binary decision making classifiers are not fair by default. Fairness req...

12/29/2021 · EiFFFeL: Enforcing Fairness in Forests by Flipping Leaves
Nowadays Machine Learning (ML) techniques are extensively adopted in man...

04/12/2023 · Auditing ICU Readmission Rates in an Clinical Database: An Analysis of Risk Factors and Clinical Outcomes
This study presents a machine learning (ML) pipeline for clinical data c...

09/03/2019 · Quantifying Infra-Marginality and Its Trade-off with Group Fairness
In critical decision-making scenarios, optimizing accuracy can lead to a...

06/16/2022 · Active Fairness Auditing
The fast spreading adoption of machine learning (ML) by companies across...

02/25/2020 · Teaching the Old Dog New Tricks: Supervised Learning with Constraints
Methods for taking into account external knowledge in Machine Learning m...
