Consistent Range Approximation for Fair Predictive Modeling

12/21/2022
by Jiongli Zhu, et al.

This paper proposes a novel framework for certifying the fairness of predictive models trained on biased data. It draws from query answering for incomplete and inconsistent databases to formulate the problem of consistent range approximation (CRA) of fairness queries for a predictive model on a target population. The framework employs background knowledge of the data collection process and biased data, working with or without limited statistics about the target population, to compute a range of answers for fairness queries. Using CRA, the framework builds predictive models that are certifiably fair on the target population, regardless of the availability of external data during training. The framework's efficacy is demonstrated through evaluations on real data, showing substantial improvement over existing state-of-the-art methods.
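To make the idea of a "range of answers for a fairness query" concrete, here is a minimal, hypothetical sketch (not the paper's CRA algorithm): when part of the target population for one group was never observed due to coverage bias, a fairness query such as the statistical parity difference can only be answered as an interval, obtained by considering the best- and worst-case completions of the missing records. The function names `rate_range` and `spd_range` and the example counts are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only -- not the paper's CRA method. It shows how a
# fairness query returns an interval rather than a point answer when some
# individuals in the target population are missing from the biased sample.

def rate_range(obs_pos, obs_n, missing):
    """Bounds on a group's positive-prediction rate when `missing`
    individuals from the target population were never observed."""
    total = obs_n + missing
    lo = obs_pos / total               # all missing predicted negative
    hi = (obs_pos + missing) / total   # all missing predicted positive
    return lo, hi

def spd_range(group0, group1):
    """Range of the statistical parity difference
    P(yhat=1 | A=0) - P(yhat=1 | A=1) over all completions.
    Each group is a tuple (obs_pos, obs_n, missing)."""
    lo0, hi0 = rate_range(*group0)
    lo1, hi1 = rate_range(*group1)
    return lo0 - hi1, hi0 - lo1

# Group 0 fully observed; group 1 missing 20 individuals.
print(spd_range((40, 100, 0), (20, 80, 20)))
```

If the returned interval lies entirely within an acceptable fairness tolerance, the model's fairness on the target population is certified regardless of which completion is the true one; this is the intuition behind certifying fairness from a consistent range approximation.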

