Individually Fair Gradient Boosting

03/31/2021
by Alexander Vargo, et al.

We consider the task of enforcing individual fairness in gradient boosting. Gradient boosting is a popular method for machine learning from tabular data, which often arises in applications where algorithmic fairness is a concern. At a high level, our approach is a functional gradient descent on a (distributionally) robust loss function that encodes our intuition of algorithmic fairness for the ML task at hand. Unlike prior approaches to individual fairness that work only with smooth ML models, our approach also handles non-smooth models such as decision trees. We show that our algorithm converges globally and generalizes. We also demonstrate its efficacy on three ML problems susceptible to algorithmic bias.
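The abstract's core idea, functional gradient descent on a distributionally robust loss, can be illustrated with a toy sketch. This is not the paper's algorithm: the exponential-tilting weights below are a generic stand-in for a robust reweighting of per-example losses, and the fair-metric perturbation machinery is omitted entirely. All names (`robust_boost`, `dro_temp`) are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def robust_boost(X, y, n_rounds=50, lr=0.1, dro_temp=5.0):
    """Toy boosting loop: each round reweights examples toward the
    worst-case loss (a crude distributionally robust surrogate), then
    fits a regression tree to the negative functional gradient."""
    F = np.zeros(len(y))  # current ensemble scores
    trees = []
    for _ in range(n_rounds):
        p = 1.0 / (1.0 + np.exp(-F))  # logistic predictions
        losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        # Exponentially tilted weights emphasize high-loss examples,
        # approximating an entropy-regularized worst-case distribution.
        w = np.exp(dro_temp * (losses - losses.max()))
        w /= w.sum()
        grad = p - y  # gradient of the logistic loss w.r.t. F
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, -grad, sample_weight=w)  # weighted functional gradient step
        F += lr * tree.predict(X)
        trees.append(tree)
    return trees
```

Because each base learner is a (non-smooth) decision tree fit to a reweighted gradient, the loop never requires derivatives of the model itself, which mirrors why boosting is compatible with this style of robust training.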

Related research

- 06/25/2020: SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness. "In this paper, we cast fair machine learning as invariant machine learni..."
- 09/16/2022: FairGBM: Gradient Boosting with Fairness Constraints. "Machine Learning (ML) algorithms based on gradient boosted decision tree..."
- 11/13/2019: Fair Adversarial Gradient Tree Boosting. "Fair classification has become an important topic in machine learning re..."
- 06/19/2020: Two Simple Ways to Learn Individual Fairness Metrics from Data. "Individual fairness is an intuitive definition of algorithmic fairness t..."
- 02/16/2023: Individual Fairness Guarantee in Learning with Censorship. "Algorithmic fairness, studying how to make machine learning (ML) algorit..."
- 06/09/2020: Fair Bayesian Optimization. "Given the increasing importance of machine learning (ML) in our lives, a..."
- 10/31/2017: Compact Multi-Class Boosted Trees. "Gradient boosted decision trees are a popular machine learning technique..."
