A Novel Regularization Approach to Fair ML

08/13/2022
by Norman Matloff, et al.

A number of methods have been introduced for the fair ML problem, most of them complex and many of them highly specific to the underlying ML methodology. Here we introduce a new approach that is simple, easily explained, and potentially applicable to a number of standard ML algorithms. Explicitly Deweighted Features (EDF) reduces the impact of each feature among the proxies of sensitive variables, with a different amount of deweighting applied to each such feature. The user specifies the deweighting hyperparameters in order to achieve a given point on the Utility/Fairness tradeoff spectrum. We also introduce a new, simple criterion for evaluating the degree of protection afforded by any fair ML method.
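The abstract gives no implementation details, so the following is only a rough sketch of the deweighting idea, not the paper's method: per-feature deweighting is realized here as a ridge-style L2 penalty applied only to features flagged as proxies of a sensitive variable, with the penalty size playing the role of the user-chosen deweighting hyperparameter. The data setup and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: x0 is an ordinary feature, x1 stands in for a
# proxy of a sensitive variable.
n = 500
X = rng.normal(size=(n, 2))
y = 1.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.1, size=n)

def deweighted_fit(X, y, penalties):
    """Linear fit with a separate L2 penalty per feature.

    penalties: per-feature deweighting hyperparameters; a larger
    value shrinks that feature's coefficient harder, reducing its
    impact on predictions. Zero leaves the feature untouched.
    """
    D = np.diag(penalties)
    # Closed-form generalized ridge solution: (X'X + D)^{-1} X'y
    return np.linalg.solve(X.T @ X + D, X.T @ y)

# No deweighting vs. heavy deweighting of the proxy feature x1.
beta_plain = deweighted_fit(X, y, np.array([0.0, 0.0]))
beta_fair = deweighted_fit(X, y, np.array([0.0, 500.0]))

print(beta_plain, beta_fair)  # the proxy coefficient shrinks under deweighting
```

Dialing the proxy's penalty between 0 and infinity traces out the Utility/Fairness tradeoff the abstract describes: 0 recovers the unconstrained fit, and very large values effectively drop the proxy feature.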


