FaiR-N: Fair and Robust Neural Networks for Structured Data

10/13/2020
by Shubham Sharma, et al.

Fairness in machine learning is crucial when individuals are subject to automated decisions made by models in high-stakes domains. Organizations that employ these models may also need to satisfy regulations that promote responsible and ethical AI. While fairness metrics that compare model error rates across subpopulations have been widely investigated for the detection and mitigation of bias, fairness in terms of an equalized ability to achieve recourse across protected attribute groups has been relatively unexplored. We present a novel formulation for training neural networks that considers the distance of data points to the decision boundary, such that the new objective: (1) reduces the disparity in average distance to the decision boundary between groups for individuals subject to a negative outcome, i.e., the network is fairer with respect to the ability to obtain recourse, and (2) increases the average distance of data points to the boundary to promote adversarial robustness. We demonstrate that training with this loss yields neural networks that are more fair and robust, with accuracy similar to models trained without it. Moreover, we qualitatively motivate and empirically show that reducing recourse disparity across groups also improves fairness measures that rely on error rates. To the best of our knowledge, this is the first time that recourse capabilities across groups are considered in training fairer neural networks, and that a relation between error-rate-based fairness and recourse-based fairness is investigated.
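The two terms of the objective can be illustrated with a minimal sketch. The code below assumes a linear scalar-logit classifier, for which the distance of a point to the decision boundary has the closed form |w·x + b| / ||w||; the function name `fair_robust_penalty` and the weights `lam_fair` and `lam_rob` are hypothetical and do not reflect the authors' exact formulation, which applies to general neural networks:

```python
import numpy as np

def boundary_distances(X, w, b):
    """Distance of each point to the linear decision boundary w.x + b = 0."""
    return np.abs(X @ w + b) / np.linalg.norm(w)

def fair_robust_penalty(X, w, b, groups, preds, lam_fair=1.0, lam_rob=0.1):
    """Hypothetical penalty combining the two FaiR-N-style terms:
    (1) the gap in mean distance-to-boundary between the two groups,
        computed over negatively classified points (recourse disparity), and
    (2) the negative mean distance over all points, so that minimizing the
        penalty pushes the boundary away from the data (robustness)."""
    d = boundary_distances(X, w, b)
    neg = preds == 0                      # individuals with a negative outcome
    d0 = d[neg & (groups == 0)].mean()    # mean distance, group 0
    d1 = d[neg & (groups == 1)].mean()    # mean distance, group 1
    disparity = abs(d0 - d1)              # recourse-fairness gap
    robustness = d.mean()                 # average margin to the boundary
    return lam_fair * disparity - lam_rob * robustness
```

In training, a penalty like this would be added to the usual classification loss, trading off accuracy against recourse fairness and robustness via the two weights.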


Code Repositories

FaiR-N

"FaiR-N: Fair and Robust Neural Networks for Structured Data". Presented at the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
