Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning

05/14/2020
by Pieter Delobelle, et al.

Machine learning is being integrated into a growing number of critical systems with far-reaching impacts on society. Because of this widespread use, and on theoretical grounds as well, unexpected behaviour and unfair decision processes are coming under increasing scrutiny. Individuals as well as organisations notice, test, and criticize unfair results to hold model designers and deployers accountable. This requires transparency and the ability to describe, measure and, ideally, prove the 'fairness' of a system; concepts such as fairness, transparency and accountability should make machine learning more amenable to criticism and to improvement proposals directed at societal goals. We concentrate on fairness, noting that both the transparency of neural networks and the accountability of actors and systems will require further methods. We offer a new framework that helps mitigate unfair representations in the dataset used for training. Our framework relies on adversaries to improve fairness. First, it evaluates a model for unfairness with respect to protected attributes and ensures that an adversary cannot infer those attributes from a given outcome, optimizing the model's parameters for fairness while limiting losses in utility. Second, the framework leverages evasion attacks from adversarial machine learning to perform adversarial retraining with new examples unseen by the model. These two steps are applied iteratively until fairness improves significantly. We evaluated our framework on well-studied datasets from the fairness literature, including COMPAS, where it can surpass other approaches on demographic parity, equality of opportunity, and the model's utility. We also illustrate the subtle difficulties of mitigating unfairness and highlight how our framework can help model designers.
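The first step described above (an adversary tries to recover the protected attribute from the model's output, and the model is penalized for whatever the adversary can exploit) can be sketched with two simple logistic models. Everything below is an illustrative assumption — synthetic data, plain logistic regression, manual gradients — not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic data: the protected attribute s leaks into feature X[:, 0].
n, d = 2000, 3
s = rng.integers(0, 2, size=n).astype(float)
X = rng.normal(size=(n, d))
X[:, 0] += 1.5 * s                       # correlation with protected attribute
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(float)

w, b = np.zeros(d), 0.0                  # predictor parameters
a, c = 0.0, 0.0                          # adversary parameters (reads predictor score p)
lr, lam = 0.1, 1.0                       # learning rate and fairness trade-off weight

for _ in range(500):
    p = sigmoid(X @ w + b)               # predictor's output probability
    q = sigmoid(a * p + c)               # adversary's guess of s, given only p

    # Adversary step: minimize its own cross-entropy for predicting s from p.
    a -= lr * np.mean((q - s) * p)
    c -= lr * np.mean(q - s)

    # Predictor step: minimize task loss MINUS lam * adversary loss,
    # i.e. stay accurate on y while making s unrecoverable from p.
    grad_z = (p - y) - lam * (q - s) * a * p * (1 - p)
    w -= lr * (X.T @ grad_z) / n
    b -= lr * np.mean(grad_z)
```

With `lam = 0` this reduces to ordinary logistic regression; raising `lam` trades task utility for the adversary's inability to separate the groups, which is the tension the abstract's first step manages.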

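The evaluation reports demographic parity and equality of opportunity. As a reference (these helpers are assumed, not taken from the paper), the two group-fairness gaps can be computed as follows: demographic parity compares positive-prediction rates across groups, while equality of opportunity compares true-positive rates:

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """|P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equal_opportunity_gap(y_true, y_pred, s):
    """|TPR(s = 0) - TPR(s = 1)|: the gap in recall on the positive class."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))
    tpr = lambda g: y_pred[(s == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Tiny worked example.
y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1])
s      = np.array([0, 0, 0, 1, 1, 1])
dp = demographic_parity_gap(y_pred, s)         # |1/3 - 1| = 2/3
eo = equal_opportunity_gap(y_true, y_pred, s)  # |1/2 - 1| = 1/2
```

A gap of 0 on either metric means the two groups are treated identically under that criterion; the abstract's claim is that the framework drives these gaps down while limiting the loss in predictive utility.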

Related research:

- 04/30/2019 · Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
- 04/06/2022 · FairNeuron: Improving Deep Neural Network Fairness with Adversary Games on Selective Neurons
- 05/24/2018 · Fairness GAN
- 06/19/2019 · Agnostic data debiasing through a local sanitizer learnt from an adversarial network approach
- 05/18/2022 · Multi-disciplinary fairness considerations in machine learning for clinical trials
- 01/22/2018 · Mitigating Unwanted Biases with Adversarial Learning
- 10/12/2018 · Interpretable Fairness via Target Labels in Gaussian Process Models
