Fair Adversarial Networks

02/23/2020
by George Cevora, et al.

The influence of human judgement is ubiquitous in datasets used across the analytics industry, yet humans are known to be sub-optimal decision makers prone to various biases. Analysing biased datasets then leads to biased outcomes of the analysis. Bias with respect to protected characteristics (e.g. race) is of particular interest, as it may not only make the output of the analytical process sub-optimal, but also illegal. Countering bias by constraining analytical outcomes to be fair is problematic because (A) fairness lacks a universally accepted definition, while some definitions are mutually exclusive, and (B) the use of optimisation constraints ensuring fairness is incompatible with most analytical pipelines. Both problems are solved by methods that remove bias from the data and return an altered dataset. This approach aims not only to remove the actual bias variable (e.g. race), but also to alter all proxy variables (e.g. postcode) so that the bias variable is not detectable from the rest of the data. The advantage of this approach is that defining fairness as a lack of detectable bias in the data (as opposed to in the output of analysis) is universal, which solves problem (A). Furthermore, as the data itself is altered to remove bias, problem (B) disappears because analytical pipelines can remain unchanged. This approach has been adopted by several technical solutions; none of them, however, appears satisfactory in its ability to remove multivariate, non-linear and non-binary biases. Therefore, in this paper I propose the concept of Fair Adversarial Networks as an easy-to-implement general method for removing bias from data, and demonstrate that Fair Adversarial Networks achieve this aim.
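The core idea, altering the data so that an adversary can no longer detect the protected attribute, can be sketched as two networks trained against each other: a transformation network that produces a debiased version of the data, and an adversary that tries to recover the protected attribute from it. The following PyTorch sketch is illustrative only and is not the paper's exact architecture; the network sizes, losses, alternating training scheme and the trade-off weight lam are all assumptions.

import torch
import torch.nn as nn

n_features, n_protected_classes = 10, 2

# T: maps raw data x to a debiased version x'
transformer = nn.Sequential(
    nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_features)
)
# D: tries to predict the protected attribute (e.g. race) from x'
adversary = nn.Sequential(
    nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_protected_classes)
)

opt_t = torch.optim.Adam(transformer.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(adversary.parameters(), lr=1e-3)
recon_loss = nn.MSELoss()
adv_loss = nn.CrossEntropyLoss()
lam = 1.0  # assumed trade-off between data fidelity and bias removal

def train_step(x, s):
    """One alternating update; x is a data batch, s the protected labels."""
    # 1) Update the adversary to detect the protected attribute in x'.
    opt_d.zero_grad()
    d_loss = adv_loss(adversary(transformer(x).detach()), s)
    d_loss.backward()
    opt_d.step()

    # 2) Update the transformer: stay close to x while fooling the adversary.
    opt_t.zero_grad()
    x_prime = transformer(x)
    t_loss = recon_loss(x_prime, x) - lam * adv_loss(adversary(x_prime), s)
    t_loss.backward()
    opt_t.step()
    return t_loss.item(), d_loss.item()

# Usage on random stand-in data:
x = torch.randn(64, n_features)
s = torch.randint(0, n_protected_classes, (64,))
for _ in range(5):
    train_step(x, s)

Subtracting the adversary's loss in the transformer's objective plays the role that a gradient-reversal layer often plays in adversarial debiasing; alternating the two updates keeps the adversary strong as the transformation improves, which is what drives the protected attribute out of the altered data.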


