A Novel Approach to Fairness in Automated Decision-Making using Affective Normalization

05/02/2022
by Jesse Hoey, et al.

Any decision, such as whom to hire, involves two components. First, a rational component, e.g., the candidate has a good education and speaks clearly. Second, an affective component, based on observables such as visual features of race and gender, and possibly biased by stereotypes. Here we propose a method for measuring the affective, socially biased component, thus enabling its removal. That is, given a decision-making process, these affective measurements remove the affective bias from the decision, rendering it fair across a set of categories defined by the method itself. We thus propose that this may solve three key problems in intersectional fairness: (1) the definition of the categories over which fairness is a consideration; (2) an infinite regress into smaller and smaller groups; and (3) ensuring a fair distribution based on basic human rights or other prior information. The primary idea of this paper is that fairness biases can be measured using affective coherence, and that this measure can be used to normalize outcome mappings. We aim for this conceptual work to expose a novel method for handling fairness problems that uses emotional coherence as an independent measure of bias, one that goes beyond statistical parity.
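To make the idea of affective normalization concrete, here is a minimal, hypothetical sketch of the decomposition described above: a raw decision score is treated as a rational component plus a measured affective offset for the observed category, and normalization subtracts that offset. The class names, group labels, and numeric bias values below are illustrative assumptions; the paper's actual bias estimates come from affective coherence, which is not reproduced here.

```python
# Hypothetical sketch of "affective normalization": subtract an estimated
# affective-bias term from a raw decision score so the residual score reflects
# only the rational component. The per-group bias values are placeholders;
# the paper derives them from affective coherence, not from outcome statistics.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    rational_score: float   # e.g., from qualifications or an interview rubric
    affective_group: str    # category induced by the affective measurements

# Placeholder: per-group affective bias, assumed measured independently of outcomes.
affective_bias = {"group_a": +0.12, "group_b": -0.08, "group_c": 0.00}

def raw_decision_score(c: Candidate) -> float:
    # Unnormalized decision: rational component plus the (unwanted) affective offset.
    return c.rational_score + affective_bias[c.affective_group]

def normalized_decision_score(c: Candidate) -> float:
    # Affective normalization: remove the measured affective component,
    # leaving the rational component to drive the decision.
    return raw_decision_score(c) - affective_bias[c.affective_group]

if __name__ == "__main__":
    pool = [
        Candidate("A", rational_score=0.70, affective_group="group_a"),
        Candidate("B", rational_score=0.70, affective_group="group_b"),
    ]
    for c in pool:
        print(c.name, round(raw_decision_score(c), 3), round(normalized_decision_score(c), 3))
```

In this toy example, two equally qualified candidates receive different raw scores only because of their affective group; after normalization their scores coincide, which is the sense in which the decision becomes fair across the categories defined by the affective measurements.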
