Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics

12/04/2020
by Bo Cowgill et al.

Why do biased predictions arise? What interventions can prevent them? We evaluate 8.2 million algorithmic predictions of math performance from ≈400 AI engineers, each of whom developed an algorithm under a randomly assigned experimental condition. Our treatment arms modified programmers' incentives, training data, awareness, and/or technical knowledge of AI ethics. We then assess out-of-sample predictions from their algorithms using randomized audit manipulations of algorithm inputs and ground-truth math performance for 20K subjects. We find that biased predictions are mostly caused by biased training data. However, one-third of the benefit of better training data comes through a novel economic mechanism: Engineers exert greater effort and are more responsive to incentives when given better training data. We also assess how performance varies with programmers' demographic characteristics and their performance on a psychological test of implicit bias (IAT) concerning gender and careers. We find no evidence that female, minority, and low-IAT engineers exhibit lower bias or discrimination in their code. However, we do find that prediction errors are correlated within demographic groups, which creates performance improvements through cross-demographic averaging. Finally, we quantify the benefits and tradeoffs of practical managerial or policy interventions such as technical advice, simple reminders, and improved incentives for decreasing algorithmic bias.
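The cross-demographic averaging result is an instance of a standard ensemble-variance argument: if engineers' prediction errors are correlated within a demographic group but roughly independent across groups, an average over a mixed-group panel cancels more error than a same-size average drawn from a single group. The simulation below is a minimal sketch of that mechanism only; the error model, the within-group correlation rho, and the panel sizes are illustrative assumptions, not quantities estimated in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_per_group, rho = 10_000, 4, 0.6   # assumed values, not from the paper

    truth = rng.normal(size=n_subjects)             # ground-truth math performance

    def group_predictions():
        # Engineers in the same demographic group share a common error
        # component, so their errors have pairwise correlation rho.
        shared = rng.normal(size=(n_subjects, 1))
        idio = rng.normal(size=(n_subjects, n_per_group))
        errors = np.sqrt(rho) * shared + np.sqrt(1 - rho) * idio
        return truth[:, None] + errors

    group_a = group_predictions()
    group_b = group_predictions()   # independent shared component

    # Average 4 engineers from one group vs. 2 + 2 drawn across two groups.
    within = group_a.mean(axis=1)
    across = np.concatenate([group_a[:, :2], group_b[:, :2]], axis=1).mean(axis=1)

    def mse(pred):
        return np.mean((pred - truth) ** 2)

    print(f"same-group ensemble MSE:  {mse(within):.3f}")   # ~0.70 in expectation
    print(f"cross-group ensemble MSE: {mse(across):.3f}")   # ~0.40 in expectation

Under these assumptions, the error variance of an n-engineer average with pairwise correlation rho is 1/n + rho(n-1)/n, so zeroing the cross-group covariance terms drops the expected MSE from about 0.70 to about 0.40 in this example.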


Related research

Bias In, Bias Out? Evaluating the Folk Wisdom (09/18/2019)
We evaluate the folk wisdom that algorithms trained on data produced by ...

Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? (12/02/2019)
Multiple fairness constraints have been proposed in the literature, moti...

Fairway: SE Principles for Building Fairer Software (03/23/2020)
Machine learning software is increasingly being used to make decisions t...

The Authors Matter: Understanding and Mitigating Implicit Bias in Deep Text Classification (05/06/2021)
It is evident that deep text classification models trained on human data...

Algorithms that "Don't See Color": Comparing Biases in Lookalike and Special Ad Audiences (12/16/2019)
Today, algorithmic models are shaping important decisions in domains suc...

Gender bias and stereotypes in Large Language Models (08/28/2023)
Large Language Models (LLMs) have made substantial progress in the past ...

Ex-Ante Assessment of Discrimination in Dataset (08/16/2022)
Data owners face increasing liability for how the use of their data coul...
