Identifying Bias in AI using Simulation

09/30/2018
by Daniel McDuff, et al.

Machine-learned models often exhibit bias because the datasets used to train them are biased. This presents a serious problem for the deployment of such technology: the resulting models may perform poorly on populations that are minorities within the training set and ultimately pose higher risks to them. We propose to use high-fidelity computer simulations to interrogate and diagnose biases within ML classifiers. We present a framework that leverages Bayesian parameter search to efficiently characterize the high-dimensional feature space and more quickly identify weaknesses in performance. We apply our approach to an example domain, face detection, and show that it can be used to help identify demographic biases in commercial face application programming interfaces (APIs).
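The core idea, searching simulation parameters with a Bayesian surrogate to find settings where a classifier fails, can be sketched as follows. This is a simplified illustration, not the paper's implementation: the Gaussian-process surrogate, the two-parameter face description (skin tone, head yaw), and the `detector_score` stand-in for a rendered-face detector query are all assumptions made for the example.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two sets of points."""
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean/variance at candidate points Xs, given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)  # k(x*,x*) = 1 for this kernel
    return mu, np.maximum(var, 1e-12)

def detector_score(params):
    # Hypothetical stand-in for rendering a face with these simulation
    # parameters and querying a face detector; lower score = more failures.
    skin_tone, head_yaw = params
    return 1.0 - 0.6 * skin_tone - 0.5 * abs(head_yaw - 0.5)

rng = np.random.default_rng(0)
X = rng.random((5, 2))                       # initial random parameter settings
y = np.array([detector_score(x) for x in X])

for _ in range(20):
    cand = rng.random((200, 2))              # candidate simulation parameters
    mu, var = gp_posterior(X, y, cand)
    # Lower confidence bound: steer the search toward predicted low accuracy.
    lcb = mu - 1.96 * np.sqrt(var)
    x_next = cand[np.argmin(lcb)]
    X = np.vstack([X, x_next])
    y = np.append(y, detector_score(x_next))

worst = X[np.argmin(y)]
print("worst-case parameters:", worst, "score:", y.min())
```

Each iteration spends one (expensive) detector evaluation where the surrogate predicts failure, so weak regions of the parameter space surface after far fewer queries than a grid sweep over the same space would need.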


Related research

- 04/16/2022, "De-biasing facial detection system using VAE": Bias in AI/ML-based systems is a ubiquitous problem and bias in AI/ML sy...
- 06/25/2022, "Visual Auditor: Interactive Visualization for Detection and Summarization of Model Biases": As machine learning (ML) systems become increasingly widespread, it is n...
- 05/18/2022, "'I'm sorry to hear that': finding bias in language models with a holistic descriptor dataset": As language models grow in popularity, their biases across all possible ...
- 07/02/2019, "Quantifying Algorithmic Biases over Time": Algorithms now permeate multiple aspects of human lives and multiple rec...
- 01/25/2022, "Are Commercial Face Detection Models as Biased as Academic Models?": As facial recognition systems are deployed more widely, scholars and act...
- 09/15/2023, "Toward responsible face datasets: modeling the distribution of a disentangled latent space for sampling face images from demographic groups": Recently, it has been exposed that some modern facial recognition system...
- 04/28/2022, "Learning to Split for Automatic Bias Detection": Classifiers are biased when trained on biased datasets. As a remedy, we ...
