ViBE: A Tool for Measuring and Mitigating Bias in Image Datasets

04/16/2020
by Angelina Wang, et al.

Machine learning models are known to perpetuate the biases present in the data, but often these biases are not known until after the models are deployed. We present the Visual Bias Extraction (ViBE) Tool that assists in the investigation of a visual dataset, surfacing potential dataset biases along three dimensions: (1) object-based, (2) gender-based, and (3) geography-based. Object-based biases concern properties such as the size, context, or diversity of object representation in the dataset; gender-based metrics aim to reveal the stereotypical portrayal of people of different genders within the dataset, with future iterations of our tool extending the analysis to additional axes of identity; geography-based analysis considers the representation of different geographic locations. Our tool is designed to shed light on the dataset along these three axes, allowing both dataset creators and users to gain a better understanding of what exactly is portrayed in their dataset. The responsibility then lies with the tool user to determine which of the revealed biases may be problematic, taking into account the cultural and historical context, as this is difficult to determine automatically. Nevertheless, the tool also provides actionable insights that may be helpful for mitigating the revealed concerns. Overall, our work allows for the machine learning bias problem to be addressed early in the pipeline at the dataset stage. ViBE is available at https://github.com/princetonvisualai/vibe-tool.
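As a rough illustration of the kind of dataset-level analysis described above, the sketch below computes two simple statistics from a handful of hypothetical image annotations: per-country image counts (geography-based) and per-gender object co-occurrence counts (gender-based), reporting how skewed each object class is toward one gender label. This is not the ViBE API or its annotation format; the fields `objects`, `gender`, and `country` are assumptions made for the example.

```python
# Minimal sketch, not the ViBE tool itself: the annotation schema below is hypothetical.
from collections import Counter, defaultdict

annotations = [
    {"objects": ["tie", "laptop"], "gender": "male", "country": "US"},
    {"objects": ["flower", "laptop"], "gender": "female", "country": "US"},
    {"objects": ["tie"], "gender": "male", "country": "GB"},
    {"objects": ["flower"], "gender": "female", "country": "IN"},
]

# Geography-based: how many images come from each country?
country_counts = Counter(ann["country"] for ann in annotations)

# Gender-based: how often does each object class co-occur with each gender label?
cooccurrence = defaultdict(Counter)
for ann in annotations:
    for obj in ann["objects"]:
        cooccurrence[obj][ann["gender"]] += 1

print("Images per country:", dict(country_counts))
for obj, counts in cooccurrence.items():
    total = sum(counts.values())
    skew = max(counts.values()) / total  # fraction of images with the dominant gender label
    print(f"{obj}: {dict(counts)} (dominant-gender fraction {skew:.2f})")
```

Statistics like these only surface imbalances; as noted above, deciding whether a given imbalance is problematic remains a judgment the dataset creator or user must make in context.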
