Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases

10/28/2020
by   Ryan Steed, et al.

Recent advances in machine learning leverage massive datasets of unlabeled images from the web to learn general-purpose image representations for tasks from image classification to face recognition. But do unsupervised computer vision models automatically learn implicit patterns and embed social biases that could have harmful downstream effects? We develop a novel method for quantifying biased associations between representations of social concepts and attributes in images. We find that state-of-the-art unsupervised models trained on ImageNet, a popular benchmark image dataset curated from internet images, automatically learn racial, gender, and intersectional biases. We replicate 8 of 15 documented human biases from social psychology, from the innocuous, as with insects and flowers, to the potentially harmful, as with race and gender. For the first time in the image domain, we replicate human-like biases about skin tone and weight. Our results also closely match three hypotheses about intersectional bias from social psychology. When compared with statistical patterns in online image datasets, our findings suggest that machine learning models can automatically learn bias from the way people are stereotypically portrayed on the web.
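
The quantification method described above belongs to the embedding association test family: bias is measured as a standardized difference in cosine-similarity associations between two sets of target embeddings and two sets of attribute embeddings. The following is a minimal illustrative sketch of that effect-size computation, assuming image embeddings are available as NumPy vectors; the function names and placeholder data are assumptions for demonstration, not the authors' released code.

```python
# Sketch of a WEAT-style embedding association test, which the paper adapts
# to image embeddings. Helper names and random placeholder data are
# illustrative assumptions.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """Differential association of one embedding w with attribute sets A and B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """
    Effect size d: the standardized difference between how strongly
    target set X vs. target set Y associates with attributes A vs. B.
    X, Y, A, B are lists of embedding vectors (e.g., pooled features
    extracted from images of flowers, insects, pleasant things, unpleasant things).
    """
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Example with random placeholder "embeddings"; in practice these would be
# features produced by an unsupervised model pre-trained on ImageNet.
rng = np.random.default_rng(0)
X = list(rng.normal(size=(10, 512)))   # e.g., images of flowers
Y = list(rng.normal(size=(10, 512)))   # e.g., images of insects
A = list(rng.normal(size=(10, 512)))   # e.g., pleasant attribute stimuli
B = list(rng.normal(size=(10, 512)))   # e.g., unpleasant attribute stimuli
print(f"effect size d = {effect_size(X, Y, A, B):.3f}")
```

A large positive effect size indicates that the model's representations associate the first target set more strongly with the first attribute set, mirroring how implicit association is scored in social psychology.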

