Analysing Gender Bias in Text-to-Image Models using Object Detection

07/16/2023
by Harvey Mannering, et al.

This work presents a novel strategy for measuring bias in text-to-image models. Using paired prompts that specify gender and vaguely reference an object (e.g. "a man/woman holding an item"), we can examine whether particular objects are associated with one gender. In analysing results from Stable Diffusion, we observed that male prompts generated objects such as ties, knives, trucks, baseball bats, and bicycles more frequently. Female prompts, on the other hand, were more likely to generate objects such as handbags, umbrellas, bowls, bottles, and cups. We hope that the method outlined here will be a useful tool for examining bias in text-to-image models.
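The core of the measurement described above can be reduced to a frequency comparison: generate images from the paired prompts, run an object detector over each image, and compare how often each object class appears per gender. The sketch below is a minimal, hypothetical illustration of that comparison step; it assumes detection results have already been collected (the image-generation and detection stages are not shown), and the function name and toy data are the author's own, not from the paper.

```python
from collections import Counter

def object_frequency_gap(male_detections, female_detections):
    """Compare how often each object class is detected in images
    generated from male vs. female prompts.

    Each argument is a list of per-image detection lists, e.g.
    [["tie", "person"], ["truck"], ...]. Returns a dict mapping
    object class -> (male_rate - female_rate), where a rate is the
    fraction of images in which the class appears at least once.
    A positive gap means the object is more associated with male
    prompts; a negative gap, with female prompts.
    """
    def rates(detections):
        counts = Counter()
        for labels in detections:
            for label in set(labels):  # count each class once per image
                counts[label] += 1
        n = len(detections)
        return {label: c / n for label, c in counts.items()}

    male_rates = rates(male_detections)
    female_rates = rates(female_detections)
    classes = set(male_rates) | set(female_rates)
    return {c: male_rates.get(c, 0.0) - female_rates.get(c, 0.0)
            for c in classes}

# Toy detection outputs (hypothetical; a real run would use a
# COCO-class object detector on Stable Diffusion outputs)
male = [["person", "tie"], ["person", "truck"], ["person", "tie"]]
female = [["person", "handbag"], ["person", "cup"], ["person", "handbag"]]
gaps = object_frequency_gap(male, female)
```

In this toy example, "tie" gets a positive gap and "handbag" a negative one, while "person" comes out at zero, mirroring the kind of per-object association the abstract reports.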

Related research

- Multi-Dimensional Gender Bias Classification (05/01/2020)
  Machine learning models are trained to find patterns in data. NLP models...

- Identifying and Reducing Gender Bias in Word-Level Language Models (04/05/2019)
  Many text corpora exhibit socially problematic biases, which can be prop...

- Gender Bias in Multimodal Models: A Transnational Feminist Approach Considering Geographical Region and Culture (09/10/2023)
  Deep learning based visual-linguistic multimodal models such as Contrast...

- The Bias Amplification Paradox in Text-to-Image Generation (08/01/2023)
  Bias amplification is a phenomenon in which models increase imbalances p...

- Generating Clues for Gender based Occupation De-biasing in Text (04/11/2018)
  Vast availability of text data has enabled widespread training and use o...

- Stable Bias: Analyzing Societal Representations in Diffusion Models (03/20/2023)
  As machine learning-enabled Text-to-Image (TTI) systems are becoming inc...

- Auditing Gender Presentation Differences in Text-to-Image Models (02/07/2023)
  Text-to-image models, which can generate high-quality images based on te...
