A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified?

02/14/2023
by Kathleen C. Fraser, et al.

As text-to-image systems continue to grow in popularity with the general public, questions have arisen about bias and diversity in the generated images. Here, we investigate properties of images generated in response to prompts which are visually under-specified, but contain salient social attributes (e.g., 'a portrait of a threatening person' versus 'a portrait of a friendly person'). Grounding our work in social cognition theory, we find that in many cases, images contain similar demographic biases to those reported in the stereotype literature. However, trends are inconsistent across different models and further investigation is warranted.

