Smiling Women Pitching Down: Auditing Representational and Presentational Gender Biases in Image Generative AI

05/17/2023
by   Luhang Sun, et al.

Generative AI models like DALL-E 2 can interpret textual prompts and generate high-quality images exhibiting human creativity. Though public enthusiasm is booming, systematic auditing of potential gender biases in AI-generated images remains scarce. We addressed this gap by examining the prevalence of two occupational gender biases (representational and presentational biases) in 15,300 DALL-E 2 images spanning 153 occupations, and assessed potential bias amplification by benchmarking against 2021 census labor statistics and Google Images. Our findings reveal that DALL-E 2 underrepresents women in male-dominated fields while overrepresenting them in female-dominated occupations. Additionally, DALL-E 2 images tend to depict more women than men with smiling faces and downward-pitching heads, particularly in female-dominated (vs. male-dominated) occupations. Our computational algorithm auditing study demonstrates more pronounced representational and presentational biases in DALL-E 2 compared to Google Images and calls for feminist interventions to prevent such bias-laden AI-generated images from feeding back into the media ecology.
