Multimodal Composite Association Score: Measuring Gender Bias in Generative Multimodal Models

04/26/2023
by   Abhishek Mandal, et al.

Generative multimodal models based on diffusion models have seen tremendous growth and advances in recent years. Models such as DALL-E and Stable Diffusion have become increasingly popular and successful at creating images from text, often combining abstract ideas. However, like other deep learning models, they also reflect the social biases they inherit from their training data, which is often crawled from the internet. Manually auditing models for biases can be very time- and resource-consuming and is further complicated by the unbounded and unconstrained nature of the inputs these models can take. Research into bias measurement and quantification has generally focused on small single-stage models working on a single modality. The emergence of multistage multimodal models therefore requires a different approach. In this paper, we propose the Multimodal Composite Association Score (MCAS) as a new method of measuring gender bias in multimodal generative models. Evaluating both DALL-E 2 and Stable Diffusion with this approach uncovered gendered associations of concepts embedded within the models. We propose MCAS as an accessible and scalable method of quantifying potential bias for models with different modalities and a range of potential biases.
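The abstract does not give the MCAS formula itself, but the idea of quantifying gendered associations between concepts and attribute sets in embedding space is closely related to WEAT-style association tests. The sketch below is a minimal, hypothetical illustration of that kind of computation, not the paper's actual implementation: it assumes you already have vector embeddings (for example from a text or image encoder) for a target concept across modalities and for sets of male- and female-associated attribute terms, and all function and variable names are illustrative.

import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target, attrs_a, attrs_b):
    """Mean cosine similarity of a target embedding with attribute set A
    minus its mean similarity with attribute set B (WEAT-style)."""
    return (np.mean([cosine(target, a) for a in attrs_a])
            - np.mean([cosine(target, b) for b in attrs_b]))

def composite_association_score(concept_embeddings, male_attrs, female_attrs):
    """Sum the per-modality associations for one concept, e.g. its text-prompt
    embedding plus embeddings of images generated from that prompt.
    Positive values indicate a net male-leaning association, negative female."""
    return sum(association(e, male_attrs, female_attrs) for e in concept_embeddings)

if __name__ == "__main__":
    # Random stand-ins for real encoder outputs, purely for demonstration.
    rng = np.random.default_rng(0)
    dim = 512                                                 # embedding dimensionality (illustrative)
    male_attrs = [rng.normal(size=dim) for _ in range(4)]     # e.g. embeddings of "man", "male", ...
    female_attrs = [rng.normal(size=dim) for _ in range(4)]   # e.g. embeddings of "woman", "female", ...
    concept = [rng.normal(size=dim), rng.normal(size=dim)]    # [text embedding, image embedding]
    print(composite_association_score(concept, male_attrs, female_attrs))

In practice the embeddings would come from the generative pipeline's own encoders (for example CLIP-style text and image encoders), averaged over many generated images per prompt; the aggregation actually used for MCAS is defined in the full paper.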


