Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness

02/07/2023 · by Felix Friedrich, et al.

Generative AI models have recently achieved astonishing results in quality and are consequently employed in a fast-growing number of applications. However, since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer from degenerate and biased human behavior, as we demonstrate. In fact, they may even reinforce such biases. To not only uncover but also combat these undesired effects, we present a novel strategy, called Fair Diffusion, to attenuate biases after the deployment of generative text-to-image models. Specifically, we demonstrate shifting a bias, based on human instructions, in any direction, yielding arbitrary new proportions for, e.g., identity groups. As our empirical evaluation demonstrates, this introduced control enables instructing generative image models on fairness, requiring neither data filtering nor additional training.
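The core idea of steering generation toward user-specified proportions can be illustrated with a minimal, hypothetical sketch. The `fair_instruction` helper below is an assumption for illustration, not the paper's API: it samples an identity attribute according to a user-chosen target distribution, and in a system like Fair Diffusion the chosen attribute would then drive a guidance instruction during diffusion sampling rather than being returned directly.

```python
import random

def fair_instruction(attributes, proportions, rng):
    """Hypothetical helper: pick the identity attribute to instruct the
    generator with for one sample, according to target proportions."""
    return rng.choices(attributes, weights=proportions, k=1)[0]

# Example: shift an occupation prompt toward a 50/50 identity split,
# instead of whatever skewed ratio the model's training data induces.
rng = random.Random(0)  # fixed seed for reproducibility
counts = {"female": 0, "male": 0}
for _ in range(10_000):
    chosen = fair_instruction(["female", "male"], [0.5, 0.5], rng)
    counts[chosen] += 1
# counts now approximates the requested 50/50 proportion.
```

Because the target weights are free parameters, any other proportion (e.g., 70/30, or a split over more than two groups) can be requested the same way, which mirrors the paper's claim of shifting a bias "in any direction" without retraining or data filtering.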

Related research

11/09/2022 · Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models
05/28/2023 · Mitigating Inappropriateness in Image Generation: Can there be Value in Reflecting the World's Ugliness?
09/20/2023 · Distilling Adversarial Prompts from Safety Benchmarks: Report for the Adversarial Nibbler Challenge
09/18/2023 · What is a Fair Diffusion Model? Designing Generative Text-To-Image Models to Incorporate Various Worldviews
06/13/2023 · Adding guardrails to advanced chatbots
01/31/2023 · Debiasing Vision-Language Models via Biased Prompts
07/19/2023 · Unmaking AI Imagemaking: A Methodological Toolkit for Critical Investigation
