Uncovering Bias in Face Generation Models

02/22/2023
by Cristian Muñoz, et al.

Recent advancements in GANs and diffusion models have enabled the creation of high-resolution, hyper-realistic images. However, these models may misrepresent certain social groups and exhibit bias. Understanding bias in these models remains an important research question, especially for tasks that support critical decision-making and could affect minorities. The contribution of this work is a novel analysis covering architectures and embedding spaces for a fine-grained understanding of bias across three approaches: generators, attribute modifiers, and post-processing bias mitigators. This work shows that generators suffer from bias across all social groups, with skewed attribute preferences observed across trained CelebA models and low probabilities of generating children and older men. Modifiers and mitigators act as post-processors and change the generator's performance. For instance, attribute channel perturbation strategies modify the embedding spaces. We quantify the influence of this change on group fairness by measuring the impact on image quality and group features. Specifically, we use the Fréchet Inception Distance (FID), the Face Matching Error, and the Self-Similarity score. For InterFaceGAN, we analyze one- and two-attribute channel perturbations and examine their effect on the fairness distribution and the quality of the images. Finally, we analyze the post-processing bias mitigators, which are the fastest and most computationally efficient way to mitigate bias. We find that these mitigation techniques show similar results on KL divergence and FID score; however, self-similarity scores reveal a different feature concentration on the new groups of the data distribution. The weaknesses and ongoing challenges described in this work must be considered in the pursuit of creating fair and unbiased face generation models.
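
The attribute channel perturbations analyzed for InterFaceGAN amount to shifting a latent code along learned attribute directions and then re-measuring the group distribution of the generated faces. The sketch below illustrates that idea under stated assumptions: the attribute directions, group counts, and helper names (`perturb_one`, `perturb_two`, `kl_to_uniform`) are hypothetical stand-ins for illustration, not the paper's actual pipeline.

```python
# Minimal sketch of InterFaceGAN-style latent editing plus a KL-divergence
# fairness check. Directions and group counts below are illustrative
# placeholders, not values from the paper.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
latent_dim = 512

# In InterFaceGAN, an attribute direction is the unit normal of a hyperplane
# separating that attribute in latent space; here we fake two such directions.
def random_unit(dim):
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

age_dir, gender_dir = random_unit(latent_dim), random_unit(latent_dim)

def perturb_one(z, direction, alpha):
    """One-attribute channel perturbation: z' = z + alpha * n."""
    return z + alpha * direction

def perturb_two(z, dir_a, alpha_a, dir_b, alpha_b):
    """Two-attribute perturbation: move along both channels at once."""
    return z + alpha_a * dir_a + alpha_b * dir_b

def kl_to_uniform(group_counts):
    """KL divergence from the observed group distribution to a uniform
    target; 0 would mean perfectly balanced generation across groups."""
    p = np.asarray(group_counts, dtype=float)
    p = p / p.sum()
    q = np.full_like(p, 1.0 / len(p))
    return entropy(p, q)  # scipy's entropy(p, q) computes KL(p || q)

# Sample latents and edit them; in the real pipeline the edited codes would
# be fed to a generator and a face-attribute classifier to get group counts.
z = rng.normal(size=(1000, latent_dim))
z_edited = perturb_two(z, age_dir, 1.5, gender_dir, -0.8)

counts_before = [612, 239, 97, 52]    # illustrative per-group counts
counts_after = [402, 301, 180, 117]   # illustrative counts after editing
print(f"KL before editing: {kl_to_uniform(counts_before):.3f}")
print(f"KL after editing:  {kl_to_uniform(counts_after):.3f}")
```

In practice this KL measurement would be tracked alongside an image-quality metric such as FID, since a perturbation that balances group proportions is only useful if it does not degrade the generated faces.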


