Exploring Biases and Prejudice of Facial Synthesis via Semantic Latent Space

08/23/2021
by Xuyang Shen, et al.

Deep learning (DL) models are widely used to provide a more convenient and smarter life. However, biased algorithms will negatively influence us. For instance, groups targeted by biased algorithms will feel unfairly treated and even fearful of the negative consequences of these biases. This work targets the behavior of biased generative models, identifying the cause of the biases and eliminating them. We can (as expected) conclude that biased data causes biased predictions in face frontalization models. Varying the proportions of male and female faces in the training data can have a substantial effect on behavior on the test data: we found that the seemingly obvious choice of 50:50 proportions was not the best for this dataset for reducing biased behavior on female faces. Failing to generate a face and generating a face of the incorrect gender are the two biased behaviors these models exhibit. In addition, only some layers in face frontalization models are vulnerable to biased datasets. Optimizing the skip-connections of the generator in face frontalization models can make the models less biased. We conclude that it is likely impossible to eliminate all training bias without an unlimited-size dataset, but our experiments show that the bias can be reduced and quantified. We believe the next best thing to a perfectly unbiased predictor is one that has minimized the remaining known bias.
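To make the skip-connection observation concrete, the sketch below is a minimal PyTorch illustration (not the authors' implementation) of a U-Net-style frontalization generator in which everything except the skip-connection layers is frozen before a debiasing fine-tuning pass. The architecture, layer sizes, and module names are illustrative assumptions.

```python
# Minimal PyTorch sketch (illustrative, not the paper's code): a small
# encoder-decoder generator with explicit skip-connection layers, where a
# debiasing fine-tuning step updates only those skip layers.
import torch
import torch.nn as nn


class SkipGenerator(nn.Module):
    def __init__(self, in_ch=3, base_ch=64):
        super().__init__()
        # Encoder: profile face -> feature maps
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base_ch, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base_ch, base_ch * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        # Skip-connection projections (the layers left trainable below)
        self.skip1 = nn.Conv2d(base_ch, base_ch, 1)
        self.skip2 = nn.Conv2d(base_ch * 2, base_ch * 2, 1)
        # Decoder: feature maps -> frontal face
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base_ch * 2, base_ch, 4, 2, 1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base_ch, in_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2 + self.skip2(e2))
        return self.dec1(d2 + self.skip1(e1))


gen = SkipGenerator()

# Freeze all parameters, then unfreeze only the skip-connection layers.
for p in gen.parameters():
    p.requires_grad = False
for module in (gen.skip1, gen.skip2):
    for p in module.parameters():
        p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in gen.parameters() if p.requires_grad), lr=1e-4
)

# One illustrative step on dummy 128x128 profile/frontal pairs.
profile = torch.randn(4, 3, 128, 128)
frontal = torch.randn(4, 3, 128, 128)
loss = nn.functional.l1_loss(gen(profile), frontal)
loss.backward()
optimizer.step()
```

Restricting the optimizer to the skip paths mirrors the abstract's point that only some layers are vulnerable to biased data, so the rest of a pretrained generator can be left untouched during debiasing.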

