Measuring Fairness in Generative Models

07/16/2021
by Christopher T. H. Teo, et al.

Deep generative models have made considerable progress in improving training stability and the quality of generated data. Recently, there has been increased interest in the fairness of deep-generated data. Fairness is important in many applications, e.g., law enforcement, where bias can undermine efficacy. Central to fair data generation are fairness metrics for the assessment and evaluation of different generative models. In this paper, we first review fairness metrics proposed in previous works and highlight their potential weaknesses. We then discuss a performance benchmark framework along with an assessment of alternative metrics.
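To make the role of such metrics concrete, here is a minimal sketch of one common style of fairness metric for generative models: the discrepancy between the empirical distribution of a sensitive attribute (e.g., as predicted by an attribute classifier over generated samples) and a uniform reference distribution. The function name and the L2 choice are illustrative assumptions, not the paper's specific proposal.

```python
import numpy as np

def fairness_discrepancy(attr_labels, num_classes):
    """Illustrative metric: L2 distance between the empirical
    sensitive-attribute distribution of generated samples and a
    uniform reference. 0 means perfectly balanced generation."""
    counts = np.bincount(np.asarray(attr_labels), minlength=num_classes)
    p_hat = counts / counts.sum()           # empirical attribute distribution
    p_ref = np.full(num_classes, 1.0 / num_classes)  # uniform target
    return float(np.linalg.norm(p_hat - p_ref))

# A balanced batch scores 0; a fully skewed batch scores the maximum.
balanced = [0, 1] * 50   # 50/50 split over two attribute classes
skewed = [0] * 100       # all samples fall in one class
```

In practice, as the abstract suggests, the attribute labels themselves come from a classifier whose errors propagate into the metric, which is one source of the weaknesses such a review would examine.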


