Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models

06/07/2023
by George Stein, et al.

We systematically study a wide variety of image-based generative models spanning semantically-diverse datasets to understand and improve the feature extractors and metrics used to evaluate them. Using best practices in psychophysics, we measure human perception of image realism for generated samples by conducting the largest experiment evaluating generative models to date, and find that no existing metric strongly correlates with human evaluations. Comparing to 16 modern metrics for evaluating the overall performance, fidelity, diversity, and memorization of generative models, we find that the state-of-the-art perceptual realism of diffusion models as judged by humans is not reflected in commonly reported metrics such as FID. This discrepancy is not explained by diversity in generated samples, though one cause is over-reliance on Inception-V3. We address these flaws through a study of alternative self-supervised feature extractors, find that the semantic information encoded by individual networks strongly depends on their training procedure, and show that DINOv2-ViT-L/14 allows for much richer evaluation of generative models. Next, we investigate data memorization, and find that generative models do memorize training examples on simple, smaller datasets like CIFAR10, but not necessarily on more complex datasets like ImageNet. However, our experiments show that current metrics do not properly detect memorization; none in the literature is able to separate memorization from other phenomena such as underfitting or mode shrinkage. To facilitate further development of generative models and their evaluation we release all generated image datasets, human evaluation data, and a modular library to compute 16 common metrics for 8 different encoders at https://github.com/layer6ai-labs/dgm-eval.
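As a rough illustration of the evaluation setup discussed above, the sketch below computes a Fréchet distance (the quantity underlying FID) over DINOv2-ViT-L/14 features instead of Inception-V3 features. It is a minimal, hypothetical example built on torch.hub and scipy, not the interface of the released dgm-eval library; the tensor names, image preprocessing, and batch size are assumptions.

```python
# Minimal sketch: Fréchet distance between real and generated images using
# DINOv2-ViT-L/14 as the feature extractor. Illustrative only; not the dgm-eval API.
import numpy as np
import torch
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two feature sets."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_g) - 2 * np.trace(covmean))

@torch.no_grad()
def extract_dinov2_features(images: torch.Tensor, batch_size: int = 64) -> np.ndarray:
    """Encode ImageNet-normalized images of shape (N, 3, 224, 224) with DINOv2-ViT-L/14."""
    model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14").eval()
    feats = []
    for i in range(0, len(images), batch_size):
        feats.append(model(images[i:i + batch_size]).cpu().numpy())
    return np.concatenate(feats, axis=0)

# Usage with hypothetical image tensors `real_images` and `gen_images`:
# fid_dinov2 = frechet_distance(extract_dinov2_features(real_images),
#                               extract_dinov2_features(gen_images))
```

The only change relative to a standard FID computation is the encoder; swapping Inception-V3 for a self-supervised extractor such as DINOv2-ViT-L/14 is exactly the kind of substitution the abstract argues yields a richer evaluation.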


