Generalization and Memorization: The Bias Potential Model

11/29/2020
by Hongkang Yang et al.

Models for learning probability distributions, such as generative models and density estimators, behave quite differently from models for learning functions. One example is the memorization phenomenon, namely the eventual convergence to the empirical distribution, which occurs in generative adversarial networks (GANs). For this reason, the issue of generalization is subtler than it is for supervised learning. For the bias potential model, we show that dimension-independent generalization accuracy is achievable with early stopping, even though in the long term the model either memorizes the samples or diverges.
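The bias potential model represents the learned density through a potential function, e.g. in the Boltzmann form p_V(x) ∝ e^{-V(x)}, and is trained by descending the maximum-likelihood loss L(V) = E_data[V] + log Z. The sketch below illustrates how such a training loop with early stopping might look; the random-feature potential, 1D Gaussian-mixture target, grid quadrature for the normalizer Z, and held-out stopping rule are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: MLE training of a bias potential model with early
# stopping. All specifics (features, target, quadrature) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_target(n):
    """Samples from a two-component Gaussian mixture (the unknown target)."""
    comp = rng.integers(0, 2, size=n)
    return np.where(comp == 0, rng.normal(-2.0, 0.5, n), rng.normal(2.0, 0.5, n))

x_train, x_val = sample_target(200), sample_target(200)

# Potential V(x) = a . phi(x) with fixed random cosine features phi.
m = 64
w, b = rng.normal(size=m), rng.uniform(-np.pi, np.pi, size=m)
def phi(x):                             # (n,) -> (n, m) feature matrix
    return np.cos(np.outer(x, w) + b)

grid = np.linspace(-8.0, 8.0, 2001)
dx = grid[1] - grid[0]
phi_grid = phi(grid)

def nll(a, x):
    """L(V) = E_data[V] + log Z, with Z by grid quadrature (stabilized)."""
    V = phi_grid @ a
    logZ = -V.min() + np.log(np.exp(-(V - V.min())).sum() * dx)
    return float(np.mean(phi(x) @ a) + logZ)

a, lr = np.zeros(m), 0.05
best_val, best_a = np.inf, a.copy()
for t in range(3000):
    # grad L = E_data[phi] - E_model[phi]; the model density is e^{-V}/Z.
    V = phi_grid @ a
    p = np.exp(-(V - V.min()))
    p /= p.sum()
    a -= lr * (phi(x_train).mean(axis=0) - p @ phi_grid)
    # Early stopping: keep the iterate with the best held-out likelihood.
    val = nll(a, x_val)
    if val < best_val:
        best_val, best_a = val, a.copy()

print(f"best held-out NLL: {best_val:.3f}")
```

Early stopping matters here because, as the abstract notes, long-time training over a sufficiently rich potential class drives p_V toward the empirical distribution of x_train (memorization) or diverges; halting at the iterate with the best held-out likelihood is the regime in which the dimension-independent generalization guarantee applies.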


