On Predicting Generalization using GANs

11/28/2021
by Yi Zhang, et al.

Research on generalization bounds for deep networks seeks ways to predict test error using only the training dataset and the network parameters. While generalization bounds can give many insights about architecture design, training algorithms, and so on, what they do not currently do is yield good predictions of actual test error. The recently introduced Predicting Generalization in Deep Learning competition aims to encourage the discovery of methods that better predict test error. The current paper investigates a simple idea: can test error be predicted using 'synthetic data' produced by a Generative Adversarial Network (GAN) that was trained on the same training dataset? Upon investigating several GAN models and architectures, we find that this turns out to be the case. In fact, using GANs pre-trained on standard datasets, the test error can be predicted without requiring any additional hyper-parameter tuning. This result is surprising because GANs have well-known limitations (e.g. mode collapse) and are known to not learn the data distribution accurately. Yet the generated samples are good enough to substitute for test data. Several additional experiments are presented to explore why GANs do well at this task. In addition to providing a new approach for predicting generalization, the counter-intuitive phenomena presented in our work may also call for a better understanding of GANs' strengths and limitations.
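The core procedure the abstract describes can be sketched in a few lines: draw labeled samples from a (conditional) GAN trained on the same training set, run the classifier on them, and report the error rate on this synthetic set as the predicted test error. The sketch below is a toy illustration of that recipe, not the paper's implementation: `conditional_gan_sample` is a hypothetical stand-in for a real pre-trained conditional generator (class-dependent Gaussians here), and `classifier` is a hypothetical fixed nearest-centroid model standing in for the trained deep network being evaluated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pre-trained conditional GAN generator.
# In the paper's setting this would be a real GAN (e.g. trained on CIFAR-10);
# here each class is a unit-variance Gaussian around a fixed center.
CENTERS = np.array([[0.0, 0.0], [3.0, 3.0]])

def conditional_gan_sample(label, n):
    """Generate n 'synthetic' samples for the given class label."""
    return CENTERS[label] + rng.normal(size=(n, 2))

def classifier(x):
    """Toy stand-in for the trained network under evaluation:
    predicts the class of the nearest centroid."""
    dists = np.linalg.norm(x[:, None, :] - CENTERS[None, :, :], axis=2)
    return dists.argmin(axis=1)

def predicted_test_error(n_per_class=1000):
    """The idea from the abstract: evaluate the classifier on a
    GAN-generated 'synthetic test set' and report its error rate."""
    errors = []
    for label in (0, 1):
        x = conditional_gan_sample(label, n_per_class)
        errors.append(float((classifier(x) != label).mean()))
    return float(np.mean(errors))

print(f"predicted test error: {predicted_test_error():.3f}")
```

Because the generated samples track the class-conditional distributions, the synthetic error rate approximates the error the classifier would incur on a held-out test set, which is the substitution the paper demonstrates with real GANs and deep networks.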

