Influence Estimation for Generative Adversarial Networks

by Naoyuki Terashita, et al.

Identifying harmful instances, whose absence from a training dataset improves model performance, is important for building better machine learning models. Although previous studies have succeeded in estimating harmful instances under supervised settings, they cannot be trivially extended to generative adversarial networks (GANs). This is because previous approaches require that (1) the absence of a training instance directly affects the loss value and that (2) the change in the loss directly measures the harmfulness of the instance for the performance of a model. In GAN training, however, neither requirement is satisfied: (1) the generator's loss is not directly affected by the training instances, as they are not part of the generator's training steps, and (2) the values of a GAN's losses normally do not capture the generative performance of the model. To resolve these issues, (1) we propose an influence estimation method that uses the Jacobian of the gradient of the generator's loss with respect to the discriminator's parameters (and vice versa) to trace how the absence of an instance from the discriminator's training affects the generator's parameters, and (2) we propose a novel evaluation scheme in which we assess the harmfulness of each training instance on the basis of how a GAN evaluation metric (e.g., the inception score) is expected to change due to the removal of that instance. We experimentally verified that our influence estimation method correctly inferred the changes in GAN evaluation metrics. Further, we demonstrated that removing the identified harmful instances effectively improved the model's generative performance with respect to various GAN evaluation metrics.
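The chain of reasoning in the abstract — estimate how removing an instance shifts the discriminator's parameters, then propagate that shift to the generator through the cross-Jacobian of the generator's loss — can be illustrated on a deliberately tiny quadratic game. The sketch below is a hypothetical toy example, not the paper's model or code: a 1-D "discriminator" parameter fits the data mean and a 1-D "generator" parameter chases it, so every gradient, Hessian, and Jacobian is a scalar and the influence estimate can be checked against exact leave-one-out retraining.

```python
import numpy as np

# Hypothetical toy game (assumption for illustration, not the paper's setup):
#   L_D(d) = (1/n) sum_i (d - x_i)^2   ->  optimum d* = mean(x)
#   L_G(g; d) = (g - d)^2              ->  optimum g* = d*
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=50)   # training instances
n = len(x)

d = x.mean()                        # trained discriminator parameter
g = d                               # trained generator parameter

# Classic influence-function step on the discriminator side:
# removing x_k shifts d by roughly H_D^{-1} * grad_d loss(x_k) / n.
k = 3
H_D = 2.0                           # second derivative of (d - x)^2 in d
grad_k = 2.0 * (d - x[k])           # per-instance gradient at the optimum
delta_d = grad_k / (H_D * n)        # estimated shift in d

# Propagate to the generator via the cross term
# J = d/dd [ grad_g L_G ] = d/dd [ 2(g - d) ] = -2, with H_G = 2:
J, H_G = -2.0, 2.0
delta_g = -(J / H_G) * delta_d      # estimated shift in g

# Ground truth by exact leave-one-out retraining:
d_loo = np.delete(x, k).mean()
g_loo = d_loo
print(delta_g, g_loo - g)           # estimate vs. true parameter change
```

In this quadratic toy the estimate differs from the true leave-one-out change only by a factor of n/(n-1); the point is the structure of the computation — an instance influences the generator only indirectly, through the discriminator — which is the obstacle the paper's Jacobian-based method addresses for real GANs.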
