On gradient regularizers for MMD GANs

05/29/2018
by Michael Arbel, et al.

We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). Our method is based on studying the behavior of the optimized MMD, and constrains the gradient based on analytical results rather than an optimization penalty. Experimental results show that the proposed regularization leads to stable training and outperforms state-of-the-art methods on image generation, including on 160 × 160 CelebA and 64 × 64 ImageNet.
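For readers unfamiliar with the MMD itself, the sketch below is a minimal NumPy illustration of the plain (biased) squared-MMD estimator with a Gaussian kernel — the quantity whose kernel an MMD GAN optimizes adversarially. This is not the paper's regularization method; the function names and the bandwidth parameter `sigma` are illustrative choices.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of x and y.
    sq_dists = (
        np.sum(x**2, axis=1)[:, None]
        + np.sum(y**2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased estimator of the squared MMD between samples x ~ P and y ~ Q:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)], with empirical means.
    kxx = rbf_kernel(x, x, sigma)
    kyy = rbf_kernel(y, y, sigma)
    kxy = rbf_kernel(x, y, sigma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))   # samples from P
y = rng.normal(3.0, 1.0, size=(500, 2))   # samples from a shifted Q
z = rng.normal(0.0, 1.0, size=(500, 2))   # fresh samples from P

# The estimate is large when the distributions differ,
# and close to zero when they match.
print(mmd2_biased(x, y))
print(mmd2_biased(x, z))
```

In an MMD GAN, the fixed kernel above is replaced by `k(f(x), f(y))` for a learned critic network `f`, and the paper's contribution is how to regularize the gradient of that critic.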


