Trust the Critics: Generatorless and Multipurpose WGANs with Initial Convergence Guarantees

11/30/2021
by Tristan Milne, et al.

Inspired by ideas from optimal transport theory, we present Trust the Critics (TTC), a new algorithm for generative modelling. This algorithm eliminates the trainable generator from a Wasserstein GAN; instead, it iteratively modifies the source data using gradient descent on a sequence of trained critic networks. This is motivated in part by the misalignment we observed between the optimal transport directions provided by the gradients of the critic and the directions in which data points actually move when parametrized by a trainable generator. Previous work has arrived at similar ideas from different viewpoints, but our basis in optimal transport theory motivates the choice of an adaptive step size, which greatly accelerates convergence compared to a constant step size. Using this step size rule, we prove an initial geometric convergence rate for source distributions with densities; these convergence rates cease to apply only once a non-negligible set of generated data is essentially indistinguishable from real data. Resolving the misalignment issue improves performance, which we demonstrate in experiments showing that, for a fixed number of training epochs, TTC produces higher-quality images than a comparable WGAN, albeit with increased memory requirements. In addition, TTC provides an iterative formula for the transformed density, which traditional WGANs do not. Finally, TTC can map any source distribution onto any target; our experiments show that TTC achieves competitive performance in image generation, translation, and denoising without dedicated algorithms.
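To make the procedure concrete, below is a minimal PyTorch sketch of the TTC loop described above; this is not the authors' implementation. The critic architecture, the toy 2-D data, the gradient-penalty weight LAMBDA, the critic iteration count, and the step-size fraction THETA are all illustrative assumptions, and the paper's actual adaptive step size rule and hyperparameters may differ.

```python
# Hedged sketch of the TTC idea: instead of training a generator, repeatedly
# (i) train a fresh WGAN critic to separate the current samples from the
# target data, then (ii) push every sample a short step along the negative
# gradient of the critic, with a step size tied to the estimated W1 distance.
import torch
import torch.nn as nn

LAMBDA, THETA = 10.0, 0.5  # GP weight; fraction of the W1 estimate per step (assumed values)

def make_critic(dim=2):
    return nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, 1))

def gradient_penalty(critic, real, fake):
    # Standard WGAN-GP penalty encouraging ||grad D|| = 1 on interpolates.
    eps = torch.rand(real.size(0), 1)
    mid = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(mid).sum(), mid, create_graph=True)[0]
    return ((grad.norm(dim=1) - 1) ** 2).mean()

def train_critic(source, target, iters=300, batch=256):
    critic = make_critic(source.size(1))
    opt = torch.optim.Adam(critic.parameters(), lr=1e-4, betas=(0.5, 0.9))
    for _ in range(iters):
        s = source[torch.randint(len(source), (batch,))]
        t = target[torch.randint(len(target), (batch,))]
        # Critic maximizes E[D(source)] - E[D(target)], so minimize the negative.
        loss = (critic(t).mean() - critic(s).mean()
                + LAMBDA * gradient_penalty(critic, t, s))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return critic

def ttc_step(source, target):
    critic = train_critic(source, target)
    # The critic gap estimates the Wasserstein-1 distance; the adaptive step
    # size moves each point a fraction THETA of that estimated distance.
    with torch.no_grad():
        w1_est = critic(source).mean() - critic(target).mean()
    step = THETA * w1_est.item()
    x = source.clone().requires_grad_(True)
    grad = torch.autograd.grad(critic(x).sum(), x)[0]
    # For an (approximately) 1-Lipschitz critic, -grad points from the
    # current samples toward the target mass.
    return (x - step * grad).detach()

# Toy usage: transport a Gaussian blob onto a shifted one in a few TTC steps.
source = torch.randn(2048, 2)
target = torch.randn(2048, 2) + torch.tensor([4.0, 0.0])
for k in range(5):
    source = ttc_step(source, target)
    print(f"step {k}: mean = {source.mean(0).tolist()}")
```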


