Lipschitz regularized gradient flows and latent generative particles

10/31/2022
by Hyemin Gu, et al.

Lipschitz regularized f-divergences are constructed by imposing a bound on the Lipschitz constant of the discriminator in the variational representation. They interpolate between the Wasserstein metric and f-divergences and provide a flexible family of loss functions for non-absolutely continuous (e.g., empirical) distributions, possibly with heavy tails. We construct Lipschitz regularized gradient flows on the space of probability measures based on these divergences. Examples of such gradient flows are Lipschitz regularized Fokker-Planck and porous medium partial differential equations (PDEs) for the Kullback-Leibler and alpha-divergences, respectively. The regularization corresponds to imposing a Courant-Friedrichs-Lewy numerical stability condition on the PDEs. For empirical measures, the Lipschitz regularization on gradient flows induces a numerically stable transporter/discriminator particle algorithm, where the generative particles are transported along the gradient of the discriminator. The gradient structure leads to a regularized Fisher information (particle kinetic energy) used to track the convergence of the algorithm. The Lipschitz regularized discriminator can be implemented via neural network spectral normalization, and the particle algorithm generates approximate samples from possibly high-dimensional distributions known only from data. Notably, our particle algorithm can generate synthetic data even in small sample size regimes. A new data processing inequality for the regularized divergence allows us to combine our particle algorithm with representation learning, e.g., autoencoder architectures. The resulting algorithm yields markedly improved generative properties in terms of efficiency and quality of the synthetic samples. From a statistical mechanics perspective, the encoding can be interpreted dynamically as learning a better mobility for the generative particles.
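To make the transporter/discriminator scheme concrete, the following is a minimal sketch, assuming a PyTorch setup, of one outer iteration for the Kullback-Leibler case: the discriminator is fit on the Legendre-dual variational objective E_P[phi] - E_Q[exp(phi - 1)] with the Lipschitz bound imposed via spectral normalization, and the generative particles are then transported along the gradient of the discriminator. The function names (make_discriminator, particle_step), architecture, and all hyperparameters are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of a Lipschitz regularized transporter/discriminator particle step
# (KL case); hyperparameters and names are illustrative assumptions.
import torch
import torch.nn as nn

def make_discriminator(dim, width=64):
    """Spectrally normalized MLP; the per-layer spectral norms bound the
    network's Lipschitz constant (ReLU is 1-Lipschitz)."""
    sn = nn.utils.spectral_norm
    return nn.Sequential(
        sn(nn.Linear(dim, width)), nn.ReLU(),
        sn(nn.Linear(width, width)), nn.ReLU(),
        sn(nn.Linear(width, 1)),
    )

def kl_variational_loss(phi, x_data, x_gen):
    """Negative of the KL variational objective E_P[phi] - E_Q[exp(phi - 1)],
    with P the data distribution and Q the current particle distribution."""
    return -(phi(x_data).mean() - torch.exp(phi(x_gen) - 1.0).mean())

def particle_step(phi, x_data, x_gen, dt=0.1, inner_steps=20, lr=1e-3):
    """One outer iteration: (re)fit the discriminator on the current particle
    configuration, then transport the particles along grad(phi)."""
    opt = torch.optim.Adam(phi.parameters(), lr=lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        loss = kl_variational_loss(phi, x_data, x_gen.detach())
        loss.backward()
        opt.step()

    x = x_gen.detach().requires_grad_(True)
    grad_phi = torch.autograd.grad(phi(x).sum(), x)[0]
    x_new = x + dt * grad_phi                     # forward-Euler transport step
    kinetic = (grad_phi ** 2).sum(dim=1).mean()   # particle kinetic energy proxy
    return x_new.detach(), kinetic.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    target = torch.randn(512, 2) + torch.tensor([3.0, 0.0])  # toy target data P
    particles = torch.randn(512, 2)                           # initial particles Q_0
    phi = make_discriminator(dim=2)
    for it in range(200):
        particles, ke = particle_step(phi, target, particles)
        if it % 50 == 0:
            print(f"iter {it:3d}  particle kinetic energy {ke:.4f}")
```

In this sketch the averaged squared gradient of the discriminator plays the role of the regularized Fisher information (particle kinetic energy) mentioned in the abstract, and its decay can be used as a stopping criterion for the outer loop.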
