Adversarial Learning of a Sampler Based on an Unnormalized Distribution

01/03/2019
by Chunyuan Li, et al.

We investigate adversarial learning in the case where one has access only to an unnormalized form u(x) of the target density, rather than samples from it. Drawing on insights from standard adversarial learning with samples, we extend adversarial learning to train a sampler for the distribution defined, up to normalization, by u(x). Further, new concepts in GAN regularization are developed, based on learning from samples or from u(x). The proposed method is compared with alternative approaches, and encouraging results are demonstrated across a range of applications, including deep soft Q-learning.
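To make the setting concrete, the sketch below shows one way (not necessarily the paper's exact algorithm) to adversarially train a neural sampler when only an unnormalized log-density log u(x) is available: a discriminator estimates the log density ratio between generator samples and a known reference Gaussian, which yields a tractable surrogate for KL(q_theta || p) with p(x) ∝ u(x). The toy target energy, the N(0, 4I) reference, the network sizes, and the learning rates are all illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: adversarial learning of a sampler from an unnormalized density.
# Assumptions (not from the paper): toy 2D Gaussian-mixture target, Gaussian reference,
# small MLPs, Adam with default-ish hyperparameters.

import math
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 2

def log_u(x):
    """Unnormalized target log-density: a two-component Gaussian mixture (toy choice)."""
    mu1 = torch.tensor([2.0, 2.0])
    mu2 = torch.tensor([-2.0, -2.0])
    l1 = -0.5 * ((x - mu1) ** 2).sum(-1)
    l2 = -0.5 * ((x - mu2) ** 2).sum(-1)
    return torch.logsumexp(torch.stack([l1, l2], dim=-1), dim=-1)

def log_ref(x):
    """Log-density of the reference distribution p_r = N(0, 4 I) (known, easy to sample)."""
    var = 4.0
    return -0.5 * (x ** 2).sum(-1) / var - 0.5 * dim * math.log(2 * math.pi * var)

generator = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, dim))
discriminator = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                              nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
batch = 256

for step in range(5000):
    # Discriminator: separate generator samples (label 1) from reference samples (label 0).
    # At optimum its logit approximates log q_theta(x) - log p_r(x).
    x_gen = generator(torch.randn(batch, dim)).detach()
    x_ref = 2.0 * torch.randn(batch, dim)  # samples from N(0, 4 I)
    logits = discriminator(torch.cat([x_gen, x_ref])).squeeze(-1)
    labels = torch.cat([torch.ones(batch), torch.zeros(batch)])
    d_loss = bce(logits, labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: minimize the surrogate E_q[log p_r(x) + logit(x) - log u(x)],
    # i.e. an estimate of KL(q_theta || p) up to the unknown log-normalizer of u.
    # Only opt_g steps here, so the ratio estimator is effectively held fixed.
    x = generator(torch.randn(batch, dim))
    ratio = discriminator(x).squeeze(-1)
    g_loss = (log_ref(x) + ratio - log_u(x)).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(generator(torch.randn(5, dim)))  # samples approximately distributed as p(x) ∝ u(x)
```

Because the normalizing constant of u(x) enters the surrogate only as an additive constant, it does not affect the generator's gradient; this is what makes learning a sampler feasible from log u(x) alone under these assumptions.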

Related research

- Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning (11/06/2016)
- Generative Adversarial Learning via Kernel Density Discrimination (07/13/2021)
- Symmetric Variational Autoencoder and Connections to Adversarial Learning (09/06/2017)
- The IMP game: Learnability, approximability and adversarial learning beyond Σ^0_1 (02/07/2016)
- Networking the Boids is More Robust Against Adversarial Learning (02/27/2018)
- Adversarial-based neural network for affect estimations in the wild (02/03/2020)
- Bridging Maximum Likelihood and Adversarial Learning via α-Divergence (07/13/2020)
