Generative Models as Distributions of Functions

by Emilien Dupont et al.

Generative models are typically trained on grid-like data such as images. As a result, the size of these models usually scales directly with the underlying grid resolution. In this paper, we abandon discretized grids and instead parameterize individual data points by continuous functions. We then build generative models by learning distributions over such functions. By treating data points as functions, we can abstract away from the specific type of data we train on and construct models that scale independently of signal resolution and dimension. To train our model, we use an adversarial approach with a discriminator that acts directly on continuous signals. Through experiments on both images and 3D shapes, we demonstrate that our model can learn rich distributions of functions independently of data type and resolution.
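The core idea above, representing a data point as a continuous function of coordinates rather than a grid of values, can be illustrated with a minimal sketch. The code below is a hypothetical example, not the authors' implementation: a randomly initialised coordinate-based MLP mapping (x, y) positions to RGB values, which can be sampled at any resolution because the parameters are independent of the grid.

```python
import numpy as np

# Hypothetical sketch of an implicit (coordinate-based) representation:
# a small MLP f : R^2 -> R^3 mapping continuous (x, y) coordinates to RGB.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 32))   # first layer weights
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 3))   # output layer weights
b2 = np.zeros(3)

def f(coords):
    """Evaluate the function at an (N, 2) array of coordinates in [-1, 1]^2."""
    h = np.tanh(coords @ W1 + b1)
    return np.tanh(h @ W2 + b2)  # (N, 3) RGB values in (-1, 1)

def sample_grid(resolution):
    """Sample the same underlying function on a resolution x resolution grid."""
    xs = np.linspace(-1.0, 1.0, resolution)
    coords = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    return f(coords).reshape(resolution, resolution, 3)

low = sample_grid(16)    # a 16 x 16 "image" of the function
high = sample_grid(256)  # a 256 x 256 image from the *same* parameters
```

Note that `sample_grid(16)` and `sample_grid(256)` query the same parameters, so the representation's size does not scale with resolution; a generative model in this setting learns a distribution over such function parameters rather than over pixel grids.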

