High-Dimensional Distribution Generation Through Deep Neural Networks

07/26/2021
by Dmytro Perekrestenko, et al.

We show that every d-dimensional probability distribution with bounded support can be generated by a deep ReLU network from a one-dimensional uniform input distribution. Moreover, this is possible without incurring a cost, in terms of approximation error measured in Wasserstein distance, relative to generating the d-dimensional target distribution from d independent random variables. This is enabled by a vast generalization of the space-filling approach discovered in (Bailey & Telgarsky, 2018). The construction we propose elicits the importance of network depth in driving the Wasserstein distance between the target distribution and its neural network approximation to zero. Finally, we find that, for histogram target distributions, the number of bits needed to encode the corresponding generative network equals the fundamental limit for encoding probability distributions as dictated by quantization theory.
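The space-filling idea underlying this line of work can be sketched in a few lines. A minimal illustration, not the paper's actual construction: a ReLU network can compute the tent map exactly, iterating the tent map and thresholding at 1/2 extracts the (Gray-coded) binary digits of a Uniform[0,1] sample, and interleaving those digits turns one uniform input into two approximately independent uniform coordinates. The function names below are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # Tent map: 2x on [0, 1/2], 2 - 2x on [1/2, 1].
    # Expressible with two ReLU units: 2*relu(x) - 4*relu(x - 1/2).
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def split_uniform(x, bits_per_coord=8):
    # Iterate the tent map; thresholding at 1/2 yields one fair bit per
    # iteration (the Gray code of x, i.i.d. under the uniform measure).
    # Even-indexed bits build u, odd-indexed bits build v.
    u = np.zeros_like(x)
    v = np.zeros_like(x)
    val = np.array(x, dtype=float)
    for i in range(2 * bits_per_coord):
        bit = (val >= 0.5).astype(float)
        if i % 2 == 0:
            u += bit * 2.0 ** -(i // 2 + 1)
        else:
            v += bit * 2.0 ** -(i // 2 + 1)
        val = tent(val)
    return u, v

rng = np.random.default_rng(0)
x = rng.uniform(size=200_000)          # one 1-D uniform input per sample
u, v = split_uniform(x)                # two 8-bit-discretized uniforms out
```

Here u and v are each uniform up to 8-bit discretization and nearly uncorrelated, so a single scalar noise source carries the same information as two; composing such splits with piecewise-linear (hence ReLU-expressible) inverse-CDF-style maps is the flavor of construction that reaches general d-dimensional targets.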


Related research:

- 06/30/2020, Constructive Universal High-Dimensional Distribution Generation through Deep ReLU Networks: We present an explicit deep neural network construction that transforms ...
- 01/25/2021, Approximating Probability Distributions by ReLU Networks: How many neurons are needed to approximate a target probability distribu...
- 10/26/2018, Size-Noise Tradeoffs in Generative Networks: This paper investigates the ability of generative networks to convert th...
- 01/06/2018, Generating Neural Networks with Neural Networks: Hypernetworks are neural networks that transform a random input vector i...
- 04/19/2020, A Universal Approximation Theorem of Deep Neural Networks for Expressing Distributions: This paper studies the universal approximation property of deep neural n...
- 03/23/2021, Depth-based pseudo-metrics between probability distributions: Data depth is a nonparametric statistical tool that measures centrality...
- 10/24/2021, Non-Asymptotic Error Bounds for Bidirectional GANs: We derive nearly sharp bounds for the bidirectional GAN (BiGAN) estimati...