
Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures

by Mohamed El Amine Seddik, et al.

This paper shows that deep learning (DL) representations of data produced by generative adversarial networks (GANs) are random vectors that fall within the class of so-called concentrated random vectors. We further exploit the fact that Gram matrices of the type G = X^T X, with X = [x_1, ..., x_n] ∈ R^{p×n} and the x_i independent concentrated random vectors drawn from a mixture model, behave asymptotically (as n, p → ∞) as if the x_i were drawn from a Gaussian mixture. This suggests that DL representations of GAN data can be fully described by their first two statistical moments for a wide range of standard classifiers. Our theoretical findings are validated by generating images with the BigGAN model and across several popular deep representation networks.

