Sparsity in Variational Autoencoders

12/18/2018
by Andrea Asperti, et al.

When working in high-dimensional latent spaces, the internal encoding of data in Variational Autoencoders becomes, unexpectedly, sparse. We highlight and investigate this phenomenon, which suggests that, at least for a given architecture, data has an intrinsic internal dimension. This observation can be exploited in two ways: to check whether the network has sufficient internal capacity, augmenting it until sparsity appears, or, conversely, to reduce the size of the network by removing links to zeroed-out neurons. Sparsity also explains the reduced variability one may sometimes observe when sampling randomly from the latent space of a variational autoencoder.
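The sparsity described above shows up as latent dimensions whose per-dimension KL divergence to the standard normal prior is near zero: such units carry no information about the input. As a rough illustration (not the paper's own code), one can estimate the "active" internal dimension from the encoder's Gaussian parameters; the function name and threshold below are illustrative choices:

```python
import numpy as np

def inactive_dimensions(mu, logvar, threshold=0.01):
    """Estimate which latent dimensions a VAE has effectively switched off.

    mu, logvar: arrays of shape (n_samples, latent_dim) holding the
    encoder's Gaussian parameters for a batch of inputs. For each
    dimension we average the analytic KL divergence from N(mu, var)
    to the standard normal prior N(0, 1); dimensions with near-zero
    mean KL mimic the prior and are considered collapsed/sparse.
    """
    var = np.exp(logvar)
    # KL(N(mu, var) || N(0, 1)) per sample and per dimension
    kl = 0.5 * (mu ** 2 + var - logvar - 1.0)
    mean_kl = kl.mean(axis=0)
    return mean_kl, np.where(mean_kl < threshold)[0]

# Toy check: dimension 0 encodes information (non-trivial mu, small
# variance), while dimension 1 mimics the prior (mu = 0, logvar = 0),
# i.e. a collapsed unit.
rng = np.random.default_rng(0)
mu = np.stack([rng.normal(0.0, 2.0, 1000), np.zeros(1000)], axis=1)
logvar = np.stack([np.full(1000, -2.0), np.zeros(1000)], axis=1)
mean_kl, dead = inactive_dimensions(mu, logvar)
```

The number of dimensions above the threshold gives a crude estimate of the intrinsic internal dimension the abstract refers to; the threshold value itself is a heuristic and would need tuning per model.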


Related research

07/05/2023
On the Adversarial Robustness of Generative Autoencoders in the Latent Space
The generative autoencoders, such as the variational autoencoders or the...

06/06/2019
An Introduction to Variational Autoencoders
Variational autoencoders provide a principled framework for learning dee...

04/16/2021
Better Latent Spaces for Better Autoencoders
Autoencoders as tools behind anomaly searches at the LHC have the struct...

01/06/2021
Attention-based Convolutional Autoencoders for 3D-Variational Data Assimilation
We propose a new 'Bi-Reduced Space' approach to solving 3D Variational D...

10/19/2020
Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders
Discrete latent spaces in variational autoencoders have been shown to ef...

08/17/2023
Conditional Sampling of Variational Autoencoders via Iterated Approximate Ancestral Sampling
Conditional sampling of variational autoencoders (VAEs) is needed in var...

02/20/2023
Variational Autoencoding Neural Operators
Unsupervised learning with functional data is an emerging paradigm of ma...
