
Sparsity in Variational Autoencoders

12/18/2018 · by Andrea Asperti, et al.

When working in high-dimensional latent spaces, the internal encoding of data in Variational Autoencoders becomes, unexpectedly, sparse. We highlight and investigate this phenomenon, which seems to suggest that, at least for a given architecture, there exists an intrinsic internal dimension of the data. This observation can be used either to check whether the network has sufficient internal capacity, augmenting it until sparsity is attained, or conversely to reduce the dimension of the network by removing links to zeroed-out neurons. Sparsity also explains the reduced variability in random generative sampling from the latent space that one may sometimes observe with variational autoencoders.
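
The phenomenon can be probed with a small diagnostic. As a minimal sketch (not code from the paper), the snippet below measures the per-dimension KL term of a standard Gaussian VAE: a latent dimension whose posterior has collapsed onto the N(0, 1) prior contributes near-zero KL and is effectively unused. It assumes an encoder that outputs mean (`mu`) and log-variance (`logvar`) arrays; the `kl_threshold` of 0.01 is an illustrative choice, not a value from the paper.

```python
import numpy as np

def latent_activity(mu, logvar, kl_threshold=0.01):
    """Estimate which latent dimensions a Gaussian VAE actually uses.

    mu, logvar: arrays of shape (batch, latent_dim) from the encoder.
    For each dimension j, the KL term of the VAE objective is
        KL_j = 0.5 * (mu_j^2 + exp(logvar_j) - logvar_j - 1),
    which is ~0 when the posterior collapses to the N(0, 1) prior,
    i.e. when the dimension carries no information about the input.
    """
    kl_per_dim = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0)
    mean_kl = kl_per_dim.mean(axis=0)   # average KL over the batch
    active = mean_kl > kl_threshold     # dims the model really encodes into
    return mean_kl, active

# Synthetic illustration: 3 informative dimensions, 5 collapsed ones.
rng = np.random.default_rng(0)
batch, used, unused = 256, 3, 5
mu = np.concatenate([rng.normal(0, 1, (batch, used)),        # informative: mu varies
                     rng.normal(0, 0.01, (batch, unused))], axis=1)
logvar = np.concatenate([np.full((batch, used), -2.0),       # informative: small variance
                         np.full((batch, unused), 0.0)],     # collapsed: sigma ~ 1, mu ~ 0
                        axis=1)

mean_kl, active = latent_activity(mu, logvar)
print("mean KL per dimension:", np.round(mean_kl, 3))
print("active dimensions:", np.flatnonzero(active))  # expected: [0 1 2]
```

Counting the active dimensions this way gives an empirical estimate of the intrinsic internal dimension the abstract refers to; the inactive dimensions are the natural candidates for pruning links.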


Related Research

06/06/2019 · An Introduction to Variational Autoencoders
Variational autoencoders provide a principled framework for learning dee...

04/16/2021 · Better Latent Spaces for Better Autoencoders
Autoencoders as tools behind anomaly searches at the LHC have the struct...

01/24/2019 · On the Transformation of Latent Space in Autoencoders
Noting the importance of the latent variables in inference and learning,...

01/06/2021 · Attention-based Convolutional Autoencoders for 3D-Variational Data Assimilation
We propose a new 'Bi-Reduced Space' approach to solving 3D Variational D...

10/19/2020 · Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders
Discrete latent spaces in variational autoencoders have been shown to ef...

07/10/2019 · Perturbation theory approach to study the latent space degeneracy of Variational Autoencoders
The use of Variational Autoencoders in different Machine Learning tasks ...

04/15/2019 · Processing Simple Geometric Attributes with Autoencoders
Image synthesis is a core problem in modern deep learning, and many rece...