
Variational Autoencoders (VAE): Theoretical Foundations and Applications

by Jordi de la Torre, et al.

VAEs are probabilistic graphical models, built on neural networks, that encode input data into a latent space described by simpler probability distributions and then reconstruct the source data from those latent variables. After training, the reconstruction network, called the decoder, can generate new samples from a distribution close to, and ideally equal to, the original one. This article is written in Spanish to make this scientific knowledge more accessible to the Spanish-speaking community.
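The encode–sample–decode pipeline described above can be sketched numerically. The snippet below is a minimal illustration, not code from the paper: the dimensions, the linear encoder/decoder with random weights standing in for trained parameters, and the variable names are all assumptions. It shows the two standard ingredients of a VAE: the reparameterization trick (z = mu + sigma * eps) and the two terms of the training objective, reconstruction error plus the KL divergence between the Gaussian posterior q(z|x) and the standard-normal prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumption, not from the paper).
x_dim, z_dim = 8, 2

# Linear "encoder" producing the parameters of the Gaussian posterior
# q(z|x), and a linear "decoder" reconstructing x from z. Random weights
# stand in for trained network parameters.
W_mu = rng.normal(size=(z_dim, x_dim))
W_logvar = rng.normal(size=(z_dim, x_dim))
W_dec = rng.normal(size=(x_dim, z_dim))

def encode(x):
    # Returns the mean and log-variance of q(z|x).
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps the sample differentiable w.r.t. mu, sigma.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    return W_dec @ z

def elbo_terms(x):
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    recon = np.sum((x - x_hat) ** 2)  # reconstruction error
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), always non-negative.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return recon, kl

x = rng.normal(size=x_dim)
recon, kl = elbo_terms(x)
print(recon, kl)
```

After training, generation amounts to dropping the encoder: sample z from the N(0, I) prior and run `decode(z)` alone, which is why the decoder can produce new elements from a distribution close to the original one.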




AI Discovering a Coordinate System of Chemical Elements: Dual Representation by Variational Autoencoders

The periodic table is a fundamental representation of chemical elements ...

Training β-VAE by Aggregating a Learned Gaussian Posterior with a Decoupled Decoder

The reconstruction loss and the Kullback-Leibler divergence (KLD) loss i...

Beta-VAE has 2 Behaviors: PCA or ICA?

Beta-VAE is a very classical model for disentangled representation learn...

AMIDST: a Java Toolbox for Scalable Probabilistic Machine Learning

The AMIDST Toolbox is a software for scalable probabilistic machine lear...

Concept Formation and Dynamics of Repeated Inference in Deep Generative Models

Deep generative models are reported to be useful in broad applications i...

Convolutional Factor Graphs as Probabilistic Models

Based on a recent development in the area of error control coding, we in...

Latent Racial Bias – Evaluating Racism in Police Stop-and-Searches

In this paper, we introduce the latent racial bias, a metric and method ...