Variational Autoencoders (VAE): Theoretical Foundations and Applications (original title: "Autocodificadores Variacionales (VAE): Fundamentos Teóricos y Aplicaciones")

02/18/2023
by Jordi de la Torre, et al.

VAEs are neural-network-based probabilistic graphical models that encode input data into a latent space formed by simpler probability distributions and reconstruct the source data from those latent variables. After training, the reconstruction network, called the decoder, can generate new samples from a distribution close to, and ideally equal to, the original one. This article is written in Spanish to make this scientific knowledge accessible to the Spanish-speaking community.
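To make the abstract's encode-sample-decode pipeline concrete, here is a minimal, generic VAE sketch in PyTorch. It is not the paper's implementation: the layer sizes (input_dim, hidden_dim, latent_dim) and the Bernoulli reconstruction loss are illustrative assumptions. The encoder produces the mean and log-variance of a diagonal Gaussian posterior, a latent sample is drawn with the reparameterization trick, and the decoder reconstructs the input; training minimizes the negative evidence lower bound (ELBO), i.e. a reconstruction term plus the KL divergence to a standard normal prior.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    # Illustrative sizes; default to flattened 28x28 images as an example.
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: maps the input to the parameters (mean, log-variance)
        # of a diagonal Gaussian posterior over the latent variables.
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: reconstructs the input from a latent sample.
        self.fc2 = nn.Linear(latent_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc1(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps with eps ~ N(0, I), keeping gradients.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        h = F.relu(self.fc2(z))
        return torch.sigmoid(self.fc3(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Negative ELBO: reconstruction term (Bernoulli likelihood assumed)
    # plus the KL divergence of the posterior from the N(0, I) prior.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld
```

After training, drawing z from N(0, I) and passing it through decode yields new samples from a distribution that approximates the original data distribution, which is the generative use of the decoder described above.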


Related Research

research · 11/24/2020
AI Discovering a Coordinate System of Chemical Elements: Dual Representation by Variational Autoencoders
The periodic table is a fundamental representation of chemical elements ...

research · 09/29/2022
Training β-VAE by Aggregating a Learned Gaussian Posterior with a Decoupled Decoder (see the β-VAE loss sketch after this list)
The reconstruction loss and the Kullback-Leibler divergence (KLD) loss i...

research · 03/25/2023
Beta-VAE has 2 Behaviors: PCA or ICA?
Beta-VAE is a very classical model for disentangled representation learn...

research · 04/04/2017
AMIDST: a Java Toolbox for Scalable Probabilistic Machine Learning
The AMIDST Toolbox is a software for scalable probabilistic machine lear...

research · 12/12/2017
Concept Formation and Dynamics of Repeated Inference in Deep Generative Models
Deep generative models are reported to be useful in broad applications i...

research · 07/11/2012
Convolutional Factor Graphs as Probabilistic Models
Based on a recent development in the area of error control coding, we in...

research · 05/08/2020
Latent Racial Bias – Evaluating Racism in Police Stop-and-Searches
In this paper, we introduce the latent racial bias, a metric and method ...
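Two of the related entries above concern the β-VAE, a variant that reweights the KL term of the standard VAE objective by a factor β (values of β greater than 1 tend to encourage disentangled latent factors). As a point of reference, here is a minimal sketch of that weighted loss; it reuses the mu/logvar convention of the earlier example, and the default beta=4.0 is an arbitrary illustrative value, not taken from either listed paper.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(recon_x, x, mu, logvar, beta=4.0):
    # Reconstruction term, assuming inputs in [0, 1] and a Bernoulli decoder.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # KL divergence of the diagonal Gaussian posterior from the N(0, I) prior.
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta = 1 recovers the standard VAE loss; beta > 1 penalizes KL more,
    # trading reconstruction fidelity for a more factorized latent code.
    return recon + beta * kld
```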
