Using VAEs to Learn Latent Variables: Observations on Applications in cryo-EM

03/13/2023
by Daniel G. Edelberg et al.

Variational autoencoders (VAEs) are a popular class of generative models used to approximate distributions. The encoder of a VAE performs amortized learning of latent variables, producing a latent representation for each data sample. Recently, VAEs have been used to characterize physical and biological systems. In this case study, we qualitatively examine the amortization properties of a VAE used in a biological application, cryo-EM. We find that, in this application, the encoder bears a qualitative resemblance to more traditional, explicit representations of latent variables.
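The contrast the abstract draws can be illustrated on a toy problem. The sketch below (not from the paper; the linear model, dimensions, and learning rates are illustrative assumptions) compares the two inference styles on a linear-Gaussian toy model: amortized inference, where one shared encoder maps every observation to its latent, versus an explicit representation, where each sample gets its own free latent variable optimized directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear generative model (a hypothetical stand-in for a real decoder):
# x = W z + noise, with latent z in R^2 and observation x in R^8.
d_z, d_x, n = 2, 8, 200
W = rng.normal(size=(d_x, d_z))
Z_true = rng.normal(size=(n, d_z))
X = Z_true @ W.T + 0.05 * rng.normal(size=(n, d_x))

def recon_loss(Z):
    """Mean squared reconstruction error for a matrix of latents Z."""
    return float(np.mean((Z @ W.T - X) ** 2))

baseline = recon_loss(np.zeros((n, d_z)))  # loss with all-zero latents

# (a) Amortized inference: a single linear encoder z = A x is shared by
#     every sample; we optimize the encoder weights A by gradient descent.
A = np.zeros((d_z, d_x))
for _ in range(3000):
    Z = X @ A.T
    grad_Z = (2.0 / n) * (Z @ W.T - X) @ W  # dL/dZ
    A -= 0.001 * grad_Z.T @ X               # chain rule through z = A x
amortized = recon_loss(X @ A.T)

# (b) Explicit representation: one free latent z_i per sample,
#     optimized directly with no shared encoder.
Z_free = np.zeros((n, d_z))
for _ in range(500):
    Z_free -= 0.01 * 2.0 * (Z_free @ W.T - X) @ W  # per-sample gradient step
explicit = recon_loss(Z_free)
```

Both approaches drive the reconstruction loss well below the zero-latent baseline; the explicit latents are at least as expressive since they are unconstrained by a shared encoder. The paper's observation is, loosely, that in practice the trained encoder can end up behaving much like such an explicit per-sample representation.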


Related research

05/12/2020 · Jigsaw-VAE: Towards Balancing Features in Variational Autoencoders
The latent variables learned by VAEs have seen considerable interest as ...

07/19/2018 · Bounded Information Rate Variational Autoencoders
This paper introduces a new member of the family of Variational Autoenco...

04/23/2022 · SIReN-VAE: Leveraging Flows and Amortized Inference for Bayesian Networks
Initial work on variational autoencoders assumed independent latent vari...

02/01/2019 · A Classification Supervised Auto-Encoder Based on Predefined Evenly-Distributed Class Centroids
Classic Autoencoders and variational autoencoders are used to learn comp...

06/17/2020 · Rethinking Semi-Supervised Learning in VAEs
We present an alternative approach to semi-supervision in variational au...

02/06/2023 · Proposing Novel Extrapolative Compounds by Nested Variational Autoencoders
Materials informatics (MI), which uses artificial intelligence and data ...

03/25/2023 · Beta-VAE has 2 Behaviors: PCA or ICA?
Beta-VAE is a very classical model for disentangled representation learn...
