Analytical Probability Distributions and EM-Learning for Deep Generative Networks

06/17/2020
by Randall Balestriero, et al.

Deep Generative Networks (DGNs) with probabilistic modeling of their output and latent space are currently trained via Variational Autoencoders (VAEs). In the absence of a known analytical form for the posterior and likelihood expectation, VAEs resort to approximations, including (Amortized) Variational Inference (AVI) and Monte-Carlo (MC) sampling. We exploit the Continuous Piecewise Affine (CPA) property of modern DGNs to derive their posterior and marginal distributions as well as the latter's first moments. These findings enable us to derive an analytical Expectation-Maximization (EM) algorithm for gradient-free DGN learning. We demonstrate empirically that EM training of DGNs produces higher likelihoods than VAE training. Our findings will guide the design of new VAE AVI methods that better approximate the true posterior and open avenues to apply standard statistical tools for model comparison, anomaly detection, and missing data imputation.
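The CPA property the abstract relies on means that a generator built from affine layers and piecewise-linear activations (e.g. ReLU) acts as a plain affine map x = A z + b on each activation region of its latent space; the analytical posterior and marginal then follow from integrating region by region. Below is a minimal sketch (not the authors' code) in PyTorch, with a toy ReLU decoder of my own choosing, showing how the local affine parameters A and b can be recovered from the network and verified numerically.

    # Minimal sketch, assuming a toy ReLU MLP decoder (hypothetical, not the
    # paper's architecture): on the activation region containing z0, the
    # decoder is exactly affine, x = A z + b. We recover A as the Jacobian
    # at z0 and b as the residual, then check the affine surrogate matches
    # the network at a nearby point in the same region.
    import torch

    torch.manual_seed(0)

    latent_dim, hidden_dim, output_dim = 2, 16, 5
    decoder = torch.nn.Sequential(
        torch.nn.Linear(latent_dim, hidden_dim),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden_dim, output_dim),
    )

    z0 = torch.randn(latent_dim)

    # A = Jacobian of the decoder at z0; it is constant within the region.
    A = torch.autograd.functional.jacobian(decoder, z0)
    # b = offset of the local affine map, from g(z0) = A z0 + b.
    b = decoder(z0) - A @ z0

    # A small perturbation almost surely keeps the same ReLU activation
    # pattern, so the affine map reproduces the decoder there as well.
    z1 = z0 + 1e-3 * torch.randn(latent_dim)
    print(torch.allclose(decoder(z1), A @ z1 + b, atol=1e-5))  # True

With such per-region (A, b) pairs in hand, the paper's analytical posterior, marginal, and EM updates are obtained by summing closed-form Gaussian integrals over the regions; the sketch only illustrates the CPA structure those derivations build on.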


Related research

06/13/2019  Reweighted Expectation Maximization
Training deep generative models with maximum likelihood remains a challe...

09/27/2019  Identifying through Flows for Recovering Latent Representations
Identifiability, or recovery of the true latent representations from whi...

02/09/2022  Missing Data Imputation and Acquisition with Deep Hierarchical Models and Hamiltonian Monte Carlo
Variational Autoencoders (VAEs) have recently been highly successful at ...

02/13/2023  GFlowNet-EM for learning compositional latent variable models
Latent variable models (LVMs) with discrete compositional latents are an...

05/27/2022  MissDAG: Causal Discovery in the Presence of Missing Data with Continuous Additive Noise Models
State-of-the-art causal discovery methods usually assume that the observ...

03/23/2020  Deterministic Approximate EM Algorithm; Application to the Riemann Approximation EM and the Tempered EM
The Expectation Maximisation (EM) algorithm is widely used to optimise n...

10/10/2016  Truncated Variational Expectation Maximization
We derive a novel variational expectation maximization approach based on...
