Inducing Interpretable Representations with Variational Autoencoders

11/22/2016
by N. Siddharth, et al.

We develop a framework for incorporating structured graphical models into the encoders of variational autoencoders (VAEs), allowing us to induce interpretable representations through approximate variational inference. This lets us both perform reasoning (e.g. classification) under the structural constraints of a given graphical model and use deep generative models to handle messy, high-dimensional domains where it is often difficult to model all the variation. Learning in this framework is carried out end-to-end with a variational objective and applies to both unsupervised and semi-supervised schemes.
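To make the variational objective concrete, here is a minimal numpy sketch, not the authors' implementation, of an ELBO for a VAE whose latent space combines a structured, interpretable part (a discrete class variable y, as in the classification example above) with an unstructured continuous style vector z. All parameters are toy, untrained linear maps, and every name (`W_cls`, `W_dec`, `elbo`, etc.) is an illustrative assumption.

```python
# Sketch of a single-sample Monte Carlo ELBO for a VAE with a
# structured latent: discrete class y plus continuous style z.
# Toy linear "networks" with random, untrained weights; purely
# illustrative, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
D, K, Z = 8, 3, 2  # data dim, number of classes, style dim

# Toy encoder/decoder parameters.
W_mu = rng.normal(size=(D, Z))
W_logvar = rng.normal(size=(D, Z)) * 0.1
W_cls = rng.normal(size=(D, K))
W_dec = rng.normal(size=(Z + K, D))

def softmax(a):
    a = a - a.max()
    e = np.exp(a)
    return e / e.sum()

def elbo(x):
    """ELBO estimate for one datapoint: q(y|x) categorical,
    q(z|x) diagonal Gaussian; prior uniform over y, N(0, I) over z."""
    # Encoder outputs.
    q_y = softmax(x @ W_cls)
    mu, logvar = x @ W_mu, x @ W_logvar

    # Reparameterized sample z ~ q(z|x).
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=Z)

    # Closed-form KL(q(z|x) || N(0, I)).
    kl_z = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    # KL(q(y|x) || Uniform(K)): sums out the discrete variable exactly.
    kl_y = np.sum(q_y * (np.log(q_y + 1e-12) - np.log(1.0 / K)))

    # Expected Gaussian reconstruction term, enumerating y.
    recon = 0.0
    for k in range(K):
        y = np.eye(K)[k]
        x_hat = np.concatenate([z, y]) @ W_dec
        recon += q_y[k] * (-0.5 * np.sum((x - x_hat) ** 2))

    return recon - kl_z - kl_y, kl_z, kl_y

x = rng.normal(size=D)
bound, kl_z, kl_y = elbo(x)
print(bound, kl_z, kl_y)
```

In a semi-supervised scheme, labeled datapoints would fix y to its observed value instead of enumerating it; in either case the same objective is maximized end-to-end.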

Related research

02/07/2018 · Semi-Amortized Variational Autoencoders
Amortized variational inference (AVI) replaces instance-specific local i...

06/14/2023 · Unbiased Learning of Deep Generative Models with Structured Discrete Representations
By composing graphical models with deep learning architectures, we learn...

06/01/2017 · Learning Disentangled Representations with Semi-Supervised Deep Generative Models
Variational autoencoders (VAEs) learn representations of data by jointly...

11/15/2018 · Concept-Oriented Deep Learning: Generative Concept Representations
Generative concept representations have three major advantages over disc...

10/16/2012 · Belief Propagation for Structured Decision Making
Variational inference algorithms such as belief propagation have had tre...

04/18/2019 · Design of Communication Systems using Deep Learning: A Variational Inference Perspective
An approach to design end to end communication system using deep learnin...

01/17/2022 · Alleviating Cold-start Problem in CTR Prediction with A Variational Embedding Learning Framework
We propose a general Variational Embedding Learning Framework (VELF) for...
