Inverting Variational Autoencoders for Improved Generative Accuracy

08/21/2016
by Ian Gemp, et al.

Recent advances in semi-supervised learning with deep generative models have shown promise in generalizing from small labeled datasets (x,y) to large unlabeled ones (x). In the case where the codomain has known structure, a large unfeatured dataset (y) is potentially available. We develop a parameter-efficient, deep semi-supervised generative model for the purpose of exploiting this untapped data source. Empirical results show improved performance in disentangling latent variable semantics as well as improved discriminative prediction on Martian spectroscopic and handwritten digit domains.
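To make the setting concrete, below is a minimal, hypothetical sketch of a semi-supervised variational autoencoder in the style of Kingma et al.'s M2 model, extended with an illustrative loss term for label-only ("unfeatured") data of the kind the abstract describes. This is not the authors' model or code; the architecture sizes, module names, and especially the y-only term are assumptions made for illustration.

# Minimal sketch (an assumption, not the paper's implementation) of a
# semi-supervised VAE with three data regimes: labeled (x, y) pairs,
# unlabeled x, and unfeatured y.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedVAE(nn.Module):
    def __init__(self, x_dim=784, y_dim=10, z_dim=32, h_dim=256):
        super().__init__()
        self.y_dim, self.z_dim = y_dim, z_dim
        # q(y|x): discriminative classifier, also used for unlabeled x
        self.classifier = nn.Sequential(
            nn.Linear(x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, y_dim))
        # q(z|x,y): recognition (encoder) network
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # p(x|y,z): conditional decoder (Bernoulli likelihood via logits)
        self.dec = nn.Sequential(
            nn.Linear(y_dim + z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim))

    def elbo(self, x, y1):
        # Per-example ELBO for a fixed one-hot label y1.
        h = self.enc(torch.cat([x, y1], dim=1))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        logits = self.dec(torch.cat([y1, z], dim=1))
        recon = -F.binary_cross_entropy_with_logits(
            logits, x, reduction='none').sum(1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1)
        return recon - kl

    def loss_labeled(self, x, y):
        # Supervised term: -ELBO(x, y) plus a classification loss on q(y|x).
        y1 = F.one_hot(y, self.y_dim).float()
        return -self.elbo(x, y1).mean() + F.cross_entropy(self.classifier(x), y)

    def loss_unlabeled_x(self, x):
        # Standard M2-style term: marginalize the negative ELBO over y
        # under q(y|x), minus the entropy of q(y|x).
        probs = F.softmax(self.classifier(x), dim=1)
        bound = 0.0
        for k in range(self.y_dim):
            yk = torch.full((x.size(0),), k, dtype=torch.long)
            bound = bound + probs[:, k] * (-self.elbo(x, F.one_hot(yk, self.y_dim).float()))
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1)
        return (bound - entropy).mean()

    def loss_unfeatured_y(self, y):
        # Illustrative y-only term (an assumption, not the paper's method):
        # draw z ~ N(0, I), decode x ~ p(x|y,z), and require the classifier
        # to recover y from the generated x (a cycle-consistency surrogate).
        y1 = F.one_hot(y, self.y_dim).float()
        z = torch.randn(y.size(0), self.z_dim)
        x_gen = torch.sigmoid(self.dec(torch.cat([y1, z], dim=1)))
        return F.cross_entropy(self.classifier(x_gen), y)

In training, the three losses would be summed over their corresponding mini-batches; the y-only term is where a model of this kind could, in principle, exploit the large unfeatured dataset the abstract identifies as an untapped source of supervision.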

Related research

11/22/2016  Max-Margin Deep Generative Models for (Semi-)Supervised Learning
Deep generative models (DGMs) are effective at learning multilayered rep...

06/23/2019  Variational Sequential Labelers for Semi-Supervised Learning
We introduce a family of multitask variational methods for semi-supervis...

12/12/2020  Learning Consistent Deep Generative Models from Sparse Data via Prediction Constraints
We develop a new framework for learning variational autoencoders and oth...

09/23/2016  Language as a Latent Variable: Discrete Generative Models for Sentence Compression
In this work we explore deep generative models of text in which the late...

06/29/2017  Bayesian Semisupervised Learning with Deep Generative Models
Neural network based generative models with discriminative components ar...

06/23/2022  Few-Shot Non-Parametric Learning with Deep Latent Variable Model
Most real-world problems that machine learning algorithms are expected t...

03/23/2020  ScrabbleGAN: Semi-Supervised Varying Length Handwritten Text Generation
Optical character recognition (OCR) systems' performance has improved si...
