Boosted Generative Models

02/27/2017
by Aditya Grover, et al.

We propose a new approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes. Our meta-algorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent latent variable models. Further, our approach allows the ensemble to include discriminative models trained to distinguish real data from model-generated data. We show theoretical conditions under which incorporating a new model in the ensemble will improve the fit and empirically demonstrate the effectiveness of boosting on density estimation and sample generation on synthetic and benchmark real datasets.
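The abstract only outlines the idea of sequentially training generative models to correct the ensemble's earlier mistakes; the paper's exact weighting and combination rules are not reproduced here. As a rough illustration only, the sketch below shows one plausible reading: each round fits a new base density model (any learner that permits likelihood evaluation) to data reweighted toward points the current ensemble assigns low density, and the ensemble log-density is a weighted sum of the components' log-densities. The function name `boosted_density_ensemble`, the exponential reweighting heuristic, the resampling trick, and the choice of GaussianMixture as base learner are all assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def boosted_density_ensemble(X, n_rounds=3, alpha=1.0, n_components=5, seed=0):
    """Illustrative sketch of unsupervised boosting of density models.

    Each round fits a base model to data reweighted to emphasize points
    the current ensemble fits poorly. GaussianMixture is used only
    because it exposes per-point log-likelihoods via score_samples;
    any base learner with likelihood evaluation could stand in.
    """
    rng = np.random.default_rng(seed)
    models, weights = [], []
    n = X.shape[0]
    ensemble_logp = np.zeros(n)  # running (unnormalized) ensemble log-density

    for t in range(n_rounds):
        if t == 0:
            sample_w = np.full(n, 1.0 / n)
        else:
            # Upweight points with low ensemble density so the next
            # model focuses on the current ensemble's mistakes.
            sample_w = np.exp(-ensemble_logp)
            sample_w /= sample_w.sum()

        # GaussianMixture.fit takes no sample weights, so emulate
        # weighting by resampling the data according to sample_w.
        idx = rng.choice(n, size=n, replace=True, p=sample_w)
        gmm = GaussianMixture(n_components=n_components, random_state=seed + t)
        gmm.fit(X[idx])

        models.append(gmm)
        weights.append(alpha)
        ensemble_logp += alpha * gmm.score_samples(X)

    def log_density(X_new):
        # Unnormalized log-density of the ensemble at new points.
        return sum(w * m.score_samples(X_new) for w, m in zip(weights, models))

    return models, weights, log_density
```

In this reading, the returned log-density is unnormalized, so it is suitable for relative comparisons and sample reweighting but would need a normalization step for proper density estimation.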


