Adversarially Learned Mixture Model

07/14/2018
by   Andrew Jesson, et al.

The Adversarially Learned Mixture Model (AMM) is a generative model for unsupervised or semi-supervised data clustering. The AMM is the first adversarially optimized method to model the conditional dependence between inferred continuous and categorical latent variables. Experiments on the MNIST and SVHN datasets show that the AMM allows for semantic separation of complex data when little or no labeled data is available. The AMM achieves a state-of-the-art unsupervised clustering error rate of 2.86% on the MNIST dataset. A semi-supervised extension of the AMM yields competitive results on the SVHN dataset.
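
To make the abstract's description concrete, below is a minimal PyTorch sketch of an adversarially learned mixture model in this spirit: an encoder infers a categorical cluster code y and a continuous code z that depends on y, a generator decodes (y, z) back to data space, and a discriminator is trained to tell encoder joints (x, y, z) from generator joints. This is not the authors' implementation; the ALI-style joint-matching objective, the Gumbel-softmax relaxation of the categorical code, and all layer sizes, class counts, and hyper-parameters are illustrative assumptions.

```python
# Minimal sketch of an adversarially learned mixture model (AMM-style).
# NOT the authors' code: architecture sizes, the Gumbel-softmax relaxation,
# and the ALI-style objective below are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

X_DIM, Y_DIM, Z_DIM, H = 784, 10, 64, 256  # e.g. flattened MNIST, 10 clusters


class Encoder(nn.Module):
    """q(y|x) and q(z|x,y): the continuous code is conditioned on the categorical one."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(X_DIM, H), nn.ReLU())
        self.y_logits = nn.Linear(H, Y_DIM)
        self.z_head = nn.Linear(H + Y_DIM, Z_DIM * 2)  # mean and log-variance

    def forward(self, x, tau=0.5):
        h = self.trunk(x)
        y = F.gumbel_softmax(self.y_logits(h), tau=tau, hard=False)
        mu, logvar = self.z_head(torch.cat([h, y], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return y, z


class Generator(nn.Module):
    """p(x|y,z): decodes a (cluster, style) pair back into data space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Y_DIM + Z_DIM, H), nn.ReLU(),
                                 nn.Linear(H, X_DIM), nn.Sigmoid())

    def forward(self, y, z):
        return self.net(torch.cat([y, z], dim=1))


class Discriminator(nn.Module):
    """Scores joint tuples (x, y, z) coming from the encoder vs. the generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(X_DIM + Y_DIM + Z_DIM, H),
                                 nn.ReLU(), nn.Linear(H, 1))

    def forward(self, x, y, z):
        return self.net(torch.cat([x, y, z], dim=1))


def training_step(x, enc, gen, disc, opt_d, opt_g):
    """One adversarial step matching the encoder and generator joint distributions."""
    b = x.size(0)
    # Encoder joint: real data x paired with inferred codes (y, z).
    y_q, z_q = enc(x)
    # Generator joint: codes drawn from the mixture prior paired with generated x.
    y_p = F.one_hot(torch.randint(Y_DIM, (b,)), Y_DIM).float()
    z_p = torch.randn(b, Z_DIM)
    x_p = gen(y_p, z_p)
    # Discriminator update: tell the two joints apart.
    d_enc = disc(x, y_q.detach(), z_q.detach())
    d_gen = disc(x_p.detach(), y_p, z_p)
    loss_d = F.binary_cross_entropy_with_logits(d_enc, torch.ones_like(d_enc)) + \
             F.binary_cross_entropy_with_logits(d_gen, torch.zeros_like(d_gen))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Encoder/generator update: fool the discriminator (labels flipped).
    d_enc = disc(x, y_q, z_q)
    d_gen = disc(gen(y_p, z_p), y_p, z_p)
    loss_g = F.binary_cross_entropy_with_logits(d_enc, torch.zeros_like(d_enc)) + \
             F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

In this sketch, opt_d would hold the discriminator parameters and opt_g the encoder and generator parameters together, e.g. torch.optim.Adam(list(enc.parameters()) + list(gen.parameters()), lr=1e-4). At test time, the cluster assignment for a sample would be the argmax of the encoder's categorical logits.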

Related research

12/06/2016 · Semi-Supervised Learning with the Deep Rendering Mixture Model
Semi-supervised learning algorithms reduce the high cost of acquiring la...

10/24/2019 · A study of semi-supervised speaker diarization system using gan mixture model
We propose a new speaker diarization system based on a recently introduc...

04/19/2018 · Unsupervised Representation Adversarial Learning Network: from Reconstruction to Generation
A good representation for arbitrarily complicated data should have the c...

12/17/2021 · Semi-Supervised Clustering via Markov Chain Aggregation
We connect the problem of semi-supervised clustering to constrained Mark...

02/04/2018 · Hierarchical Adversarially Learned Inference
We propose a novel hierarchical generative model with a simple Markovian...

02/03/2015 · Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks
In this paper, we propose a novel model for high-dimensional data, calle...

02/13/2018 · Clustering and Semi-Supervised Classification for Clickstream Data via Mixture Models
Finite mixture models have been used for unsupervised learning for over ...
