Variational Classification

05/17/2023
by Shehzaad Dhuliawala, et al.

We present a novel extension of the traditional neural network approach to classification tasks, referred to as variational classification (VC). By incorporating latent variable modeling, akin to the relationship between variational autoencoders and traditional autoencoders, we derive a training objective based on the evidence lower bound (ELBO), optimized using an adversarial approach. Our VC model allows for more flexibility in design choices, in particular class-conditional latent priors, in place of the implicit assumptions made in off-the-shelf softmax classifiers. Empirical evaluation on image and text classification datasets demonstrates the effectiveness of our approach in terms of maintaining prediction accuracy while improving other desirable properties such as calibration and adversarial robustness, even when applied to out-of-domain data.
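To make the high-level idea concrete, below is a minimal numpy sketch of an ELBO-style objective for latent-variable classification: an encoder maps an input to a Gaussian posterior over a latent z, a classifier predicts the label from z, and a class-conditional Gaussian prior replaces the single implicit prior of a softmax classifier. All names, dimensions, and the closed-form KL term are illustrative assumptions; the paper itself optimizes its ELBO with an adversarial approach, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_gaussian(mu_q, logvar_q, mu_p):
    # KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, I) ), per sample.
    return 0.5 * np.sum(np.exp(logvar_q) + (mu_q - mu_p) ** 2
                        - 1.0 - logvar_q, axis=-1)

# Toy dimensions and randomly initialised linear maps (hypothetical).
d_x, d_z, n_class = 5, 2, 3
W_mu = rng.normal(size=(d_x, d_z)) * 0.1    # encoder mean head
W_lv = rng.normal(size=(d_x, d_z)) * 0.1    # encoder log-variance head
W_cls = rng.normal(size=(d_z, n_class)) * 0.1  # classifier on z
prior_means = rng.normal(size=(n_class, d_z))  # class-conditional prior means

def neg_elbo(x, y):
    # Encode, sample z with the reparameterisation trick,
    # then combine classification log-likelihood and KL to the class prior.
    mu, logvar = x @ W_mu, x @ W_lv
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
    log_py = np.log(softmax(z @ W_cls)[np.arange(len(y)), y])
    kl = kl_gaussian(mu, logvar, prior_means[y])
    return np.mean(-log_py + kl)

x = rng.normal(size=(4, d_x))
y = np.array([0, 1, 2, 1])
loss = neg_elbo(x, y)
```

Both terms are non-negative (a cross-entropy and a KL divergence), so the objective is a scalar loss that a gradient-based or adversarial training loop would minimise.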

