Variational Autoencoders for Semi-supervised Text Classification

03/08/2016
by Weidi Xu, et al.

Although the semi-supervised variational autoencoder (SemiVAE) works well for image classification, it fails on text classification when a vanilla LSTM is used as its decoder. From the perspective of reinforcement learning, it is verified that the decoder's capability to distinguish between different categorical labels is essential. Therefore, the Semi-supervised Sequential Variational Autoencoder (SSVAE) is proposed, which increases this capability by feeding the label into its decoder RNN at each time-step. Two specific decoder structures are investigated, and both are verified to be effective. In addition, to reduce the computational complexity of training, a novel optimization method is proposed that estimates the gradient of the unlabeled objective function by sampling, together with two variance reduction techniques. Experimental results on the Large Movie Review Dataset (IMDB) and AG's News corpus show that the proposed approach significantly improves classification accuracy compared with purely supervised classifiers and achieves competitive performance against previous advanced methods. State-of-the-art results can be obtained by integrating other pretraining-based methods.
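
To make the label-conditioning idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of a decoder in the spirit of SSVAE: the categorical label y is embedded and concatenated to the word embedding at every decoding time-step, so the decoder can only reconstruct the sentence well if it actually uses the label. All module names, dimensions, and the choice of initializing the hidden state from the latent code z are assumptions made for illustration.

```python
# Hypothetical sketch of a label-conditioned LSTM decoder (assumed names/dims).
import torch
import torch.nn as nn


class LabelConditionedLSTMDecoder(nn.Module):
    def __init__(self, vocab_size, num_classes, emb_dim=300, label_dim=50,
                 latent_dim=50, hidden_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.label_emb = nn.Embedding(num_classes, label_dim)
        # Assumption: the latent code z initializes the hidden state,
        # while the label is fed to the LSTM at every time-step.
        self.init_h = nn.Linear(latent_dim, hidden_dim)
        self.lstm = nn.LSTM(emb_dim + label_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, y, z):
        # tokens: (batch, seq_len) word ids; y: (batch,) labels; z: (batch, latent_dim)
        seq_len = tokens.size(1)
        words = self.word_emb(tokens)                              # (batch, seq_len, emb_dim)
        labels = self.label_emb(y).unsqueeze(1).expand(-1, seq_len, -1)
        inputs = torch.cat([words, labels], dim=-1)                # label repeated at each step
        h0 = torch.tanh(self.init_h(z)).unsqueeze(0)               # (1, batch, hidden_dim)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(inputs, (h0, c0))
        return self.out(hidden)                                    # logits over the vocabulary
```

For unlabeled data, the SemiVAE objective marginalizes over the unknown label, which requires running the decoder once per class; the sampling-based optimization mentioned above avoids this by estimating the gradient with a sampled label (typically drawn from the classifier's predictive distribution), with variance reduction applied to keep the estimate stable.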

Related research

09/08/2020  Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function
            In this paper, we study bidirectional LSTM network for the task of text ...

06/05/2019  Variational Pretraining for Semi-supervised Text Classification
            We introduce VAMPIRE, a lightweight pretraining framework for effective ...

10/24/2018  Semi-supervised Target-level Sentiment Analysis via Variational Autoencoder
            Target-level aspect-based sentiment analysis (TABSA) is a long-standing ...

08/08/2019  One Model To Rule Them All
            We present a new flavor of Variational Autoencoder (VAE) that interpolat...

01/04/2023  Semi-MAE: Masked Autoencoders for Semi-supervised Vision Transformers
            Vision Transformer (ViT) suffers from data scarcity in semi-supervised l...

01/09/2019  Dirichlet Variational Autoencoder
            This paper proposes Dirichlet Variational Autoencoder (DirVAE) using a D...

04/05/2019  Combining Sentiment Lexica with a Multi-View Variational Autoencoder
            When assigning quantitative labels to a dataset, different methodologies...
