Virtual Adversarial Ladder Networks For Semi-supervised Learning

11/20/2017
by Saki Shinoda, et al.

Semi-supervised learning (SSL) partially circumvents the high cost of labeling data by augmenting a small labeled dataset with a large and relatively cheap unlabeled dataset drawn from the same distribution. This paper offers a novel interpretation of two deep learning-based SSL approaches, ladder networks and virtual adversarial training (VAT), as applying distributional smoothing to their respective latent spaces. We propose a class of models that fuse these approaches. We achieve near-supervised accuracy with high consistency on the MNIST dataset using just 5 labels per class: our best model, ladder with layer-wise virtual adversarial noise (LVAN-LW), achieves a 1.42% average error rate on the MNIST test set, in comparison with 1.62% reported for the ladder network. On adversarial examples generated with the L2-normalized fast gradient method, LVAN-LW trained with 5 examples per class achieves an average error rate of 2.4%, in comparison with 68.6% for the ladder network and 9.9% for VAT.
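To make the two building blocks concrete, below is a minimal, hypothetical PyTorch sketch of (a) the VAT regularizer, which uses one step of power iteration to find an L2-normalized perturbation that maximally changes the model's output distribution, and (b) the L2-normalized fast gradient method (FGM) used in the robustness evaluation. The function names (`vat_loss`, `fgm_attack`, `_l2_normalize`), the hyperparameters `xi` and `eps`, and the `model` interface are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch (not the authors' implementation): the VAT regularizer
# and an L2-normalized fast gradient attack, assuming `model` maps a batch
# of inputs to unnormalized class logits.
import torch
import torch.nn.functional as F

def _l2_normalize(d, tiny=1e-8):
    # Flatten each sample, divide by its L2 norm, and restore the shape.
    flat = d.flatten(start_dim=1)
    return (flat / (flat.norm(dim=1, keepdim=True) + tiny)).view_as(d)

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """Virtual adversarial loss: KL divergence between the prediction at x
    and at x + r_adv, where r_adv is estimated by power iteration."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)           # target distribution (constant)
    d = _l2_normalize(torch.randn_like(x))       # random initial direction
    for _ in range(n_power):
        d.requires_grad_(True)
        q = F.log_softmax(model(x + xi * d), dim=1)
        kl = F.kl_div(q, p, reduction="batchmean")
        # Gradient of the divergence w.r.t. d points toward the most
        # sensitive (virtual adversarial) direction.
        d = _l2_normalize(torch.autograd.grad(kl, d)[0])
    q_adv = F.log_softmax(model(x + eps * d.detach()), dim=1)
    return F.kl_div(q_adv, p, reduction="batchmean")

def fgm_attack(model, x, y, eps=2.0):
    """L2-normalized fast gradient method: one gradient step on the
    classification loss, rescaled to L2 norm eps."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * _l2_normalize(grad)).detach()
```

As we read the abstract, LVAN-LW injects perturbations of this kind layer-wise into the ladder network's corrupted encoder rather than only at the input; the sketch above shows only the standard input-level version.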


Related research

07/02/2015 - Distributional Smoothing with Virtual Adversarial Training
We propose local distributional smoothness (LDS), a new notion of smooth...

12/22/2019 - Adversarial Feature Distribution Alignment for Semi-Supervised Learning
Training deep neural networks with only a few labeled samples can lead t...

06/14/2018 - Improving Consistency-Based Semi-Supervised Learning with Weight Averaging
Recent advances in deep unsupervised learning have renewed interest in s...

02/10/2020 - Semi-Supervised Class Discovery
One promising approach to dealing with datapoints that are outside of th...

11/12/2017 - Semi-Supervised Learning via New Deep Network Inversion
We exploit a recently derived inversion scheme for arbitrary deep neural...

07/12/2017 - Adversarial Dropout for Supervised and Semi-supervised Learning
Recently, the training with adversarial examples, which are generated by...

05/23/2018 - Input and Weight Space Smoothing for Semi-supervised Learning
We propose regularizing the empirical loss for semi-supervised learning ...
