Distributional Smoothing with Virtual Adversarial Training

07/02/2015
by Takeru Miyato, et al.

We propose local distributional smoothness (LDS), a new notion of smoothness for statistical models that can be used as a regularization term to promote the smoothness of the model distribution. We name the LDS-based regularization virtual adversarial training (VAT). The LDS of a model at an input datapoint is defined as the KL-divergence-based robustness of the model distribution against local perturbations around that datapoint. VAT resembles adversarial training, but distinguishes itself in that it determines the adversarial direction from the model distribution alone, without using label information, which makes it applicable to semi-supervised learning. The computational cost of VAT is relatively low: for neural networks, the approximate gradient of the LDS can be computed with no more than three pairs of forward and back propagations. When we applied our technique to supervised and semi-supervised learning on the MNIST dataset, it outperformed all other training methods except the current state-of-the-art method, which is based on a highly advanced generative model. We also applied our method to SVHN and NORB, and confirmed its superior performance over the current state-of-the-art semi-supervised method on these datasets.
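The abstract describes the core computation: find the "virtual adversarial" direction by power iteration on the KL divergence between the model distribution at a point and at a nearby perturbed point, then measure the KL robustness along that direction. The sketch below illustrates this for a hypothetical toy linear-softmax classifier (`W`, `predict` are assumptions, not from the paper); the paper computes the KL gradient by backpropagation, whereas here a finite-difference approximation is used purely for clarity.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical toy model: fixed linear-softmax classifier p(y|x) = softmax(W x).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))

def predict(x):
    return softmax(W @ x)

def lds(x, xi=1e-6, eps=1.0, n_iter=1, fd=1e-6):
    """KL robustness at the virtual adversarial direction (the quantity
    the paper's LDS regularizer is built from).

    Note: no label is used anywhere; the direction is found from the
    model distribution alone, which is what makes VAT applicable to
    semi-supervised learning.
    """
    p = predict(x)
    # Power iteration: start from a random unit vector and repeatedly
    # replace it with the (normalized) gradient of the local KL.
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    for _ in range(n_iter):
        g = np.zeros_like(x)
        for i in range(x.size):  # finite-difference gradient w.r.t. the perturbation
            e_i = np.zeros_like(x)
            e_i[i] = fd
            g[i] = (kl(p, predict(x + xi * d + e_i))
                    - kl(p, predict(x + xi * d - e_i))) / (2 * fd)
        d = g / (np.linalg.norm(g) + 1e-12)
    # KL divergence after perturbing by eps along the adversarial direction.
    return kl(p, predict(x + eps * d))
```

In the paper's neural-network setting the finite-difference loop is replaced by one backpropagation per power iteration, which is why the gradient of the LDS costs no more than a few forward/backward passes.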


Related research

- Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning (04/13/2017)
- Virtual Adversarial Ladder Networks For Semi-supervised Learning (11/20/2017)
- Manifold Adversarial Learning (07/16/2018)
- Posterior Differential Regularization with f-divergence for Improving Model Robustness (10/23/2020)
- Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy (11/21/2019)
- Adversarial contamination of networks in the setting of vertex nomination: a new trimming method (08/20/2022)
- Understanding and Improving Virtual Adversarial Training (09/15/2019)
