Manifold Adversarial Learning

07/16/2018
by Shufei Zhang, et al.

The recently proposed adversarial training methods show robustness to both adversarial and original examples and achieve state-of-the-art results in supervised and semi-supervised learning. All the existing adversarial training methods consider only how the worst perturbed examples (i.e., adversarial examples) could affect the model output. Despite their success, we argue that such a setting may lack generalization, since the output space (or label space) is apparently less informative. In this paper, we propose a novel method, called Manifold Adversarial Training (MAT). MAT manages to build an adversarial framework based on how the worst perturbation could affect the distributional manifold rather than the output space. Particularly, a latent data space with a Gaussian Mixture Model (GMM) is first derived. On one hand, MAT tries to perturb the input samples in the way that would roughen the distributional manifold the worst. On the other hand, the deep learning model is trained to promote the manifold smoothness in the latent space, measured by the variation of Gaussian mixtures (given the local perturbation around the data point). Importantly, since the latent space is more informative than the output space, the proposed MAT can better learn a robust and compact data representation, leading to further performance improvement. The proposed MAT is important in that it can be considered a superset of one recently proposed discriminative feature learning approach called center loss. We conducted a series of experiments in both supervised and semi-supervised learning on three benchmark data sets, showing that the proposed MAT can achieve remarkable performance, much better than that of the state-of-the-art adversarial approaches.
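
To make the idea concrete, the sketch below illustrates in PyTorch the kind of latent-space smoothness objective the abstract describes: a VAT-style one-step gradient approximation finds the input perturbation that most changes the Gaussian-mixture responsibilities in the latent space, and the model is then penalized for that variation. This is a minimal illustration under our own assumptions, not the paper's exact formulation; the names (gmm_log_responsibilities, mat_smoothness_loss, encoder, means, log_vars, eps, xi), the equal-prior diagonal GMM, and the one-step approximation are all illustrative choices.

    import torch
    import torch.nn.functional as F

    def gmm_log_responsibilities(z, means, log_vars):
        # Log posterior over K Gaussian mixture components, assuming equal
        # priors and diagonal covariances; the additive normalization
        # constant cancels inside the softmax.
        # Shapes: z (B, D), means (K, D), log_vars (K, D).
        diff = z.unsqueeze(1) - means.unsqueeze(0)                  # (B, K, D)
        log_prob = -0.5 * ((diff ** 2) / log_vars.exp().unsqueeze(0)
                           + log_vars.unsqueeze(0)).sum(-1)         # (B, K)
        return F.log_softmax(log_prob, dim=1)

    def mat_smoothness_loss(encoder, x, means, log_vars, eps=1.0, xi=1e-6):
        # Reference responsibilities at the clean input (no gradient).
        with torch.no_grad():
            p = gmm_log_responsibilities(encoder(x), means, log_vars).exp()
        # One-step, VAT-style approximation of the perturbation that most
        # disturbs the latent mixture responsibilities.
        d = torch.randn_like(x)
        d = (xi * F.normalize(d.flatten(1), dim=1).view_as(x)).requires_grad_(True)
        q = gmm_log_responsibilities(encoder(x + d), means, log_vars)
        grad = torch.autograd.grad(F.kl_div(q, p, reduction="batchmean"), d)[0]
        r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(x)
        # Penalize the variation of the Gaussian mixtures under the
        # worst-case local perturbation (the manifold smoothness term).
        q_adv = gmm_log_responsibilities(encoder(x + r_adv.detach()), means, log_vars)
        return F.kl_div(q_adv, p, reduction="batchmean")

In a full training loop this term would presumably be added to the usual supervised loss, e.g. loss = cross_entropy + lam * mat_smoothness_loss(encoder, x, means, log_vars), with means and log_vars kept as learnable parameters of the latent GMM; the paper's actual perturbation procedure and smoothness measure may differ in detail.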
