Manifold Regularization for Adversarial Robustness

03/09/2020
by Charles Jin, et al.

Manifold regularization is a technique that penalizes the complexity of learned functions over the intrinsic geometry of the input data. We develop a connection to learning functions that are "locally stable", and propose new regularization terms for training deep neural networks that are stable against a class of local perturbations. These regularizers enable us to train a network to a state-of-the-art robust accuracy of 70% against ℓ_∞ perturbations of size ϵ = 8/255. Furthermore, our techniques do not rely on constructing any adversarial examples, and thus run orders of magnitude faster than standard algorithms for adversarial training.
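A minimal sketch of the general idea in PyTorch: penalize how much the network's output changes under small perturbations of the input, without generating adversarial examples. This illustrates a local-stability surrogate only; the specific perturbation scheme, hyperparameters, and function names below are assumptions, not the exact regularization terms proposed in the paper.

```python
import torch
import torch.nn.functional as F


def local_stability_penalty(model, x, eps=8 / 255, n_samples=2):
    """Penalize output variation of `model` over random l_inf perturbations of `x`.

    Hypothetical helper for illustration: uses random (not adversarial)
    perturbations, so no inner attack optimization is required.
    """
    with torch.no_grad():
        base = model(x)  # reference outputs on clean inputs
    penalty = 0.0
    for _ in range(n_samples):
        # Sample a random perturbation inside the l_inf ball of radius eps.
        delta = torch.empty_like(x).uniform_(-eps, eps)
        perturbed = model((x + delta).clamp(0.0, 1.0))
        # Squared distance between outputs acts as a local-smoothness term.
        penalty = penalty + F.mse_loss(perturbed, base)
    return penalty / n_samples


# Usage: add the penalty to the standard training loss (lam is a weighting
# hyperparameter chosen for this sketch).
# loss = F.cross_entropy(model(x), y) + lam * local_stability_penalty(model, x)
```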

