Adversarially Robust Training through Structured Gradient Regularization

05/22/2018
by Kevin Roth, et al.

We propose a novel data-dependent structured gradient regularizer that increases the robustness of neural networks to adversarial perturbations. The regularizer can be derived as a controlled approximation from first principles, leveraging the fundamental link between training with noise and regularization. It adds very little computational overhead during training and is simple to implement generically in standard deep learning frameworks. Our experiments provide strong evidence that structured gradient regularization can act as an effective first line of defense against attacks based on low-level signal corruption.
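Since only the abstract appears here, the following is a minimal PyTorch sketch of one common instantiation of a structured gradient penalty added to the training loss; the paper's exact regularizer may differ. The names `sgr_loss`, `sigma` (an assumed d-by-d positive semi-definite matrix encoding the perturbation structure), and the weight `lam` are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def sgr_loss(model, x, y, sigma, lam=0.1):
    """Cross-entropy plus a structured input-gradient penalty (sketch).

    sigma: (d, d) PSD matrix supplying the 'structure'; torch.eye(d)
    recovers plain (unstructured) gradient regularization.
    """
    # Inputs must require gradients so we can differentiate w.r.t. them.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Input gradient, kept in the autograd graph (create_graph=True) so
    # the penalty is itself differentiable w.r.t. the model parameters
    # ("double backprop").
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    g = grad_x.flatten(start_dim=1)                # (batch, d)
    # Structured quadratic form g^T Sigma g, averaged over the batch.
    penalty = ((g @ sigma) * g).sum(dim=1).mean()
    return loss + lam * penalty

# Illustrative training step: a data-dependent covariance estimate for
# sigma gives the structured variant the abstract refers to.
# optimizer.zero_grad()
# sgr_loss(model, x_batch, y_batch, sigma).backward()
# optimizer.step()
```

The only extra cost over plain training is one additional backward pass through the network per step, which matches the abstract's claim of very little computational overhead.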

