Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization

01/26/2016
by Alexander G. Ororbia II, et al.

Many previous proposals for adversarial training of deep neural nets have included directly modifying the gradient, training on a mix of original and adversarial examples, using contractive penalties, and approximately optimizing constrained adversarial objective functions. In this paper, we show these proposals are actually all instances of optimizing a general, regularized objective we call DataGrad. Our proposed DataGrad framework, which can be viewed as a deep extension of the layerwise contractive autoencoder penalty, cleanly simplifies prior work and easily allows extensions such as adversarial training with multi-task cues. In our experiments, we find that the deep gradient regularization of DataGrad (which also has L1 and L2 flavors of regularization) outperforms alternative forms of regularization, including classical L1, L2, and multi-task, both on the original dataset as well as on adversarial sets. Furthermore, we find that combining multi-task optimization with DataGrad adversarial training results in the most robust performance.
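The abstract describes DataGrad as a training objective augmented with a penalty on the gradient of the loss with respect to the input data, in L1 or L2 flavors. As a rough illustration only, and not the paper's exact formulation, a minimal JAX sketch of such a data-gradient penalty could look like the following; the function name datagrad_objective, the coefficient lam, and the flavor switch are illustrative assumptions, and loss_fn is any scalar-valued loss over (params, x, y).

import jax
import jax.numpy as jnp

def datagrad_objective(params, x, y, loss_fn, lam=0.01, flavor="l2"):
    # Ordinary task loss on the (original or adversarial) batch.
    base = loss_fn(params, x, y)
    # Gradient of the scalar loss with respect to the input data x.
    data_grad = jax.grad(loss_fn, argnums=1)(params, x, y)
    # L1 or L2 penalty on that data gradient (the two flavors mentioned above).
    if flavor == "l1":
        penalty = jnp.sum(jnp.abs(data_grad))
    else:
        penalty = jnp.sum(data_grad ** 2)
    return base + lam * penalty

In this sketch the combined objective is then differentiated with respect to params (e.g. via jax.grad(datagrad_objective)) and optimized exactly like a conventionally regularized loss, which is the sense in which the penalty acts as a regularizer rather than a separate training stage.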

