Convergence and Margin of Adversarial Training on Separable Data

05/22/2019
by Zachary Charles et al.

Adversarial training is a technique for training robust machine learning models. To encourage robustness, it iteratively computes adversarial examples for the model and then re-trains on these examples via some update rule. This work analyzes the performance of adversarial training on linearly separable data and provides bounds on the number of iterations required to achieve large margin. We show that when the update rule is given by an arbitrary empirical risk minimizer, adversarial training may require exponentially many iterations to obtain large margin. However, if gradient or stochastic gradient update rules are used, only polynomially many iterations are required to find a large-margin separator. By contrast, without the use of adversarial examples, gradient methods may require exponentially many iterations to achieve large margin. Our results are derived by showing that adversarial training with gradient updates minimizes a robust version of the empirical risk at an O(ln(t)^2/t) rate, despite non-smoothness. We corroborate our theory empirically.
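The scheme the abstract analyzes can be sketched as follows. This is a minimal illustration, not the paper's setup: it assumes logistic loss, an l_inf perturbation budget, and plain gradient updates; for a linear model the inner maximization has a closed form, so each iteration computes the worst-case perturbed points and takes a gradient step on them. All constants (eps, lr, the data construction) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data with labels y in {-1, +1} and a
# guaranteed margin along a (hypothetical) ground-truth direction.
n, d = 200, 5
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = np.where(X @ w_star >= 0, 1.0, -1.0)
X += 0.5 * y[:, None] * w_star          # enforce a positive data margin

eps = 0.1   # l_inf perturbation budget (illustrative constant)
lr = 0.1    # gradient step size (illustrative constant)
w = np.zeros(d)

def margin(w, X, y):
    """Normalized minimum margin; positive iff w separates the data."""
    nw = np.linalg.norm(w)
    return np.min(y * (X @ w)) / nw if nw > 0 else -np.inf

for t in range(2000):
    # Inner maximization in closed form: for a linear model, the
    # worst-case l_inf perturbation moves each point by eps against
    # its label along sign(w).
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    # Outer step: gradient of the mean logistic loss on the
    # adversarial examples (numerically stable sigmoid via logaddexp).
    z = y * (X_adv @ w)
    sig = np.exp(-np.logaddexp(0.0, z))  # = sigmoid(-z)
    grad = -((y * sig)[:, None] * X_adv).mean(axis=0)
    w -= lr * grad

print(margin(w, X, y) > 0)  # whether a separating direction was found
```

The key simplification for linear models is that the adversary's optimal move does not require an inner optimization loop: the worst-case l_inf point for example (x, y) is x - eps * y * sign(w).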


