Robustifying ℓ_∞ Adversarial Training to the Union of Perturbation Models

05/31/2021
by   Ameya D. Patil, et al.

Classical adversarial training (AT) frameworks are designed to achieve high adversarial accuracy against a single attack type, typically ℓ_∞ norm-bounded perturbations. Recent extensions of AT defend against the union of multiple perturbation types, but this benefit comes at the expense of a significant (up to 10×) increase in training complexity over single-attack ℓ_∞ AT. In this work, we expand the capabilities of widely popular single-attack ℓ_∞ AT frameworks to provide robustness to the union of (ℓ_∞, ℓ_2, ℓ_1) perturbations while preserving their training efficiency. Our technique, referred to as Shaped Noise Augmented Processing (SNAP), exploits a well-established byproduct of single-attack AT frameworks: the reduction in the curvature of the network's decision boundary. SNAP prepends a given deep net with a shaped noise augmentation layer whose distribution is learned along with the network parameters using any standard single-attack AT. As a result, SNAP enhances the adversarial accuracy of ResNet-18 on CIFAR-10 against the union of (ℓ_∞, ℓ_2, ℓ_1) perturbations by 14% for state-of-the-art single-attack ℓ_∞ AT frameworks, and, for the first time, establishes a benchmark for ResNet-50 and ResNet-101 on ImageNet.
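To make the core idea concrete, the sketch below shows one plausible way to prepend a learnable noise-augmentation layer to an off-the-shelf network. This is not the authors' implementation: the class name ShapedNoiseLayer, the per-channel log-scale parameterization, and the choice to apply noise at both training and test time are illustrative assumptions; the paper learns the full shape of the noise distribution jointly with the network weights under a standard single-attack ℓ_∞ AT loss.

```python
# Minimal PyTorch sketch of a shaped-noise augmentation layer (assumed design,
# not the SNAP reference code). The noise scale is a learnable parameter that
# receives gradients from the same adversarial-training loss as the network.
import math

import torch
import torch.nn as nn
from torchvision.models import resnet18


class ShapedNoiseLayer(nn.Module):
    """Adds zero-mean Gaussian noise with a learnable per-channel std."""

    def __init__(self, num_channels: int = 3, init_std: float = 0.1):
        super().__init__()
        # One log-std per input channel (a simplifying assumption about the
        # "shape" of the noise distribution).
        self.log_std = nn.Parameter(
            torch.full((1, num_channels, 1, 1), math.log(init_std))
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Noise is applied in both training and evaluation in this sketch.
        return x + self.log_std.exp() * torch.randn_like(x)


# Prepend the noise layer to a standard classifier; both the noise scale and
# the network weights are then optimized by the usual single-attack AT loop.
model = nn.Sequential(ShapedNoiseLayer(num_channels=3), resnet18(num_classes=10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
```

Because the noise layer sits in front of the network and its parameters are ordinary nn.Parameter tensors, any existing single-attack ℓ_∞ AT training loop can update it without structural changes to the trainer.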


