An Empirical Evaluation of Perturbation-based Defenses

02/08/2020
by Adam Dziedzic, et al.

Recent work has shown extensively that randomized perturbations of a neural network can improve its robustness to adversarial attacks. The literature, however, lacks a detailed comparison of the latest proposals that explains which classes of perturbations work, when they work, and why. We contribute a detailed experimental evaluation that elucidates these questions and benchmarks perturbation defenses in a consistent way. In particular, we show five main results: (1) all input perturbation defenses, whether random or deterministic, are essentially equivalent in their efficacy; (2) such defenses offer almost no robustness to adaptive attacks unless the perturbations are observed during training; (3) a tuned sequence of noise layers across a network provides the best empirical robustness; (4) attacks transfer between perturbation defenses, so an attacker need not know the specific type of defense, only that it involves perturbations; and (5) in a first-order analysis, adversarial examples very close to the original images show elevated sensitivity to perturbation. Based on these insights, we demonstrate a new robust model that combines noise injection with adversarial training and achieves state-of-the-art robustness.
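
To make result (3) concrete, the following is a minimal PyTorch sketch of the kind of noise-injection layers evaluated here: zero-mean Gaussian noise added at several depths, active at both training and test time. The toy architecture and the per-layer standard deviations are illustrative assumptions, not the paper's tuned values.

    import torch
    import torch.nn as nn

    class GaussianNoise(nn.Module):
        """Adds zero-mean Gaussian noise; the std of each layer is a tuning knob."""
        def __init__(self, std: float):
            super().__init__()
            self.std = std

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Noise is sampled fresh on every forward pass, so repeated
            # queries of the model see different perturbations.
            return x + self.std * torch.randn_like(x)

    # Toy CNN with a sequence of noise layers (std values are hypothetical).
    model = nn.Sequential(
        GaussianNoise(0.2),                        # input perturbation
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        GaussianNoise(0.1),                        # hidden-layer perturbation
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        GaussianNoise(0.05),                       # smaller noise deeper in the net
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 10),
    )

Per result (2), layers like these help against adaptive attacks only if the same perturbations are present during training, which is why the proposed model pairs noise injection with adversarial training.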

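Result (5) rests on a first-order analysis. One simple proxy for it is the norm of the loss gradient with respect to the input, which to first order measures how much a small perturbation moves the loss. The sketch below assumes a trained classifier `model` and a labeled batch `(x, y)`; the helper name `input_gradient_norm` is mine, and this is one plausible reading of the analysis, not the paper's exact procedure.

    import torch
    import torch.nn.functional as F

    def input_gradient_norm(model, x, y):
        """Per-example L2 norm of dLoss/dInput: a first-order sensitivity proxy."""
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        return grad.flatten(1).norm(dim=1)

Comparing clean inputs with adversarial examples generated near them, the abstract reports that the adversarial points sit in regions of elevated sensitivity, i.e. larger values under a proxy of this kind.
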
Related research

Transfer of Adversarial Robustness Between Perturbation Types (05/03/2019)
We study the transfer of adversarial robustness of deep neural networks ...

Certified Adversarial Robustness Within Multiple Perturbation Bounds (04/20/2023)
Randomized smoothing (RS) is a well known certified defense against adve...

GraCIAS: Grassmannian of Corrupted Images for Adversarial Security (05/06/2020)
Input transformation based defense strategies fall short in defending ag...

Towards Defending Multiple Adversarial Perturbations via Gated Batch Normalization (12/03/2020)
There is now extensive evidence demonstrating that deep neural networks ...

Raising the Bar for Certified Adversarial Robustness with Diffusion Models (05/17/2023)
Certified defenses against adversarial attacks offer formal guarantees o...

Certified Defenses: Why Tighter Relaxations May Hurt Training? (02/12/2021)
Certified defenses based on convex relaxations are an established techni...
