New CleverHans Feature: Better Adversarial Robustness Evaluations with Attack Bundling

11/08/2018
by Ian Goodfellow, et al.

This technical report describes a new feature of the CleverHans library called "attack bundling". Many papers about adversarial examples present lists of error rates corresponding to different attack algorithms. A common approach is to take the maximum across this list and compare defenses against that error rate. We argue that a better approach is to use attack bundling: the max should be taken across attacks at the level of individual examples, and the error rate should then be calculated by averaging after this per-example maximization. Reporting the bundled attacker error rate provides a tighter lower bound on the true worst-case error rate. The traditional approach of reporting the maximum error rate across attacks can underestimate the true worst-case error rate by an amount approaching 100% as the number of attacks approaches infinity. Attack bundling can be used with different prioritization schemes to optimize quantities such as error rate on adversarial examples, perturbation size needed to cause misclassification, or failure rate when using a specific confidence threshold.
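The gap between the two reporting conventions is easy to see on toy data. The sketch below (hypothetical success matrix, not CleverHans API code) compares the traditional "max of per-attack error rates" with the bundled "average of per-example worst cases": when different attacks succeed on different examples, bundling reveals a strictly higher worst-case error rate.

```python
import numpy as np

# Hypothetical per-example outcomes: rows = attacks, columns = examples.
# True means that attack fooled the model on that example.
attack_success = np.array([
    [True,  False, False, True ],   # attack A
    [False, True,  False, True ],   # attack B
    [False, False, True,  False],   # attack C
])

# Traditional report: compute each attack's error rate, then take the max.
per_attack_error = attack_success.mean(axis=1)   # [0.50, 0.50, 0.25]
traditional = per_attack_error.max()             # 0.5

# Attack bundling: per example, did *any* attack succeed? Then average.
per_example_worst = attack_success.any(axis=0)   # [True, True, True, True]
bundled = per_example_worst.mean()               # 1.0

print(f"traditional max-over-attacks: {traditional}")
print(f"bundled per-example max:      {bundled}")
```

Because the three attacks cover disjoint examples here, the bundled error rate is 1.0 while no single attack exceeds 0.5, illustrating how the traditional report can drastically underestimate the worst case.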

