
Natural Adversarial Examples

07/16/2019
by Dan Hendrycks, et al.
UC Berkeley
University of Washington
Toyota Technological Institute at Chicago

We introduce natural adversarial examples -- real-world, unmodified, and naturally occurring examples that cause classifier accuracy to significantly degrade. We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call ImageNet-A. This dataset serves as a new way to measure classifier robustness. Like l_p adversarial examples, ImageNet-A examples successfully transfer to unseen or black-box classifiers. For example, on ImageNet-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%. Recovering this accuracy is not simple because ImageNet-A examples exploit deep flaws in current classifiers, including their over-reliance on color, texture, and background cues. We observe that popular training techniques for improving robustness have little effect, but we show that some architectural changes can enhance robustness to natural adversarial examples. Future research is required to enable robust generalization to this hard ImageNet test set.
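To make the evaluation protocol concrete, here is a minimal sketch (not the authors' released code) of scoring a pretrained DenseNet-121 on ImageNet-A with PyTorch/torchvision. It assumes the dataset sits in ./imagenet-a with one folder per WordNet ID, and that a wnid_to_index.json file maps those IDs to ImageNet-1k class indices; both the paths and that helper file are assumptions of this sketch (the official repository ships an equivalent mapping). Because ImageNet-A covers a 200-class subset of ImageNet-1k, predictions are restricted to those 200 logits.

```python
# Hedged sketch: top-1 accuracy of a pretrained DenseNet-121 on ImageNet-A.
# Assumptions: ./imagenet-a holds one subfolder per WordNet ID, and
# wnid_to_index.json maps each WordNet ID to its ImageNet-1k class index.
import json

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("./imagenet-a", transform=preprocess)
loader = DataLoader(dataset, batch_size=64, num_workers=4)

# ImageNet-1k indices of the 200 ImageNet-A classes, in ImageFolder order,
# so logits can be restricted to the classes the test set actually covers.
with open("wnid_to_index.json") as f:  # assumed helper file
    wnid_to_index = json.load(f)
subset = torch.tensor([wnid_to_index[wnid] for wnid in dataset.classes])

model = models.densenet121(weights="IMAGENET1K_V1").eval()

correct, total = 0, 0
with torch.no_grad():
    for images, labels in loader:
        logits = model(images)
        # ImageFolder labels index into dataset.classes, so comparing them
        # against the argmax over the restricted logits is consistent.
        preds = logits[:, subset].argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"ImageNet-A top-1 accuracy: {correct / total:.2%}")
```

Per the abstract, a standard DenseNet-121 should score near 2% top-1 here, versus roughly 75% on the standard ImageNet validation set.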

Related Research

09/02/2021
Real World Robustness from Systematic Noise
Systematic error, which is not determined by chance, often refers to the...

02/23/2021
Rethinking Natural Adversarial Examples for Classification Models
Recently, it was found that many real-world examples without intentional...

06/21/2021
Adversarial Examples Make Strong Poisons
The adversarial machine learning literature is largely partitioned into...

01/29/2019
Adversarial Examples Are a Natural Consequence of Test Error in Noise
Over the last few years, the phenomenon of adversarial examples --- mali...

06/05/2019
A systematic framework for natural perturbations from videos
We introduce a systematic framework for quantifying the robustness of cl...

02/11/2021
Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy
We typically compute aggregate statistics on held-out test data to asses...

10/16/2020
Mischief: A Simple Black-Box Attack Against Transformer Architectures
We introduce Mischief, a simple and lightweight method to produce a clas...

Code Repositories

natural-adv-examples

A Harder ImageNet Test Set

view repo
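
Before running the evaluation sketch above, the test set has to be downloaded and unpacked; the snippet below is a hedged sketch of that step. The URL is the one the repository README has pointed to, but treat it as an assumption and defer to the repo itself.

```python
# Hedged sketch: download and extract ImageNet-A.
# The URL is an assumption taken from the repo README; prefer the repo link.
import tarfile
import urllib.request

URL = "https://people.eecs.berkeley.edu/~hendrycks/imagenet-a.tar"
urllib.request.urlretrieve(URL, "imagenet-a.tar")
with tarfile.open("imagenet-a.tar") as tar:
    tar.extractall(".")  # creates ./imagenet-a/<wnid>/ image folders
```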

Adversary_resistant_CV

The goal is to create a simple computer vision system that is at least somewhat resistant to adversarial examples.

view repo