Adversarial Examples Are a Natural Consequence of Test Error in Noise

01/29/2019
by Nic Ford, et al.

Over the last few years, the phenomenon of adversarial examples --- maliciously constructed inputs that fool trained machine learning models --- has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.
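To make the corruption-robustness evaluation the abstract refers to concrete, the following is a minimal sketch (not the authors' code) of how one might estimate a classifier's error rate on images corrupted with additive Gaussian noise. It assumes a PyTorch model and a data loader yielding images with pixel values in [0, 1]; the names `model`, `loader`, and `sigma` are illustrative assumptions.

```python
# Minimal sketch: error rate of an image classifier under additive Gaussian noise.
# Assumes a PyTorch `model` and a `loader` yielding (image, label) batches
# with pixel values in [0, 1]. `sigma` is the noise standard deviation in
# the same units as the pixels.
import torch

def error_rate_in_noise(model, loader, sigma=0.1, device="cpu"):
    """Return the fraction of noise-corrupted inputs the model misclassifies."""
    model.eval()
    wrong, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            # Corrupt each image with i.i.d. Gaussian noise and clamp the
            # result back into the valid pixel range.
            noisy = (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)
            preds = model(noisy).argmax(dim=1)
            wrong += (preds != labels).sum().item()
            total += labels.numel()
    return wrong / total
```

Sweeping `sigma` over a range of values gives an error-versus-noise curve, which is one simple way to quantify the kind of corruption robustness discussed above; benchmarks such as Imagenet-C extend this idea to a broader family of realistic corruptions.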


