Real World Robustness from Systematic Noise

09/02/2021
by Yan Wang, et al.

Systematic error, which is not determined by chance, refers to inaccuracy (in either the observation or the measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequently occurring adversarial examples caused by systematic error. More specifically, we find that a trained neural network classifier can be fooled by inconsistent implementations of image decoding and resizing. These tiny implementation differences often cause an accuracy drop from training to deployment. To benchmark these real-world adversarial examples, we propose the ImageNet-S dataset, which enables researchers to measure a classifier's robustness to systematic error. For example, we find that a standard ResNet-50 trained on ImageNet can show a 1%-5% accuracy difference due to systematic error alone. Together, our evaluation and dataset may aid future work toward real-world robustness and practical generalization.
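
As a concrete illustration of the kind of pipeline mismatch described above, the minimal sketch below decodes and resizes the same JPEG with two widely used libraries (Pillow and OpenCV) and compares the resulting pixel values. The file name, target size, and choice of libraries are illustrative assumptions, not necessarily the configurations the authors evaluated.

```python
# Minimal sketch: the same JPEG, decoded and resized by two common
# libraries, yields different pixel values. Such discrepancies can be
# enough to change a classifier's prediction.
# Assumptions: "test.jpg" is any JPEG on disk; 224x224 is the usual
# ImageNet input size; both pipelines nominally use bilinear resizing.
import numpy as np
import cv2
from PIL import Image

path = "test.jpg"

# Pipeline A: Pillow decode + bilinear resize (the PIL path used by
# many PyTorch/torchvision preprocessing pipelines).
pil_img = Image.open(path).convert("RGB")
pil_arr = np.asarray(pil_img.resize((224, 224), Image.BILINEAR))

# Pipeline B: OpenCV decode + bilinear resize (convert BGR -> RGB so
# the two arrays are directly comparable).
cv_img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
cv_arr = cv2.resize(cv_img, (224, 224), interpolation=cv2.INTER_LINEAR)

# Compare the two "identical" pipelines pixel by pixel.
diff = np.abs(pil_arr.astype(np.int16) - cv_arr.astype(np.int16))
print(f"max pixel difference:  {diff.max()}")
print(f"mean pixel difference: {diff.mean():.3f}")
print(f"pixels that differ:    {(diff > 0).mean():.1%}")
```

Even with nominally identical settings (RGB output, bilinear interpolation, the same target size), the two pipelines typically disagree on a large fraction of pixels, because both the JPEG decoders and the resize kernels differ slightly between libraries. This is exactly the sort of systematic discrepancy that can move an input across a decision boundary at deployment time.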
