Rethinking Natural Adversarial Examples for Classification Models

02/23/2021
by   Xiao Li, et al.

Recently, it was found that many real-world examples can fool machine learning models without any intentional modification; such examples are called "natural adversarial examples". ImageNet-A is a well-known dataset of natural adversarial examples. By analyzing this dataset, we hypothesized that large, cluttered, and/or unusual backgrounds are an important reason why its images are difficult to classify. We validated this hypothesis by using object detection techniques to reduce the background influence in ImageNet-A examples. Experiments showed that object detection models with various classification models as backbones obtained much higher accuracy than the corresponding classification models alone. A detection model built on the classification model EfficientNet-B7 achieved a top-1 accuracy of 53.95%, surpassing previous state-of-the-art classification models trained on ImageNet, which suggests that accurate localization information can significantly boost the performance of classification models on ImageNet-A. We then manually cropped the objects in the ImageNet-A images and created a new dataset, named ImageNet-A-Plus. A human test on the new dataset showed that deep learning-based classifiers still performed quite poorly compared with humans. The new dataset can therefore be used to study the robustness of classification models to the internal variance of objects without the disturbance of the background.
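The abstract's core idea is a localize-then-classify pipeline: a detector first isolates the object, and the classifier sees only the crop rather than the cluttered background. A minimal sketch of that idea follows; the `detect` and `classify` callables here are hypothetical stand-ins illustrating the pipeline shape, not the paper's actual models.

```python
# Hedged sketch of the crop-then-classify idea: reduce background
# influence by cropping to a detected bounding box before classifying.
# The detector and classifier below are toy stand-ins (assumptions),
# not the detection or EfficientNet models used in the paper.

def crop_to_box(image, box):
    """Crop an image (list of pixel rows) to a (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def classify_with_localization(image, detect, classify):
    """Localize the object with a detector, then classify the crop.

    Falls back to the full image when the detector finds nothing.
    `detect` is expected to return boxes as (x1, y1, x2, y2), best first.
    """
    boxes = detect(image)
    region = crop_to_box(image, boxes[0]) if boxes else image
    return classify(region)

# Toy 4x4 "image": the object occupies the 2x2 centre, background elsewhere.
image = [["bg"] * 4 for _ in range(4)]
for y in (1, 2):
    for x in (1, 2):
        image[y][x] = "obj"

detect = lambda img: [(1, 1, 3, 3)]  # pretend detector output for the centre
classify = lambda region: max(       # majority-vote "classifier" over pixels
    ("obj", "bg"), key=lambda c: sum(row.count(c) for row in region)
)

print(classify(image))                                      # → bg
print(classify_with_localization(image, detect, classify))  # → obj
```

On the full image the background dominates and the toy classifier is fooled; on the detector's crop the object dominates, mirroring the accuracy gain the paper reports when classification backbones are given localization information.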

