
Rethinking Natural Adversarial Examples for Classification Models

02/23/2021
by Xiao Li, et al.

Recently, it was found that many real-world examples can fool machine learning models without any intentional modification; such examples are called "natural adversarial examples". ImageNet-A is a well-known dataset of natural adversarial examples. By analyzing this dataset, we hypothesized that a large, cluttered, and/or unusual background is an important reason why its images are difficult to classify. We validated this hypothesis by reducing the background influence in ImageNet-A examples with object detection techniques. Experiments showed that object detection models with various classification models as backbones obtained much higher accuracy than the corresponding classification models alone. A detection model based on the classification model EfficientNet-B7 achieved a top-1 accuracy of 53.95%, surpassing previous state-of-the-art classification models trained on ImageNet and suggesting that accurate localization information can significantly boost the performance of classification models on ImageNet-A. We then manually cropped the objects in the ImageNet-A images and created a new dataset, named ImageNet-A-Plus. A human test on the new dataset showed that deep learning-based classifiers still perform quite poorly compared with humans. The new dataset can therefore be used to study the robustness of classification models to the internal variance of objects, without the confound of background disturbance.
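The crop-then-classify idea in the abstract can be sketched with off-the-shelf components. The following is a minimal, hypothetical sketch, not the authors' implementation: it uses a generic torchvision Faster R-CNN to localize the most confident object and a plain ResNet-50 on the crop, whereas the paper builds detection models with stronger classification backbones such as EfficientNet-B7.

```python
# Hypothetical sketch of "reduce background influence via detection, then
# classify the crop". Stand-in models; requires torchvision >= 0.13 for the
# weights="DEFAULT" argument.
import torch
import torchvision
from torchvision import transforms
import torchvision.transforms.functional as TF
from PIL import Image

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.resnet50(weights="DEFAULT").eval()

# Standard ImageNet preprocessing for the classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def classify_cropped(image_path: str) -> int:
    img = Image.open(image_path).convert("RGB")
    # torchvision detectors return boxes sorted by descending confidence.
    det = detector([TF.to_tensor(img)])[0]
    if len(det["boxes"]) > 0:
        # Crop to the top-scoring box to suppress background clutter.
        x1, y1, x2, y2 = det["boxes"][0].round().int().tolist()
        img = img.crop((x1, y1, x2, y2))
    logits = classifier(preprocess(img).unsqueeze(0))
    return logits.argmax(dim=1).item()  # ImageNet-1k class index
```

Comparing `classify_cropped` against the same classifier run on the full image over ImageNet-A would approximate the background-reduction experiment described in the abstract.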

Related Research

Natural Adversarial Examples (07/16/2019)
We introduce natural adversarial examples -- real-world, unmodified, and...

Natural Adversarial Objects (11/07/2021)
Although state-of-the-art object detection methods have shown compelling...

A systematic framework for natural perturbations from videos (06/05/2019)
We introduce a systematic framework for quantifying the robustness of cl...

Adversarial Examples Make Strong Poisons (06/21/2021)
The adversarial machine learning literature is largely partitioned into ...

ConvNets and ImageNet Beyond Accuracy: Explanations, Bias Detection, Adversarial Examples and Model Criticism (11/30/2017)
ConvNets and Imagenet have driven the recent success of deep learning fo...

Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models (08/05/2018)
The prediction accuracy has been the long-lasting and sole standard for ...

IIIT-AR-13K: A New Dataset for Graphical Object Detection in Documents (08/06/2020)
We introduce a new dataset for graphical object detection in business do...
