Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

08/05/2018
by Dong Su, et al.

Prediction accuracy has long been the sole standard for comparing image classification models, including in the ImageNet competition. However, recent studies have highlighted the lack of robustness of well-trained deep neural networks to adversarial examples: visually imperceptible perturbations of natural images can easily be crafted to mislead image classifiers into misclassification. To demystify the trade-off between robustness and accuracy, in this paper we thoroughly benchmark 18 ImageNet models using multiple robustness metrics, including the distortion, success rate, and transferability of adversarial examples between 306 pairs of models. Our extensive experimental results reveal several new insights: (1) linear scaling law: the empirical ℓ_2 and ℓ_∞ distortion metrics scale linearly with the logarithm of the classification error; (2) model architecture is a more critical factor for robustness than model size, and the disclosed accuracy-robustness Pareto frontier can serve as an evaluation criterion for ImageNet model designers; (3) within a similar network architecture, increasing network depth slightly improves robustness in terms of ℓ_∞ distortion; (4) some models (in the VGG family) exhibit high adversarial transferability, while most adversarial examples crafted from one model transfer only within the same family. Experiment code is publicly available at <https://github.com/huanzhang12/Adversarial_Survey>.
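The linear scaling law in finding (1) states that the average ℓ_2 and ℓ_∞ perturbation sizes of adversarial examples vary roughly linearly with the logarithm of a model's top-1 classification error, and the 306 model pairs reported for transferability correspond to the 18 × 17 ordered source-target combinations. The sketch below is a minimal illustration of how such a log-linear fit and the underlying per-image distortion metrics could be computed; it is not taken from the paper's repository, and the numeric values are placeholders rather than measured results.

```python
# Hypothetical sketch (not the paper's code): fit the reported scaling law
# distortion ≈ slope * log(classification error) + intercept across models.
import numpy as np

# Placeholder numbers for illustration only; real values would come from
# attacking each of the 18 ImageNet models and averaging per-image distortions.
top1_error = np.array([0.30, 0.25, 0.22, 0.20])           # per-model top-1 error
mean_l2_distortion = np.array([0.45, 0.38, 0.33, 0.29])   # mean l2 of successful perturbations

# Ordinary least-squares fit of distortion against log(error).
slope, intercept = np.polyfit(np.log(top1_error), mean_l2_distortion, deg=1)
print(f"distortion ≈ {slope:.3f} * log(error) + {intercept:.3f}")

def distortions(x_clean, x_adv):
    """Per-example l2 and l_inf perturbation sizes for a batch of images."""
    delta = (x_adv - x_clean).reshape(len(x_clean), -1)
    return np.linalg.norm(delta, ord=2, axis=1), np.abs(delta).max(axis=1)
```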
