Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors

02/28/2019
by   Ke Sun, et al.

Most previous work explains adversarial examples from a few specific perspectives and lacks an integrated understanding of the problem. In this paper, we present a systematic study of adversarial examples from three aspects: the amount of training data, task-dependent factors, and model-specific factors. In particular, we show that adversarial generalization (i.e., test accuracy on adversarial examples) under standard training requires more data than standard generalization (i.e., test accuracy on clean examples), and we uncover the global relationship between generalization and robustness with respect to data size, especially when the data is augmented by generative models. This reveals a trade-off between standard generalization and robustness in the limited-data regime, and their consistency once the data size is large enough. Furthermore, through extensive empirical analysis we explore how different task-dependent and model-specific factors influence the vulnerability of deep neural networks, and we provide recommendations for defending against adversarial attacks. Our results outline a potential path towards a clear and systematic understanding of adversarial examples.
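To make the distinction between standard and adversarial generalization concrete, here is a minimal sketch that measures both quantities for a toy model. The linear classifier, the Gaussian blob data, and the FGSM-style perturbation are all assumptions for illustration; they are not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs (assumed, for illustration).
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),
               rng.normal(+1.0, 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A fixed linear classifier sign(w.x + b), assumed already trained.
w = np.array([1.0, 1.0])
b = 0.0

def predict(X):
    return (X @ w + b > 0).astype(int)

def fgsm(X, y, eps):
    """One FGSM step: x_adv = x + eps * sign(grad_x loss).

    For logistic loss on a linear model, grad_x = (sigmoid(z) - y) * w,
    whose sign is (1 - 2y) * sign(w) componentwise.
    """
    grad_sign = np.sign(np.outer(1 - 2 * y, w))
    return X + eps * grad_sign

# "Standard generalization": accuracy on clean test points.
clean_acc = (predict(X) == y).mean()
# "Adversarial generalization": accuracy on FGSM-perturbed test points.
adv_acc = (predict(fgsm(X, y, eps=0.5)) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

For a linear model, each FGSM step can only shrink a point's margin, so adversarial accuracy is never above clean accuracy; the gap between the two numbers is the robustness deficit the paper studies as a function of training-set size.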


