How many dimensions are required to find an adversarial example?

03/24/2023
by Charles Godfrey, et al.

Past work exploring adversarial vulnerability has focused on situations where an adversary can perturb all dimensions of the model input. By contrast, a range of recent works consider the case where an adversary can perturb either (i) a limited number of input parameters or (ii) a subset of modalities in a multimodal problem. In both of these cases, adversarial examples are effectively constrained to a subspace V in the ambient input space 𝒳. Motivated by this, in this work we investigate how adversarial vulnerability depends on dim(V). In particular, we show that the adversarial success of standard PGD attacks with ℓ^p norm constraints behaves like a monotonically increasing function of ϵ (dim(V)/dim 𝒳)^{1/q}, where ϵ is the perturbation budget and 1/p + 1/q = 1, provided p > 1 (the case p = 1 presents additional subtleties, which we analyze in some detail). This functional form can be easily derived from a simple toy linear model, and as such our results lend further credence to arguments that adversarial examples are endemic to locally linear models on high-dimensional spaces.
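For intuition, here is a minimal version of the kind of toy linear-model calculation the abstract alludes to. The specific setup, a linear score w^⊤x with V spanned by k of the n input coordinates and weight entries of comparable magnitude, is our illustrative assumption, not necessarily the paper's exact model.

```latex
% Sketch: best-case linear attack confined to a subspace V.
% Assumptions (illustrative): score f(x) = w^T x, V spanned by k of the
% n coordinate directions of the input space X, entries of w all of
% comparable magnitude c, and P_V the orthogonal projection onto V.
\[
  \max_{\delta \in V,\ \|\delta\|_p \le \epsilon} w^{\top}\delta
  \;=\; \epsilon \,\| P_V w \|_q,
  \qquad \frac{1}{p} + \frac{1}{q} = 1,
\]
% by Holder duality. With |w_i| \approx c we get \|P_V w\|_q \approx c k^{1/q}
% and \|w\|_q \approx c n^{1/q}, so relative to an unconstrained attack the
% achievable change in the score scales as
\[
  \frac{\| P_V w \|_q}{\| w \|_q}
  \;\approx\; \left( \frac{\dim V}{\dim \mathcal{X}} \right)^{1/q},
\]
% which is the functional form quoted in the abstract.
```

And here is a sketch of what a subspace-constrained PGD attack of the kind the abstract describes might look like in practice. Everything here, the function name `subspace_pgd`, the 0/1 `mask` encoding a coordinate-aligned V, and the choice p = ∞ (so q = 1), is a hedged illustration rather than the paper's implementation.

```python
# Minimal sketch of l_inf PGD restricted to a coordinate-aligned subspace V.
# The mask (a 0/1 tensor shaped like x) selects the coordinates spanning V;
# the model, step size, and budget are placeholder assumptions.
import torch

def subspace_pgd(model, x, y, mask, eps=0.03, alpha=0.01, steps=40):
    """PGD with an l_inf budget eps, perturbing only the coordinates in V."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta * mask), y)
        loss.backward()
        with torch.no_grad():
            # Ascent step on the masked coordinates, then projection back
            # onto the eps-ball; multiplying by mask keeps delta inside V.
            delta += alpha * delta.grad.sign() * mask
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta.detach() * mask).clamp(0, 1)
```

Sweeping the number of active coordinates in `mask` and recording attack success rate would then be a natural way to probe the (dim(V)/dim 𝒳)^{1/q} dependence empirically.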
