
Local Intrinsic Dimensionality Signals Adversarial Perturbations

by Sandamal Weerasinghe et al.
The University of Melbourne

The vulnerability of machine learning models to adversarial perturbations has motivated a significant amount of research under the broad umbrella of adversarial machine learning. Sophisticated attacks may cause learning algorithms to learn decision functions, or to make decisions, with poor predictive performance. In this context, a growing body of literature uses local intrinsic dimensionality (LID), a local metric that describes the minimum number of latent variables required to describe each data point, to detect adversarial samples and subsequently mitigate their effects. The research to date has tended to focus on using LID as a practical defence method, often without fully explaining why LID can detect adversarial samples. In this paper, we derive a lower bound and an upper bound for the LID value of a perturbed data point, and we demonstrate that these bounds, in particular the lower bound, have a positive correlation with the magnitude of the perturbation. Hence, we show that data points perturbed by a large amount have larger LID values than unperturbed samples, justifying the use of LID in the prior literature. Furthermore, our empirical evaluation demonstrates the validity of the bounds on benchmark datasets.
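The abstract does not specify how LID is estimated in practice; a commonly used choice in this line of work is the maximum-likelihood estimator based on nearest-neighbour distances, LID(x) ≈ −(1/k · Σᵢ ln(rᵢ/rₖ))⁻¹, where rᵢ is the distance from x to its i-th nearest neighbour. A minimal sketch of that estimator (the function name and the choice of k are illustrative, not from the paper):

```python
import numpy as np

def lid_mle(x, data, k=20):
    """Maximum-likelihood LID estimate at x from its k nearest
    neighbours in `data` (nearest-neighbour-distance estimator)."""
    # Euclidean distances from x to every reference point, sorted ascending
    dists = np.sort(np.linalg.norm(data - x, axis=1))
    # Drop a zero distance (x itself, if it appears in data); keep k neighbours
    dists = dists[dists > 0][:k]
    r_k = dists[-1]  # distance to the k-th nearest neighbour
    # LID(x) = -(1/k * sum_i ln(r_i / r_k))^{-1}
    return -1.0 / np.mean(np.log(dists / r_k))
```

Consistent with the paper's thesis, moving a point off the data manifold (e.g. adding an adversarial perturbation orthogonal to it) compresses the ratios rᵢ/rₖ toward 1 and so inflates this estimate relative to the unperturbed point.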


