Local Intrinsic Dimensionality Signals Adversarial Perturbations

09/24/2021
by Sandamal Weerasinghe, et al.

The vulnerability of machine learning models to adversarial perturbations has motivated a significant amount of research under the broad umbrella of adversarial machine learning. Sophisticated attacks can cause learning algorithms to learn decision functions, or to make decisions, with poor predictive performance. In this context, a growing body of literature uses local intrinsic dimensionality (LID), a local metric that describes the minimum number of latent variables required to describe each data point, to detect adversarial samples and subsequently mitigate their effects. The research to date has tended to treat LID as a practical defence method, often without fully explaining why LID is able to detect adversarial samples. In this paper, we derive a lower bound and an upper bound for the LID value of a perturbed data point and show that the bounds, in particular the lower bound, are positively correlated with the magnitude of the perturbation. Hence, data points that are perturbed by a large amount will have larger LID values than unperturbed samples, which justifies LID's use in the prior literature. Furthermore, empirical validation on benchmark datasets confirms the validity of the bounds.
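
To make the LID quantity concrete, below is a minimal sketch of the maximum-likelihood LID estimator (Amsaleg et al., 2015) commonly used in the LID-based defence literature the abstract refers to. The function name, the neighbourhood size k, and the synthetic data are illustrative assumptions for this sketch, not details taken from the paper:

```python
# A minimal sketch, assuming the standard maximum-likelihood LID estimator;
# function names, k, and the synthetic data are illustrative, not from
# this paper.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lid_mle(x, reference, k=20):
    """MLE of the local intrinsic dimensionality at x.

    LID(x) ~ -(1/k * sum_i log(r_i / r_k))^{-1}, where r_1 <= ... <= r_k
    are the distances from x to its k nearest neighbours in `reference`.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(reference)
    dists, _ = nn.kneighbors(x.reshape(1, -1))
    r = dists[0]
    r = r[r > 0]  # drop zero distances (e.g., x itself in the reference set)
    return -1.0 / np.mean(np.log(r / r[-1]))

# Illustration of the abstract's claim: data on a 3-dimensional manifold
# embedded in 100 dimensions has low LID; a large perturbation pushes a
# point off the manifold, so its neighbour distances become nearly uniform
# and the LID estimate rises.
rng = np.random.default_rng(0)
latent = rng.normal(size=(2000, 3))              # 3 latent variables
data = latent @ rng.normal(size=(3, 100))        # embed in 100 dimensions
clean = data[0]
perturbed = clean + 5.0 * rng.normal(size=100)   # large perturbation
print("clean LID:    ", lid_mle(clean, data[1:]))      # low, near the latent dimensionality
print("perturbed LID:", lid_mle(perturbed, data[1:]))  # much larger
```

The exact values depend on the neighbourhood size k and the perturbation magnitude, but the qualitative gap between the two estimates is the behaviour that the paper's bounds formalise.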

research · 04/21/2021
Jacobian Regularization for Mitigating Universal Adversarial Perturbations
Universal Adversarial Perturbations (UAPs) are input perturbations that ...

research · 11/15/2018
A Spectral View of Adversarially Robust Features
Given the apparent difficulty of learning models that are robust to adve...

research · 04/07/2022
Optimization Models and Interpretations for Three Types of Adversarial Perturbations against Support Vector Machines
Adversarial perturbations have drawn great attentions in various deep ne...

research · 08/27/2018
Targeted Nonlinear Adversarial Perturbations in Images and Videos
We introduce a method for learning adversarial perturbations targeted to...

research · 04/01/2021
Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction
Recent neural-based relation extraction approaches, though achieving pro...

research · 08/01/2020
Vulnerability Under Adversarial Machine Learning: Bias or Variance?
Prior studies have unveiled the vulnerability of the deep neural network...

research · 07/20/2023
Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples
Backdoor attacks are serious security threats to machine learning models...
