The Dilemma Between Dimensionality Reduction and Adversarial Robustness

06/18/2020
by   Sheila Alemany, et al.

Recent work has shown that deep learning models are highly vulnerable to adversarial samples: inputs nearly indistinguishable from benign data that the model nevertheless misclassifies. Some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of these models, a direct result of their sensitivity to well-generalizing features in high-dimensional data. We hypothesize that data transformations can influence this vulnerability, since a change in the data manifold directly affects the adversary's ability to create adversarial samples. To approach this problem, we study the effect of dimensionality reduction through the lens of adversarial robustness. This study raises awareness of the positive and negative impacts of five commonly used data transformation techniques on adversarial robustness. The evaluation shows that these techniques contribute to an overall increased vulnerability, with accuracy improving only when the dimensionality reduction approaches the data's optimal intrinsic dimension. The conclusions drawn from this work contribute to understanding and creating more resistant learning models.
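To make the core operation concrete, here is a minimal sketch of dimensionality reduction via PCA, one of the most common techniques of the kind the abstract discusses. This is an illustrative example, not the paper's experimental setup: the synthetic dataset, the choice of PCA, and the target dimension k=2 are all assumptions made for the sketch. The data is constructed with a low intrinsic dimension (2 latent factors embedded in 10 dimensions), so projecting near that intrinsic dimension retains almost all of the signal.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]            # (k, n_features)
    Z = Xc @ components.T          # reduced representation, shape (n_samples, k)
    X_rec = Z @ components + mu    # reconstruction back in the input space
    return Z, X_rec

rng = np.random.default_rng(0)
# Synthetic data with low intrinsic dimension: 2 latent factors in 10-D, plus small noise
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 10))

# Reducing to k=2 (the intrinsic dimension here) loses almost no information;
# reducing below it, or working in the full 10-D space, changes the manifold
# an adversary can exploit.
Z, X_rec = pca_reduce(X, k=2)
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(Z.shape, err)
```

In a robustness study like the one above, such a transform would be applied before training and before crafting adversarial samples, so that both the model and the attacker operate on the reduced representation.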

Related research:

- Towards Deep Learning Models Resistant to Adversarial Attacks (06/19/2017). Recent work has demonstrated that neural networks are vulnerable to adve...
- Dimensionality reduction, regularization, and generalization in overparameterized regressions (11/23/2020). Overparameterization in deep learning is powerful: Very large models fit...
- Improving Model Robustness with Transformation-Invariant Attacks (01/31/2019). Vulnerability of neural networks under adversarial attacks has raised se...
- Visualizing Representations of Adversarially Perturbed Inputs (05/28/2021). It has been shown that deep learning models are vulnerable to adversaria...
- Tensor-Train Parameterization for Ultra Dimensionality Reduction (08/14/2019). Locality preserving projections (LPP) are a classical dimensionality red...
- OutCenTR: A novel semi-supervised framework for predicting exploits of vulnerabilities in high-dimensional datasets (04/03/2023). An ever-growing number of vulnerabilities are reported every day. Yet th...
- Benign Autoencoders (10/02/2022). The success of modern machine learning algorithms depends crucially on e...
