Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis

03/01/2021
by Mahsa Paknezhad, et al.

Despite many proposed algorithms to provide robustness to deep learning (DL) models, DL models remain susceptible to adversarial attacks. We hypothesize that the adversarial vulnerability of DL models stems from two factors. The first factor is data sparsity: in the high-dimensional data space, there are large regions outside the support of the data distribution. The second factor is the existence of many redundant parameters in DL models. Owing to these factors, different models can arrive at different decision boundaries with comparably high prediction accuracy. How the decision boundaries lie in the space outside the support of the data distribution does not affect the prediction accuracy of a model; it does, however, make an important difference to the model's adversarial robustness. We propose that the ideal decision boundary should be as far as possible from the support of the data distribution. In this paper, we develop a training framework in which DL models learn such decision boundaries, spanning the space around the class distributions further from the data points themselves. Semi-supervised learning is deployed to achieve this objective by leveraging unlabeled data generated in the space outside the support of the data distribution. We measured the adversarial robustness of models trained using this framework against well-known adversarial attacks and found that our results, together with those of other regularization methods and adversarial training, support our data sparsity hypothesis. We show that unlabeled data generated from noise by our framework is almost as effective for adversarial robustness as unlabeled data sourced from existing data sets or generated by synthesis algorithms. Our code is available at https://github.com/MahsaPaknezhad/AdversariallyRobustTraining.
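To make the idea concrete, the following is a minimal PyTorch sketch of the general approach described in the abstract, not the authors' released implementation (see the GitHub link above for that). The helper generate_noise_unlabeled, the noise scales spread and 0.1, and the loss weight lam are illustrative assumptions: noise-generated points serve as unlabeled data outside the data support, and a consistency term on them encourages smooth predictions there, pushing decision boundaries away from the data.

    import torch
    import torch.nn.functional as F

    def generate_noise_unlabeled(labeled_x, num_samples, spread=4.0):
        # Hypothetical helper: sample unlabeled points around the labeled
        # data by adding broad Gaussian noise, so that they tend to fall
        # outside the support of the data distribution.
        idx = torch.randint(0, labeled_x.size(0), (num_samples,))
        return labeled_x[idx] + spread * torch.randn_like(labeled_x[idx])

    def semi_supervised_loss(model, x_l, y_l, x_u, lam=1.0):
        # Supervised cross-entropy on labeled data ...
        sup = F.cross_entropy(model(x_l), y_l)
        # ... plus a consistency penalty on the noise-generated unlabeled
        # data: two perturbed views of each unlabeled point should receive
        # similar predictions, which flattens the model in the sparse
        # regions and keeps decision boundaries far from the data support.
        u1 = model(x_u + 0.1 * torch.randn_like(x_u))
        u2 = model(x_u + 0.1 * torch.randn_like(x_u))
        cons = F.kl_div(F.log_softmax(u1, dim=1),
                        F.softmax(u2.detach(), dim=1),
                        reduction="batchmean")
        return sup + lam * cons

In a training loop, one would call generate_noise_unlabeled once per batch (or once per epoch) and minimize semi_supervised_loss with an ordinary optimizer; any consistency-based semi-supervised objective could stand in for the KL term used here.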
