Adversarial Neural Pruning

08/12/2019
by   Divyam Madaan, et al.

Neural networks are known to be susceptible to adversarial perturbations; they are also computationally and memory intensive, which makes them difficult to deploy in real-world applications where both security and computation are constrained. In this work, we aim to obtain networks that are both robust and sparse, and thus applicable to such scenarios, based on the intuition that latent features have varying degrees of susceptibility to adversarial perturbations. Specifically, we define vulnerability in the latent feature space and propose a Bayesian framework that prioritizes features by their contribution to both the original and the adversarial loss, pruning vulnerable features while preserving robust ones. Through quantitative evaluation and qualitative analysis of perturbations to latent features, we show that our sparsification method serves as a defense mechanism against adversarial attacks, and that the robustness indeed comes from the model's ability to prune the latent features most susceptible to adversarial perturbations.
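The core idea — score each latent feature by how much an adversarial perturbation distorts it, then prune the most vulnerable ones — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are made up, and using the mean absolute distortion of each feature as the vulnerability measure (rather than the paper's Bayesian prioritization) is a simplifying assumption.

```python
import numpy as np

def feature_vulnerability(z_clean, z_adv):
    """Vulnerability proxy (an assumption, not the paper's exact definition):
    mean absolute distortion of each latent feature under attack.
    z_clean, z_adv: arrays of shape (batch, num_features)."""
    return np.mean(np.abs(z_clean - z_adv), axis=0)

def prune_vulnerable(z, vulnerability, sparsity):
    """Zero out the fraction `sparsity` of features with the highest
    vulnerability, keeping the robust ones. Returns (pruned_z, mask)."""
    k = int(sparsity * vulnerability.size)
    mask = np.ones_like(vulnerability)
    if k > 0:
        # indices of the k most vulnerable features
        drop = np.argsort(vulnerability)[-k:]
        mask[drop] = 0.0
    return z * mask, mask
```

For example, if one feature column shifts far more than the others under an adversarial perturbation, it receives the highest vulnerability score and is the first to be masked out when pruning 25% of the features.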

