Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

01/15/2018
by Zhinus Marzi, et al.

Deep neural networks represent the state of the art in machine learning across a growing number of fields, including vision, speech, and natural language processing. However, recent work raises important questions about the robustness of such architectures by showing that it is possible to induce classification errors through tiny, almost imperceptible perturbations. Vulnerability to such "adversarial attacks", or "adversarial examples", has been conjectured to stem from the excessive linearity of deep networks. In this paper, we study this phenomenon in the setting of a linear classifier and show that it is possible to exploit the sparsity of natural data to combat ℓ_∞-bounded adversarial perturbations. Specifically, we demonstrate the efficacy of a sparsifying front end via an ensemble-averaged analysis and via experimental results on the MNIST handwritten digit database. To the best of our knowledge, this is the first work to provide a theoretically rigorous framework for defense against adversarial attacks.
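The intuition behind the defense can be illustrated with a toy sketch (assumptions, not the paper's implementation: a signal that is K-sparse in the identity basis, Gaussian classifier weights, and a hypothetical `sparsify` helper that keeps the K largest-magnitude coefficients). For a linear classifier with weights w, the worst-case ℓ_∞-bounded attack perturbs each coordinate by ±ε against the sign of w, shifting the decision statistic by ε·||w||₁; projecting onto the few coefficients where the signal actually lives discards most of that shift:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a d-dimensional signal that is K-sparse (here in the
# identity basis, for simplicity), and a linear classifier w.
d, K = 100, 5
x = np.zeros(d)
support = rng.choice(d, size=K, replace=False)
x[support] = rng.uniform(3.0, 5.0, size=K)   # large coefficients on the support

w = rng.normal(size=d)        # linear classifier: decision = sign(w @ x)
eps = 0.1

# Worst-case l_inf-bounded attack on a linear classifier:
# e = -eps * sign(w) shifts the statistic w @ x by eps * ||w||_1.
e = -np.sign(w) * eps
x_adv = x + e

def sparsify(v, k):
    """Front end (illustrative): keep the k largest-magnitude
    coefficients of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Shift in the decision statistic without and with the front end.
shift_raw = abs(w @ x_adv - w @ x)                           # = eps * ||w||_1
shift_def = abs(w @ sparsify(x_adv, K) - w @ sparsify(x, K)) # ~ eps * sum of |w_i| on the support

print(f"attack shift, no defense: {shift_raw:.3f}")
print(f"attack shift, sparsified: {shift_def:.3f}")
```

After sparsification, the perturbation only survives on the K retained coordinates, so the attack's leverage drops roughly from ε·||w||₁ to ε times the ℓ₁ mass of w on the signal's support (a factor of about K/d on average in this toy model).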

