Sparse Coding Frontend for Robust Neural Networks

04/12/2021
by Metehan Cekic, et al.

Deep Neural Networks are known to be vulnerable to small, adversarially crafted perturbations. The most effective current defenses against these adversarial attacks are variants of adversarial training. In this paper, we introduce a radically different defense trained only on clean images: a sparse coding based frontend which significantly attenuates adversarial perturbations before they reach the classifier. We evaluate our defense on the CIFAR-10 dataset under a wide range of attack types (including L∞-, L2-, and L1-bounded attacks), demonstrating its promise as a general-purpose approach to defense.
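As a rough illustration of the idea (not the paper's actual architecture), a sparse coding frontend can be sketched as follows: project the input onto a sparse code over an overcomplete dictionary assumed to have been learned on clean images, and pass the reconstruction, rather than the raw input, to the classifier. The dictionary, the soft-thresholding solver (ISTA), and all parameters below are illustrative stand-ins.

```python
import numpy as np

def soft_threshold(z, lam):
    # Elementwise shrinkage: zeroes weak activations and
    # shrinks the remaining ones toward zero by lam.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_frontend(x, D, lam=0.2, n_iter=100):
    """Project x onto a sparse code over dictionary D via ISTA,
    then return the reconstruction that would be fed to the classifier.

    x: flattened input of shape (d,)
    D: (d, k) overcomplete dictionary with unit-norm columns,
       assumed (hypothetically) to have been learned on clean images
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # Gradient step on the reconstruction error, then shrinkage.
        a = soft_threshold(a - step * (D.T @ (D @ a - x)), lam * step)
    return D @ a

# Toy demo: a random dictionary stands in for a learned one.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
x_clean = 2.0 * D[:, 0]                            # signal lying on one atom
x_adv = x_clean + 0.05 * rng.standard_normal(64)   # small perturbation
x_hat = sparse_frontend(x_adv, D)                  # attenuated input
```

Because the sparse reconstruction keeps only components that the clean-data dictionary explains well, most of the off-dictionary perturbation is discarded; in the toy demo, the reconstruction ends up closer to the clean signal than the perturbed input is.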

Related research

06/09/2021 - Attacking Adversarial Attacks as A Defense
It is well known that adversarial attacks can fool deep neural networks ...

11/21/2020 - A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks: caref...

05/23/2023 - The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks
Many defenses against adversarial attacks (robust classifiers, randomiza...

12/07/2018 - Adversarial Attacks, Regression, and Numerical Stability Regularization
Adversarial attacks against neural networks in a regression setting are ...

06/28/2020 - FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications
Along with the proliferation of Artificial Intelligence (AI) and Interne...

03/02/2020 - Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
While deep neural networks have been achieving state-of-the-art performa...

05/21/2018 - Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference
Deep neural networks have been demonstrated to be vulnerable to adversar...
