Dropping Pixels for Adversarial Robustness

05/01/2019
by Hossein Hosseini, et al.

Deep neural networks are vulnerable to adversarial examples. In this paper, we propose to train and test networks on randomly subsampled images with high drop rates. We show that this approach significantly improves robustness to adversarial examples under bounded L_0, L_2, and L_inf perturbations, at only a small cost in standard accuracy. We argue that subsampling pixels can be viewed as providing a set of robust features for the input image, and thus improves robustness without requiring adversarial training.
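As a rough illustration of the idea, the sketch below (not the authors' code) randomly drops a large fraction of pixel locations from an image; the function name, the 0.9 drop rate, and the NumPy image format are assumptions for illustration. In the spirit of the abstract, the same subsampling transform would be applied at both training and test time.

import numpy as np

def drop_pixels(image, drop_rate=0.9, rng=None):
    # Randomly zero out a fraction of pixel locations in an H x W x C image.
    # The mask is shared across channels so a dropped pixel is removed entirely.
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    keep_mask = rng.random((h, w)) >= drop_rate  # True where the pixel is kept
    return image * keep_mask[..., None]

# Example: apply the same transform during training and at inference.
# x_sub = drop_pixels(x.astype(np.float32), drop_rate=0.9)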

research
09/23/2020

Semantics-Preserving Adversarial Training

Adversarial training is a defense technique that improves adversarial ro...
research
09/04/2019

Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes?

Neural Networks have been shown to be sensitive to common perturbations ...
research
01/09/2018

Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks

Deep neural networks are vulnerable to adversarial examples. Prior defen...
research
07/03/2020

Towards Robust Deep Learning with Ensemble Networks and Noisy Layers

In this paper we provide an approach for deep learning that protects aga...
research
02/17/2021

Improving Hierarchical Adversarial Robustness of Deep Neural Networks

Do all adversarial examples have the same consequences? An autonomous dr...
research
10/14/2021

Adversarial examples by perturbing high-level features in intermediate decoder layers

We propose a novel method for creating adversarial examples. Instead of ...
research
06/22/2022

Understanding the effect of sparsity on neural networks robustness

This paper examines the impact of static sparsity on the robustness of a...
