Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification

09/10/2019
by Eitan Rothberg, et al.

Today's state-of-the-art image classifiers fail to correctly classify carefully manipulated adversarial images. In this work, we develop a new, localized adversarial attack that generates adversarial examples by imperceptibly altering only the backgrounds of normal images. We first use this attack to highlight the unnecessary sensitivity of neural networks to changes in an image's background, and then use it as part of a new training technique: localized adversarial training. By including locally adversarial images in the training set, we create a classifier that suffers less loss than a non-adversarially trained counterpart on both natural and adversarial inputs. Evaluation of our localized adversarial training algorithm on the MNIST and CIFAR-10 datasets shows reduced accuracy loss on natural images and increased robustness against adversarial inputs.
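The core idea of a localized attack, as the abstract describes it, is to confine an adversarial perturbation to a spatial region of the image (here, the background). A minimal sketch of this idea, assuming an FGSM-style sign-of-gradient step and a precomputed binary background mask (the paper's exact attack and mask-construction details are not given in the abstract, so the function below is illustrative only):

```python
import numpy as np

def localized_fgsm(image, grad, mask, eps):
    """Apply an FGSM-style perturbation restricted to a masked region.

    image: float array in [0, 1], any shape (e.g. H x W or H x W x C)
    grad:  gradient of the loss with respect to the image (same shape)
    mask:  binary array, 1 where perturbation is allowed (background),
           0 where pixels must stay untouched (foreground/object)
    eps:   maximum per-pixel perturbation magnitude
    """
    # Perturb only where mask == 1; foreground pixels are left unchanged.
    perturbed = image + eps * np.sign(grad) * mask
    # Keep the result a valid image.
    return np.clip(perturbed, 0.0, 1.0)

# Toy usage: a 4x4 image whose left half is "background" (mask == 1).
image = np.full((4, 4), 0.5)
grad = np.ones((4, 4))            # stand-in for a real loss gradient
mask = np.zeros((4, 4))
mask[:, :2] = 1.0                  # left two columns are background
adv = localized_fgsm(image, grad, mask, eps=0.1)
```

In this sketch the foreground pixels of `adv` are bitwise identical to `image`, while background pixels move by at most `eps`, which is what makes the perturbation both localized and (for small `eps`) imperceptible.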

