Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness

09/04/2021, by Uriya Pesso, et al.

Adversarial attacks have been shown to be highly effective at degrading the performance of deep neural networks (DNNs). The most prominent defense is adversarial training, a method for learning a robust model. Nevertheless, adversarial training does not make DNNs immune to adversarial perturbations. We propose a novel solution by adopting the recently suggested Predictive Normalized Maximum Likelihood. Specifically, our defense performs adversarial targeted attacks according to different hypotheses, where each hypothesis assumes a specific label for the test sample. Then, by comparing the hypothesis probabilities, we predict the label. Our refinement process aligns with recent findings on the properties of the adversarial subspace. We extensively evaluate our approach on 16 adversarial attack benchmarks using ResNet-50, WideResNet-28, and a 2-layer ConvNet trained with ImageNet, CIFAR10, and MNIST, showing a significant improvement of up to 5.7%.
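The hypothesis-testing idea in the abstract can be sketched in a few lines. The sketch below is a hypothetical, heavily simplified illustration (a linear-softmax "model" and a signed-gradient targeted perturbation standing in for the paper's actual refinement procedure): for each candidate label, the test input is refined with a targeted attack toward that label inside a small L-infinity ball, the model's probability for that label is recorded, and the per-hypothesis scores are normalized before predicting. The function name `pnml_predict` and all parameter values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def pnml_predict(x, W, eps=0.5, lr=0.1, steps=5):
    """Simplified pNML-style defense sketch (hypothetical).

    For each label hypothesis y, refine x with a targeted
    perturbation that increases p(y | x), constrained to an
    eps-ball around the original input, then compare the
    resulting per-hypothesis probabilities.
    """
    num_classes = W.shape[0]
    scores = np.zeros(num_classes)
    for y in range(num_classes):
        x_adv = x.copy()
        for _ in range(steps):
            p = softmax(W @ x_adv)
            # Gradient of log p(y | x) w.r.t. x for a
            # linear-softmax model: W[y] - W^T p.
            grad = W[y] - W.T @ p
            x_adv = x_adv + lr * np.sign(grad)
            # Project back into the L_inf eps-ball around x.
            x_adv = np.clip(x_adv, x - eps, x + eps)
        scores[y] = softmax(W @ x_adv)[y]
    # Normalize over hypotheses, in the spirit of pNML.
    probs = scores / scores.sum()
    return int(probs.argmax()), probs
```

The intuition being illustrated: a hypothesis matching the true label is easy to reinforce with a small targeted perturbation, while a wrong hypothesis requires a much larger push, so its post-refinement probability stays comparatively low.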

Related research

- 12/06/2018, On Configurable Defense against Adversarial Example Attacks: Machine learning systems based on deep neural networks (DNNs) have gaine...
- 01/16/2020, Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet: Adversarial attacks on deep neural networks (DNNs) have been found for s...
- 10/26/2021, Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks: Deep Neural Networks (DNNs) are known to be vulnerable to adversarial at...
- 04/25/2023, Learning Robust Deep Equilibrium Models: Deep equilibrium (DEQ) models have emerged as a promising class of impli...
- 07/24/2019, Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training: We introduce a feature scattering-based adversarial training approach fo...
- 05/25/2019, Adversarial Distillation for Ordered Top-k Attacks: Deep Neural Networks (DNNs) are vulnerable to adversarial attacks, espec...
- 08/01/2019, Robustifying deep networks for image segmentation: Purpose: The purpose of this study is to investigate the robustness of a...
