
Heat and Blur: An Effective and Fast Defense Against Adversarial Examples

03/17/2020 · by Haya Brama, et al. · Ariel University

The growing incorporation of artificial neural networks (NNs) into many fields, and especially into life-critical systems, is restrained by their vulnerability to adversarial examples (AEs). Some existing defense methods can increase NNs' robustness, but they often require special architectures or training procedures and are therefore inapplicable to already-trained models. In this paper, we propose a simple defense that combines feature visualization with input modification, and can therefore be applied to various pre-trained networks. By reviewing several interpretability methods, we gain new insights into how AEs influence NNs' computation. Based on these, we hypothesize that information about the "true" object is preserved within the NN's activity even when the input is adversarial, and present a variant of feature visualization that extracts this information in the form of relevance heatmaps. We then use these heatmaps as the basis for our defense, in which the adversarial effects are corrupted by massive blurring. We also provide a new evaluation metric that captures the effects of both attacks and defenses more thoroughly and descriptively, and demonstrate both the effectiveness of the defense and the utility of the suggested metric with VGG19 results on the ImageNet dataset.
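The abstract's pipeline (extract a relevance heatmap from the network's activity, then corrupt the adversarial perturbation by blurring everything outside the relevant regions) can be illustrated with a short sketch. The paper's specific feature-visualization variant is not described in the abstract, so plain gradient saliency stands in for the relevance heatmap here; the blur strength `sigma`, the heatmap-weighted blending rule, and the VGG19 loading are illustrative assumptions, not the authors' implementation.

```python
import torch
import torchvision.transforms.functional as TF
from torchvision.models import vgg19

model = vgg19(pretrained=True).eval()

def relevance_heatmap(x):
    """Gradient saliency as a stand-in relevance map, rescaled to [0, 1]."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    logits[0, logits.argmax()].backward()                # gradient of the top score
    heat = x.grad.abs().max(dim=1, keepdim=True).values  # (1, 1, H, W)
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)

def heat_and_blur_predict(x, sigma=15.0):
    """Blur the input massively, then keep detail only where relevance is high."""
    k = int(4 * sigma) | 1                               # odd kernel covering ~4*sigma
    blurred = TF.gaussian_blur(x, kernel_size=k, sigma=sigma)
    heat = relevance_heatmap(x)
    # Adversarial noise in low-relevance ("cold") regions is wiped out by the blur,
    # while evidence for the "true" object survives in the heatmap-preserved areas.
    defended = heat * x + (1 - heat) * blurred
    with torch.no_grad():
        return model(defended).argmax(dim=1)
```

With a heavy blur on 224x224 inputs, small-norm perturbations outside the relevant regions are largely destroyed before the defended image is re-classified; how faithfully this sketch matches the paper's defense depends on the heatmap method actually used.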

