
Heat and Blur: An Effective and Fast Defense Against Adversarial Examples

by Haya Brama, et al.
Ariel University

The growing incorporation of artificial neural networks (NNs) into many fields, and especially into life-critical systems, is restrained by their vulnerability to adversarial examples (AEs). Some existing defense methods can increase NNs' robustness, but they often require special architectures or training procedures and are therefore irrelevant to already-trained models. In this paper, we propose a simple defense that combines feature visualization with input modification and can therefore be applied to various pre-trained networks. By reviewing several interpretability methods, we gain new insights into how AEs influence NNs' computation. Based on these insights, we hypothesize that information about the "true" object is preserved within the NN's activity even when the input is adversarial, and we present a feature-visualization variant that can extract that information in the form of relevance heatmaps. We then use these heatmaps as the basis for our defense, in which the adversarial effects are corrupted by massive blurring. We also provide a new evaluation metric that captures the effects of both attacks and defenses more thoroughly and descriptively, and we demonstrate the effectiveness of the defense and the utility of the proposed metric with VGG19 results on the ImageNet dataset.
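The abstract's core idea — corrupt adversarial perturbations with heavy blurring while preserving the regions a relevance heatmap marks as important — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `heat_and_blur`, the blending scheme, and the choice of a Gaussian blur with `sigma=8.0` are assumptions for demonstration purposes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def heat_and_blur(image, heatmap, sigma=8.0):
    """Blur the input heavily, then restore high-relevance regions.

    This is a hypothetical sketch of heatmap-guided blurring:
      image:   (H, W, C) float array in [0, 1]
      heatmap: (H, W) non-negative relevance map (any scale)
      sigma:   blur strength; large values "massively" blur the input,
               corrupting fine-grained adversarial noise
    """
    # Normalize the heatmap to [0, 1] so it can serve as a blend weight.
    h = heatmap - heatmap.min()
    h = h / (h.max() + 1e-8)

    # Apply a heavy Gaussian blur to each channel independently.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma) for c in range(image.shape[-1])],
        axis=-1,
    )

    # Blend: keep the original pixels where relevance is high,
    # and the blurred pixels elsewhere.
    w = h[..., None]
    return w * image + (1.0 - w) * blurred
```

The defended image can then be fed back into the unmodified pre-trained classifier, which is what makes this style of input-modification defense architecture-agnostic: no retraining or structural change to the network is required.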
