Detecting Localized Adversarial Examples: A Generic Approach using Critical Region Analysis

02/10/2021
by Fengting Li, et al.

Deep neural networks (DNNs) have been applied in a wide range of applications, e.g., face recognition and image classification; however, they are vulnerable to adversarial examples. By adding a small amount of imperceptible perturbation, an attacker can easily manipulate the outputs of a DNN. In particular, localized adversarial examples perturb only a small, contiguous region of the target object, which makes them robust and effective in both the digital and physical worlds. Although localized adversarial examples have more severe real-world impacts than traditional pixel attacks, they have not been well addressed in the literature. In this paper, we propose a generic defense system called TaintRadar to accurately detect localized adversarial examples by analyzing the critical regions that attackers have manipulated. The main idea is that when critical regions are removed from input images, the ranking changes of adversarial labels are larger than those of benign labels. Compared with existing defense solutions, TaintRadar can effectively capture sophisticated localized partial attacks, e.g., the eye-glasses attack, while requiring no additional training or fine-tuning of the original model's structure. Comprehensive experiments have been conducted in both the digital and physical worlds to verify the effectiveness and robustness of our defense.
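The rank-change idea above can be illustrated with a minimal sketch. This is not the paper's exact TaintRadar procedure; the masking scheme, the `toy_model` classifier, and the choice of fill value are all illustrative assumptions. The point shown is only the core signal: a label whose top ranking depends on a small patch drops sharply in rank once that region is masked out.

```python
import numpy as np

def rank_changes(model, image, region_mask, fill_value=0.0):
    """Compare label rankings before and after masking a candidate
    critical region. A large rank drop for the top label suggests that
    region may carry a localized adversarial perturbation.
    (Illustrative simplification, not the paper's exact algorithm.)"""
    original = model(image)
    masked_img = image.copy()
    masked_img[region_mask] = fill_value
    masked = model(masked_img)

    # Double argsort turns scores into ranks: 0 = highest-scoring label.
    rank_before = np.argsort(np.argsort(-original))
    rank_after = np.argsort(np.argsort(-masked))
    return rank_after - rank_before  # positive = label fell in the ranking

# Hypothetical toy classifier over 3 labels: label 0's score is driven
# almost entirely by a bright patch in the top-left 4x4 corner.
def toy_model(img):
    scores = np.array([0.05, 0.2, 0.1])
    scores[0] += 5.0 * img[:4, :4].mean()
    return scores

img = np.zeros((8, 8))
img[:4, :4] = 1.0                      # simulated adversarial patch
mask = np.zeros_like(img, dtype=bool)
mask[:4, :4] = True                    # candidate critical region

delta = rank_changes(toy_model, img, mask)
# Label 0 was top-ranked only because of the patch; masking the region
# causes the largest rank change, flagging the input as suspicious.
print(delta)  # -> [ 2 -1 -1]
```

In this toy setup, label 0 falls from rank 0 to rank 2 when the patch is removed, while the benign labels shift by at most one position, which is the asymmetry the detection relies on.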
