DoPa: A Fast and Comprehensive CNN Defense Methodology against Physical Adversarial Attacks

05/21/2019
by Zirui Xu, et al.

Recently, Convolutional Neural Networks (CNNs) have demonstrated considerable vulnerability to adversarial attacks: they can be easily misled by adversarial perturbations. With increasingly aggressive methods being proposed, adversarial attacks can also be applied in the physical world, causing practical issues for various CNN-powered applications. Most existing defense works against physical adversarial attacks focus only on eliminating explicit perturbation patterns from the inputs, without interpreting or addressing the CNN's intrinsic vulnerability. As a result, most of them incur considerable data processing costs and lack versatility across different attacks. In this paper, we propose DoPa, a fast and comprehensive CNN defense methodology against physical adversarial attacks. By interpreting the CNN's vulnerability, we find that non-semantic adversarial perturbations can activate the CNN with significantly abnormal activations that can even overwhelm the activations of semantic input patterns. We improve the CNN recognition process by adding a self-verification stage that analyzes the semantics of the distinguished activation patterns, with only one CNN inference involved. Based on the detection result, we further propose a data recovery methodology to defend against physical adversarial attacks. We apply this detection and defense methodology to both image and audio CNN recognition processes. Experiments show that our methodology can achieve an average detection success rate of 90% against image physical adversarial attacks. Also, the proposed defense method can achieve a 92% detection success rate and a 77.5% accuracy recovery rate for audio recognition applications. Moreover, the proposed defense methods are at most 2.3x faster than state-of-the-art defense methods, making them feasible for resource-constrained platforms such as mobile devices.
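
To make the self-verification idea concrete, here is a minimal sketch in PyTorch; it is an illustration, not the paper's implementation. It assumes a model that exposes its final convolutional layer as model.last_conv, a precomputed matrix class_patterns holding the mean channel activations of clean training examples for each class, and an illustrative similarity threshold; all of these names and values are assumptions. An input is flagged as adversarial when the activation pattern driving the prediction is dissimilar from the semantic pattern that clean inputs of the predicted class normally produce, using only the single classification inference:

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def self_verify(model, x, class_patterns, threshold=0.6):
        # class_patterns: [num_classes, C] mean channel activations of the
        # last conv layer, precomputed on clean training data (assumption).
        feats = {}
        # Capture the last conv layer's output inside the same forward
        # pass used for classification -- only one CNN inference runs.
        handle = model.last_conv.register_forward_hook(
            lambda mod, inp, out: feats.update(out=out))
        logits = model(x)
        handle.remove()

        pred = logits.argmax(dim=1)                 # predicted classes [B]
        act = feats["out"].mean(dim=(2, 3))         # pooled activations [B, C]
        ref = class_patterns[pred]                  # class references  [B, C]
        sim = F.cosine_similarity(act, ref, dim=1)  # semantic similarity [B]

        # Low similarity means the dominant activations are non-semantic,
        # which the paper attributes to physical adversarial perturbations.
        return pred, sim < threshold

A check of this shape adds only a pooling step and a similarity computation on top of the normal forward pass, which is why a single inference suffices.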

Related research

10/17/2019 · LanCe: A Comprehensive and Lightweight CNN Defense Methodology against Physical Adversarial Attacks on Embedded Multimedia Applications
Recently, adversarial attacks can be applied to the physical world, caus...

04/26/2020 · Enabling Fast and Universal Audio Adversarial Attack Using Generative Model
Recently, the vulnerability of DNN-based audio systems to adversarial at...

08/24/2022 · Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps
The existence of adversarial attacks on convolutional neural networks (C...

12/21/2020 · Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks
Convolutional neural networks (CNN) have been more and more applied in m...

04/27/2020 · Adversarial Fooling Beyond "Flipping the Label"
Recent advancements in CNNs have shown remarkable achievements in variou...

06/25/2019 · Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection
There is substantial interest in the use of machine learning (ML) based ...

04/30/2021 · Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks
Recently, the vulnerability of deep image classification models to adver...
