Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising

07/12/2021
by Anouar Kherchouche, et al.

Despite the impressive performance of deep neural networks (DNNs), recent studies have shown their vulnerability to adversarial examples (AEs), i.e., carefully perturbed inputs designed to fool the targeted DNN. Currently, the literature is rich with many effective attacks to craft such AEs. Meanwhile, many defense strategies have been developed to mitigate this vulnerability. However, the latter have shown their effectiveness against specific attacks and do not generalize well to different attacks. In this paper, we propose a framework for defending a DNN classifier against adversarial samples. The proposed method is based on a two-stage framework involving a separate detector and a denoising block. The detector aims to detect AEs by characterizing them through the use of natural scene statistics (NSS), where we demonstrate that these statistical features are altered by the presence of adversarial perturbations. The denoiser is based on a block-matching 3D (BM3D) filter fed with an optimum threshold value estimated by a convolutional neural network (CNN) to project the samples detected as AEs back onto their data manifold. We conducted a complete evaluation on three standard datasets, namely MNIST, CIFAR-10 and Tiny-ImageNet. The experimental results show that the proposed defense method outperforms state-of-the-art defense techniques by improving the robustness against a set of attacks under black-box, gray-box and white-box settings. The source code is available at: https://github.com/kherchouche-anouar/2DAE
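To make the detector side of the two-stage pipeline concrete, here is a minimal sketch. It computes mean-subtracted contrast-normalized (MSCN) coefficients, the standard NSS transform behind BRISQUE-style models, and feeds simple moment statistics of the MSCN map to a binary classifier. The exact NSS parameterization used in the paper (e.g., generalized Gaussian fits) is not reproduced here; the moment features, the Gaussian window sigma, and the sklearn SVM are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC

def mscn_coefficients(image, sigma=7/6):
    """Mean-subtracted contrast-normalized (MSCN) map, the common NSS
    transform; adversarial perturbations disturb its statistics."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    return (image - mu) / (np.sqrt(np.abs(var)) + 1.0)

def nss_features(image):
    """Toy 4-dim feature vector: moments of the MSCN coefficients.
    (The paper fits parametric NSS models; moments keep the sketch short.)"""
    m = mscn_coefficients(image)
    c = m - m.mean()
    return np.array([m.mean(), m.var(), (c**3).mean(), (c**4).mean()])

def train_detector(clean_imgs, adv_imgs):
    """Binary clean-vs-adversarial classifier on NSS features.
    clean_imgs / adv_imgs are hypothetical lists of 2-D grayscale arrays."""
    X = np.stack([nss_features(x) for x in clean_imgs + adv_imgs])
    y = np.array([0] * len(clean_imgs) + [1] * len(adv_imgs))
    return SVC(kernel="rbf").fit(X, y)
```

On the purification side, a sketch of the adaptive denoiser: a small CNN regresses a per-image noise level, which is then passed to BM3D as its threshold. The network architecture below is invented for illustration, and the denoising call assumes the `bm3d` package from PyPI (`pip install bm3d`); the paper's actual CNN and threshold parameterization may differ.

```python
import numpy as np
import torch
import torch.nn as nn
from bm3d import bm3d  # assumes the PyPI `bm3d` package

class ThresholdNet(nn.Module):
    """Tiny CNN regressing a BM3D noise level in (0, 1) for an image
    in [0, 1]; the architecture is illustrative, not the paper's."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1))

    def forward(self, x):
        return torch.sigmoid(self.body(x))

def purify(image, net):
    """Project a detected AE back toward the data manifold:
    BM3D denoising with the CNN-estimated threshold."""
    with torch.no_grad():
        t = torch.from_numpy(image[None, None].astype(np.float32))
        sigma = float(net(t))
    return bm3d(image, sigma_psd=sigma)
```

At inference time, the two stages compose as the abstract describes: an input first passes through the detector, and only samples flagged as adversarial are routed through purify() before classification.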


Related research

11/19/2021 · Enhanced countering adversarial attacks via input denoising and feature restoring
Despite the fact that deep neural networks (DNNs) have achieved prominen...

10/22/2019 · Adversarial Example Detection by Classification for Deep Speech Recognition
Machine Learning systems are vulnerable to adversarial attacks and will ...

01/06/2021 · Adversarial Robustness by Design through Analog Computing and Synthetic Gradients
We propose a new defense mechanism against adversarial attacks inspired ...

10/26/2021 · Frequency Centric Defense Mechanisms against Adversarial Examples
Adversarial example (AE) aims at fooling a Convolution Neural Network by...

11/18/2022 · Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks
Deep neural networks (DNNs) are powerful, but they can make mistakes tha...

12/07/2021 · Defending against Model Stealing via Verifying Embedded External Features
Obtaining a well-trained model involves expensive data collection and tr...

05/24/2022 · EBM Life Cycle: MCMC Strategies for Synthesis, Defense, and Density Modeling
This work presents strategies to learn an Energy-Based Model (EBM) accor...
