Adversarial Example Detection for DNN Models: A Review

05/01/2021
by Ahmed Aldahdooh, et al.

Deep Learning (DL) has shown great success in many human-related tasks, which has led to its adoption in many computer-vision-based applications, such as security surveillance systems, autonomous vehicles, and healthcare. Such safety-critical applications can only be deployed successfully once they are able to overcome safety-critical challenges. Among these challenges is the defense against and/or the detection of the adversarial example (AE). An adversary can carefully craft small, often imperceptible, noise, called a perturbation, to be added to a clean image to generate an AE. The aim of the AE is to fool the DL model, which makes it a potential risk for DL applications. Many test-time evasion attacks and countermeasures, i.e., defense or detection methods, have been proposed in the literature. Moreover, a few reviews and surveys have been published that theoretically present the taxonomy of the threats and the countermeasure methods, with little focus on AE detection methods. In this paper, we attempt to provide a theoretical and experimental review of AE detection methods. A detailed discussion of such methods is provided, and experimental results for eight state-of-the-art detectors are presented under different scenarios on four datasets. We also provide potential challenges and future perspectives for this research direction.
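
To make the notion of a test-time evasion attack concrete, the sketch below crafts an AE with the Fast Gradient Sign Method (FGSM), a standard attack of the kind such reviews cover; it is not a method proposed in this paper, and the model, inputs, and perturbation budget `eps` are illustrative placeholders (assuming a PyTorch image classifier with inputs scaled to [0, 1]).

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=8 / 255):
    """Craft an adversarial example x_adv = x + eps * sign(grad_x loss).

    A minimal FGSM sketch: the perturbation is small (bounded by eps per
    pixel) and is chosen to increase the classification loss, so the
    resulting image often fools the model while looking unchanged.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Nudge every pixel in the direction that increases the loss,
    # then clip back to the valid image range [0, 1].
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: `clf` is a trained classifier, `images`/`labels` a clean batch.
# adv_images = fgsm_attack(clf, images, labels)
```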
