Enhanced countering adversarial attacks via input denoising and feature restoring

11/19/2021
by Yanni Li, et al.

Despite the fact that deep neural networks (DNNs) have achieved prominent performance in various applications, they are well known to be vulnerable to adversarial examples/samples (AEs): clean/original samples altered by imperceptible perturbations. Existing defense methods against adversarial attacks tend to damage information in the original samples, reducing the accuracy of the target classifier. To overcome this weakness, this paper presents IDFR, an enhanced method for countering adversarial attacks via Input Denoising and Feature Restoring. The proposed IDFR is made up of an enhanced input denoiser (ID) and a hidden lossy-feature restorer (FR) based on convex hull optimization. Extensive experiments conducted on benchmark datasets show that IDFR outperforms various state-of-the-art defense methods and is highly effective at protecting target models against various black-box and white-box adversarial attacks. Source code is released at: https://github.com/ID-FR/IDFR
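The two-stage design described above can be illustrated with a minimal sketch. Note this is not the paper's implementation: the function names (`denoise_input`, `restore_features`) are hypothetical, the learned input denoiser is stood in for by a simple mean filter, and the convex hull optimization is approximated by projecting a corrupted feature onto a convex combination of clean-feature prototypes.

```python
import numpy as np

def denoise_input(x, window=3):
    """Stand-in for the learned input denoiser (ID): a simple mean filter
    that smooths small adversarial perturbations in the input image."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + window, j:j + window].mean()
    return out

def restore_features(f, prototypes, temperature=1.0):
    """Stand-in for the feature restorer (FR): map a (possibly corrupted)
    hidden feature vector to a convex combination of clean-feature
    prototypes, so the restored feature lies inside their convex hull."""
    d2 = ((prototypes - f) ** 2).sum(axis=1)   # squared distance to each prototype
    w = np.exp(-d2 / temperature)              # closer prototypes get more weight
    w = w / w.sum()                            # weights are non-negative and sum to 1
    return w @ prototypes                      # a point in the convex hull

# Toy usage: denoise a spiky input, then pull a far-away feature back
# toward the hull of three clean prototypes.
x = np.ones((4, 4)); x[1, 1] = 5.0            # a single-pixel "perturbation"
x_clean = denoise_input(x)
protos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
f_restored = restore_features(np.array([2.0, 2.0]), protos)
```

In this toy setting the restored feature is a weighted average of clean prototypes, which is the key property the convex-hull formulation buys: the defended feature can never leave the region spanned by clean samples.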


