Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks

02/22/2020
by Kirthi Shankar Sivamani, et al.

Recent studies have shown that deep learning models are vulnerable to specifically crafted adversarial inputs that are quasi-imperceptible to humans. In this letter, we propose a novel method to detect adversarial inputs by augmenting the main classification network with multiple binary detectors (observer networks) that take inputs from the hidden layers of the original network (convolutional kernel outputs) and classify the input as clean or adversarial. During inference, the detectors are treated as part of an ensemble network, and the input is deemed adversarial if at least half of the detectors classify it as such. The proposed method addresses the trade-off between accuracy on clean and adversarial samples, since the original classification network is not modified during detection. The use of multiple observer networks makes attacking the detection mechanism non-trivial even when the attacker is aware of the victim classifier. We achieve a 99.5% detection accuracy on the CIFAR-10 dataset using the Fast Gradient Sign Attack in a semi-white-box setup, with a false positive rate of only 0.12% in this scenario.
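The observer-network arrangement described in the abstract lends itself to a short illustration. The PyTorch sketch below is not the paper's implementation: the backbone, the observer architecture, the tap points (block1, block2), and the exact voting rule are assumptions made for the example. It only shows the general mechanism, in which binary detectors read intermediate activations of a frozen classifier via forward hooks, and a majority vote among them flags an input as adversarial.

```python
# Minimal sketch of observer networks attached to a frozen classifier.
# All architectural choices here are illustrative assumptions, not the
# configuration used in the paper.
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Stand-in victim classifier; any CNN with named conv blocks would do."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        return self.fc(self.pool(x).flatten(1))


class Observer(nn.Module):
    """Binary detector fed by one hidden layer's activations (clean vs. adversarial)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(in_channels * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, feats):
        return self.net(feats)


class ObserverEnsemble(nn.Module):
    """Wraps a frozen classifier; forward hooks tap hidden activations for the observers."""
    def __init__(self, classifier: nn.Module, tap_layers, tap_channels):
        super().__init__()
        self.classifier = classifier
        for p in self.classifier.parameters():
            p.requires_grad_(False)  # the victim network itself is never modified
        self.observers = nn.ModuleList(Observer(c) for c in tap_channels)
        self._feats = {}
        for i, layer in enumerate(tap_layers):
            layer.register_forward_hook(self._make_hook(i))

    def _make_hook(self, idx):
        def hook(_module, _inp, out):
            self._feats[idx] = out
        return hook

    def forward(self, x):
        logits = self.classifier(x)  # also populates self._feats via the hooks
        votes = [obs(self._feats[i]).argmax(1) for i, obs in enumerate(self.observers)]
        # The input is flagged adversarial if at least half of the observers say so.
        adversarial = torch.stack(votes).float().mean(0) >= 0.5
        return logits, adversarial


model = SmallCNN()
ensemble = ObserverEnsemble(model, [model.block1, model.block2], [32, 64])
logits, is_adv = ensemble(torch.randn(8, 3, 32, 32))
print(is_adv)  # one boolean detection flag per input in the batch
```

Because the classifier's weights are frozen and only the observers are trained (on clean versus adversarially perturbed activations), clean-sample accuracy of the original network is unaffected, which is the trade-off the method is designed to avoid.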

Related research

AuxBlocks: Defense Adversarial Example via Auxiliary Blocks (02/18/2019)
Deep learning models are vulnerable to adversarial examples, which poses...

Adversarial Sample Detection Through Neural Network Transport Dynamics (06/07/2023)
We propose a detector of adversarial samples that is based on the view o...

EMShepherd: Detecting Adversarial Samples via Side-channel Leakage (03/27/2023)
Deep Neural Networks (DNN) are vulnerable to adversarial perturbations-s...

DetectX – Adversarial Input Detection using Current Signatures in Memristive XBar Arrays (06/22/2021)
Adversarial input detection has emerged as a prominent technique to hard...

FADER: Fast Adversarial Example Rejection (10/18/2020)
Deep neural networks are vulnerable to adversarial examples, i.e., caref...

Bypassing Backdoor Detection Algorithms in Deep Learning (05/31/2019)
Deep learning models are known to be vulnerable to various adversarial m...

Deep Latent Defence (10/09/2019)
Deep learning methods have shown state of the art performance in a range...
