Can the state of relevant neurons in a deep neural network serve as indicators for detecting adversarial attacks?

10/29/2020
by Roger Granda et al.

We present a method for adversarial attack detection based on the inspection of a sparse set of neurons. We follow the hypothesis that adversarial attacks introduce imperceptible perturbations into the input, and that these perturbations change the state of the neurons relevant to the concepts modelled by the attacked network. Monitoring the state of these neurons should therefore enable the detection of adversarial attacks. Focusing on image classification, our method identifies the neurons that are relevant to the classes predicted by the model. A qualitative inspection of this sparse set of neurons shows that their state changes in the presence of adversarial samples. Moreover, quantitative results from our empirical evaluation indicate that our method recognizes adversarial samples, produced by state-of-the-art attack methods, with accuracy comparable to that of state-of-the-art detectors.
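The detection idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: here relevance is approximated by mean activation on clean samples of a class, and a sample is flagged when too many of the monitored neurons flip their on/off state relative to the class's clean profile. All function names and thresholds are hypothetical.

```python
import numpy as np

def relevant_neurons(clean_acts, k):
    """Indices of the k neurons most active, on average, for one class.

    clean_acts: (n_samples, n_neurons) activations on clean inputs of that class.
    Relevance-by-mean-activation is an assumption made for this sketch.
    """
    mean_act = clean_acts.mean(axis=0)
    return np.argsort(mean_act)[-k:]

def neuron_state(acts, idx, thresh=0.0):
    """Binary on/off state of the monitored neurons for one input."""
    return (acts[idx] > thresh).astype(int)

def is_adversarial(sample_acts, idx, class_profile, max_flips):
    """Flag the input if too many monitored neurons changed state
    compared with the clean-class profile (a hypothetical decision rule)."""
    flips = int(np.sum(neuron_state(sample_acts, idx) != class_profile))
    return flips > max_flips

# Toy usage with synthetic activations (3 neurons, 2 clean samples):
clean = np.array([[1.0, 0.1, 2.0],
                  [0.8, 0.0, 1.5]])
idx = relevant_neurons(clean, k=2)          # monitor the two most active neurons
profile = neuron_state(clean[0], idx)       # clean on/off profile for this class
print(is_adversarial(np.array([0.0, 0.5, -0.1]), idx, profile, max_flips=1))
```

In practice the activations would come from intermediate layers of the attacked network, and the relevance scores and flip threshold would be estimated on a held-out clean set.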


Related research

- 01/31/2022: "Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons". We identify fragile and robust neurons of deep learning architectures us...
- 12/03/2020: "Detecting Trojaned DNNs Using Counterfactual Attributions". We target the problem of detecting Trojans or backdoors in DNNs. Such mo...
- 03/27/2023: "EMShepherd: Detecting Adversarial Samples via Side-channel Leakage". Deep Neural Networks (DNN) are vulnerable to adversarial perturbations-s...
- 06/05/2023: "Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning". Deep neural networks are capable of state-of-the-art performance in many...
- 10/27/2018: "Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples". Adversarial sample attacks perturb benign inputs to induce DNN misbehavi...
- 05/24/2023: "Relating Implicit Bias and Adversarial Attacks through Intrinsic Dimension". Despite their impressive performance in classification, neural networks ...
