Interpreting and Improving Adversarial Robustness with Neuron Sensitivity

09/16/2019
by   Chongzhi Zhang, et al.

Deep neural networks (DNNs) are vulnerable to adversarial examples, in which inputs with imperceptible perturbations mislead DNNs to incorrect results. Despite the potential risks they pose, adversarial examples are also valuable for providing insights into the weaknesses and blind spots of DNNs. Thus, interpretability of a DNN in the adversarial setting aims to explain the rationale behind its decision-making process and to build a deeper understanding that leads to better practical applications. To address this issue, we explain adversarial robustness for deep models from a new perspective of neuron sensitivity, which is measured by the intensity of neuron behavior variation between benign and adversarial examples. In this paper, we first establish a close connection between adversarial robustness and neuron sensitivity, as sensitive neurons make the most non-trivial contributions to model predictions in the adversarial setting. Based on this, we further propose to improve adversarial robustness by constraining the similarity of sensitive neurons' behaviors between benign and adversarial examples, which stabilizes sensitive neurons under adversarial perturbations. Moreover, we demonstrate that state-of-the-art adversarial training methods improve model robustness by reducing neuron sensitivity, which in turn confirms the strong connection between adversarial robustness and neuron sensitivity, as well as the effectiveness of using sensitive neurons to build robust models. Extensive experiments on various datasets demonstrate the effectiveness of our algorithm.
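To make the core idea concrete, here is a minimal PyTorch-style sketch of how neuron sensitivity and the proposed stabilization term could be computed. The function names, the use of a layer's channel activations as "neuron behavior," and the top-k selection of sensitive neurons are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def neuron_sensitivity(feat_benign, feat_adv):
    # Per-neuron (per-channel) sensitivity: average magnitude of the
    # activation change between benign and adversarial inputs.
    # Input shape: (batch, channels, ...); output shape: (channels,)
    diff = (feat_benign - feat_adv).abs()
    reduce_dims = [0] + list(range(2, diff.dim()))  # keep channel axis
    return diff.mean(dim=reduce_dims)

def sensitivity_loss(feat_benign, feat_adv, top_k=16):
    # Stabilize the top-k most sensitive neurons by penalizing the
    # distance between their benign and adversarial activations.
    sens = neuron_sensitivity(feat_benign, feat_adv)
    idx = sens.topk(top_k).indices  # indices of most sensitive channels
    return F.mse_loss(feat_adv[:, idx], feat_benign[:, idx])

In a training loop, such a term would presumably be added to the usual classification loss on benign and adversarial examples (with activations extracted, e.g., via forward hooks on the chosen layer) and weighted by a hyperparameter.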

Related research

01/25/2019
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples
Sometimes it is not enough for a DNN to produce an outcome. For example,...

10/09/2018
Analyzing the Noise Robustness of Deep Neural Networks
Deep neural networks (DNNs) are vulnerable to maliciously generated adve...

03/20/2020
One Neuron to Fool Them All
Despite vast research in adversarial examples, the root causes of model ...

10/27/2021
Adversarial Neuron Pruning Purifies Backdoored Deep Models
As deep neural networks (DNNs) are growing larger, their requirements fo...

09/21/2022
Toy Models of Superposition
Neural networks often pack many unrelated concepts into a single neuron ...

02/12/2022
DeepSensor: Deep Learning Testing Framework Based on Neuron Sensitivity
Despite impressive capabilities and outstanding performance, deep neural...

12/24/2021
CatchBackdoor: Backdoor Testing by Critical Trojan Neural Path Identification via Differential Fuzzing
The success of deep neural networks (DNNs) in real-world applications ha...
