Detecting Trojaned DNNs Using Counterfactual Attributions

12/03/2020
by Karan Sikka, et al.

We target the problem of detecting Trojans or backdoors in DNNs. Such models behave normally on typical inputs but produce specific incorrect predictions for inputs poisoned with a Trojan trigger. Our approach is based on the novel observation that the trigger behavior depends on a few ghost neurons that activate on the trigger pattern and exhibit abnormally high relative attribution for wrong decisions when activated. Further, these trigger neurons are also active on normal inputs of the target class. We therefore use counterfactual attributions to localize these ghost neurons from clean inputs and then incrementally excite them to observe the resulting changes in the model's accuracy. We then use this information for Trojan detection via a deep set encoder, which makes the detector invariant to the number of model classes, the architecture, and so on. Our approach is implemented in the TrinityAI tool, which exploits the synergies between the trustworthiness, resilience, and interpretability challenges in deep learning. We evaluate our approach on benchmarks with high diversity in model architectures, triggers, etc., and show consistent gains (+10%) over state-of-the-art methods that rely on the susceptibility of the DNN to specific adversarial attacks, which in turn requires strong assumptions about the nature of the Trojan attack.
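
To make the probing procedure concrete, below is a minimal PyTorch sketch, not the authors' implementation: it scores the channels of one convolutional layer on clean inputs using gradient-times-activation as a stand-in for the counterfactual attribution, treats the top-scoring channels as candidate ghost neurons, and records how accuracy changes as those channels are incrementally excited. The model, layer, data loader, target class, excitation scales, and the attribution proxy are all assumptions for illustration.

```python
# Sketch of the probing idea: attribute, select candidate "ghost" channels,
# excite them with increasing scales, and record the accuracy curve.
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def channel_attributions(model, layer, loader, target_class, device="cpu"):
    """Proxy attribution: mean |activation * d(logit_target)/d(activation)| per channel.
    Assumes `layer` is convolutional (output shape [B, C, H, W]) and that the
    model's parameters require grad so intermediate activations track gradients."""
    scores, acts = None, {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    for x, _ in loader:
        out = model(x.to(device))
        a = acts["out"]
        a.retain_grad()                      # keep grad on the intermediate activation
        out[:, target_class].sum().backward()
        s = (a * a.grad).abs().flatten(2).mean(dim=(0, 2))  # one score per channel
        scores = s if scores is None else scores + s
        model.zero_grad()
    handle.remove()
    return scores

def excitation_curve(model, layer, channel_ids, loader, scales=(1, 2, 4, 8), device="cpu"):
    """Scale the selected channels' activations and record accuracy at each scale."""
    curve = []
    for k in scales:
        def boost(m, i, o, k=k):
            o = o.clone()
            o[:, channel_ids] = o[:, channel_ids] * k
            return o                          # forward hook return value replaces the output
        h = layer.register_forward_hook(boost)
        curve.append(accuracy(model, loader, device))
        h.remove()
    return curve  # a sharp accuracy drop under excitation suggests trigger-sensitive neurons

# Hypothetical usage: pick the top-5 channels by attribution, then probe them.
# scores = channel_attributions(model, model.layer4, clean_loader, target_class=0)
# curve = excitation_curve(model, model.layer4, scores.topk(5).indices, clean_loader)
```

Collecting one such excitation curve per candidate target class and pooling them with a DeepSets-style encoder (sum or mean pooling over the set of curves, followed by an MLP) is one way to obtain the class-count-invariant detector input described above; the exact featurization used by the paper is not reproduced here.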

Related research

01/21/2020
Massif: Interactive Interpretation of Adversarial Attacks on Deep Learning
Deep neural networks (DNNs) are increasingly powering high-stakes applic...

10/29/2020
Can the state of relevant neurons in a deep neural networks serve as indicators for detecting adversarial attacks?
We present a method for adversarial attack detection based on the inspec...

10/27/2018
Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples
Adversarial sample attacks perturb benign inputs to induce DNN misbehavi...

04/14/2020
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
Despite the remarkable performance and generalization levels of deep lea...

03/09/2020
Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world
An adversarial attack paradigm explores various scenarios for vulnerabil...

03/23/2021
Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs
Deep neural networks (DNNs) are known to produce incorrect predictions w...

03/16/2021
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Backdoor attack injects malicious behavior to models such that inputs em...
