STRIP: A Defence Against Trojan Attacks on Deep Neural Networks

02/18/2019
by   Yansong Gao, et al.

Recent trojan attacks on deep neural network (DNN) models are an insidious variant of data poisoning attacks. Trojan attacks exploit an effective backdoor created in a DNN model by leveraging the difficulty of interpreting the learned model, causing any input stamped with the attacker's chosen trojan trigger to be misclassified. Trojan attacks are easy to craft; survive even in adverse conditions such as different viewpoints and lighting conditions on images; and threaten real-world applications such as autonomous vehicles and robotics. Because the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojaned inputs is a challenge, especially at run-time when models are in active operation. We focus on vision systems and build the STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of the predicted classes for the perturbed inputs from a given deployed model, whether malicious or benign. A low entropy in the predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input, a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on two popular and contrasting datasets: MNIST and CIFAR10. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, across four tested trojan trigger types: three triggers are identified in previous attack works and one dedicated trigger is crafted by us to demonstrate the trigger-size insensitivity of the STRIP detection approach. In particular, on the dataset of natural images in CIFAR10, we have empirically achieved the desired result of 0% for both the FRR and FAR.
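
To make the detection procedure concrete, here is a minimal sketch of the perturb-and-measure step described above; it is not the authors' released code. It assumes a Keras-style `model.predict` that returns softmax probabilities, images as float arrays in [0, 1], and hypothetical names and parameters (`held_out_clean`, `alpha`, `n_overlay`) for the superimposition step. The detection threshold is calibrated from the entropy distribution of clean inputs at the preset FRR.

```python
import numpy as np

def strip_entropy(model, x, held_out_clean, n_overlay=100, alpha=0.5):
    """Mean Shannon entropy of predictions over perturbed copies of x.

    Each copy superimposes x with a randomly drawn held-out clean image,
    mirroring STRIP's intentional-perturbation step (alpha blend is an
    assumed form of superimposition).
    """
    idx = np.random.choice(len(held_out_clean), size=n_overlay, replace=False)
    entropies = []
    for i in idx:
        blended = alpha * x + (1.0 - alpha) * held_out_clean[i]  # superimpose patterns
        probs = model.predict(blended[None, ...], verbose=0)[0]   # softmax output
        probs = np.clip(probs, 1e-12, 1.0)                        # guard against log(0)
        entropies.append(-np.sum(probs * np.log2(probs)))
    return float(np.mean(entropies))

def calibrate_threshold(model, clean_inputs, held_out_clean, frr=0.01):
    """Entropy threshold set at the frr-quantile of clean-input entropies,
    so roughly an frr fraction of benign inputs is falsely rejected."""
    scores = [strip_entropy(model, x, held_out_clean) for x in clean_inputs]
    return float(np.percentile(scores, 100.0 * frr))

# Run-time detection: flag an input as trojaned when its averaged entropy
# falls below the calibrated threshold, i.e. the prediction is dominated
# by the trigger rather than the (perturbed) input content.
# is_trojaned = strip_entropy(model, x, held_out_clean) < threshold
```

The key design point is that a benign input, once blended with unrelated images, yields near-random predictions (high entropy), whereas a trigger-stamped input keeps forcing the attacker's target class (low entropy), which is exactly the input-dependence violation the abstract describes.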


