DeepCleanse: Input Sanitization Framework Against Trojan Attacks on Deep Neural Network Systems

08/09/2019
by   Bao Gia Doan, et al.

Doubts over the safety and trustworthiness of deep learning systems have emerged from recent Trojan attacks. These attack variants are both highly potent and insidious: an attacker exploits a backdoor crafted into the neural network by applying a secret trigger, a Trojan, to an input, altering the model's decision on any input to a target prediction determined by, and known only to, the attacker. Detecting Trojan attacks, especially at run time, is challenging: i) Trojan triggers can be unnoticeable to humans; ii) the trigger is a secret guarded by and known only to the attacker; iii) the stealthy, malicious behavior is evident only when the secret trigger is activated by the attacker; and iv) interpreting a learned model to identify a Trojan is a difficult task. In this paper, we propose DeepCleanse, a novel method to neutralize backdoor attacks by sanitizing inputs to Deep Neural Networks in the input domain of vision tasks. We sanitize each incoming input by analyzing the side-channel information inevitably leaked through the interpretability of a potentially Trojaned model, extracting the potentially affected regions, and then applying an inpainting method we propose to restore the input for the classification task. Through extensive experiments on two popular datasets (CIFAR10 and GTSRB), we demonstrate the efficacy of DeepCleanse against backdoor attacks, dramatically reducing attack success rates while preserving classification performance on benign or sanitized Trojan inputs. To the best of our knowledge, this is the first defense method capable of sanitizing Trojaned inputs; in particular, our approach requires neither ground-truth label data nor anomaly detection methods to accurately detect a Trojan.
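The sanitization pipeline the abstract describes — threshold an interpretability heatmap to locate the suspected trigger region, then inpaint that region before classification — can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation: it assumes a Grad-CAM-style heatmap is already available, and the `frac` threshold and per-channel mean-fill are placeholder simplifications of the paper's proposed inpainting method.

```python
import numpy as np

def trigger_mask(heatmap, frac=0.8):
    """Threshold an interpretability heatmap (e.g. a Grad-CAM map) to flag
    the most influential region -- the suspected Trojan trigger location.
    `frac` is a hypothetical knob, not a parameter from the paper."""
    return heatmap >= frac * heatmap.max()

def inpaint_mean(image, mask):
    """Crude stand-in for the paper's proposed inpainting step: fill the
    masked pixels with the per-channel mean of the unmasked pixels."""
    out = image.astype(float).copy()
    for c in range(out.shape[2]):
        chan = out[:, :, c]               # view into `out`
        chan[mask] = chan[~mask].mean()   # overwrite suspected trigger pixels
    return out

def sanitize(image, heatmap, frac=0.8):
    """Extract the suspected trigger region and restore the input
    before passing it to the classifier."""
    return inpaint_mean(image, trigger_mask(heatmap, frac))
```

On a benign input the heatmap concentrates on legitimate features, so only those pixels are smoothed; on a Trojaned input the trigger patch dominates the heatmap and is painted over, neutralizing the backdoor. A real system would substitute a learned inpainter for the mean-fill above.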


