DeepCleanse: A Black-box Input Sanitization Framework Against Backdoor Attacks on Deep Neural Networks

08/09/2019
by Bao Gia Doan, et al.

As machine learning, especially deep learning, is increasingly trusted to make decisions across a wide range of areas, the safety and security of these systems have become a growing concern. Deep learning systems have recently been shown to be vulnerable to backdoor or trojan attacks, in which the model is trained on adversarially manipulated data. The goal of such an attack is to embed a backdoor into the deep learning model that, when triggered by a specific pattern designed by the attacker, changes the model's decision to a chosen target label. Detecting this kind of attack is challenging: the backdoor is stealthy because the malicious behavior occurs only when the attacker's trigger is present, while at all other times the poisoned network operates normally, with accuracy nearly identical to that of a benign model. Furthermore, with the recent trend of relying on pre-trained models and of outsourcing training due to a lack of computational resources, backdoor attacks are becoming increasingly relevant. In this paper, we propose DeepCleanse, a novel method for detecting and eliminating backdoor attacks on deep neural networks in the input domain of computer vision. Through extensive experiments, we demonstrate the effectiveness of DeepCleanse against advanced backdoor attacks across multiple vision applications; our method eliminates the trojan effect from backdoored deep neural networks. To the best of our knowledge, this is the first black-box backdoor defense capable of sanitizing and restoring trojaned inputs that requires neither costly ground-truth labeled data nor anomaly detection.
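The abstract does not detail the sanitization mechanism, so the sketch below is only a hedged illustration of the threat model and defense idea it describes: a small trigger patch stamped on an input flips a backdoored model's output to the attacker's target, and an input-level sanitizer tries to locate and neutralize the patch before inference. The occlusion-based localization and mean-fill restoration here are illustrative assumptions, not the paper's actual algorithm, and `toy_model` is a hypothetical stand-in for a backdoored classifier.

```python
# Illustrative sketch only: trigger stamping and a toy input sanitizer.
# The localization (occlusion saliency) and restoration (mean fill) steps
# are assumptions for illustration, not DeepCleanse's method.
import numpy as np

rng = np.random.default_rng(0)

def stamp_trigger(image, trigger, top_left):
    """Attacker-side: overlay a small trigger patch onto a clean image."""
    out = image.copy()
    r, c = top_left
    h, w = trigger.shape
    out[r:r + h, c:c + w] = trigger
    return out

def occlusion_saliency(model, image, patch=8):
    """Defender-side (assumed): score each patch by how much occluding it
    changes the model's output; a trigger region should score highest."""
    base = model(image)
    H, W = image.shape
    scores = np.zeros((H // patch, W // patch))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            scores[i // patch, j // patch] = abs(base - model(occluded))
    return scores

def sanitize(image, scores, patch=8):
    """Mask the most suspicious patch and fill it from global context
    (a crude stand-in for the restoration/inpainting step)."""
    i, j = np.unravel_index(scores.argmax(), scores.shape)
    out = image.copy()
    out[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = image.mean()
    return out

# Hypothetical "backdoored model": fires on a bright square in the
# bottom-right corner; a high output stands for the target label.
def toy_model(image):
    return image[-8:, -8:].mean()

clean = rng.random((32, 32))
trigger = np.ones((8, 8))  # all-white patch as the attacker's pattern
poisoned = stamp_trigger(clean, trigger, (24, 24))

print("clean score:    ", toy_model(clean))      # low: benign behavior
print("poisoned score: ", toy_model(poisoned))   # high: backdoor fires
print("sanitized score:", toy_model(sanitize(poisoned, occlusion_saliency(toy_model, poisoned))))
```

In this toy setting the sanitizer only queries the model's output, mirroring the black-box constraint the abstract emphasizes; it needs neither labeled data nor a separate anomaly detector.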


