Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering

11/09/2018
by Bryant Chen, et al.

While machine learning (ML) models are being increasingly trusted to make decisions in a widening range of areas, the safety of systems using such models has become a growing concern. In particular, ML models are often trained on data from potentially untrustworthy sources, providing adversaries with the opportunity to manipulate them by inserting carefully crafted samples into the training set. Recent work has shown that this type of attack, called a poisoning attack, allows adversaries to insert backdoors or trojans into the model, enabling malicious behavior with simple external backdoor triggers at inference time, given only a black-box perspective of the model itself. Detecting this type of attack is challenging because the unexpected behavior occurs only when a backdoor trigger, which is known only to the adversary, is present. Model users, whether they train on the data directly or obtain pre-trained models from a catalog, may be unable to guarantee the safe operation of their ML-based system. In this paper, we propose a novel approach to backdoor detection and removal for neural networks. Through extensive experimental results, we demonstrate its effectiveness for neural networks classifying text and images. To the best of our knowledge, this is the first methodology capable of detecting poisonous data crafted to insert backdoors and repairing the model that does not require a verified and trusted dataset.
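The approach named in the title, activation clustering, can be summarized as follows: for each class label, collect the activations of the network's last hidden layer over the training samples assigned to that label, project them to a low-dimensional space, and cluster them into two groups; poisoned samples tend to form a separate, typically smaller cluster. The sketch below is a minimal illustration under those assumptions, using Keras and scikit-learn. The ICA dimensionality, the fixed cluster count of two, and the size/silhouette thresholds are illustrative choices, not the paper's exact settings, and the helper names are hypothetical.

```python
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA
from sklearn.metrics import silhouette_score


def last_hidden_activations(model, x):
    # Sub-model that outputs the last hidden layer; assumes a Keras model
    # whose final layer is the classification (softmax) layer.
    extractor = tf.keras.Model(model.inputs, model.layers[-2].output)
    acts = extractor.predict(x, verbose=0)
    return acts.reshape(len(acts), -1)


def detect_poisoned_class(model, x_class, n_components=10,
                          size_threshold=0.25, silhouette_threshold=0.5):
    # 1. Activations of the last hidden layer for all samples of one label.
    acts = last_hidden_activations(model, x_class)
    # 2. Reduce dimensionality (ICA here) so that clustering is meaningful.
    reduced = FastICA(n_components=n_components, max_iter=1000).fit_transform(acts)
    # 3. Two-means clustering: clean vs. potentially poisoned samples.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
    # 4. A small, well-separated cluster is evidence of poisoning
    #    (thresholds are illustrative assumptions, not the paper's values).
    sizes = np.bincount(labels, minlength=2) / len(labels)
    separation = silhouette_score(reduced, labels)
    flagged = labels == sizes.argmin()  # boolean mask over x_class
    is_suspicious = sizes.min() < size_threshold and separation > silhouette_threshold
    return is_suspicious, flagged
```

In a full pipeline, one would run this detection per class label and, for the removal step the abstract mentions, retrain the model after excluding the flagged samples.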

research
08/09/2019

DeepCleanse: A Black-box Input Sanitization Framework Against Backdoor Attacks on Deep Neural Networks

As Machine Learning, especially Deep Learning, has been increasingly use...
research
02/18/2023

RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks

As machine learning (ML) systems are being increasingly employed in the ...
research
03/26/2020

Obliviousness Makes Poisoning Adversaries Weaker

Poisoning attacks have emerged as a significant security threat to machi...
research
05/18/2022

Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution

Due to cost and time-to-market constraints, many industries outsource th...
research
07/11/2020

Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification

It has been proved that deep neural networks are facing a new threat cal...
research
11/02/2018

ISA4ML: Training Data-Unaware Imperceptible Security Attacks on Machine Learning Modules of Autonomous Vehicles

Due to big data analysis ability, machine learning (ML) algorithms are b...
research
09/07/2022

Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots

Chatbots are used in many applications, e.g., automated agents, smart ho...