A new Backdoor Attack in CNNs by training set corruption without label poisoning

02/12/2019
by   Mauro Barni, et al.

Backdoor attacks against CNNs represent a new threat to deep learning systems, owing to the possibility of corrupting the training set so as to induce an incorrect behaviour at test time. To prevent the trainer from recognising the presence of corrupted samples, the corruption of the training set must be as stealthy as possible. Previous works have focused on the stealthiness of the perturbation injected into the training samples, yet they all assume that the labels of the corrupted samples are also poisoned. This greatly reduces the stealthiness of the attack, since samples whose content does not agree with their label can be identified by visual inspection of the training set or by running a pre-classification step. In this paper we present a new backdoor attack that requires no label poisoning. Since the attack works by corrupting only samples of the target class, it has the additional advantage of not needing to identify beforehand the class of the samples to be attacked at test time. Results obtained on the MNIST digit recognition task and a traffic sign classification task show that backdoor attacks without label poisoning are indeed possible, raising a new alarm regarding the use of deep learning in security-critical applications.
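The core idea, corrupting only training samples that already belong to the target class so their labels stay honest, can be sketched as follows. This is a minimal illustration assuming a sinusoidal overlay signal; the signal shape and the parameters `delta`, `freq`, and `alpha` are illustrative choices for this sketch, not the paper's exact settings.

```python
import numpy as np

def sinusoidal_signal(rows, cols, delta=20.0, freq=6.0):
    # Horizontal sinusoidal backdoor signal: v(i, j) = delta * sin(2*pi*j*freq / cols).
    # delta controls strength (stealthiness), freq the number of periods.
    j = np.arange(cols)
    row = delta * np.sin(2 * np.pi * j * freq / cols)
    return np.tile(row, (rows, 1))

def poison_target_class(images, labels, target_class, alpha=1.0,
                        delta=20.0, freq=6.0):
    # Superimpose the backdoor signal on a fraction alpha of the
    # target-class images. Labels are left untouched: only samples that
    # already belong to the target class are perturbed, so no label
    # poisoning is needed and content always agrees with the label.
    images = images.astype(np.float64).copy()
    rows, cols = images.shape[1], images.shape[2]
    v = sinusoidal_signal(rows, cols, delta, freq)
    idx = np.flatnonzero(labels == target_class)
    n_poison = int(alpha * idx.size)
    for i in idx[:n_poison]:
        images[i] = np.clip(images[i] + v, 0, 255)
    return images

# Toy usage: ten random 28x28 grayscale "MNIST-like" images.
rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(10, 28, 28)).astype(np.float64)
y = np.array([3, 1, 3, 0, 3, 2, 3, 4, 5, 3])
Xp = poison_target_class(X, y, target_class=3, alpha=1.0)
```

At test time, the attacker superimposes the same signal on any input to push it towards the target class; because only target-class samples were corrupted during training, the attacker never needs to know the true class of the sample being attacked.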

