An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences

11/16/2021
by   Wei Guo, et al.

Together with impressive advances touching every aspect of our society, AI technology based on Deep Neural Networks (DNN) is bringing increasing security concerns. While attacks operating at test time have monopolised the initial attention of researchers, backdoor attacks, exploiting the possibility of corrupting DNN models by interfering with the training process, represent a further serious threat undermining the dependability of AI techniques. In a backdoor attack, the attacker corrupts the training data so as to induce an erroneous behaviour at test time. Test-time errors, however, are activated only in the presence of a triggering event corresponding to a properly crafted input sample. In this way, the corrupted network continues to work as expected for regular inputs, and the malicious behaviour occurs only when the attacker decides to activate the backdoor hidden within the network. In the last few years, backdoor attacks have been the subject of intense research activity focusing on both the development of new classes of attacks and the proposal of possible countermeasures. The goal of this overview paper is to review the works published so far, classifying the different types of attacks and defences proposed to date. The classification guiding the analysis is based on the amount of control that the attacker has over the training process, and on the capability of the defender to verify the integrity of the data used for training and to monitor the operations of the DNN at training and test time. As such, the proposed analysis is particularly suited to highlighting the strengths and weaknesses of both attacks and defences with reference to the application scenarios in which they operate.
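The training-set corruption described above can be illustrated with a minimal sketch. This is a hypothetical, simplified example (function and variable names are my own, not from the paper): a small pixel patch acts as the trigger, and the poisoned samples are relabelled with the attacker's target class, so a model trained on the corrupted set learns to associate the trigger with that class while behaving normally on clean inputs.

```python
import numpy as np

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    """Corrupt a fraction of a training set with a backdoor trigger.

    Hypothetical illustration of trigger-based data poisoning:
    stamp a 3x3 white square (the trigger) into the bottom-right
    corner of randomly selected images and relabel them as
    target_class (label poisoning).

    images: (N, H, W) float array with values in [0, 1]
    labels: (N,) int array of class labels
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # embed the trigger patch
    labels[idx] = target_class    # flip labels to the attacker's class
    return images, labels, idx

# tiny demo on synthetic data: poison 20% of 20 samples
X = np.zeros((20, 8, 8))
y = np.arange(20) % 5
Xp, yp, idx = poison_dataset(X, y, target_class=4, rate=0.2)
```

Attack variants surveyed in the paper differ precisely in how much of this pipeline the attacker controls; for instance, "clean-label" attacks embed a trigger without the label-flipping step shown here.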


