Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation

04/21/2022
by Jun Xia, et al.

Due to the prosperity of Artificial Intelligence (AI) techniques, more and more backdoors are designed by adversaries to attack Deep Neural Networks (DNNs). Although the state-of-the-art method Neural Attention Distillation (NAD) can effectively erase backdoor triggers from DNNs, it still suffers from a non-negligible Attack Success Rate (ASR) together with lowered classification ACCuracy (ACC), since NAD focuses on backdoor defense using attention features (i.e., attention maps) of the same order. In this paper, we introduce a novel backdoor defense framework named Attention Relation Graph Distillation (ARGD), which fully explores the correlation among attention features of different orders using our proposed Attention Relation Graphs (ARGs). Based on the alignment of ARGs between the teacher and student models during knowledge distillation, ARGD can eradicate more backdoor triggers than NAD. Comprehensive experimental results show that, against six latest backdoor attacks, ARGD outperforms NAD by up to 94.85% reduction in ASR, while ACC can be improved by up to 3.23%.
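To make the idea concrete, the sketch below illustrates one plausible form of ARG-style distillation: attention maps are extracted from feature maps at several depths, a fully connected relation graph is built whose edges are cross-layer similarities, and both node features and edge weights are aligned between teacher and student. This is a minimal sketch under stated assumptions, not the authors' implementation; the helpers attention_map, relation_graph, and arg_distillation_loss, the squared-activation attention, and the cosine-similarity edges are all illustrative choices. Matching the edges (cross-layer correlations) in addition to the per-layer maps is what distinguishes this from same-order attention distillation such as NAD.

```python
import torch
import torch.nn.functional as F

def attention_map(feature):
    # Collapse channels into a spatial attention map: sum of squared
    # activations, flattened and L2-normalized (a common choice in
    # attention-based distillation; the paper's construction may differ).
    a = feature.pow(2).sum(dim=1)   # (B, C, H, W) -> (B, H, W)
    a = a.flatten(1)                # (B, H*W)
    return F.normalize(a, dim=1)

def relation_graph(attention_maps, size=64):
    # Build a fully connected relation graph over layers: nodes are
    # attention maps resampled to a common length, and edge (i, j) is
    # the cosine similarity between the maps of layers i and j.
    nodes = [F.adaptive_avg_pool1d(a.unsqueeze(1), size).squeeze(1)
             for a in attention_maps]
    edges = torch.stack([
        F.cosine_similarity(nodes[i], nodes[j], dim=1)
        for i in range(len(nodes)) for j in range(len(nodes))
    ], dim=1)                       # (B, L*L) for L layers
    return nodes, edges

def arg_distillation_loss(teacher_feats, student_feats, beta=1.0):
    # Align node features (per-layer attention) and edge weights
    # (cross-layer correlations) between teacher and student.
    # Assumes both lists hold features from the same number of layers.
    with torch.no_grad():  # teacher provides fixed targets
        t_nodes, t_edges = relation_graph(
            [attention_map(f) for f in teacher_feats])
    s_nodes, s_edges = relation_graph(
        [attention_map(f) for f in student_feats])
    node_loss = sum(F.mse_loss(s, t) for s, t in zip(s_nodes, t_nodes))
    edge_loss = F.mse_loss(s_edges, t_edges)
    return node_loss + beta * edge_loss

# Usage (hypothetical): teacher_feats / student_feats are intermediate
# feature maps hooked from corresponding blocks of the two networks:
#   loss = ce_loss + alpha * arg_distillation_loss(teacher_feats, student_feats)
```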

Related research

01/15/2021
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
Deep neural networks (DNNs) are known vulnerable to backdoor attacks, a ...

05/09/2022
Model-Contrastive Learning for Backdoor Defense
Along with the popularity of Artificial Intelligence (AI) techniques, an...

05/10/2021
KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation
Knowledge distillation (KD) has recently emerged as an efficacious schem...

11/30/2022
T2G-Former: Organizing Tabular Features into Relation Graphs Promotes Heterogeneous Feature Interaction
Recent development of deep neural networks (DNNs) for tabular learning h...

06/26/2022
Knowledge Distillation with Representative Teacher Keys Based on Attention Mechanism for Image Classification Model Compression
With the improvement of AI chips (e.g., GPU, TPU, and NPU) and the fast ...

12/05/2020
Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression
Deep neural networks (DNNs) have been extremely successful in solving ma...

06/08/2019
Strategies to architect AI Safety: Defense to guard AI from Adversaries
The impact of designing for security of AI is critical for humanity in t...
