BackdoorBox: A Python Toolbox for Backdoor Learning

02/01/2023
by   Yiming Li, et al.

Third-party resources (e.g., samples, backbones, and pre-trained models) are usually involved in the training of deep neural networks (DNNs), which introduces backdoor attacks as a new training-phase threat. In general, backdoor attackers intend to implant hidden backdoors in DNNs so that the attacked DNNs behave normally on benign samples, whereas their predictions are maliciously changed to a pre-defined target label whenever the hidden backdoors are activated by attacker-specified trigger patterns. To facilitate the research and development of more secure training schemes and defenses, we design an open-sourced Python toolbox that implements representative and advanced backdoor attacks and defenses under a unified and flexible framework. Our toolbox has four important and promising characteristics: consistency, simplicity, flexibility, and co-development. It allows researchers and developers to easily implement and compare different methods on benchmark or local datasets. This Python toolbox, namely BackdoorBox, is available at <https://github.com/THUYimingLi/BackdoorBox>.
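To make the threat model concrete, the following is a minimal, illustrative sketch of the kind of poisoning-based attack the abstract describes (a BadNets-style trigger patch plus target relabeling). The function name, parameters, and defaults are hypothetical and do not reflect the BackdoorBox API; they only demonstrate the general idea of stamping a trigger on a fraction of training samples and flipping their labels to the attacker's target class.

```python
import numpy as np

def poison_batch(images, labels, target_label, poison_rate=0.1,
                 trigger_value=1.0, patch=3, seed=0):
    """Illustrative BadNets-style poisoning (not the BackdoorBox API).

    images: float array of shape (N, H, W); labels: int array of shape (N,).
    Returns poisoned copies of both, plus the indices that were poisoned.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a solid patch stamped in the bottom-right corner
    # of each selected image.
    images[idx, -patch:, -patch:] = trigger_value
    # Relabel the poisoned samples to the pre-defined target class.
    labels[idx] = target_label
    return images, labels, idx
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts `target_label` whenever the trigger patch is present, which is exactly the backdoor behavior the toolbox's attacks and defenses are built around.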


Related research

- Backdoor Attack with Sample-Specific Triggers (12/07/2020)
- Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection (09/27/2022)
- Defending Against Backdoor Attacks by Layer-wise Feature Analysis (02/24/2023)
- Backdoor Defense via Adaptively Splitting Poisoned Dataset (03/23/2023)
- Handcrafted Backdoors in Deep Neural Networks (06/08/2021)
- Untargeted Backdoor Attack against Object Detection (11/02/2022)
- Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs (03/23/2023)
