Few-shot Backdoor Attacks via Neural Tangent Kernels

10/12/2022
by Jonathan Hayase, et al.

In a backdoor attack, an attacker injects corrupted examples into the training set. The goal of the attacker is to cause the final trained model to predict the attacker's desired target label when a predefined trigger is added to test inputs. Central to these attacks is the trade-off between the success rate of the attack and the number of corrupted training examples injected. We pose this attack as a novel bilevel optimization problem: construct strong poison examples that maximize the attack success rate of the trained model. We use neural tangent kernels to approximate the training dynamics of the model being attacked and automatically learn strong poison examples. We experiment on subclasses of CIFAR-10 and ImageNet with WideResNet-34 and ConvNeXt architectures under periodic and patch trigger attacks, and show that NTBA-designed poison examples achieve, for example, a comparable attack success rate with a 90 times smaller number of injected poison examples than the baseline. We provide an interpretation of the NTBA-designed attacks using the analysis of kernel linear regression. We further demonstrate a vulnerability of overparametrized deep neural networks, which is revealed by the shape of the neural tangent kernel.
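For intuition, below is a minimal sketch of the bilevel idea described in the abstract, assuming a kernel ridge regression surrogate for the inner training problem. The paper uses the empirical neural tangent kernel of the attacked architecture; the RBF kernel, the toy data, and every name here are hypothetical stand-ins chosen to keep the example self-contained and runnable, not the authors' implementation.

import jax
import jax.numpy as jnp

def rbf_kernel(X, Y, gamma=0.5):
    # Stand-in kernel: k(x, y) = exp(-gamma * ||x - y||^2).
    sq = jnp.sum(X**2, 1)[:, None] + jnp.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return jnp.exp(-gamma * sq)

def surrogate_predict(X_train, y_train, X_test, reg=1e-4):
    # Inner problem: kernel ridge regression approximates the trained model's outputs.
    K = rbf_kernel(X_train, X_train)
    alpha = jnp.linalg.solve(K + reg * jnp.eye(X_train.shape[0]), y_train)
    return rbf_kernel(X_test, X_train) @ alpha

def attack_loss(X_poison, X_clean, y_clean, X_trigger, y_target):
    # Outer objective: triggered test inputs should score as the target label.
    X = jnp.concatenate([X_clean, X_poison])
    y = jnp.concatenate([y_clean, jnp.full(X_poison.shape[0], y_target)])
    preds = surrogate_predict(X, y, X_trigger)
    return jnp.mean((preds - y_target) ** 2)

# Toy setup: 2-D inputs, labels in {-1, +1}, target label +1.
X_clean = jax.random.normal(jax.random.PRNGKey(0), (32, 2))
y_clean = jnp.sign(X_clean[:, 0])
X_trigger = jax.random.normal(jax.random.PRNGKey(1), (8, 2)) + 3.0  # triggered test inputs
X_poison = jax.random.normal(jax.random.PRNGKey(2), (4, 2))         # few-shot: 4 poisons

# Gradient descent directly on the poison inputs through the surrogate.
grad_fn = jax.jit(jax.grad(attack_loss))
for _ in range(200):
    X_poison = X_poison - 0.1 * grad_fn(X_poison, X_clean, y_clean, X_trigger, 1.0)

Because the surrogate training problem has a closed-form solution, the outer gradient flows through the inner optimum in one step; swapping the RBF stand-in for the network's empirical NTK is what ties this linearized approximation to the training dynamics of the wide networks studied in the paper.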


Related research

Adversarial Reprogramming of Neural Networks (06/28/2018)
Deep neural networks are susceptible to adversarial attacks. In computer...

Data-Efficient Backdoor Attacks (04/22/2022)
Recent studies have proven that deep neural networks are vulnerable to b...

Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks (09/13/2018)
Deep neural networks (DNNs) are known vulnerable to adversarial attacks....

Excess Capacity and Backdoor Poisoning (09/02/2021)
A backdoor data poisoning attack is an adversarial attack wherein the at...

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? (03/23/2021)
One of the most concerning threats for modern AI systems is data poisoni...

TrojViT: Trojan Insertion in Vision Transformers (08/27/2022)
Vision Transformers (ViTs) have demonstrated the state-of-the-art perfor...

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching (09/04/2020)
Data Poisoning attacks involve an attacker modifying training data to ma...
