Launching a Robust Backdoor Attack under Capability Constrained Scenarios

04/21/2023
by Ming Yi, et al.

As deep neural networks continue to be used in critical domains, concerns over their security have emerged. Deep learning models are vulnerable to backdoor attacks due to their lack of transparency: a poisoned backdoor model may behave normally on routine inputs but exhibit malicious behavior when the input contains a trigger. Current research on backdoor attacks focuses on improving the stealthiness of triggers, and most approaches require strong attacker capabilities, such as knowledge of the model architecture or control over the training process. Such attacks are often impractical, because in most cases the attacker's capabilities are limited. Additionally, the robustness of the backdoor itself has not received adequate attention. For instance, model distillation is commonly used to compress models as parameter counts grow, and most previous backdoor attacks fail after distillation; likewise, image augmentation operations can destroy the trigger and thus disable the backdoor. This study explores black-box backdoor attacks under capability constraints. An attacker can carry out such attacks by acting as either an image annotator or an image provider, without involvement in the training process or knowledge of the target model's architecture. Through the design of the backdoor trigger, our attack remains effective after model distillation and image augmentation, making it more threatening and practical. Our experimental results demonstrate that our method achieves a high attack success rate in black-box scenarios and evades state-of-the-art backdoor defenses.
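
As a concrete point of reference, the data-poisoning setting described above (an attacker who only supplies or annotates training images) can be sketched as follows. This is a generic illustration, not the paper's actual trigger design: the blended-patch trigger, the poisoning rate, and all function names are assumptions made for demonstration only.

```python
# Minimal sketch of black-box data poisoning by an image provider/annotator.
# The blended trigger, poisoning rate, and names below are illustrative
# assumptions, not the trigger construction proposed in the paper.
import numpy as np

def apply_trigger(image: np.ndarray, trigger: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Blend a trigger pattern into an image (uint8 HxWxC, values in [0, 255])."""
    blended = (1 - alpha) * image.astype(np.float32) + alpha * trigger.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   trigger: np.ndarray, target_label: int,
                   poison_rate: float = 0.05, seed: int = 0):
    """Return a copy of the dataset with a small fraction of samples poisoned.

    The attacker only supplies (or relabels) training data; no access to the
    model architecture or the training loop is assumed.
    """
    rng = np.random.default_rng(seed)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        poisoned_images[i] = apply_trigger(poisoned_images[i], trigger)
        poisoned_labels[i] = target_label  # flip label to the attacker's target class
    return poisoned_images, poisoned_labels
```

In this setting, whether the backdoor survives image augmentation and model distillation depends entirely on how the trigger pattern itself is constructed, which is the focus of the paper.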
