Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks

11/25/2021
by Xiangyu Qi, et al.

One major goal of the AI security community is to securely and reliably produce and deploy deep learning models for real-world applications. To this end, data-poisoning-based backdoor attacks on deep neural networks (DNNs) in the production (or training) stage, along with corresponding defenses, have been extensively explored in recent years. Ironically, backdoor attacks in the deployment stage, which often occur on unprofessional users' devices and are thus arguably far more threatening in real-world scenarios, have drawn much less attention from the community. We attribute this imbalance of vigilance to the weak practicality of existing deployment-stage backdoor attack algorithms and the insufficiency of real-world attack demonstrations. To fill this gap, in this work we study the realistic threat of deployment-stage backdoor attacks on DNNs. We base our study on a commonly used deployment-stage attack paradigm, the adversarial weight attack, in which adversaries selectively modify model weights to embed backdoors into deployed DNNs. To achieve realistic practicality, we propose the first gray-box, physically realizable weight attack algorithm for backdoor injection, the subnet replacement attack (SRA), which requires only architecture information about the victim model and can support physical triggers in the real world. We conduct extensive experimental simulations and system-level real-world attack demonstrations. Our results not only demonstrate the effectiveness and practicality of the proposed attack algorithm, but also reveal the practical risk of a novel type of computer virus that could spread widely and stealthily inject backdoors into DNN models on user devices. With our study, we call for more attention to the vulnerability of DNNs in the deployment stage.
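To make the subnet-replacement idea concrete, below is a minimal, hypothetical PyTorch sketch of the core weight-editing step. The toy victim model, the one-channel backdoor subnet, the function name subnet_replace, TARGET_CLASS, and the fixed +10.0 classifier weight are all illustrative assumptions, not the paper's implementation; SRA as described targets real architectures such as VGG and ResNet, using a narrow backdoor subnet trained offline to fire on a physical trigger.

```python
import torch
import torch.nn as nn

# Toy victim model (a stand-in for the real VGG/ResNet victims).
victim = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # indices [0], [1]
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),  # indices [2], [3]
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # indices [4], [5]
    nn.Linear(16, 10),                           # index  [6]
)

# Hypothetical one-channel-wide backdoor subnet, assumed to have been
# trained offline so its scalar output fires only on the trigger pattern.
backdoor = nn.Sequential(
    nn.Conv2d(3, 1, 3, padding=1), nn.ReLU(),
    nn.Conv2d(1, 1, 3, padding=1), nn.ReLU(),
)

TARGET_CLASS = 0  # illustrative target label

@torch.no_grad()
def subnet_replace(victim, backdoor, target_class):
    """Graft the backdoor subnet into channel 0 of the victim.

    Only the victim's architecture is needed to locate which weights to
    overwrite -- no training data, no knowledge of the remaining weights.
    """
    v1, v2, vfc = victim[0], victim[2], victim[6]
    b1, b2 = backdoor[0], backdoor[2]

    # First conv: channel 0 now computes the backdoor feature.
    v1.weight[0] = b1.weight[0]
    v1.bias[0] = b1.bias[0]

    # Second conv: disconnect channel 0 from all other channels
    # (zero the cross-weights), then wire it to itself via the backdoor.
    v2.weight[0] = 0.0      # no other channel feeds channel 0
    v2.weight[:, 0] = 0.0   # channel 0 feeds no other channel
    v2.weight[0, 0] = b2.weight[0, 0]
    v2.bias[0] = b2.bias[0]

    # Classifier: a large positive weight routes the backdoor channel's
    # activation to the target class, overriding the clean prediction
    # whenever the trigger is present.
    vfc.weight[:, 0] = 0.0
    vfc.weight[target_class, 0] = 10.0

subnet_replace(victim, backdoor, TARGET_CLASS)
```

The key property this sketch illustrates is why the attack is gray-box: the adversary only needs the architecture to know which slice of weights to overwrite, so the same malicious payload applies to any deployed copy of the model regardless of its trained parameter values.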


