One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training

08/12/2023
by   Jianshuo Dong, et al.

Deep neural networks (DNNs) are widely deployed on real-world devices, and concerns about their security have attracted great attention from researchers. Recently, a new weight modification attack called the bit-flip attack (BFA) was proposed, which exploits memory fault injection techniques such as Rowhammer to attack quantized models in the deployment stage. With only a few bit flips, the target model can be rendered as useless as a random guesser or even be implanted with malicious functionalities. In this work, we seek to further reduce the number of bit flips. We propose a training-assisted bit-flip attack, in which the adversary is involved in the training stage to build a high-risk model to release. This high-risk model, obtained together with a corresponding malicious model, behaves normally and can escape various detection methods. The results on benchmark datasets show that an adversary can easily convert this high-risk but normal model into a malicious one on the victim's side by flipping only one critical bit on average in the deployment stage. Moreover, our attack still poses a significant threat even when defenses are employed. The code for reproducing the main experiments is available at <https://github.com/jianshuod/TBA>.
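To illustrate why a single bit flip can be so damaging to a quantized model, the sketch below (an illustrative example, not the paper's TBA implementation; the function name `flip_bit` is ours) flips one bit of a signed 8-bit weight. Flipping the most significant bit of a two's-complement byte moves the weight across almost the entire int8 range.

```python
def flip_bit(weight_int8: int, bit_index: int) -> int:
    """Flip one bit of a signed 8-bit weight (two's complement) and
    return the resulting signed value."""
    raw = weight_int8 & 0xFF        # reinterpret as the raw stored byte
    raw ^= 1 << bit_index           # flip the chosen bit (0 = LSB, 7 = MSB)
    return raw - 256 if raw >= 128 else raw  # back to signed int8

w = 3                        # a small positive quantized weight
w_flipped = flip_bit(w, 7)   # flip the most significant (sign) bit
print(w, "->", w_flipped)    # 3 -> -125
```

A single flip of bit 7 turns the weight 3 into -125, a change of magnitude 128; flips in low-order bits perturb the weight only slightly, which is why BFA-style attacks search for a few critical high-impact bits.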


Related research

- 11/25/2021: Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
- 11/01/2021: ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack
- 07/27/2022: Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips
- 07/25/2022: Versatile Weight Attack via Flipping Limited Bits
- 02/21/2021: Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits
- 07/15/2021: Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
- 10/20/2021: Moiré Attack (MA): A New Potential Risk of Screen Photos
