Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits

02/21/2021
by   Jiawang Bai, et al.

To explore the vulnerability of deep neural networks (DNNs), many attack paradigms have been well studied, such as the poisoning-based backdoor attack in the training stage and the adversarial attack in the inference stage. In this paper, we study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes. Specifically, our goal is to misclassify a specific sample into a target class without any sample modification, while not significantly reducing the prediction accuracy on other samples, so as to ensure stealthiness. To this end, we formulate this problem as a binary integer programming (BIP) problem, since the parameters are stored as binary bits (i.e., 0 and 1) in memory. By leveraging recent advances in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM). Consequently, the critical bits to flip can be determined through optimization, rather than by a heuristic strategy. Extensive experiments demonstrate the superiority of our method in attacking DNNs.
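The attack's premise is that DNN weights live in memory as raw bits, so flipping even a single bit of a quantized weight can change its value drastically. A minimal illustration for signed 8-bit (two's-complement) quantized weights, the common storage format for deployed models; the helper name below is ours, not from the paper:

```python
def flip_bit(weight: int, bit_index: int) -> int:
    """Flip one bit of a signed 8-bit (two's-complement) weight.

    The weight is viewed as the raw byte actually stored in memory,
    the chosen bit is XOR-ed, and the byte is reinterpreted as signed.
    """
    raw = (weight & 0xFF) ^ (1 << bit_index)   # flip the bit in the stored byte
    return raw - 256 if raw >= 128 else raw    # reinterpret as two's complement

# Flipping the most significant (sign) bit moves the weight far away,
# while flipping the lowest bit barely changes it:
print(flip_bit(23, 7))  # 0b00010111 -> 0b10010111, i.e. -105
print(flip_bit(23, 0))  # 0b00010111 -> 0b00010110, i.e. 22
```

This asymmetry is why the number of flipped bits must be tightly limited: a handful of well-chosen high-order flips suffices to steer one sample's prediction, while too many flips would degrade accuracy on other samples and break the stealthiness requirement.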

Related research

Versatile Weight Attack via Flipping Limited Bits (07/25/2022)
TBT: Targeted Neural Network Attack with Bit Trojan (09/10/2019)
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting (07/15/2021)
ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack (11/01/2021)
Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks (05/28/2019)
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks (11/25/2021)
One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training (08/12/2023)
