An Optimization Perspective on Realizing Backdoor Injection Attacks on Deep Neural Networks in Hardware

10/14/2021
by M. Caner Tol, et al.

State-of-the-art deep neural networks (DNNs) have been proven vulnerable to adversarial manipulation and backdoor attacks. Backdoored models deviate from expected behavior on inputs with predefined triggers while retaining performance on clean data. Recent works focus on software simulation of backdoor injection during the inference phase by modifying network weights, which we find often unrealistic in practice due to hardware restrictions such as bit allocation in memory. In contrast, in this work we investigate the viability of backdoor injection attacks in real-life deployments of DNNs on hardware and address such practical issues in hardware implementation from a novel optimization perspective. We are motivated by the fact that the vulnerable memory locations are very rare, device-specific, and sparsely distributed. Consequently, we propose a novel network training algorithm based on constrained optimization for realistic backdoor injection attacks in hardware. By modifying parameters uniformly across the convolutional and fully-connected layers and optimizing the trigger pattern jointly, we achieve state-of-the-art attack performance with fewer bit flips. For instance, our method on a hardware-deployed ResNet-20 model trained on CIFAR-10 can achieve over 91% test accuracy while the backdoor is injected by flipping only 10 bits out of 2.2 million bits.
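To make the idea concrete, the following is a minimal, hypothetical sketch of the joint trigger-and-weight optimization described above, assuming PyTorch (>= 2.0 for torch.func.functional_call), a pretrained classifier, and a made-up `weight_mask` dictionary that stands in for the rare, device-specific memory locations an attacker can actually flip. The constraint handling (a simple clamp on the perturbations) and all names are illustrative assumptions, not the paper's implementation, which works with the deployed model's stored weights and the concrete bit flips they admit.

```python
# Minimal sketch (not the paper's implementation). Assumes PyTorch >= 2.0,
# images scaled to [0, 1], a pretrained `model`, a `clean_loader` yielding
# (image, label) batches, and a hypothetical `weight_mask` dict
# {param_name: bool tensor} marking the few weights the attacker may modify.
import itertools
import torch
import torch.nn.functional as F
from torch.func import functional_call


def inject_backdoor(model, clean_loader, target_class, weight_mask,
                    trigger_shape=(3, 8, 8), steps=500, lam=1.0,
                    eps=0.05, lr=1e-2, device="cpu"):
    model = model.to(device).eval()
    # Frozen copy of the original parameters.
    base = {n: p.detach().clone() for n, p in model.named_parameters()}

    # Trainable trigger patch, pasted onto the top-left corner of each image.
    trigger = torch.zeros(trigger_shape, device=device, requires_grad=True)
    # Trainable perturbations, restricted below to the masked weight entries.
    deltas = {n: torch.zeros_like(base[n], requires_grad=True)
              for n in weight_mask}

    opt = torch.optim.Adam([trigger, *deltas.values()], lr=lr)

    for _, (x, y) in zip(range(steps), itertools.cycle(clean_loader)):
        x, y = x.to(device), y.to(device)

        # Perturbed parameter set: only the masked entries differ from `base`.
        params = dict(base)
        for n, d in deltas.items():
            m = weight_mask[n].to(device=device, dtype=d.dtype)
            params[n] = base[n] + d * m

        # Term 1: behavior on clean inputs should be preserved.
        clean_logits = functional_call(model, params, (x,))
        loss_clean = F.cross_entropy(clean_logits, y)

        # Term 2: inputs carrying the trigger should map to the target class.
        c, h, w = trigger_shape
        x_trig = x.clone()
        x_trig[:, :c, :h, :w] = trigger
        trig_logits = functional_call(model, params, (x_trig,))
        loss_trig = F.cross_entropy(trig_logits,
                                    torch.full_like(y, target_class))

        loss = loss_clean + lam * loss_trig
        opt.zero_grad()
        loss.backward()
        opt.step()

        # Crude surrogate for the hardware constraint: keep the trigger a
        # valid image patch and the weight changes small enough to be
        # realizable by a few bit flips in the stored representation.
        with torch.no_grad():
            trigger.clamp_(0.0, 1.0)
            for d in deltas.values():
                d.clamp_(-eps, eps)

    return trigger.detach(), {n: d.detach() for n, d in deltas.items()}
```

In this sketch, loss_clean keeps the model's behavior on clean inputs intact while loss_trig drives triggered inputs toward the target class; restricting the deltas to the masked entries mirrors the requirement that only a handful of sparsely distributed weights can be altered in memory.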

Related research

09/10/2019 - TBT: Targeted Neural Network Attack with Bit Trojan
Security of modern Deep Neural Networks (DNNs) is under severe scrutiny ...

11/02/2021 - HASHTAG: Hash Signatures for Online Detection of Fault-Injection Attacks on Deep Neural Networks
We propose HASHTAG, the first framework that enables high-accuracy detec...

07/27/2022 - Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips
The security of deep neural networks (DNNs) has attracted increasing att...

12/25/2021 - Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping
Recently, deep neural networks (DNNs) have been deployed in safety-criti...

05/16/2020 - NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips
Due to their proven efficiency, machine-learning systems are deployed in...

09/28/2022 - A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks
Deep neural network models are massively deployed on a wide variety of h...

04/22/2022 - Data-Efficient Backdoor Attacks
Recent studies have proven that deep neural networks are vulnerable to b...
