Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping

12/25/2021
by Behnam Ghavami, et al.

Recently, deep neural networks (DNNs) have been deployed in safety-critical systems such as autonomous vehicles and medical devices. Soon after, the vulnerability of DNNs to stealthy adversarial examples was revealed: inputs crafted by adding tiny perturbations to the originals can lead a DNN to misclassify. To improve the robustness of DNNs, several algorithmic countermeasures against adversarial examples have since been introduced. In this paper, we propose a new type of stealthy attack on protected DNNs that circumvents these algorithmic defenses: by smart bit flipping in DNN weights, we preserve the classification accuracy on clean inputs while misclassifying crafted inputs even in the presence of algorithmic countermeasures. To fool protected DNNs in a stealthy way, we introduce a novel method that efficiently finds their most vulnerable weights and flips the corresponding bits in hardware. Experimental results show that our stealthy attack succeeds against state-of-the-art algorithmic-protected DNNs.
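
As a toy illustration of the single-bit-flip mechanism the abstract describes (not the authors' actual attack), the following Python sketch flips one bit of an int8-quantized weight by XOR-ing its unsigned byte view; the helper name and the int8-quantization assumption are ours.

import numpy as np

def flip_bit(weights, index, bit):
    # Return a copy of `weights` (int8, two's complement) with one bit
    # of weights[index] flipped, via XOR on the unsigned byte view.
    flipped = weights.copy()
    byte_view = flipped.view(np.uint8)
    byte_view[index] ^= np.uint8(1 << bit)
    return flipped

w = np.array([23, -4, 7], dtype=np.int8)
# Flipping the most significant bit of w[0]: 23 (0x17) becomes -105 (0x97).
# A real attack would first rank candidate bits, e.g. by the magnitude of
# the loss gradient w.r.t. each weight, and flip only the most damaging ones.
print(flip_bit(w, 0, 7))   # -> [-105   -4    7]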

Related research

05/28/2023
Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness
Recent works found that deep neural networks (DNNs) can be fooled by adv...

04/12/2019
Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense
In image classification of deep learning, adversarial examples where inp...

03/30/2020
DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
Security of machine learning is increasingly becoming a major concern du...

11/05/2019
The Tale of Evil Twins: Adversarial Inputs versus Backdoored Models
Despite their tremendous success in a wide range of applications, deep n...

10/14/2021
An Optimization Perspective on Realizing Backdoor Injection Attacks on Deep Neural Networks in Hardware
State-of-the-art deep neural networks (DNNs) have been proven to be vuln...

04/19/2022
CorrGAN: Input Transformation Technique Against Natural Corruptions
Because of the increasing accuracy of Deep Neural Networks (DNNs) on dif...

06/03/2019
Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Deep neural networks (DNNs) have been shown to tolerate "brain damage": ...
