A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks

09/28/2022
by   Kevin Hector, et al.

Deep neural network models are massively deployed on a wide variety of hardware platforms. This creates new attack vectors that significantly extend the standard attack surface extensively studied by the adversarial machine learning community. One of the first attacks that aims at drastically degrading the performance of a model by targeting its parameters (weights) stored in memory is the Bit-Flip Attack (BFA). In this work, we point out several evaluation challenges related to the BFA. First, the absence of an adversary's budget in the standard threat model is problematic, especially when dealing with physical attacks. Moreover, since the BFA exhibits critical variability, we discuss the influence of some training parameters and the importance of the model architecture. This work is the first to present the impact of the BFA against fully-connected architectures, which behave differently from convolutional neural networks. These results highlight the importance of defining robust and sound evaluation methodologies to properly assess the dangers of parameter-based attacks, as well as to measure the real level of robustness offered by a defense.
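To make the attack vector concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the core BFA primitive: flipping a single bit in the two's-complement representation of an 8-bit quantized weight. Flipping the most significant bit turns a small positive weight into a large negative one, which illustrates why very few flips can collapse a model's accuracy.

```python
def flip_bit(weight_int8: int, bit: int) -> int:
    """Flip one bit of an 8-bit quantized weight (two's complement).

    This is an illustrative sketch of the bit-flip primitive only; the
    actual BFA additionally uses a gradient-based search to pick which
    bit of which weight to flip.
    """
    raw = weight_int8 & 0xFF        # raw 8-bit pattern of the stored weight
    flipped = raw ^ (1 << bit)      # flip the chosen bit (0 = LSB, 7 = MSB)
    # Reinterpret the byte as a signed int8 value.
    return flipped - 256 if flipped >= 128 else flipped

# Flipping the MSB of the weight +3 yields -125: a tiny positive weight
# becomes strongly negative after a single memory fault.
print(flip_bit(3, 7))
```

The flip is its own inverse (applying it twice restores the original weight), which is also why defenses based on integrity checks can detect such faults by re-reading memory.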
