Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models

08/31/2023
by Kevin Hector, et al.

Model extraction emerges as a critical security threat, with attack vectors exploiting both algorithmic and implementation-based approaches. The main goal of an attacker is to steal as much information as possible about a protected victim model, so that they can mimic it with a substitute model, even with only limited access to similar training data. Recently, physical attacks such as fault injection have shown worrying efficiency against the integrity and confidentiality of embedded models. We focus on embedded deep neural network models on 32-bit microcontrollers, a widespread family of hardware platforms in IoT, and the use of a standard fault injection strategy - Safe Error Attack (SEA) - to perform a model extraction attack with an adversary having limited access to training data. Since the attack strongly depends on the input queries, we propose a black-box approach to craft a successful attack set. For a classical convolutional neural network, we successfully recover at least 90% of the most significant bits with about 1500 crafted inputs. This information enables the efficient training of a substitute model, with only 8% of the training dataset, that reaches high fidelity and a near-identical accuracy level to the victim model.
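The core safe-error test behind SEA can be illustrated in a few lines. The sketch below is a software simulation under toy assumptions (a single neuron with int8 weights; the helper names neuron_output, inject_stuck_at_0 and recover_bit are ours), not the paper's implementation: on a real target the bit-set fault would be injected physically into the microcontroller's memory. If forcing a weight bit to 0 leaves the output unchanged, the error was "safe" and the bit was already 0; otherwise it was 1. The sketch also hints at why query crafting matters: the weight under test must actually contribute to the output.

```python
import numpy as np

def neuron_output(w_int8: np.ndarray, x: np.ndarray) -> int:
    """Toy victim: a single neuron with 8-bit quantized weights."""
    return int(np.dot(w_int8.astype(np.int32), x))

def inject_stuck_at_0(w_int8: np.ndarray, idx: int, bit: int) -> np.ndarray:
    """Software stand-in for a physical fault clearing one stored weight bit."""
    faulty = w_int8.copy()
    faulty.view(np.uint8)[idx] &= np.uint8(~(1 << bit) & 0xFF)  # stuck-at-0
    return faulty

def recover_bit(w_int8: np.ndarray, x: np.ndarray, idx: int, bit: int) -> int:
    """Safe-error test: compare the faulted output to a fault-free reference."""
    reference = neuron_output(w_int8, x)
    faulted = neuron_output(inject_stuck_at_0(w_int8, idx, bit), x)
    return 0 if faulted == reference else 1  # unchanged output => bit was 0

rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=4, dtype=np.int8)  # secret weights
x = rng.integers(1, 8, size=4, dtype=np.int32)      # crafted non-zero query

# Recover the most significant bit (here, the sign bit) of every weight.
recovered = [recover_bit(w, x, i, bit=7) for i in range(len(w))]
actual = list((w.view(np.uint8) >> 7) & 1)
assert recovered == actual
print("recovered MSBs:", recovered)
```

Repeating this test across weights and bit positions, from the most significant bit downward, yields the bit-level recovery described above; the abstract reports that about 1500 crafted inputs suffice for a classical CNN.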

