An Overview of Laser Injection against Embedded Neural Network Models

05/04/2021
by Mathieu Dumont, et al.

For many IoT domains, Machine Learning, and more particularly Deep Learning, brings very efficient solutions to handle complex data and perform challenging, often critical, tasks. However, the deployment of models across a large variety of devices faces several obstacles related to trust and security. The latter is particularly critical given the demonstrations of severe flaws impacting the integrity, confidentiality and availability of neural network models. Moreover, the attack surface of such embedded systems cannot be reduced to abstract flaws but must encompass the physical threats related to the implementation of these models within hardware platforms (e.g., 32-bit microcontrollers). Among physical attacks, Fault Injection Analysis (FIA) is known to be very powerful, with a large spectrum of attack vectors. Most importantly, highly focused FIA techniques such as laser beam injection enable a very accurate evaluation of the vulnerabilities as well as the robustness of embedded systems. Here, we discuss how laser injection with state-of-the-art equipment, combined with theoretical evidence from Adversarial Machine Learning, highlights worrying threats against the integrity of deep learning inference and argues that joint efforts from the theoretical AI and Physical Security communities are an urgent need.
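The integrity threat mentioned above typically boils down to corrupting stored model parameters at inference time, e.g., a laser pulse flipping a single bit of a weight in memory. The following sketch (not taken from the paper; the toy neuron and bit positions are illustrative assumptions) shows how flipping one bit of a float32 weight can swing an inference result from a small value to an enormous one:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value, modeling a single laser-induced fault."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (faulted,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return faulted

# Toy "inference": a single neuron y = w * x (purely illustrative).
w, x = 0.5, 2.0
clean = w * x                      # 1.0

# Flipping the sign bit (bit 31) negates the weight.
sign_fault = flip_bit(w, 31) * x   # -1.0

# Flipping a high exponent bit (bit 30) blows the weight up to ~1.7e38,
# which would dominate any downstream activation or logit.
exp_fault = flip_bit(w, 30) * x

print(clean, sign_fault, exp_fault)
```

A single well-placed fault on an exponent bit is enough to saturate a layer's output, which is why parameter-level bit-flips are a realistic integrity attack on quantized or float models alike.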

