CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information

10/22/2018
by Lejla Batina, et al.

Machine learning has become mainstream across industries, and numerous examples have proven its validity for security applications. In this work, we investigate how to reverse engineer a neural network using only power side-channel information. To this end, we consider a multilayer perceptron as the machine learning architecture of choice and assume a non-invasive, eavesdropping attacker capable of measuring only passive side-channel leakages such as power consumption, electromagnetic radiation, and reaction time. We conduct all experiments on real data and commonly used neural network architectures in order to properly assess the applicability and extendability of these attacks. Practical results are shown on an ARM Cortex-M3 microcontroller. Our experiments show that a side-channel attacker is capable of obtaining the following information: the activation functions used in the architecture, the number of layers and the number of neurons per layer, the number of output classes, and the weights of the neural network. Thus, the attacker can effectively reverse engineer the network using side-channel information. Next, we show that once the attacker knows the neural network architecture, he/she can also recover the network's inputs with only a single-shot measurement. Finally, we discuss several mitigations one could employ to thwart such attacks.
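As a rough sketch of the weight-recovery idea described above, the Python snippet below applies textbook correlation power analysis (CPA) with a Hamming-weight leakage model to the multiplications a single neuron performs. The trace and input arrays, the fixed-point quantization of the product, and the candidate weight grid are illustrative assumptions, not the paper's exact procedure, which targets the floating-point multiplications executed on the Cortex-M3.

# Minimal CPA sketch (illustrative assumptions): the device is assumed to leak
# the Hamming weight of each weight*input product, and weights are searched over
# a hypothetical candidate grid. Not the paper's exact attack.
import numpy as np

def hamming_weight(x: np.ndarray) -> np.ndarray:
    """Hamming weight of each 32-bit value in x."""
    x = x.astype(np.uint32)
    return np.unpackbits(x.view(np.uint8).reshape(-1, 4), axis=1).sum(axis=1)

def cpa_recover_weight(traces: np.ndarray, inputs: np.ndarray,
                       candidates: np.ndarray) -> float:
    """
    traces:     (n_traces, n_samples) measured power traces
    inputs:     (n_traces,) known inputs fed to the target neuron
    candidates: (n_candidates,) grid of hypothetical weight values
    Returns the candidate whose predicted leakage best correlates with the traces.
    """
    best_corr, best_w = -1.0, None
    centered = traces - traces.mean(axis=0)
    for w in candidates:
        # Hypothetical intermediate: the product, quantized to 32-bit fixed point.
        product = np.round(w * inputs * 2**16).astype(np.int64) & 0xFFFFFFFF
        hypo = hamming_weight(product).astype(np.float64)
        hypo -= hypo.mean()
        # Pearson correlation of the hypothesis against every time sample.
        num = centered.T @ hypo
        den = np.sqrt((centered**2).sum(axis=0) * (hypo**2).sum())
        corr = np.max(np.abs(num / (den + 1e-12)))
        if corr > best_corr:
            best_corr, best_w = corr, w
    return best_w

A real attack would additionally require trace alignment and point-of-interest selection, and the procedure would be repeated for every neuron and layer of the recovered architecture.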
