MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection

10/29/2019
by Anuj Dubey, et al.

Differential Power Analysis (DPA) has been an active area of research for the past two decades, studying attacks that extract secret information from cryptographic implementations through power measurements, as well as the defenses against them. Unfortunately, research on power side-channels has so far predominantly focused on analyzing implementations of cryptographic algorithms such as AES, DES, RSA, and, recently, post-quantum cryptography primitives (e.g., lattices). Meanwhile, machine-learning, and in particular deep-learning, applications are becoming ubiquitous, with several scenarios in which the machine-learning model is intellectual property requiring confidentiality. Extending side-channel analysis to machine-learning model extraction, however, remains largely unexplored. This paper expands the DPA framework to neural-network classifiers. First, it demonstrates DPA attacks during inference that extract secret model parameters such as the weights and biases of a neural network. Second, it proposes the first countermeasures against these attacks, based on masking. The resulting design uses novel masked components such as masked adder trees for fully-connected layers and masked Rectified Linear Units (ReLUs) for activation functions. On a SAKURA-X FPGA board, experiments show that first-order DPA attacks on the unprotected implementation can succeed with only 200 traces, and that our protection increases latency and area cost by 2.8x and 2.3x, respectively.
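The masking countermeasure referenced in the abstract splits every secret-dependent intermediate value into randomized shares so that a first-order power measurement of any single share is statistically independent of the secret. A minimal first-order Boolean-masking sketch, assuming a 16-bit value (the function names and bit width here are illustrative and not taken from the paper's hardware design), could look like:

```python
import secrets

def mask(value: int, width: int = 16) -> tuple[int, int]:
    """Split a secret integer into two Boolean shares.

    Each share alone is uniformly random, so first-order leakage
    of a single share carries no information about the secret.
    """
    m = secrets.randbits(width)      # fresh random mask per use
    return value ^ m, m              # (masked value, mask)

def unmask(shares: tuple[int, int]) -> int:
    """Recombine the two shares to recover the secret."""
    return shares[0] ^ shares[1]

# Example: protect a hypothetical 16-bit network weight.
weight = 0x1A2B
shares = mask(weight)
assert unmask(shares) == weight
```

The hardware challenge the paper addresses is performing arithmetic (adder trees) and non-linear operations (ReLU) directly on such shares without ever recombining them, since recombination would reintroduce the first-order leakage.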

