Guarding Machine Learning Hardware Against Physical Side-Channel Attacks

09/01/2021
by Anuj Dubey et al.

Machine learning (ML) models can be trade secrets due to their development cost, so they need protection against malicious forms of reverse engineering such as IP piracy. With the growing shift of ML to edge devices, driven partly by performance and partly by privacy benefits, the models have become susceptible to so-called physical side-channel attacks. Because ML is a relatively new target compared to cryptography, it poses the problem of side-channel analysis in a context with little published literature. The gap between the burgeoning edge-based ML devices and the research on adequate defenses to provide side-channel security for them motivates our study. Our work develops and combines different flavors of side-channel defenses for ML models implemented in hardware. We propose and optimize the first defense based on Boolean masking. We first implement all the masked hardware blocks. We then present an adder optimization that reduces the area and latency overheads. Finally, we couple the masked design with a shuffle-based defense. We quantify that the area-delay overhead of masking ranges from 4.7× to 5.4× depending on the adder topology used, and we demonstrate first-order side-channel security using millions of power traces. Additionally, the shuffle countermeasure impedes a straightforward second-order attack on our first-order masked implementation.
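For readers unfamiliar with the two defenses named in the abstract, the sketch below illustrates them in plain Python rather than in the paper's hardware blocks: first-order Boolean masking splits every secret into two random shares and computes only on the shares (linear XOR operations share-wise, nonlinear AND operations through a randomness-refreshed gadget), while shuffling randomizes the order in which independent masked operations execute. The helper names (mask_value, masked_and, shuffled_masked_xor_accumulate) and the Trichina-style AND gadget are illustrative assumptions, not the paper's implementation, which instead masks adders and other inference-engine blocks in hardware.

```python
import secrets

WIDTH = 8
MASK = (1 << WIDTH) - 1

def mask_value(x):
    """Split an 8-bit secret x into two Boolean shares such that x == s0 ^ s1."""
    s0 = secrets.randbits(WIDTH)
    s1 = (x ^ s0) & MASK
    return s0, s1

def unmask(shares):
    """Recombine shares; in hardware this would happen only at a protected boundary."""
    s0, s1 = shares
    return (s0 ^ s1) & MASK

def masked_xor(a, b):
    """XOR is linear over GF(2), so it is computed share-wise with no fresh randomness."""
    return (a[0] ^ b[0]) & MASK, (a[1] ^ b[1]) & MASK

def masked_and(a, b):
    """First-order masked AND (Trichina-style gadget, an assumed example) using fresh randomness r."""
    r = secrets.randbits(WIDTH)
    c0 = ((a[0] & b[0]) ^ r) & MASK
    c1 = ((((a[0] & b[1]) ^ r) ^ (a[1] & b[0])) ^ (a[1] & b[1])) & MASK
    return c0, c1

def shuffled_masked_xor_accumulate(masked_values):
    """Shuffle countermeasure: combine masked values in a random order, so the point in
    time at which any particular secret is handled varies from run to run."""
    order = list(range(len(masked_values)))
    secrets.SystemRandom().shuffle(order)
    acc = (0, 0)
    for i in order:
        acc = masked_xor(acc, masked_values[i])
    return acc

if __name__ == "__main__":
    a, b = 0xA5, 0x3C
    ma, mb = mask_value(a), mask_value(b)
    # The masked gadget must stay functionally equivalent to the unmasked operation.
    assert unmask(masked_and(ma, mb)) == (a & b)
    vals = [mask_value(v) for v in (0x11, 0x22, 0x44, 0x88)]
    assert unmask(shuffled_masked_xor_accumulate(vals)) == 0x11 ^ 0x22 ^ 0x44 ^ 0x88
    print("masked AND and shuffled accumulation verified")
```

The software model only checks functional correctness; the security argument in the paper rests on how the shares and the shuffled schedule are realized in the hardware implementation.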


Related research

02/03/2023 - Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation
Side-channel attacks that use machine learning (ML) for signal analysis ...

06/26/2020 - WARDEN: Warranting Robustness Against Deception in Next-Generation Systems
Malicious users of a data center can reverse engineer power-management f...

06/08/2023 - Island-based Random Dynamic Voltage Scaling vs ML-Enhanced Power Side-Channel Attacks
In this paper, we describe and analyze an island-based random dynamic vo...

10/29/2019 - MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection
Differential Power Analysis (DPA) has been an active area of research fo...

06/16/2020 - BoMaNet: Boolean Masking of an Entire Neural Network
Recent work on stealing machine learning (ML) models from inference engi...

10/01/2019 - Stealthy Opaque Predicates in Hardware – Obfuscating Constant Expressions at Negligible Overhead
Opaque predicates are a well-established fundamental building block for ...
