BoMaNet: Boolean Masking of an Entire Neural Network

06/16/2020
by Anuj Dubey, et al.

Recent work on stealing machine learning (ML) models from inference engines with physical side-channel attacks warrants an urgent need for effective side-channel defenses. This work proposes the first fully-masked neural network inference engine design. Masking uses secure multi-party computation to split secrets into random shares and to decorrelate the statistical relation of secret-dependent computations from side-channels (e.g., the power draw). In this work, we construct secure hardware primitives to mask all the linear and non-linear operations in a neural network. We address the challenge of masking integer addition by converting each addition into a sequence of XOR and AND gates and by augmenting Trichina's secure Boolean masking scheme. We improve the traditional Trichina AND gates by adding pipelining elements for better glitch resistance, and we architect the whole design to sustain a throughput of one masked addition per cycle. We implement the proposed secure inference engine on a Xilinx Spartan-6 (XC6SLX75) FPGA. The results show that masking incurs an overhead of 3.5% in latency and 5.9× in area. Finally, we demonstrate the security of the masked design with 2M traces.
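To make the masking idea concrete, below is a minimal software sketch of the two building blocks the abstract describes: a two-share Trichina AND gate and a masked ripple-carry adder assembled from masked XOR and AND gates. The helper names (mask, trichina_and, masked_add) are illustrative assumptions, and a bitwise Python model stands in for the paper's pipelined FPGA logic; it demonstrates the share arithmetic only, not the glitch-resistant hardware scheduling.

import secrets

def mask(x, bits=8):
    """Split x into two Boolean shares such that x == s0 ^ s1."""
    s0 = secrets.randbits(bits)
    return (s0, x ^ s0)

def unmask(sh):
    """Recombine the shares."""
    return sh[0] ^ sh[1]

def masked_xor(a, b):
    """XOR is linear over GF(2), so it is applied share-wise for free."""
    return (a[0] ^ b[0], a[1] ^ b[1])

def trichina_and(a, b, bits=1):
    """Trichina-style masked AND: fresh randomness r becomes one output
    share; the other share absorbs the four cross products. The paper's
    hardware version separates these XOR stages with pipeline registers
    to suppress glitches; this model only keeps the evaluation order."""
    r = secrets.randbits(bits)
    z0 = ((((a[0] & b[0]) ^ r) ^ (a[0] & b[1])) ^ (a[1] & b[0])) ^ (a[1] & b[1])
    return (z0, r)

def masked_add(x, y, bits=8):
    """Masked ripple-carry adder: each integer addition is decomposed into
    XOR and AND gates (sum = a ^ b ^ c, carry = (a & b) ^ (c & (a ^ b))),
    and every AND runs through a Trichina gate with fresh randomness."""
    c = (0, 0)                    # shared carry, initially zero
    z0 = z1 = 0
    for i in range(bits):
        a = ((x[0] >> i) & 1, (x[1] >> i) & 1)
        b = ((y[0] >> i) & 1, (y[1] >> i) & 1)
        p = masked_xor(a, b)      # propagate: a ^ b
        s = masked_xor(p, c)      # sum bit:   a ^ b ^ c
        g = trichina_and(a, b)    # generate:  a & b
        t = trichina_and(p, c)    # p & c
        c = masked_xor(g, t)      # carry-out: (a & b) ^ (c & (a ^ b))
        z0 |= s[0] << i
        z1 |= s[1] << i
    return (z0, z1)

# Sanity check: the unmasked result matches plain modular addition.
x_sh, y_sh = mask(100), mask(57)
assert unmask(masked_add(x_sh, y_sh)) == (100 + 57) & 0xFF

Note that XOR propagates through the shares for free, so the cost of a masked adder is dominated by its AND gates; this is the structural reason the paper pipelines the Trichina AND stages to sustain one masked addition per cycle.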

