Exploiting Logic Locking for a Neural Trojan Attack on Machine Learning Accelerators

04/12/2023
by Hongye Xu et al.

Logic locking has been proposed to safeguard intellectual property (IP) during chip fabrication. Logic locking techniques protect hardware IP by making a subset of combinational modules in a design dependent on a secret key that is withheld from untrusted parties. If an incorrect secret key is used, the locked modules produce a set of deterministic errors, restricting unauthorized use. A common target for logic locking is neural accelerators, especially as machine-learning-as-a-service becomes more prevalent. In this work, we explore how logic locking can be used to compromise the security of the very neural accelerator it protects. Specifically, we show how the deterministic errors caused by incorrect keys can be harnessed to produce neural-trojan-style backdoors. To do so, we first outline a motivational attack scenario in which a carefully chosen incorrect key, which we call a trojan key, produces misclassifications for an attacker-specified input class in a locked accelerator. We then develop a theoretically robust attack methodology to automatically identify trojan keys. To evaluate this attack, we launch it on several locked accelerators. In our largest benchmark accelerator, our attack identified a trojan key that caused a 74% decrease in classification accuracy for attacker-specified trigger inputs, while degrading accuracy by only 1.7% on average for other inputs.
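To illustrate the property the attack relies on, below is a minimal toy sketch in Python (our own illustration, not the paper's benchmark accelerators or attack methodology): a 4-bit ripple-carry adder "locked" with two XOR key gates. The correct key makes the gates transparent, while each incorrect key corrupts a fixed, deterministic subset of input pairs; a trojan-key attack searches among incorrect keys for one whose error pattern concentrates on attacker-chosen trigger inputs.

def locked_adder(a: int, b: int, key: int) -> int:
    """4-bit ripple-carry adder with XOR key gates on two internal wires.

    key is a 2-bit value. CORRECT_KEY makes the key gates transparent and
    reproduces a + b exactly; any other key inverts a wire feeding a
    carry-generate gate, corrupting a fixed subset of input pairs.
    """
    k0, k1 = key & 1, (key >> 1) & 1
    result, carry = 0, 0
    for i in range(4):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        p = ai ^ bi                    # carry-propagate signal
        a_gen = ai                     # wire feeding the carry-generate AND gate
        if i == 1:                     # key gate on stage 1 (transparent iff k0 == 1)
            a_gen ^= 1 ^ k0
        if i == 2:                     # key gate on stage 2 (transparent iff k1 == 1)
            a_gen ^= 1 ^ k1
        g = a_gen & bi                 # carry-generate signal (possibly corrupted)
        result |= (p ^ carry) << i     # sum bit for this stage
        carry = g | (carry & p)        # carry into the next stage
    return result | (carry << 4)

CORRECT_KEY = 0b11

if __name__ == "__main__":
    pairs = [(a, b) for a in range(16) for b in range(16)]
    # The correct key reproduces the adder exactly on all 256 input pairs.
    assert all(locked_adder(a, b, CORRECT_KEY) == a + b for a, b in pairs)
    # Each wrong key corrupts its own fixed (deterministic) subset of inputs;
    # a trojan-key attack would pick the key whose error pattern lands on an
    # attacker-chosen trigger class while sparing other inputs.
    for key in range(4):
        if key != CORRECT_KEY:
            bad = sum(locked_adder(a, b, key) != a + b for a, b in pairs)
            print(f"wrong key {key:#04b}: {bad}/256 input pairs corrupted")

In the paper's setting, the locked logic sits inside a neural accelerator and the search space spans many key bits, but the exploited property is the same: every incorrect key induces a fixed, repeatable error pattern that can be steered toward a trigger class.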

