Joint Protection Scheme for Deep Neural Network Hardware Accelerators and Models

10/06/2022
by Jingbo Zhou, et al.

Deep neural networks (DNNs) are used in numerous image processing, object detection, and video analysis tasks and need to be implemented on hardware accelerators to achieve practical speed. Logic locking is one of the most popular methods for preventing chip counterfeiting. Nevertheless, to resist the powerful satisfiability (SAT) attack, existing logic-locking schemes must sacrifice the number of input patterns that produce wrong outputs under incorrect keys. Furthermore, DNN model inference is fault-tolerant, so applying a wrong key to these SAT-resistant logic-locking schemes may not degrade DNN accuracy at all. This renders previous SAT-resistant logic-locking schemes ineffective for protecting DNN accelerators. In addition, to prevent DNN models from being used illegally, the models need to be obfuscated by the designers before they are provided to end users. Previous obfuscation methods either require a long time to retrain the model or leak information about it. This paper proposes a joint protection scheme for DNN hardware accelerators and models. The DNN accelerator is modified using a hardware key (Hkey) and a model key (Mkey). Unlike previous logic locking, the Hkey, which protects the accelerator, does not affect the output when it is wrong; as a result, the SAT attack is effectively resisted. Instead, a wrong Hkey leads to substantial increases in memory accesses, inference time, and energy consumption, making the accelerator unusable. A correct Mkey recovers the DNN model obfuscated by the proposed method. Compared to previous model obfuscation schemes, the proposed method avoids model retraining and does not leak model information.
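The abstract does not detail the obfuscation mechanics, but the key property, that the correct Mkey deterministically recovers the model without retraining, can be illustrated with a minimal sketch. The sketch below assumes a hypothetical permutation-based obfuscation; the function names obfuscate_weights and recover_weights and the use of a key-seeded row permutation are illustrative assumptions, not the paper's actual construction.

import numpy as np

def obfuscate_weights(weights: np.ndarray, mkey: int) -> np.ndarray:
    # Hypothetical sketch: shuffle the weight rows with a
    # pseudorandom permutation seeded by the model key (Mkey).
    rng = np.random.default_rng(mkey)
    perm = rng.permutation(weights.shape[0])
    return weights[perm]

def recover_weights(obfuscated: np.ndarray, mkey: int) -> np.ndarray:
    # Re-derive the same permutation from the key and invert it;
    # only the correct Mkey restores the original weights.
    rng = np.random.default_rng(mkey)
    perm = rng.permutation(obfuscated.shape[0])
    return obfuscated[np.argsort(perm)]

w = np.arange(12, dtype=np.float32).reshape(4, 3)
assert np.array_equal(recover_weights(obfuscate_weights(w, mkey=42), mkey=42), w)
# A wrong key leaves the rows permuted, so the recovered model is useless.

Because recovery here is a deterministic inverse of a keyed transformation, no retraining or fine-tuning is involved, which mirrors the property the paper claims over prior obfuscation schemes.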


Related research

03/18/2019
Software-Defined Design Space Exploration for an Efficient AI Accelerator Architecture
Deep neural networks (DNNs) have been shown to outperform conventional m...

04/12/2023
Exploiting Logic Locking for a Neural Trojan Attack on Machine Learning Accelerators
Logic locking has been proposed to safeguard intellectual property (IP) ...

08/02/2023
Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator
DNN accelerators have been widely deployed in many scenarios to speed up...

04/08/2021
Algorithmic Obfuscation for LDPC Decoders
In order to protect intellectual property against untrusted foundry, man...

10/01/2019
Leveraging Model Interpretability and Stability to increase Model Robustness
State of the art Deep Neural Networks (DNN) can now achieve above human ...

11/07/2022
LOCAL: Low-Complex Mapping Algorithm for Spatial DNN Accelerators
Deep neural networks are a promising solution for applications that solv...

02/21/2023
Dynamic Resource Partitioning for Multi-Tenant Systolic Array Based DNN Accelerator
Deep neural networks (DNN) have become significant applications in both ...
