Hermes Attack: Steal DNN Models with Lossless Inference Accuracy

06/23/2020
by Yuankun Zhu, et al.

Deep Neural Network (DNN) models have become among the most valuable enterprise assets due to their critical roles across a broad range of applications. With the trend toward privatized deployment of DNN models, leakage of these models is becoming increasingly serious and widespread. All existing model-extraction attacks can leak only parts of a targeted DNN model, with low accuracy or high overhead. In this paper, we first identify a new attack surface for leaking DNN models: unencrypted PCIe traffic. Based on this new attack surface, we propose a novel model-extraction attack, the Hermes Attack, which is the first attack to fully steal a whole victim DNN model. The stolen model has the same hyper-parameters and parameters as the original, and a semantically identical architecture. The attack is challenging due to the closed-source CUDA runtime, driver, and GPU internals, as well as undocumented data structures and the loss of some critical semantics in the PCIe traffic. In addition, the traffic comprises millions of PCIe packets with substantial noise and chaotic ordering. Our Hermes Attack addresses these issues through extensive reverse engineering and reliable semantic reconstruction, together with careful packet selection and order correction. We implement a prototype of the Hermes Attack and evaluate two sequential DNN models (i.e., MNIST and VGG) and one non-sequential DNN model (i.e., ResNet) on three NVIDIA GPU platforms: the NVIDIA GeForce GT 730, GeForce GTX 1080 Ti, and GeForce RTX 2080 Ti. The evaluation results indicate that our scheme can efficiently and completely reconstruct all of these models by running inference on just a single image. Evaluated with the CIFAR-10 test dataset of 10,000 images, the experimental results show that the stolen models have the same inference accuracy as the original ones (i.e., lossless inference accuracy).
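The packet-selection and order-correction steps mentioned in the abstract can be illustrated with a minimal sketch. The packet format, address range, and function names below are all hypothetical assumptions for illustration, not the paper's actual tooling: captured PCIe memory writes are modeled as small records, noise packets outside an assumed weight region are dropped, and the remainder are reordered by device address to rebuild a contiguous parameter buffer.

```python
# Hypothetical sketch of packet selection and order correction.
# The packet model (dicts with "addr"/"data") and the address range
# are illustrative assumptions, not the paper's real data structures.

WEIGHT_REGION = (0x1000, 0x2000)  # assumed GPU address range holding weights

def reconstruct_weights(packets):
    # 1. Packet selection: keep only writes into the weight region,
    #    discarding unrelated (noise) traffic.
    writes = [p for p in packets
              if WEIGHT_REGION[0] <= p["addr"] < WEIGHT_REGION[1]]
    # 2. Order correction: captured packets may arrive out of order,
    #    so sort by target address instead of trusting capture order.
    writes.sort(key=lambda p: p["addr"])
    # 3. Concatenate payloads into one contiguous byte buffer.
    return b"".join(p["data"] for p in writes)

# Toy capture: two out-of-order weight writes plus one noise packet.
capture = [
    {"addr": 0x1008, "data": b"\x03\x04"},
    {"addr": 0x3000, "data": b"\xff"},      # noise: outside weight region
    {"addr": 0x1000, "data": b"\x01\x02"},
]
print(reconstruct_weights(capture))  # b'\x01\x02\x03\x04'
```

The real attack additionally has to reconstruct semantics (which buffer holds which layer's parameters) from undocumented structures; this sketch only shows the filtering-and-reordering idea.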

research
06/01/2022

NeuroUnlock: Unlocking the Architecture of Obfuscated Deep Neural Networks

The advancements of deep neural networks (DNNs) have led to their deploy...
research
11/08/2021

DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories

Recent advancements of Deep Neural Networks (DNNs) have seen widespread ...
research
08/14/2018

Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures

Deep Neural Networks (DNNs) are fast becoming ubiquitous for their abili...
research
08/29/2022

Demystifying Arch-hints for Model Extraction: An Attack in Unified Memory System

The deep neural network (DNN) models are deemed confidential due to thei...
research
04/06/2023

EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles

Deep Neural Networks (DNNs) have become ubiquitous due to their performa...
research
09/21/2023

DeepTheft: Stealing DNN Model Architectures through Power Side Channel

Deep Neural Network (DNN) models are often deployed in resource-sharing ...
