NeuroUnlock: Unlocking the Architecture of Obfuscated Deep Neural Networks

06/01/2022
by Mahya Morid Ahmadi, et al.

The advancements of deep neural networks (DNNs) have led to their deployment in diverse settings, including safety and security-critical applications. As a result, the characteristics of these models have become sensitive intellectual property that requires protection from malicious users. Extracting the architecture of a DNN through leaky side-channels (e.g., memory access) allows adversaries to (i) clone the model and (ii) craft adversarial attacks. DNN obfuscation thwarts side-channel-based architecture stealing (SCAS) attacks by altering the run-time traces of a given DNN while preserving its functionality. In this work, we expose the vulnerability of state-of-the-art DNN obfuscation methods to these attacks. We present NeuroUnlock, a novel SCAS attack against obfuscated DNNs. NeuroUnlock employs a sequence-to-sequence model that learns the obfuscation procedure and automatically reverts it, thereby recovering the original DNN architecture. We demonstrate the effectiveness of NeuroUnlock by recovering the architecture of 200 randomly generated and obfuscated DNNs running on the Nvidia RTX 2080 Ti graphics processing unit (GPU). Moreover, NeuroUnlock recovers the architecture of various other obfuscated DNNs, such as the VGG-11, VGG-13, ResNet-20, and ResNet-32 networks. After recovering the architecture, NeuroUnlock automatically builds a near-equivalent DNN with only a 1.4% drop in testing accuracy. We further show that launching a subsequent adversarial attack on the recovered DNNs boosts the success rate of the adversarial attack by 51.7% compared to launching it on the obfuscated versions. Additionally, we propose a novel methodology for DNN obfuscation, ReDLock, which eradicates the deterministic nature of the obfuscation and achieves 2.16x more resilience to the NeuroUnlock attack. We release NeuroUnlock and ReDLock as open-source frameworks.
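The abstract frames de-obfuscation as sequence-to-sequence translation over layer-level traces. The following is a minimal, illustrative PyTorch sketch of that idea; the `Seq2SeqDeobfuscator` class, the layer-token vocabulary, and all dimensions are hypothetical stand-ins chosen for clarity, not the paper's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2SeqDeobfuscator(nn.Module):
    """Toy LSTM encoder-decoder mapping an obfuscated layer-token
    sequence to a candidate original layer-token sequence."""
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, obf_tokens, tgt_tokens):
        # Encode the obfuscated run-time trace of the victim DNN.
        _, state = self.encoder(self.embed(obf_tokens))
        # Decode the original layer sequence with teacher forcing.
        dec_out, _ = self.decoder(self.embed(tgt_tokens), state)
        return self.out(dec_out)  # per-step logits over layer tokens

# Hypothetical vocabulary of layer tokens, e.g. {"conv3x3": 1, "relu": 2, ...}.
VOCAB_SIZE = 32
model = Seq2SeqDeobfuscator(VOCAB_SIZE)

obf  = torch.randint(0, VOCAB_SIZE, (8, 24))  # obfuscated traces (batch, time)
orig = torch.randint(0, VOCAB_SIZE, (8, 18))  # ground-truth original layers

logits = model(obf, orig)  # shape: (8, 18, VOCAB_SIZE)
loss = F.cross_entropy(logits.reshape(-1, VOCAB_SIZE), orig.reshape(-1))
loss.backward()
print(f"toy training loss: {loss.item():.3f}")
```

In the paper's setting, training pairs would come from running a known obfuscation tool on many randomly generated DNNs (the abstract mentions 200), so the attacker can learn to invert a deterministic transformation. ReDLock's countermeasure is to randomize that transformation. A toy contrast, with transformation names that are assumptions rather than the paper's actual operations:

```python
import random

# Assumed set of functionality-preserving layer transformations; the real
# ReDLock operations are in the paper's open-source release, not shown here.
TRANSFORMS = ("widen_and_project", "split_into_two_layers", "add_identity_branch")

def obfuscate_deterministic(layers):
    # Fixed rule: the same architecture always yields the same obfuscated
    # trace, which is exactly what a seq2seq attacker can learn to invert.
    return [(layer, TRANSFORMS[i % len(TRANSFORMS)]) for i, layer in enumerate(layers)]

def obfuscate_redlock_style(layers, rng=random):
    # Randomized choice per layer: repeated obfuscations of the same DNN
    # produce different traces, removing the fixed mapping to learn.
    return [(layer, rng.choice(TRANSFORMS)) for layer in layers]

arch = ["conv3x3", "relu", "conv3x3", "pool", "fc"]
print(obfuscate_deterministic(arch))
print(obfuscate_redlock_style(arch))
```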

Related research

03/12/2023 · DNN-Alias: Deep Neural Network Protection Against Side-Channel Attacks via Layer Balancing
Extracting the architecture of layers of a given deep neural network (DN...

06/23/2020 · Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
Deep Neural Networks (DNNs) models become one of the most valuable enter...

04/06/2023 · EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles
Deep Neural Networks (DNNs) have become ubiquitous due to their performa...

09/21/2020 · ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles
Deep neural networks (DNNs) have become the essential components for var...

11/25/2021 · Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
One major goal of the AI security community is to securely and reliably ...

11/09/2021 · QUDOS: Quorum-Based Cloud-Edge Distributed DNNs for Security Enhanced Industry 4.0
Distributed machine learning algorithms that employ Deep Neural Networks...

09/19/2020 · It's Raining Cats or Dogs? Adversarial Rain Attack on DNN Perception
Rain is a common phenomenon in nature and an essential factor for many d...
