DeepObfuscation: Securing the Structure of Convolutional Neural Networks via Knowledge Distillation

06/27/2018
by Hui Xu, et al.

This paper investigates the piracy problem of deep learning models. Designing and training a well-performing model is generally expensive, yet once the model is released, attackers may reverse engineer it and pirate its design. This paper therefore proposes deep learning obfuscation, which aims to obstruct attackers from pirating a deep learning model. In particular, we focus on obfuscating convolutional neural networks (CNNs), a widely employed type of deep learning architecture for image recognition. Our approach obfuscates a CNN model by simulating its feature extractor with a shallow, sequential convolutional block. To this end, we employ a recursive simulation method and a joint training method to train the simulation network. The joint training method leverages both the intermediate knowledge generated by the feature extractor and the data labels, so that we can obtain an obfuscated model without accuracy loss. We have verified the feasibility of our approach with three prevalent CNNs, i.e., GoogLeNet, ResNet, and DenseNet. Although these networks are very deep, with tens or hundreds of layers, we can simulate them with a shallow network of only five or seven convolutional layers, and the obfuscated models are even more efficient than the originals. Our obfuscation approach is effective at protecting the critical structure of a deep learning model from being exposed to attackers. Moreover, it can also thwart attackers from pirating the model with transfer learning or incremental learning techniques, because the shallow simulation network has poor learning ability. To the best of our knowledge, this paper is the first attempt to obfuscate deep learning models, which may shed light on future studies.
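As a rough illustration of the joint training idea described in the abstract, the sketch below combines a feature-matching loss against the original (teacher) feature extractor with an ordinary label loss computed through the original classifier head. It assumes a PyTorch setup; the names SimulationBlock, teacher_extractor, classifier_head, and the weighting factor alpha are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of joint training for a shallow simulation network.
# All names and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimulationBlock(nn.Module):
    """Shallow, sequential convolutional block standing in for the deep
    feature extractor of the original CNN (e.g., GoogLeNet / ResNet / DenseNet)."""
    def __init__(self, in_ch=3, out_ch=512, width=128, depth=5):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(depth - 1):
            layers += [nn.Conv2d(ch, width, 3, stride=2, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
            ch = width
        # Final conv maps to the teacher's feature dimensionality.
        layers += [nn.Conv2d(ch, out_ch, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

def joint_loss(student_feat, teacher_feat, logits, labels, alpha=0.5):
    """Combine feature matching (intermediate knowledge) with label supervision."""
    feat_loss = F.mse_loss(student_feat, teacher_feat)
    cls_loss = F.cross_entropy(logits, labels)
    return alpha * feat_loss + (1 - alpha) * cls_loss

def train_step(sim_block, teacher_extractor, classifier_head,
               optimizer, images, labels):
    """One illustrative training step; teacher_extractor and classifier_head
    are assumed to come from the pretrained model being obfuscated."""
    with torch.no_grad():
        teacher_feat = teacher_extractor(images)   # intermediate knowledge
    student_feat = sim_block(images)
    logits = classifier_head(student_feat)         # reuse the original classifier
    loss = joint_loss(student_feat, teacher_feat, logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the optimizer would hold only the parameters of the simulation block, so the pretrained feature extractor and classifier stay fixed while the shallow block learns to reproduce the teacher's feature maps and preserve label accuracy.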


