MixNN: A design for protecting deep learning models

03/28/2022
by Chao Liu, et al.

In this paper, we propose MixNN, a novel design for protecting the structure and parameters of deep learning models. In MixNN, the layers of a model are fully decentralized: borrowing ideas from mix networks, the design hides communication addresses, layer parameters and operations, and both forward and backward message flows from non-adjacent layers. MixNN has the following advantages: 1) an adversary cannot fully control all layers of a model, including its structure and parameters; 2) even if some layers collude, they cannot tamper with the remaining honest layers; 3) model privacy is preserved during the training phase. We provide a detailed description of the deployment. In a classification experiment on AWS EC2, we compared a neural network deployed on a single virtual machine with the same network deployed using the MixNN design. The results show that MixNN loses less than 0.001 in classification accuracy, while its end-to-end running time is about 7.5 times slower than the single-VM baseline.
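To illustrate the core idea of decentralized layers, here is a minimal sketch (not the authors' implementation; the `LayerNode` class, the linear+ReLU layer choice, and the chained `forward` call are all assumptions for illustration). Each layer is modeled as an independent node that holds only its own parameters and a handle to its adjacent layer, so no single party ever sees the full model structure or weights:

```python
# Minimal sketch of MixNN-style layer decentralization (illustrative only).
# Assumption: each layer is a simple linear + ReLU transform; in a real
# deployment each node would run on a separate machine (e.g., an EC2 VM)
# and messages between non-adjacent layers would additionally be hidden
# with mix-network techniques.
import numpy as np

class LayerNode:
    """One decentralized layer: knows only its own weights and the
    handle of the next (adjacent) node, never the whole model."""
    def __init__(self, w, b, next_node=None):
        self.w, self.b = w, b
        self.next_node = next_node  # only the adjacent layer is visible

    def forward(self, x):
        out = np.maximum(x @ self.w + self.b, 0.0)  # linear + ReLU
        if self.next_node is not None:
            return self.next_node.forward(out)      # relay along the chain
        return out

rng = np.random.default_rng(0)
# Build a 3-layer chain back-to-front so each node can hold its successor.
l3 = LayerNode(rng.standard_normal((4, 2)), np.zeros(2))
l2 = LayerNode(rng.standard_normal((8, 4)), np.zeros(4), next_node=l3)
l1 = LayerNode(rng.standard_normal((5, 8)), np.zeros(8), next_node=l2)

x = rng.standard_normal((1, 5))
y = l1.forward(x)   # the caller interacts with layer 1 only
print(y.shape)      # (1, 2)
```

The caller (and any adversary controlling one node) sees only a single layer's parameters and its immediate neighbor, which mirrors the paper's claim that no adversary can fully control the model's structure and parameters.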


