Popcorn: Paillier Meets Compression For Efficient Oblivious Neural Network Inference

07/05/2021
by   Jun Wang, et al.

Oblivious inference enables the cloud to provide neural network inference-as-a-service (NN-IaaS) whilst neither disclosing the client's data nor revealing the server's model. However, the privacy guarantee of oblivious inference usually comes at a heavy cost in efficiency and accuracy. We propose Popcorn, a concise oblivious inference framework built entirely on the Paillier homomorphic encryption scheme. We design a suite of novel protocols to compute non-linear activation and max-pooling layers, and we leverage neural network compression techniques (i.e., neural weight pruning and quantization) to accelerate the inference computation. Implementing the Popcorn framework only requires replacing the algebraic operations of existing networks with their corresponding Paillier homomorphic operations, which makes engineering development straightforward. We first evaluate and compare performance on the MNIST and CIFAR-10 classification tasks. Compared with existing solutions, Popcorn significantly reduces communication overhead at the cost of a moderate runtime increase. We then benchmark oblivious inference on ImageNet. To the best of our knowledge, this is the first such report on a commercial-scale dataset, taking a step towards deployment in production.
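The core enabler is that Paillier encryption is additively homomorphic: the product of two ciphertexts decrypts to the sum of their plaintexts, and raising a ciphertext to a plaintext power decrypts to a scalar multiple. This is exactly what a linear layer with integer (quantized) weights needs. The sketch below is a toy illustration of that idea, not the paper's implementation; the primes are far too small to be secure, and all names are the author's own.

```python
import math
import random

# Toy Paillier parameters. Small fixed primes keep the demo fast;
# a real deployment would use ~1024-bit random primes.
P, Q = 2147483647, 2147483629
N = P * Q
N2 = N * N
G = N + 1                      # standard choice of generator g = n + 1
LAM = math.lcm(P - 1, Q - 1)   # Carmichael function lambda(n)

def _L(x):
    """The L function from the Paillier scheme: L(x) = (x - 1) / n."""
    return (x - 1) // N

MU = pow(_L(pow(G, LAM, N2)), -1, N)  # decryption constant mu

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2 for a random r in Z_n*."""
    r = random.randrange(1, N)
    return (pow(G, m % N, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    """Dec(c) = L(c^lambda mod n^2) * mu mod n, decoding negatives."""
    m = (_L(pow(c, LAM, N2)) * MU) % N
    return m - N if m > N // 2 else m

def he_add(c1, c2):
    """Ciphertext product decrypts to the plaintext sum."""
    return (c1 * c2) % N2

def he_scalar_mul(c, k):
    """Ciphertext power decrypts to a plaintext scalar multiple."""
    return pow(c, k % N, N2)

def encrypted_dot(enc_x, weights):
    """One linear-layer neuron: sum_i w_i * x_i over encrypted x_i
    and plaintext integer (quantized) weights w_i."""
    acc = encrypt(0)
    for c, w in zip(enc_x, weights):
        acc = he_add(acc, he_scalar_mul(c, w))
    return acc
```

With quantized weights, pruned (zero) weights simply skip the `he_scalar_mul` step, which hints at how compression speeds up the homomorphic evaluation. For example, `decrypt(encrypted_dot([encrypt(v) for v in [3, -2, 5]], [7, 4, -1]))` recovers the plaintext dot product `8`.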


