CHEETAH: An Ultra-Fast, Approximation-Free, and Privacy-Preserved Neural Network Framework based on Joint Obscure Linear and Nonlinear Computations

11/12/2019
by Qiao Zhang, et al.

Machine Learning as a Service (MLaaS) is enabling a wide range of smart applications on end devices. However, this convenience comes at the cost of privacy, because users have to upload their private data to the cloud. This research aims to provide effective and efficient MLaaS such that the cloud server learns nothing about user data and the users cannot infer the proprietary model parameters owned by the server. This work makes the following contributions. First, it unveils the fundamental performance bottleneck of existing schemes: the heavy permutations required to compute linear transformations and the use of communication-intensive Garbled Circuits for nonlinear transformations. Second, it introduces an ultra-fast secure MLaaS framework, CHEETAH, which features a carefully crafted secret sharing scheme that runs significantly faster than existing schemes without accuracy loss. Third, CHEETAH is evaluated on well-known, practical deep networks such as AlexNet and VGG-16 on the MNIST and ImageNet datasets. The results demonstrate more than 100x speedup over GAZELLE (Usenix Security'18), the fastest prior scheme, 2000x speedup over MiniONN (ACM CCS'17), and five orders of magnitude speedup over CryptoNets (ICML'16). This significant speedup enables a wide range of practical applications based on privacy-preserved deep neural networks.
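To make the secret-sharing idea concrete, the sketch below shows how a linear layer can be evaluated over additively shared inputs, so that neither party sees the plaintext and no permutation-heavy homomorphic rotations are needed. This is only a minimal illustration of the general principle behind such frameworks, not CHEETAH's actual protocol; the field size, share layout, and helper names are assumptions made for the example.

    # Minimal sketch of additive secret sharing applied to a linear layer.
    # NOT the paper's protocol: the modulus, share layout, and function names
    # are illustrative assumptions.
    import numpy as np

    P = 2**31 - 1  # prime modulus for the arithmetic field (assumed)

    def share(x, p=P):
        """Split an integer vector x into two additive shares with x = (x0 + x1) mod p."""
        x0 = np.random.randint(0, p, size=x.shape, dtype=np.int64)
        x1 = (x - x0) % p
        return x0, x1

    def reconstruct(x0, x1, p=P):
        """Recombine the two shares into the plaintext value."""
        return (x0 + x1) % p

    # The client secret-shares its input; each party holds one share and
    # learns nothing from that share alone.
    x = np.array([3, 1, 4, 1, 5], dtype=np.int64)
    x0, x1 = share(x)

    # Server-side weight matrix (kept by the server in this toy example).
    W = np.random.randint(0, 10, size=(2, 5)).astype(np.int64)

    # A linear transformation W @ x can be evaluated share-wise, since
    # W @ x = (W @ x0 + W @ x1) mod p; no ciphertext permutations are involved.
    y0 = (W @ x0) % P
    y1 = (W @ x1) % P
    assert np.array_equal(reconstruct(y0, y1), (W @ x) % P)

In a full private-inference protocol, the nonlinear layers (e.g., ReLU) acting on such shares are where Garbled Circuits are traditionally introduced, which is the communication bottleneck the paper targets.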



Related research

GALA: Greedy ComputAtion for Linear Algebra in Privacy-Preserved Neural Networks (05/05/2021)
Machine Learning as a Service (MLaaS) is enabling a wide range of smart ...

Gazelle: A Low Latency Framework for Secure Neural Network Inference (01/16/2018)
The growing popularity of cloud-based machine learning raises a natural ...

Free Lunch for Privacy Preserving Distributed Graph Learning (05/18/2023)
Learning on graphs is becoming prevalent in a wide range of applications...

Joint Linear and Nonlinear Computation across Functions for Efficient Privacy-Preserving Neural Network Inference (09/04/2022)
While it is encouraging to witness the recent development in privacy-pre...

CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning (02/02/2019)
How to train a machine learning model while keeping the data private and...

DASH: Accelerating Distributed Private Machine Learning Inference with Arithmetic Garbled Circuits (02/13/2023)
The adoption of machine learning solutions is rapidly increasing across ...

Popcorn: Paillier Meets Compression For Efficient Oblivious Neural Network Inference (07/05/2021)
Oblivious inference enables the cloud to provide neural network inferenc...
