MPC-Pipe: an Efficient Pipeline Scheme for Secure Multi-party Machine Learning Inference

09/27/2022
by   Yongqin Wang, et al.

Multi-party computation (MPC) has been gaining popularity over the past years as a secure computing model, particularly for machine learning (ML) inference. Compared with its competitors, MPC has lower overheads than homomorphic encryption (HE) and a more robust threat model than hardware-based trusted execution environments (TEE) such as Intel SGX. Despite these advantages, MPC protocols still pay substantial performance penalties compared to plaintext execution when applied to ML algorithms. The overhead stems from added computation and communication costs. For the multiplications that are ubiquitous in ML algorithms, MPC protocols add 32x more computational cost and 1 round of broadcasting among the MPC servers. Moreover, ML computations that have trivial cost in plaintext, such as Softmax, ReLU, and other non-linear operations, become very expensive due to the added communication. These overheads make MPC less palatable for deployment in real-time ML inference frameworks, such as speech translation. In this work, we present MPC-Pipe, an MPC pipeline inference technique that uses two ML-specific approaches: 1) an inter-linear-layer pipeline and 2) an inner-layer pipeline. These two techniques shorten the total inference runtime of machine learning models. Our experiments show a reduction in ML inference latency of up to 12.6% when model weights are public, compared to current MPC protocol implementations.
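The extra "round of broadcasting" per multiplication that the abstract refers to is, in many additive-secret-sharing MPC protocols, the opening step of a Beaver-triple multiplication. A minimal two-party sketch of that pattern (the field size, function names, and trusted-dealer triple generation here are illustrative assumptions, not details from the MPC-Pipe paper):

```python
import random

P = 2**31 - 1  # illustrative prime modulus for the secret-sharing field

def share(x):
    """Split x into two additive shares: x = sh[0] + sh[1] (mod P)."""
    x0 = random.randrange(P)
    return [x0, (x - x0) % P]

def reconstruct(sh):
    """Open a shared value by summing the shares (a broadcast in a real deployment)."""
    return sum(sh) % P

def beaver_triple():
    # Illustrative trusted dealer (offline phase): random a, b with c = a*b.
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share((a * b) % P)

def beaver_multiply(x_sh, y_sh):
    """Multiply two secret-shared values. Opening e = x - a and f = y - b
    is the one extra communication round each multiplication costs."""
    a_sh, b_sh, c_sh = beaver_triple()
    # Each party locally masks its shares.
    e_sh = [(x_sh[i] - a_sh[i]) % P for i in range(2)]
    f_sh = [(y_sh[i] - b_sh[i]) % P for i in range(2)]
    # One round of broadcast: open the masked values (they leak nothing about x, y).
    e, f = reconstruct(e_sh), reconstruct(f_sh)
    # Local recombination: z = c + e*b + f*a + e*f = x*y (mod P).
    z_sh = [(c_sh[i] + e * b_sh[i] + f * a_sh[i]) % P for i in range(2)]
    z_sh[0] = (z_sh[0] + e * f) % P  # public term added by one party only
    return z_sh

x_sh, y_sh = share(6), share(7)
print(reconstruct(beaver_multiply(x_sh, y_sh)))  # 42
```

Each secure multiplication thus blocks on a network round trip, which is why pipelining (overlapping this communication with other computation) can recover latency.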


research
01/02/2019

Secure Computation for Machine Learning With SPDZ

Secure Multi-Party Computation (MPC) is an area of cryptography that ena...
research
06/04/2021

Adam in Private: Secure and Fast Training of Deep Neural Networks with Adaptive Moment Estimation

Privacy-preserving machine learning (PPML) aims at enabling machine lear...
research
09/09/2023

Approximating ReLU on a Reduced Ring for Efficient MPC-based Private Inference

Secure multi-party computation (MPC) allows users to offload machine lea...
research
10/16/2022

VerifyML: Obliviously Checking Model Fairness Resilient to Malicious Model Holder

In this paper, we present VerifyML, the first secure inference framework...
research
09/14/2022

SEEK: model extraction attack against hybrid secure inference protocols

Security concerns about a machine learning model used in a prediction-as...
research
10/27/2022

Partially Oblivious Neural Network Inference

Oblivious inference is the task of outsourcing a ML model, like neural-n...
research
07/24/2023

PUMA: Secure Inference of LLaMA-7B in Five Minutes

With ChatGPT as a representative, many companies have begun to provid...
