SIMC 2.0: Improved Secure ML Inference Against Malicious Clients

07/11/2022
by Guowen Xu, et al.

In this paper, we study the problem of secure ML inference against a malicious client and a semi-trusted server, such that the client learns only the inference output while the server learns nothing. This problem was first formulated by Lehmkuhl et al., whose solution (MUSE, USENIX Security'21) was subsequently and substantially improved by Chandran et al. (SIMC, USENIX Security'22). However, a nontrivial gap towards practicality remains in these efforts, given the challenges of reducing overhead and accelerating secure inference across the board.

We propose SIMC 2.0, which follows the underlying structure of SIMC but significantly optimizes both the linear and non-linear layers of the model. Specifically, (1) we design a new coding method for homomorphic parallel computation between matrices and vectors. It is custom-built from an insight into the complementarity of the cryptographic primitives in SIMC. As a result, it minimizes the number of rotation operations incurred during the computation; rotations are far more computationally expensive than other homomorphic operations (e.g., addition, multiplication). (2) We reduce the size of the garbled circuits (GCs, used to compute non-linear activation functions such as ReLU) in SIMC by about two thirds, and design an alternative lightweight protocol to perform the tasks originally allocated to the expensive GCs. Compared with SIMC, our experiments show that SIMC 2.0 achieves a speedup of up to 17.4× for linear-layer computation, and at least a 1.3× reduction in both the computation and communication overheads of the non-linear layers across different data dimensions. Meanwhile, SIMC 2.0 demonstrates an encouraging runtime boost of 2.3∼4.3× over SIMC on different state-of-the-art ML models.
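To see why rotation count dominates the cost of the linear layers, the sketch below simulates the standard Halevi-Shoup "diagonal method" for a packed homomorphic matrix-vector product in plain Python. This is an illustrative baseline only, not SIMC 2.0's actual coding method: plaintext lists stand in for SIMD-packed ciphertexts, `rotate` stands in for a homomorphic slot rotation, and the function names are hypothetical. An n×n product already needs n rotations here, which motivates coding schemes that reduce that count.

```python
def rotate(vec, d):
    """Cyclic left-rotation; stands in for a homomorphic slot rotation,
    which is far costlier than homomorphic addition or multiplication."""
    return vec[d:] + vec[:d]

def matvec_diagonal(A, v):
    """Compute A @ v with the diagonal method: one rotation of the
    packed vector per generalized diagonal of A (n rotations total)."""
    n = len(A)
    rotations = 0
    acc = [0] * n
    for d in range(n):
        diag = [A[i][(i + d) % n] for i in range(n)]  # d-th diagonal of A
        vr = rotate(v, d)        # one (simulated) homomorphic rotation
        rotations += 1
        acc = [a + p * x for a, p, x in zip(acc, diag, vr)]
    return acc, rotations

result, nrot = matvec_diagonal([[1, 2], [3, 4]], [5, 6])
# result is [17, 39] (i.e., A @ v), computed with nrot == 2 rotations
```

In real HE libraries each rotation additionally requires a Galois key and a key-switching step, which is why minimizing rotations, as SIMC 2.0's coding method does, translates directly into wall-clock savings.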

