Helen: Maliciously Secure Coopetitive Learning for Linear Models

07/16/2019
by Wenting Zheng, et al.

Many organizations wish to collaboratively train machine learning models on their combined datasets for a common benefit (e.g., better medical research, or fraud detection). However, they often cannot share their plaintext datasets due to privacy concerns and/or business competition. In this paper, we design and build Helen, a system that allows multiple parties to train a linear model without revealing their data, a setting we call coopetitive learning. Compared to prior secure training systems, Helen protects against a much stronger adversary who is malicious and can compromise m-1 out of m parties. Our evaluation shows that Helen can achieve up to five orders of magnitude of performance improvement when compared to training using an existing state-of-the-art secure multi-party computation framework.
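To make the coopetitive setting concrete, below is a minimal, hypothetical Python sketch in which m parties jointly fit a linear model by additively secret-sharing their local sufficient statistics (X^T X and X^T y) and revealing only their sum. All names here (PRIME, SCALE, aggregate_secure, and so on) are illustrative assumptions, and the sketch covers only the semi-honest case; it is not Helen's protocol, which additionally defends against up to m-1 malicious parties.

```python
# Hypothetical sketch of coopetitive linear-model training via additive
# secret sharing of sufficient statistics. Semi-honest illustration only;
# NOT Helen's protocol (Helen also handles malicious parties).

import numpy as np

PRIME = 2_147_483_647   # field modulus for additive secret sharing (assumed)
SCALE = 10_000          # fixed-point scaling factor for real-valued statistics


def share(value, m, rng):
    """Split a field element into m additive shares modulo PRIME."""
    shares = [int(rng.integers(0, PRIME)) for _ in range(m - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def encode(x):
    """Fixed-point encode a real number into the field."""
    return int(round(x * SCALE)) % PRIME


def decode(v):
    """Decode a field element back to a real number (centered representation)."""
    if v > PRIME // 2:
        v -= PRIME
    return v / SCALE


def aggregate_secure(local_stats, rng):
    """Sum the parties' statistics without any party revealing its own values.

    Each party secret-shares every entry of its statistic and sends one share
    to each peer, so peers only ever observe uniformly random field elements.
    Publishing the per-peer partial sums reconstructs the aggregate and
    nothing more (in the semi-honest model)."""
    m = len(local_stats)
    shape = local_stats[0].shape
    flat = [[encode(float(v)) for v in s.ravel()] for s in local_stats]
    length = len(flat[0])

    # received[j][k]: sum of the shares of entry k held by peer j.
    received = [[0] * length for _ in range(m)]
    for owner in range(m):
        for k in range(length):
            for j, sh in enumerate(share(flat[owner][k], m, rng)):
                received[j][k] = (received[j][k] + sh) % PRIME

    # Each peer publishes its partial sums; adding them yields the true total.
    totals = [sum(received[j][k] for j in range(m)) % PRIME for k in range(length)]
    return np.array([decode(v) for v in totals]).reshape(shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, d = 3, 50, 4                     # parties, rows per party, features
    true_w = np.array([1.0, -2.0, 0.5, 3.0])

    # Each party holds a private dataset and computes local X^T X and X^T y.
    parties = [rng.normal(size=(n, d)) for _ in range(m)]
    xtx_local = [X.T @ X for X in parties]
    xty_local = [X.T @ (X @ true_w) for X in parties]

    # Securely aggregate the sufficient statistics, then solve for the weights.
    xtx = aggregate_secure(xtx_local, rng)
    xty = aggregate_secure(xty_local, rng)
    w_hat = np.linalg.solve(xtx, xty)
    print("recovered weights:", np.round(w_hat, 3))
```

In this toy example only the aggregated statistics are ever reconstructed, so each party's raw data stays local; a maliciously secure system such as Helen must additionally ensure that parties cannot deviate from the protocol or contribute inconsistent inputs undetected.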


Related research

Senate: A Maliciously-Secure MPC Platform for Collaborative Analytics (10/26/2020)
Many organizations stand to benefit from pooling their data together in ...

Secure Data Sharing With Flow Model (09/24/2020)
In the classical multi-party computation setting, multiple parties joint...

Secure Collaborative Training and Inference for XGBoost (10/06/2020)
In recent years, gradient boosted decision tree learning has proven to b...

SIRNN: A Math Library for Secure RNN Inference (05/10/2021)
Complex machine learning (ML) inference algorithms like recurrent neural...

Outsourcing Private Machine Learning via Lightweight Secure Arithmetic Computation (12/04/2018)
In several settings of practical interest, two parties seek to collabora...

Secure PAC Bayesian Regression via Real Shamir Secret Sharing (09/23/2021)
Common approach of machine learning is to generate a model by using huge...
