Distributed Privacy-Preserving Prediction

10/25/2019
by Lingjuan Lyu, et al.

In privacy-preserving machine learning, individual parties are reluctant to share their sensitive training data due to privacy concerns. Even the trained model parameters or predictions can leak serious amounts of private information. To address these problems, we demonstrate a generally applicable Distributed Privacy-Preserving Prediction (DPPP) framework in which, instead of sharing sensitive data or model parameters, an untrusted aggregator combines only multiple models' predictions under a provable privacy guarantee. Our framework integrates two main techniques to guarantee individual privacy. First, we improve the previous analysis of the Binomial mechanism to achieve distributed differential privacy. Second, we use homomorphic encryption to ensure that the aggregator learns nothing but the noisy aggregated prediction. We empirically evaluate the effectiveness of our framework on various datasets and compare it with other baselines. The experimental results demonstrate that our framework achieves performance comparable to the non-private frameworks and delivers better results than the locally differentially private framework and the standalone framework.
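To make the aggregation step concrete, below is a minimal illustrative sketch (not the authors' implementation): each party casts a one-hot vote from its local model, perturbs it with centred Binomial noise so that the summed per-party noise realises distributed differential privacy, and the aggregator releases only the arg-max of the noisy sum. The number of classes, number of parties, and the Binomial parameter `BINOMIAL_N` are hypothetical placeholders; the homomorphic-encryption layer is elided, with plain addition standing in for the summation of ciphertexts in the real protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10
BINOMIAL_N = 100  # hypothetical per-party noise parameter; in the paper it would
                  # be chosen from the target (epsilon, delta) privacy budget


def noisy_vote(local_prediction: int) -> np.ndarray:
    """One party's contribution: a one-hot vote plus centred Binomial noise."""
    vote = np.zeros(NUM_CLASSES)
    vote[local_prediction] = 1.0
    noise = rng.binomial(BINOMIAL_N, 0.5, size=NUM_CLASSES) - BINOMIAL_N / 2
    return vote + noise


def aggregate(contributions: list[np.ndarray]) -> int:
    """Untrusted aggregator: sums the (conceptually encrypted) contributions and
    releases only the arg-max of the noisy aggregate, never individual votes."""
    noisy_sum = np.sum(contributions, axis=0)
    return int(np.argmax(noisy_sum))


# Example: 30 parties, most of whose local models predict class 3 for a query.
local_predictions = [3] * 25 + [7] * 5
contributions = [noisy_vote(p) for p in local_predictions]
print("aggregated prediction:", aggregate(contributions))
```

With enough parties, the aggregated noise averages out and the released label usually matches the majority vote, while no single party's prediction is exposed to the aggregator.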


