Weighted Distributed Differential Privacy ERM: Convex and Non-convex

10/23/2019
by   Yilin Kang, et al.

Distributed machine learning is an approach that allows different parties to jointly learn a model over all of their data sets without disclosing their own data. In this paper, we propose a weighted distributed differential privacy (WD-DP) empirical risk minimization (ERM) method to train a model in the distributed setting, taking into account the different weights of different clients. We guarantee differential privacy by gradient perturbation, adding Gaussian noise, and advance the state of the art on gradient-perturbation methods in the distributed setting. Through detailed theoretical analysis, we show that in the distributed setting both the noise bound and the excess empirical risk bound can be improved by considering the different weights held by multiple parties. Moreover, since the convexity constraint on the loss function in ERM is hard to satisfy in some situations, we generalize our method to non-convex loss functions that satisfy the Polyak-Lojasiewicz condition. Experiments on real data sets show that our method is more reliable and improves the performance of distributed differential privacy ERM, especially when the data scales on different clients are uneven.
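To make the gradient-perturbation idea concrete, here is a minimal sketch of one weighted, differentially private gradient step: each client computes a clipped gradient on its own data, the server aggregates with weights proportional to client data size, and Gaussian noise is added at aggregation. This is an illustrative assumption of the general technique, not the paper's exact algorithm; the loss (logistic), the weighting scheme, and the noise calibration `sigma * C / n_total` are all placeholders for the paper's own choices.

```python
import numpy as np

def clip(g, C):
    # Clip the gradient to L2 norm at most C, standard for gradient perturbation
    norm = np.linalg.norm(g)
    return g * min(1.0, C / norm) if norm > 0 else g

def weighted_dp_gradient_step(theta, client_data, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One weighted distributed DP-ERM step via gradient perturbation (sketch).

    client_data: list of (X, y) pairs, one per client.
    Weights are each client's share of the total data; noise calibration
    is illustrative, not the paper's bound.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_total = sum(len(y) for _, y in client_data)
    agg = np.zeros_like(theta)
    for X, y in client_data:
        # Average logistic-loss gradient computed locally on the client
        z = X @ theta
        grad = X.T @ (1.0 / (1.0 + np.exp(-z)) - y) / len(y)
        # Weight each client's clipped gradient by its data share
        agg += (len(y) / n_total) * clip(grad, C)
    # Gaussian mechanism: a single noise draw scaled to the clipping bound
    noise = rng.normal(0.0, sigma * C / n_total, size=theta.shape)
    return theta - lr * (agg + noise)
```

A usage example: build two clients with uneven data sizes (the setting the paper highlights) and run a few steps to obtain an updated parameter vector.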


Related research

02/20/2020 · Input Perturbation: A New Paradigm between Central and Local Differential Privacy
Traditionally, there are two models on differential privacy: the central...

05/07/2021 · Differential Privacy for Pairwise Learning: Non-convex Analysis
Pairwise learning focuses on learning tasks with pairwise loss functions...

11/03/2021 · Privately Publishable Per-instance Privacy
We consider how to privately share the personalized privacy losses incur...

08/30/2018 · DP-ADMM: ADMM-based Distributed Learning with Differential Privacy
Distributed machine learning is making great changes in a wide variety o...

03/01/2017 · Preserving Differential Privacy Between Features in Distributed Estimation
Privacy is crucial in many applications of machine learning. Legal, ethi...

06/07/2020 · BUDS: Balancing Utility and Differential Privacy by Shuffling
Balancing utility and differential privacy by shuffling or BUDS is an ap...

07/17/2019 · Learning Privately over Distributed Features: An ADMM Sharing Approach
Distributed machine learning has been widely studied in order to handle ...
