Differential Privacy for Pairwise Learning: Non-convex Analysis

05/07/2021
by Yilin Kang, et al.

Pairwise learning focuses on learning tasks with pairwise loss functions, which depend on pairs of training instances and are therefore a natural fit for modeling relationships between pairs of samples. In this paper, we focus on the privacy of pairwise learning and propose a new differential privacy paradigm for pairwise learning based on gradient perturbation. We analyze the privacy guarantees from two points of view: ℓ_2-sensitivity and the moments accountant method. We further analyze the generalization error, the excess empirical risk, and the excess population risk of the proposed method and give corresponding bounds. By introducing algorithmic stability theory to pairwise differential privacy, our theoretical analysis does not require convex pairwise loss functions, so our method applies to both convex and non-convex settings. Under these circumstances, our utility bounds improve on previous bounds derived under convexity or strong convexity assumptions, which is an attractive result.
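The abstract describes the general gradient-perturbation recipe: bound the ℓ_2-sensitivity of each per-pair gradient by clipping, then add Gaussian noise to the averaged gradient before every update. Below is a minimal illustrative sketch of that recipe, not the paper's exact algorithm; the names (pairwise_grad, clip_norm, noise_multiplier) and the noise calibration are assumptions for illustration, and a real implementation would set the noise scale via the moments accountant as the paper does.

```python
# Illustrative sketch of gradient-perturbed pairwise learning.
# NOT the paper's algorithm: clipping threshold and noise scale
# here are placeholders; the paper calibrates noise with the
# moments accountant.
import numpy as np

def clip(g, clip_norm):
    """Rescale gradient g so its l2 norm is at most clip_norm."""
    norm = np.linalg.norm(g)
    return g * min(1.0, clip_norm / (norm + 1e-12))

def dp_pairwise_step(w, X, y, pairwise_grad, lr=0.1,
                     clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One noisy gradient step over all pairs (i, j), i < j."""
    rng = rng or np.random.default_rng()
    n = len(X)
    grads = [clip(pairwise_grad(w, X[i], y[i], X[j], y[j]), clip_norm)
             for i in range(n) for j in range(i + 1, n)]
    avg = np.mean(grads, axis=0)
    # Gaussian noise scaled to the clipped per-pair sensitivity
    # (illustrative calibration only).
    num_pairs = n * (n - 1) // 2
    noise = rng.normal(0.0, noise_multiplier * clip_norm / num_pairs,
                       size=w.shape)
    return w - lr * (avg + noise)

# Example pairwise loss gradient: hinge-style ranking loss (illustrative).
def pairwise_grad(w, xi, yi, xj, yj):
    margin = (yi - yj) * np.dot(w, xi - xj)
    return -(yi - yj) * (xi - xj) if margin < 1 else np.zeros_like(w)

# Usage: a few noisy steps on toy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = rng.integers(0, 2, size=20).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_pairwise_step(w, X, y, pairwise_grad, rng=rng)
```

Clipping makes each pair's contribution to the gradient bounded, which is what lets the Gaussian mechanism attach a privacy guarantee to every update; composing the per-step guarantees over many iterations is where the moments accountant comes in.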

Related research:

06/02/2021 · Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data
We study stochastic convex optimization with heavy-tailed data under the...

03/30/2020 · Secure Metric Learning via Differential Pairwise Privacy
Distance Metric Learning (DML) has drawn much attention over the last tw...

10/23/2019 · Weighted Distributed Differential Privacy ERM: Convex and Non-convex
Distributed machine learning is an approach allowing different parties t...

02/20/2020 · Input Perturbation: A New Paradigm between Central and Local Differential Privacy
Traditionally, there are two models on differential privacy: the central...

10/11/2021 · Continual Learning with Differential Privacy
In this paper, we focus on preserving differential privacy (DP) in conti...

09/18/2017 · Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning
In this paper, we focus on developing a novel mechanism to preserve diff...

12/14/2021 · Generalization Bounds for Stochastic Gradient Langevin Dynamics: A Unified View via Information Leakage Analysis
Recently, generalization bounds of the non-convex empirical risk minimiz...
