Differentially Private Empirical Risk Minimization with Sparsity-Inducing Norms

05/13/2019
by   K. S. Sesh Kumar, et al.

Differentially private learning is concerned with preserving prediction quality while bounding the privacy impact on the individuals whose information is contained in the data. We consider differentially private risk minimization problems with regularizers that induce structured sparsity; such regularizers are convex but often non-differentiable. We analyze standard differentially private algorithms, namely output perturbation, Frank-Wolfe, and objective perturbation. Output perturbation is known to perform well when the risk is strongly convex, and previous work has derived excess risk bounds for it that are independent of the dimensionality. In this paper, we assume a particular class of convex but non-smooth regularizers that induce structured sparsity, together with loss functions for generalized linear models. We also consider differentially private Frank-Wolfe algorithms that optimize the dual of the risk minimization problem. We derive excess risk bounds for both algorithms; both bounds depend on the Gaussian width of the unit ball of the dual norm. We further show that objective perturbation of the risk minimization problem is equivalent to output perturbation of a dual optimization problem. To our knowledge, this is the first work to analyze the dual optimization problems of risk minimization in the context of differential privacy.
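To make the output perturbation baseline concrete, here is a minimal sketch in Python, assuming an ℓ2-regularized logistic loss (so the objective is λ-strongly convex), data rows with norm at most 1, and the Gaussian mechanism; the 2L/(nλ) sensitivity bound follows Chaudhuri et al. (2011). The function name and constants are illustrative, not the paper's method.

```python
import numpy as np

def dp_output_perturbation(X, y, lam, eps, delta, L=1.0):
    """Output perturbation for l2-regularized logistic regression.

    Illustrative sketch: train non-privately, then add Gaussian noise
    scaled to the l2 sensitivity of the minimizer, 2*L / (n * lam),
    which holds for an L-Lipschitz loss and a lam-strongly convex
    objective (Chaudhuri et al., 2011). Assumes y in {0, 1} and
    rows of X with l2 norm <= 1.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(1000):  # plain gradient descent on the ERM objective
        grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y) / n + lam * w
        w -= 0.5 * grad
    sensitivity = 2.0 * L / (n * lam)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + np.random.normal(0.0, sigma, size=d)
```

The sensitivity shrinks as λ grows, which is why output perturbation pairs naturally with strongly convex risks; for the non-smooth sparsity-inducing regularizers studied here, the paper's analysis additionally passes through the dual problem.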

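The differentially private Frank-Wolfe step can likewise be sketched for the simplest sparsity-inducing constraint, the ℓ1 ball, whose unit ball has Gaussian width of order √(log d); the bounds in the abstract are stated in terms of exactly this quantity for the dual norm. This is an illustrative variant in the spirit of Talwar, Thakurta and Zhang (2015), not the dual algorithm analyzed in the paper; the squared loss, data scaling, sensitivity bound, and naive ε/T composition are all assumptions of the sketch.

```python
import numpy as np

def dp_frank_wolfe(X, y, C, eps, T=50):
    """Differentially private Frank-Wolfe over the l1 ball of radius C.

    Illustrative sketch: each step privately selects a vertex of the
    ball (the 2d points +/- C e_i) by adding Laplace noise to the
    linear scores <grad, s> (report-noisy-min), then takes the usual
    convex-combination step. Assumes squared loss, rows of X with
    l2 norm <= 1, |y| <= 1, and a naive eps/T budget per step.
    """
    n, d = X.shape
    w = np.zeros(d)
    # Loose per-score sensitivity bound under the scaling assumptions:
    # changing one example moves <grad, s> by at most 2*C*(1+C)/n.
    score_sens = 2.0 * C * (1.0 + C) / n
    for t in range(T):
        grad = X.T @ (X @ w - y) / n
        scores = np.concatenate([C * grad, -C * grad])  # <grad, +/- C e_i>
        noisy = scores + np.random.laplace(0.0, 2.0 * score_sens * T / eps,
                                           size=2 * d)
        j = int(np.argmin(noisy))
        s = np.zeros(d)
        s[j % d] = C if j < d else -C
        gamma = 2.0 / (t + 2.0)  # standard Frank-Wolfe step size
        w = (1.0 - gamma) * w + gamma * s
    return w
```

Because each iterate is a convex combination of at most T vertices, the output is sparse by construction, which is the appeal of Frank-Wolfe for sparsity-inducing constraint sets.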

