Differentially Private Objective Perturbation: Beyond Smoothness and Convexity

09/03/2019
by Seth Neel, et al.

One of the most effective algorithms for differentially private learning and optimization is objective perturbation. This technique augments a given optimization problem (e.g., one derived from an ERM problem) with a random linear term, and then exactly solves it. However, to date, analyses of this approach crucially rely on the convexity and smoothness of the objective function. We give two algorithms that extend this approach substantially. The first algorithm requires nothing except boundedness of the loss function, and operates over a discrete domain. Its privacy and accuracy guarantees hold even without assuming convexity. The second algorithm operates over a continuous domain and requires only that the loss function be bounded and Lipschitz in its continuous parameter. Its privacy analysis does not even require convexity. Its accuracy analysis does require convexity, but does not require second order conditions like smoothness. We complement our theoretical results with an empirical evaluation of the non-convex case, in which we use an integer program solver as our optimization oracle. We find that for the problem of learning linear classifiers, directly optimizing for 0/1 loss using our approach can outperform the more standard approach of privately optimizing a convex surrogate loss function on the Adult dataset.
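The abstract states the recipe only at a high level, so the sketch below is a rough illustration of the generic "add a random linear term, then minimize exactly" pattern, specialized to 0/1-loss linear classification over a finite candidate set. The function name, the Laplace noise, and the noise_scale parameter are illustrative assumptions, not the calibrated mechanisms analyzed in the paper.

```python
import numpy as np

def objective_perturbation_01(X, y, candidate_thetas, noise_scale, rng=None):
    """Minimal sketch of objective perturbation for 0/1 loss over a
    discrete parameter set. Noise distribution and scale are
    placeholders, not the paper's privacy calibration."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    # Random linear perturbation b^T theta added to the empirical 0/1 loss.
    b = rng.laplace(scale=noise_scale, size=d)

    best_theta, best_value = None, np.inf
    for theta in candidate_thetas:
        preds = np.sign(X @ theta)
        loss = np.mean(preds != y)        # empirical 0/1 loss (labels in {-1, +1})
        perturbed = loss + b @ theta      # perturbed objective
        if perturbed < best_value:
            best_theta, best_value = theta, perturbed
    return best_theta
```

In the paper's experiments an integer program solver plays the role of the exact minimization oracle; here a brute-force scan over candidate_thetas stands in for it.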


Related research

01/22/2021 - Differentially Private SGD with Non-Smooth Loss
In this paper, we are concerned with differentially private SGD algorith...

05/22/2023 - Faster Differentially Private Convex Optimization via Second-Order Methods
Differentially private (stochastic) gradient descent is the workhorse of...

06/15/2023 - Differentially Private Domain Adaptation with Theoretical Guarantees
In many applications, the labeled data at the learner's disposal is subj...

12/01/2009 - Differentially Private Empirical Risk Minimization
Privacy-preserving machine learning algorithms are crucial for the incre...

06/23/2019 - The Cost of a Reductions Approach to Private Fair Optimization
We examine a reductions approach to fair optimization and learning where...

07/01/2022 - When Does Differentially Private Learning Not Suffer in High Dimensions?
Large pretrained models can be privately fine-tuned to achieve performan...
