Faster Differentially Private Convex Optimization via Second-Order Methods

05/22/2023
by Arun Ganesh, et al.

Differentially private (stochastic) gradient descent is the workhorse of private machine learning in both the convex and non-convex settings. Without privacy constraints, second-order methods, such as Newton's method, converge faster than first-order methods like gradient descent. In this work, we investigate using second-order information from the loss function to accelerate DP convex optimization. We first develop a private variant of the regularized cubic Newton method of Nesterov and Polyak, and show that for the class of strongly convex loss functions, our algorithm has quadratic convergence and achieves the optimal excess loss. We then design a practical second-order DP algorithm for the unconstrained logistic regression problem, and study its performance both theoretically and empirically. Empirical results show that our algorithm consistently achieves the lowest excess loss among the baselines and is 10-40x faster than DP-GD/DP-SGD.
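For intuition, below is a minimal sketch of a noisy Newton step for unconstrained logistic regression: both the gradient and the Hessian are perturbed with Gaussian noise before solving for the update. The function name `noisy_newton_step`, the noise scales `sigma_grad` and `sigma_hess`, the gradient clipping, and the regularizer `lam` are illustrative assumptions; the paper's actual algorithm, noise calibration, and privacy accounting differ in detail.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noisy_newton_step(theta, X, y, sigma_grad, sigma_hess, lam=1e-3, rng=None):
    """One illustrative DP-style Newton step for logistic regression.

    Gaussian noise is added to both the gradient and the (symmetrized)
    Hessian. In a real DP algorithm the noise scales must be calibrated
    to the sensitivity of these quantities under an (eps, delta) budget;
    that accounting is omitted here.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    p = sigmoid(X @ theta)  # predicted probabilities, shape (n,)

    # Noisy gradient of the L2-regularized logistic loss.
    grad = X.T @ (p - y) / n + lam * theta
    grad = grad + rng.normal(scale=sigma_grad, size=d)

    # Noisy Hessian: add a symmetric Gaussian perturbation, then keep the
    # ridge term lam * I so the perturbed matrix stays well-conditioned.
    W = p * (1.0 - p)                       # per-example curvature weights
    H = (X.T * W) @ X / n + lam * np.eye(d)
    E = rng.normal(scale=sigma_hess, size=(d, d))
    H = H + (E + E.T) / 2.0

    # Newton update using the noisy gradient and noisy Hessian.
    return theta - np.linalg.solve(H, grad)
```

This sketch omits the privacy analysis and the safeguards a real implementation needs (e.g., clipping per-example contributions and projecting the noisy Hessian back to the positive definite cone); it is only meant to illustrate where second-order information enters the update.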


research · 02/02/2023
Convergence of Gradient Descent with Linearly Correlated Noise and Applications to Differentially Private Learning
We study stochastic optimization with linearly correlated noise. Our stu...

research · 11/24/2022
Differentially Private Image Classification from Features
Leveraging transfer learning has recently been shown to be an effective ...

research · 04/04/2022
Langevin Diffusion: An Almost Universal Algorithm for Private Euclidean (Convex) Optimization
In this paper we revisit the problem of differentially private empirical...

research · 03/10/2022
Differentially Private Learning Needs Hidden State (Or Much Faster Convergence)
Differential privacy analysis of randomized learning algorithms typicall...

research · 07/17/2018
Jensen: An Easily-Extensible C++ Toolkit for Production-Level Machine Learning and Convex Optimization
This paper introduces Jensen, an easily extensible and scalable toolkit ...

research · 11/03/2020
SGB: Stochastic Gradient Bound Method for Optimizing Partition Functions
This paper addresses the problem of optimizing partition functions in a ...

research · 09/03/2019
Differentially Private Objective Perturbation: Beyond Smoothness and Convexity
One of the most effective algorithms for differentially private learning...
