High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent

07/04/2022
by   Paul Mangold, et al.

In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the (worst-case) utility of DP-ERM degrades as the dimension increases, which is a major obstacle to privately learning large machine learning models. In high dimension, it is common for some of a model's parameters to carry more information than others. To exploit this, we propose a differentially private greedy coordinate descent (DP-GCD) algorithm. At each iteration, DP-GCD privately performs a coordinate-wise gradient step along the gradient's (approximately) largest entry. We show theoretically that DP-GCD can improve utility by exploiting structural properties of the problem's solution (such as sparsity or quasi-sparsity), making very fast progress in early iterations. We then illustrate this numerically on both synthetic and real datasets. Finally, we describe promising directions for future work.
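The idea described above can be sketched in code. The following is a minimal, illustrative implementation of a DP-GCD-style update, not the paper's exact algorithm: each iteration selects the coordinate with the (approximately) largest gradient magnitude via a report-noisy-max step, then performs a noisy gradient step on that single coordinate. The function name `dp_gcd`, the toy quadratic objective, and the noise scales are assumptions for illustration; calibrating the noise to a formal privacy budget requires the sensitivity analysis in the paper.

```python
import numpy as np

def dp_gcd(grad_fn, w0, n_iters, step_size, noise_scale, rng=None):
    """Illustrative sketch of private greedy coordinate descent.

    grad_fn: callable returning the full gradient at w.
    noise_scale: Laplace noise scale for both coordinate selection
    and the coordinate-wise step (illustrative, not calibrated).
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.array(w0, dtype=float)
    for _ in range(n_iters):
        g = grad_fn(w)
        # Report-noisy-max: privately pick the coordinate whose
        # gradient entry has (approximately) the largest magnitude.
        j = int(np.argmax(np.abs(g) + rng.laplace(scale=noise_scale, size=g.shape)))
        # Noisy gradient step along that single coordinate.
        w[j] -= step_size * (g[j] + rng.laplace(scale=noise_scale))
    return w

# Toy quadratic with a quasi-sparse solution: only the first two
# coordinates carry signal, mimicking the structure DP-GCD exploits.
A = np.diag([10.0, 5.0, 0.1, 0.1, 0.1])
b = np.array([10.0, 5.0, 0.0, 0.0, 0.0])
grad = lambda w: A @ w - b  # gradient of 0.5 * w @ A @ w - b @ w

w_hat = dp_gcd(grad, np.zeros(5), n_iters=50, step_size=0.05,
               noise_scale=0.01, rng=np.random.default_rng(0))
```

Because the greedy selection spends its privacy budget on the few informative coordinates, the uninformative ones (here, the last three) are rarely updated and stay near zero, which is the intuition behind the sparsity-dependent utility bounds.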


Related research

- Differentially Private Coordinate Descent for Composite Empirical Risk Minimization (10/22/2021): Machine learning models can leak information about the data used to trai...
- Differentially Private Stochastic Coordinate Descent (06/12/2020): In this paper we tackle the challenge of making the stochastic coordinat...
- Differentially Private Approximate Quantiles (10/11/2021): In this work we study the problem of differentially private (DP) quantil...
- Bypassing the Ambient Dimension: Private SGD with Gradient Subspace Identification (07/07/2020): Differentially private SGD (DP-SGD) is one of the most popular methods f...
- A Unified Characterization of Private Learnability via Graph Theory (04/08/2023): We provide a unified framework for characterizing pure and approximate d...
- Efficient Private ERM for Smooth Objectives (03/29/2017): In this paper, we consider efficient differentially private empirical ri...
- Differentially Private Learning with Margin Guarantees (04/21/2022): We present a series of new differentially private (DP) algorithms with d...
