(Nearly) Optimal Private Linear Regression via Adaptive Clipping

07/11/2022
by Prateek Varshney, et al.

We study the problem of differentially private linear regression where each data point is sampled from a fixed sub-Gaussian-style distribution. We propose and analyze a one-pass mini-batch stochastic gradient descent method (DP-AMBSSGD) in which the points used in each iteration are sampled without replacement. Noise is added to each iterate to ensure DP, with the noise standard deviation estimated online. Compared to existing (ϵ, δ)-DP techniques, which have sub-optimal error bounds, DP-AMBSSGD provides nearly optimal error bounds in terms of the key parameters: the dimensionality d, the number of points N, and the standard deviation σ of the observation noise. For example, when the d-dimensional covariates are sampled i.i.d. from the normal distribution, the excess error of DP-AMBSSGD due to privacy is (σ^2 d / N)(1 + d/(ϵ^2 N)), i.e., the error is meaningful whenever the number of samples satisfies N = Ω(d log d), which is the standard operative regime for linear regression. In contrast, the error bounds for existing efficient methods in this setting are 𝒪(d^3/(ϵ^2 N^2)), even for σ = 0. That is, for constant ϵ, the existing techniques require N = Ω(d√d) to provide a non-trivial result.
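
The method described above is, at its core, a one-pass noisy mini-batch SGD whose clipping/noise scale is estimated online rather than fixed in advance. The following NumPy sketch illustrates that structure only; it is not the authors' algorithm or its privacy accounting. The function name dp_ambssgd_sketch, the learning rate, the moving-median clipping update, and the simple Gaussian-mechanism noise calibration with basic composition are all illustrative assumptions.

import numpy as np

def dp_ambssgd_sketch(X, y, eps, delta, batch_size=64, lr=0.1):
    """Illustrative one-pass DP mini-batch SGD for linear regression.

    Hypothetical simplification: per-sample gradients are clipped to an
    adaptively estimated norm bound, Gaussian noise calibrated to
    (eps, delta) is added to each mini-batch gradient, and every sample
    is used exactly once (sampling without replacement, single pass).
    """
    N, d = X.shape
    w = np.zeros(d)
    n_batches = N // batch_size
    # Crude noise multiplier from the Gaussian mechanism with basic
    # composition over n_batches steps; the paper's analysis is tighter.
    sigma_dp = np.sqrt(2.0 * n_batches * np.log(1.25 / delta)) / eps
    clip = 1.0  # initial clipping threshold; updated online below

    perm = np.random.permutation(N)  # one pass, without replacement
    for t in range(n_batches):
        idx = perm[t * batch_size:(t + 1) * batch_size]
        Xb, yb = X[idx], y[idx]
        # Per-sample gradients of the squared loss: x_i (x_i^T w - y_i)
        residual = Xb @ w - yb
        grads = Xb * residual[:, None]
        norms = np.linalg.norm(grads, axis=1)
        # Clip each per-sample gradient to the current threshold
        grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))[:, None]
        # Gaussian noise scaled to the per-batch sensitivity clip / batch_size
        noise = np.random.normal(0.0, sigma_dp * clip / batch_size, size=d)
        w -= lr * (grads.mean(axis=0) + noise)
        # Online estimate of the clipping/noise scale; in a genuinely
        # private implementation this statistic would itself be privatized.
        clip = 0.9 * clip + 0.1 * np.median(norms)
    return w

# Example usage on synthetic Gaussian data (illustrative only):
# N, d = 4096, 32
# X = np.random.randn(N, d)
# w_star = np.random.randn(d)
# y = X @ w_star + 0.1 * np.random.randn(N)
# w_hat = dp_ambssgd_sketch(X, y, eps=1.0, delta=1e-5)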
