Near Optimal Private and Robust Linear Regression

01/30/2023
by Xiyang Liu, et al.

We study the canonical statistical estimation problem of linear regression from n i.i.d. examples under (ε,δ)-differential privacy when some response variables are adversarially corrupted. We propose a variant of the popular differentially private stochastic gradient descent (DP-SGD) algorithm with two innovations: full-batch gradient descent to improve sample complexity, and a novel adaptive clipping to guarantee robustness. When there is no adversarial corruption, this algorithm improves upon the existing state of the art and achieves near optimal sample complexity. Under label corruption, this is the first efficient linear regression algorithm to guarantee both (ε,δ)-DP and robustness. Experiments on synthetic data confirm the advantage of our approach.
