Robust and Private Learning of Halfspaces

11/30/2020
by Badih Ghazi, et al.

In this work, we study the trade-off between differential privacy and adversarial robustness under L2-perturbations in the context of learning halfspaces. We prove nearly tight bounds on the sample complexity of robust private learning of halfspaces for a large regime of parameters. A highlight of our results is that robust and private learning is harder than robust or private learning alone. We complement our theoretical analysis with experimental results on the MNIST and USPS datasets, for a learning algorithm that is both differentially private and adversarially robust.
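The abstract does not spell out the algorithm used in the experiments, but the setting suggests a natural baseline: for a halfspace sign(w·x + b), the worst-case L2 perturbation of radius r shrinks the margin by r·||w||, so one can minimize the resulting robust hinge loss with DP-SGD (per-example gradient clipping plus Gaussian noise). The sketch below illustrates this combination; all function names and parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def robust_hinge_grad(w, b, x, y, radius):
    """Gradient of the L2-robust hinge loss for a single example.

    For a linear classifier sign(w.x + b), an L2 perturbation of norm <= radius
    can reduce the margin y*(w.x + b) by at most radius * ||w||_2, so the
    robust hinge loss is max(0, 1 - y*(w.x + b) + radius * ||w||_2).
    """
    w_norm = np.linalg.norm(w) + 1e-12
    if y * (x @ w + b) - radius * w_norm >= 1.0:
        return np.zeros_like(w), 0.0          # zero loss, zero gradient
    return -y * x + radius * w / w_norm, -y

def dp_robust_halfspace(X, Y, radius=0.1, clip=1.0, noise_mult=1.0,
                        lr=0.1, epochs=20, batch_size=64, seed=0):
    """Trains a halfspace on the L2-robust hinge loss with DP-SGD
    (per-example clipping to `clip`, Gaussian noise of std `noise_mult * clip`).
    Illustrative sketch only; not the algorithm from the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            gw_sum, gb_sum = np.zeros(d), 0.0
            for i in idx:
                gw, gb = robust_hinge_grad(w, b, X[i], Y[i], radius)
                norm = np.sqrt(np.sum(gw ** 2) + gb ** 2)
                scale = min(1.0, clip / (norm + 1e-12))   # clip per-example grad
                gw_sum += scale * gw
                gb_sum += scale * gb
            gw_sum += rng.normal(0.0, noise_mult * clip, size=d)  # add DP noise
            gb_sum += rng.normal(0.0, noise_mult * clip)
            w -= lr * gw_sum / len(idx)
            b -= lr * gb_sum / len(idx)
    return w, b

if __name__ == "__main__":
    # Toy usage on synthetic linearly separable data with labels in {-1, +1}.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 10))
    Y = np.sign(X @ rng.normal(size=10) + 0.1)
    w, b = dp_robust_halfspace(X, Y)
    print("train accuracy:", np.mean(np.sign(X @ w + b) == Y))
```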
