
Robust and Private Learning of Halfspaces

11/30/2020
by Badih Ghazi, et al.

In this work, we study the trade-off between differential privacy and adversarial robustness under L2-perturbations in the context of learning halfspaces. We prove nearly tight bounds on the sample complexity of robust private learning of halfspaces for a large regime of parameters. A highlight of our results is that robust and private learning is harder than robust or private learning alone. We complement our theoretical analysis with experimental results on the MNIST and USPS datasets, for a learning algorithm that is both differentially private and adversarially robust.
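
The abstract does not spell out the algorithm used in the experiments, but the combination it describes can be illustrated concretely: train a linear halfspace with DP-SGD-style per-example gradient clipping and Gaussian noise, while replacing each training point by its worst-case L2 perturbation. The sketch below is only an assumption-laden illustration of that recipe, not the authors' method; the function name dp_robust_halfspace, the hinge loss, and all hyperparameters (gamma, clip, noise_mult) are hypothetical choices.

```python
import numpy as np

def dp_robust_halfspace(X, y, epochs=20, lr=0.1, gamma=0.5,
                        clip=1.0, noise_mult=1.0, batch=64, seed=0):
    """Illustrative sketch (not the paper's algorithm): DP-SGD on the hinge
    loss of a linear halfspace, where every example is first moved to its
    worst-case L2 perturbation of radius gamma."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            norm_w = np.linalg.norm(w) + 1e-12
            # For a linear classifier, the worst-case L2 perturbation of
            # radius gamma shifts each point against its label along w.
            X_adv = X[b] - gamma * y[b, None] * (w / norm_w)
            margins = y[b] * (X_adv @ w)
            # Per-example hinge-loss subgradients, clipped to L2 norm <= clip.
            grads = np.where(margins[:, None] < 1.0, -y[b, None] * X_adv, 0.0)
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip)
            # Gaussian noise calibrated to the clipping norm (DP-SGD step).
            noise = rng.normal(0.0, noise_mult * clip, size=d)
            w -= lr * (grads.sum(axis=0) + noise) / len(b)
    return w

# Toy usage on synthetic, linearly separable data (hypothetical setup).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = np.sign(X @ np.array([2.0, -1.0]))
w = dp_robust_halfspace(X, y)
print("clean training accuracy:", np.mean(np.sign(X @ w) == y))
```

A complete implementation would also track the cumulative (epsilon, delta) privacy budget across iterations, for example with a moments accountant; that bookkeeping is omitted from this sketch.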



Related research

05/02/2020 · Differentially Private Generation of Small Images
We explore the training of generative adversarial networks with differen...

01/06/2022 · Learning to be adversarially robust and differentially private
We study the difficulties in learning that arise from robust and differe...

07/27/2022 · Differentially Private Learning of Hawkes Processes
Hawkes processes have recently gained increasing attention from the mach...

04/20/2020 · Connecting Robust Shuffle Privacy and Pan-Privacy
In the shuffle model of differential privacy, data-holding users send ra...

11/06/2020 · Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning
The Private Aggregation of Teacher Ensembles (PATE) framework is one of ...

09/09/2019 · Differentially Private Algorithms for Learning Mixtures of Separated Gaussians
Learning the parameters of Gaussian mixture models is a fundamental a...

01/31/2020 · Locally Private Distributed Reinforcement Learning
We study locally differentially private algorithms for reinforcement lea...