Preserving Differential Privacy in Adversarial Learning with Provable Robustness

03/23/2019
by NhatHai Phan, et al.

In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage the sequential composition theorem in differential privacy to establish a new connection between differential privacy preservation and provable robustness. To address the trade-off among model utility, privacy loss, and robustness, we design an original differentially private adversarial objective function, based on the post-processing property of differential privacy, to tighten the sensitivity of our model. Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of DP deep neural networks.
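
For context, the two differential-privacy properties named in the abstract can be stated briefly as a sketch (generic notation, not the paper's own formulation): sequential composition bounds the total privacy loss of running several DP mechanisms on the same data, and post-processing guarantees that further computation cannot weaken a DP guarantee.

\[
\text{Sequential composition: if } \mathcal{M}_i \text{ is } \epsilon_i\text{-DP for } i = 1, \dots, k, \text{ then } (\mathcal{M}_1, \dots, \mathcal{M}_k) \text{ is } \Big(\textstyle\sum_{i=1}^{k} \epsilon_i\Big)\text{-DP.}
\]
\[
\text{Post-processing: if } \mathcal{M} \text{ is } \epsilon\text{-DP and } g \text{ is any (possibly randomized) map, then } g \circ \mathcal{M} \text{ is } \epsilon\text{-DP.}
\]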
