Wide Network Learning with Differential Privacy

03/01/2021
by Huanyu Zhang, et al.

Despite intense interest and considerable effort, the current generation of neural networks suffers a significant loss of accuracy under most practically relevant privacy-preserving training regimes. One particularly challenging class is that of wide networks, such as those deployed for NLP typeahead prediction or recommender systems. Observing that these models share a common structure, namely an embedding layer that reduces the dimensionality of the input, we develop a general approach to training them that takes advantage of the sparsity of their gradients. More abstractly, we address the problem of differentially private empirical risk minimization (ERM) for models that admit sparse gradients. We demonstrate that for non-convex ERM problems, the loss depends only logarithmically on the number of parameters, in contrast with the polynomial dependence of the general case. Following the same intuition, we propose a novel algorithm for privately training neural networks. Finally, we provide an empirical study of a DP wide neural network on a real-world dataset, a setting that has rarely been explored in prior work.
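To make the sparsity idea concrete, below is a minimal sketch of one DP-SGD step for an embedding layer, using the standard Gaussian mechanism with per-example clipping. It is illustrative only, not the paper's algorithm: the function name, shapes, and hyperparameters are assumptions. The point it demonstrates is that because each example's gradient is nonzero only on the embedding rows of the tokens it contains, clipping costs O(k*d) per example rather than O(V*d); the noise, however, must still be added densely, since restricting it to the nonzero coordinates would leak which rows were touched.

```python
import numpy as np

def dp_sgd_embedding_step(emb, batch_token_ids, batch_grad_rows,
                          clip_norm=1.0, noise_multiplier=1.0, lr=0.1,
                          rng=None):
    """One illustrative DP-SGD update of an embedding matrix `emb` (V x d).

    batch_token_ids: list of 1-D int arrays, the token ids in each example.
    batch_grad_rows: list of (k_i x d) arrays, per-example gradients w.r.t.
        the embedding rows indexed by batch_token_ids[i]; all other rows of
        the per-example gradient are zero (the sparsity being exploited).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    V, d = emb.shape
    summed = np.zeros((V, d))
    for ids, rows in zip(batch_token_ids, batch_grad_rows):
        # Per-example clipping touches only the k_i nonzero rows, so its
        # cost is O(k_i * d) instead of O(V * d).
        norm = np.linalg.norm(rows)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        np.add.at(summed, ids, scale * rows)  # handles repeated ids correctly
    # Gaussian noise calibrated to the clipping norm is added to EVERY
    # coordinate of the aggregate; this dense noise is what makes the
    # released update differentially private.
    noisy = summed + rng.normal(0.0, noise_multiplier * clip_norm, size=(V, d))
    return emb - lr * noisy / len(batch_token_ids)

# Example usage with a toy vocabulary of 1000 and embedding dimension 16.
rng = np.random.default_rng(1)
emb = rng.normal(size=(1000, 16))
ids = [np.array([3, 17, 42]), np.array([5, 42])]
grads = [rng.normal(size=(3, 16)), rng.normal(size=(2, 16))]
emb = dp_sgd_embedding_step(emb, ids, grads)
```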
