To Drop or Not to Drop: Robustness, Consistency and Differential Privacy Properties of Dropout

03/06/2015
by Prateek Jain, et al.

Training deep belief networks (DBNs) requires optimizing a non-convex function with an extremely large number of parameters. Naturally, existing gradient descent (GD) based methods are prone to arbitrarily poor local minima. In this paper, we rigorously show that such local minima can be avoided (up to an approximation error) by using the dropout technique, a widely used heuristic in this domain. In particular, we show that by randomly dropping a few nodes of a one-hidden-layer neural network, the training objective function, up to a certain approximation error, decreases by a multiplicative factor. On the flip side, we show that for training convex empirical risk minimizers (ERM), dropout in fact acts as a "stabilizer" or regularizer. That is, a simple dropout-based GD method for convex ERMs is stable in the face of arbitrary changes to any one of the training points. Using this assertion, we show that dropout provides fast rates for generalization error in learning (convex) generalized linear models (GLM). Moreover, using the above-mentioned stability properties of dropout, we design dropout-based differentially private algorithms for solving ERMs. The learned GLM thus preserves the privacy of each individual training point while providing accurate predictions for new test points. Finally, we empirically validate our stability assertions for dropout in the context of convex ERMs and show that, surprisingly, dropout significantly outperforms L2-regularization-based methods (in terms of prediction accuracy) on several benchmark datasets.
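
To make the "dropout as a stabilizer for convex ERMs" idea concrete, here is a minimal sketch of dropout-style gradient descent for a logistic-loss GLM: at each step, every input coordinate is independently kept with some probability and the surviving coordinates are rescaled so the gradient is unbiased in expectation. This is an illustrative reconstruction, not the paper's exact algorithm; the function name `dropout_gd` and all hyperparameter values are hypothetical.

```python
import numpy as np

def dropout_gd(X, y, keep_prob=0.8, lr=0.1, epochs=100, seed=0):
    """Hypothetical sketch: gradient descent on a logistic-loss GLM where,
    at each step, each feature is kept independently with probability
    `keep_prob` (inverted-dropout scaling keeps the gradient unbiased
    in expectation)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Bernoulli mask over features, rescaled so E[mask] = 1.
        mask = rng.binomial(1, keep_prob, size=d) / keep_prob
        Xm = X * mask                       # dropped-out design matrix
        p = 1.0 / (1.0 + np.exp(-Xm @ w))   # logistic predictions
        grad = Xm.T @ (p - y) / n           # gradient of the logistic loss
        w -= lr * grad
    return w

# Usage on synthetic data (for illustration only)
X = np.random.randn(200, 5)
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)
w_hat = dropout_gd(X, y)
```

Because the random mask is resampled at every step, changing any single training point perturbs the iterates only slightly on average, which is the intuition behind the stability, generalization, and differential-privacy claims in the abstract.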

Related research

11/30/2017 · Differentially Private Variational Dropout
Deep neural networks with their large number of parameters are highly fl...

11/30/2017 · Differentially Private Dropout
Large data collections required for the training of neural networks ofte...

01/18/2021 · On the Differentially Private Nature of Perturbed Gradient Descent
We consider the problem of empirical risk minimization given a database,...

11/01/2021 · A variance principle explains why dropout finds flatter minima
Although dropout has achieved great success in deep learning, little is ...

05/01/2018 · Internal node bagging: an explicit ensemble learning method in neural network training
We introduce a novel view to understand how dropout works as an inexplic...

05/25/2023 · Dropout Drops Double Descent
In this paper, we find and analyze that we can easily drop the double de...

12/05/2021 · On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Given a dense shallow neural network, we focus on iteratively creating, ...
