On Convergence and Generalization of Dropout Training

10/23/2020
by Poorya Mianjy, et al.

We study dropout in two-layer neural networks with rectified linear unit (ReLU) activations. Under mild overparametrization and assuming that the limiting kernel can separate the data distribution with a positive margin, we show that dropout training with logistic loss achieves ϵ-suboptimality in test error in O(1/ϵ) iterations.
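To make the setting concrete, below is a minimal NumPy sketch of dropout training for a two-layer ReLU network with logistic loss. It is an illustration only, not the paper's exact algorithm or analysis: the function names (`dropout_train`, `predict`), the choice to train only the hidden-layer weights with a fixed random output layer, the inverted-dropout rescaling by `1/p_keep`, and all hyperparameters are assumptions made for this sketch.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def dropout_train(X, y, width=512, p_keep=0.8, lr=0.1, iters=1000, seed=0):
    """Illustrative dropout training of a two-layer ReLU net with logistic loss.

    Hypothetical setup: the hidden-layer weights W are trained while the output
    layer a is fixed at random signs (a common simplification in
    overparametrized analyses). A fresh Bernoulli dropout mask, rescaled by
    1/p_keep, is sampled at every iteration.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(width, d))    # hidden weights
    a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)  # fixed output layer

    for _ in range(iters):
        mask = (rng.random(width) < p_keep) / p_keep     # inverted dropout mask
        H = relu(X @ W.T)                                # (n, width) hidden activations
        out = (H * mask) @ a                             # dropped-out network output
        # logistic loss log(1 + exp(-y * out)); gradient w.r.t. the output
        g_out = -y / (1.0 + np.exp(y * out))             # (n,)
        # backpropagate through the dropout mask and the ReLU
        g_H = np.outer(g_out, a * mask) * (X @ W.T > 0)  # (n, width)
        W -= lr * (g_H.T @ X) / n
    return W, a

def predict(X, W, a):
    # at test time dropout is switched off (activations were already rescaled)
    return np.sign(relu(X @ W.T) @ a)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])   # toy separable labels
    W, a = dropout_train(X, y)
    print("train accuracy:", (predict(X, W, a) == y).mean())
```

The inverted-dropout convention (dividing the mask by `p_keep` during training) keeps the expected pre-activation output unchanged, so no rescaling is needed at test time; whether the paper uses this or the classic rescale-at-test convention is not stated in the abstract.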
