Dropout: Explicit Forms and Capacity Control

03/06/2020
by Raman Arora, et al.

We investigate the capacity control provided by dropout in various machine learning problems. First, we study dropout for matrix completion, where it induces a data-dependent regularizer that, in expectation, equals the weighted trace-norm of the product of the factors. In deep learning, we show that the data-dependent regularizer due to dropout directly controls the Rademacher complexity of the underlying class of deep neural networks. These developments enable us to give concrete generalization error bounds for the dropout algorithm, both in matrix completion and in training deep neural networks. We evaluate our theoretical findings on real-world datasets, including MovieLens, MNIST, and Fashion-MNIST.
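
To make the matrix-completion claim concrete, the sketch below (illustrative only; the setup, variable names, and dropout rate are assumptions, not code from the paper) checks numerically that applying Bernoulli dropout to the rank-one terms of a factorization UV^T gives, in expectation, the plain squared loss plus an explicit product-of-norms penalty on the factors. With fully observed entries, as here, that penalty is unweighted; under non-uniform sampling of the observed entries it becomes the weighted trace-norm described in the abstract.

    # Hypothetical sketch: Monte Carlo check that dropout on the rank-one terms
    # of a factorization UV^T induces, in expectation, the squared loss plus the
    # penalty (p/(1-p)) * sum_i ||u_i||^2 ||v_i||^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, r, p = 20, 15, 5, 0.3           # matrix size, rank, dropout rate

    M = rng.standard_normal((n, m))       # fully observed target, for simplicity
    U = rng.standard_normal((n, r))
    V = rng.standard_normal((m, r))

    def dropout_loss(U, V, M, p, rng):
        """One stochastic evaluation: drop each rank-one term u_i v_i^T with
        probability p, rescaling survivors by 1/(1-p) to keep the mean unbiased."""
        b = rng.binomial(1, 1 - p, size=r) / (1 - p)
        return np.linalg.norm(M - (U * b) @ V.T, "fro") ** 2

    # Monte Carlo estimate of the expected dropout objective.
    mc = np.mean([dropout_loss(U, V, M, p, rng) for _ in range(100000)])

    # Explicit form: plain squared loss plus the product-of-norms regularizer.
    reg = (p / (1 - p)) * np.sum(np.sum(U**2, axis=0) * np.sum(V**2, axis=0))
    explicit = np.linalg.norm(M - U @ V.T, "fro") ** 2 + reg

    print(mc, explicit)   # the two values agree up to Monte Carlo error

The agreement follows from a one-line expectation computation: the rescaled Bernoulli masks satisfy E[b_i] = 1 and Var(b_i) = p/(1-p), so the cross terms vanish and only the per-component variances survive, each weighted by ||u_i||^2 ||v_i||^2.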

Related research

05/28/2019  On Dropout and Nuclear Norm Regularization
We give a formal and complete characterization of the explicit regulariz...

02/16/2014  Dropout Rademacher Complexity of Deep Neural Networks
Great successes of deep neural networks have been witnessed in various r...

12/10/2020  A generalised log-determinant regularizer for online semi-definite programming and its applications
We consider a variant of online semi-definite programming problem (OSDP)...

09/26/2016  Dropout with Expectation-linear Regularization
Dropout, a simple and effective way to train deep neural networks, has l...

01/23/2022  Weight Expansion: A New Perspective on Dropout and Generalization
While dropout is known to be a successful regularization technique, insi...

10/13/2017  Dropout as a Low-Rank Regularizer for Matrix Factorization
Regularization for matrix factorization (MF) and approximation problems ...

10/10/2017  An Analysis of Dropout for Matrix Factorization
Dropout is a simple yet effective algorithm for regularizing neural netw...
