Path-SGD: Path-Normalized Optimization in Deep Neural Networks

06/08/2015
by Behnam Neyshabur, et al.

We revisit the choice of SGD for training deep neural networks by reconsidering the appropriate geometry in which to optimize the weights. We argue for a geometry that is invariant to rescalings of the weights which leave the network's output unchanged, and suggest Path-SGD, an approximate steepest descent method with respect to a path-wise regularizer related to max-norm regularization. Path-SGD is easy and efficient to implement and leads to empirical gains over SGD and AdaGrad.
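As a rough illustration (not the authors' released implementation), here is a minimal NumPy sketch of a path-normalized update for a fully-connected ReLU network, assuming the p=2 case: each weight's gradient is rescaled by the total squared path weight through that edge, which can be computed with one forward and one backward pass over the squared weight matrices. Function names such as `path_scaling` and `path_sgd_step` are mine.

```python
import numpy as np

def path_scaling(weights):
    """Per-weight scaling factors kappa for the p=2 path regularizer.

    For the weight connecting unit j in layer l to unit i in layer l+1,
    kappa = (sum over paths input -> j of products of squared weights)
          * (sum over paths i -> output of products of squared weights).
    Both sums are accumulated over the squared weight matrices.
    """
    sq = [W ** 2 for W in weights]

    # Forward pass: total squared path weight from the inputs into each unit.
    a = [np.ones(weights[0].shape[1])]
    for S in sq:
        a.append(S @ a[-1])

    # Backward pass: total squared path weight from each unit to the outputs.
    b = [np.ones(weights[-1].shape[0])]
    for S in reversed(sq):
        b.insert(0, S.T @ b[0])

    # kappa for W_l[i, j] combines paths into unit j and paths out of unit i.
    return [np.outer(b[l + 1], a[l]) for l in range(len(weights))]

def path_sgd_step(weights, grads, lr=0.1, eps=1e-12):
    """One path-normalized step: SGD with gradients rescaled by 1/kappa."""
    kappas = path_scaling(weights)
    return [W - lr * g / (k + eps) for W, g, k in zip(weights, grads, kappas)]

# Tiny example: a 4 -> 5 -> 3 network, random gradients, one update.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((5, 4)), rng.standard_normal((3, 5))]
grads = [rng.standard_normal(W.shape) for W in weights]
weights = path_sgd_step(weights, grads)
```

Because kappa is built from squared weights along entire input-to-output paths, the rescaled step is insensitive to the node-wise rebalancing (multiplying a unit's incoming weights by c and its outgoing weights by 1/c) that leaves a ReLU network's output, and hence its path norm, unchanged.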
