Blind Descent: A Prequel to Gradient Descent

06/20/2020
by Akshat Gupta, et al.

We describe an alternative to gradient descent for backpropagation through a neural network, which we call Blind Descent. We believe Blind Descent can augment backpropagation, both as an initialisation method and when training saturates. By design, Blind Descent does not face problems such as exploding or vanishing gradients.
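The abstract does not spell out the update rule, but a gradient-free scheme of this kind can be sketched as "perturb the weights at random and keep the change only if the loss improves". The snippet below is a minimal sketch under that assumption; the Gaussian perturbation, the step size of 0.05, and the greedy acceptance test are illustrative choices, not the paper's exact procedure. Note that no gradients are ever computed, so exploding or vanishing gradients cannot arise by construction.

```python
import numpy as np

# Minimal sketch of a gradient-free "perturb and accept if the loss
# improves" loop, in the spirit of Blind Descent. The perturbation
# distribution and acceptance rule here are illustrative assumptions.

rng = np.random.default_rng(0)

# Toy regression task: recover w_true with a single linear layer.
X = rng.normal(size=(64, 8))
w_true = rng.normal(size=8)
y = X @ w_true

def loss(w):
    return np.mean((X @ w - y) ** 2)

w = rng.normal(size=8)  # random initialisation
best = loss(w)
for step in range(5000):
    # Propose a random perturbation of the current weights.
    candidate = w + 0.05 * rng.normal(size=w.shape)
    candidate_loss = loss(candidate)
    # Keep the proposal only if it lowers the loss; no gradients involved.
    if candidate_loss < best:
        w, best = candidate, candidate_loss

print(f"final loss: {best:.6f}")
```

Because each step only compares scalar loss values, the same loop could serve as an initialiser before switching to gradient descent, or as a fallback once gradient updates stall, as the abstract suggests.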
