Stochastic gradient descent with noise of machine learning type. Part II: Continuous time analysis

06/04/2021
by Stephan Wojtowytsch, et al.

The representation of functions by artificial neural networks depends on a large number of parameters in a non-linear fashion. Suitable parameters are found by minimizing a 'loss functional', typically by stochastic gradient descent (SGD) or an advanced SGD-based algorithm. In a continuous time model for SGD with noise that follows the 'machine learning scaling', we show that in a certain noise regime the optimization algorithm prefers 'flat' minima of the objective function, in a sense that differs from the flat minimum selection of continuous time SGD with homogeneous noise.
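As a minimal illustrative sketch (not the authors' code or model), one common reading of the 'machine learning scaling' is that the gradient noise covariance is proportional to the current objective value, so the noise vanishes on the set of zero-loss minimizers. The continuous time dynamics can then be sampled with an Euler-Maruyama scheme; the toy objective, step size, and noise scale below are assumptions for illustration only.

    # Sketch: continuous-time SGD with loss-proportional noise, Euler-Maruyama.
    # Assumptions (not from the paper): toy 1-d objective f with two zero-loss
    # minima of different flatness, and noise variance proportional to f(theta).
    import numpy as np

    def f(theta):
        # Zero-loss minima at theta = 0 (sharper) and theta = 2 (flatter).
        return theta**2 * (theta - 2.0)**4

    def grad_f(theta):
        return 2.0 * theta * (theta - 2.0)**4 + 4.0 * theta**2 * (theta - 2.0)**3

    def simulate(theta0, dt=1e-3, steps=200_000, sigma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        theta = theta0
        for _ in range(steps):
            # Drift: gradient flow. Diffusion: noise that shrinks as the loss
            # shrinks, mimicking minibatch noise near a zero-loss minimizer.
            noise = np.sqrt(sigma * max(f(theta), 0.0) * dt) * rng.standard_normal()
            theta = theta - grad_f(theta) * dt + noise
        return theta

    if __name__ == "__main__":
        finals = [simulate(1.0, seed=s) for s in range(10)]
        print("final iterates:", np.round(finals, 3))

Running this for several seeds gives a rough picture of which minimizer the noisy dynamics settle near under this particular noise scaling; it is meant only to make the noise model concrete, not to reproduce the paper's analysis.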
