Never look back - A modified EnKF method and its application to the training of neural networks without back propagation

05/21/2018
by   Eldad Haber, et al.

In this work, we present a new derivative-free optimization method and investigate its use for training neural networks. Our method is motivated by the Ensemble Kalman Filter (EnKF), which has been used successfully for solving optimization problems that involve large-scale, highly nonlinear dynamical systems. A key benefit of the EnKF method is that it requires only the evaluation of the forward propagation but not its derivatives. Hence, in the context of neural networks, it alleviates the need for back propagation and reduces the memory consumption dramatically. However, the method is not a pure "black-box" global optimization heuristic as it efficiently utilizes the structure of typical learning problems. Promising first results of the EnKF for training deep neural networks have been presented recently by Kovachki and Stuart. We propose an important modification of the EnKF that enables us to prove convergence of our method to the minimizer of a strongly convex function. Our method also bears similarity to implicit filtering, and we demonstrate its potential for minimizing highly oscillatory functions using a simple example. Further, we provide numerical examples that demonstrate the potential of our method for training deep neural networks.
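The core mechanism, nudging an ensemble of candidate weight vectors using only forward evaluations and empirical covariances, can be illustrated with a generic ensemble Kalman inversion step. The NumPy sketch below is not the authors' modified EnKF; the helper eki_step, the regularization parameter gamma, and the toy tanh forward map are illustrative assumptions.

import numpy as np

def eki_step(theta_ens, forward, y, gamma=1.0):
    # One generic ensemble Kalman inversion update for min_theta ||forward(theta) - y||^2,
    # using forward evaluations only (no derivatives of the forward map).
    J, m = theta_ens.shape[0], y.size
    G = np.array([forward(t) for t in theta_ens])        # forward passes for each ensemble member
    dT = theta_ens - theta_ens.mean(axis=0)              # parameter anomalies (J x d)
    dG = G - G.mean(axis=0)                              # output anomalies   (J x m)
    C_tg = dT.T @ dG / J                                 # cross-covariance   (d x m)
    C_gg = dG.T @ dG / J                                 # output covariance  (m x m)
    K = C_tg @ np.linalg.inv(C_gg + gamma * np.eye(m))   # regularized Kalman-type gain
    return theta_ens + (y - G) @ K.T                     # move the ensemble toward the data

# Toy usage: recover the weights of a one-layer tanh model without back propagation.
rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.5])
X = rng.normal(size=(20, 2))
forward = lambda t: np.tanh(X @ t)                       # stand-in for a network's forward pass
y = forward(theta_true)

ens = rng.normal(size=(50, 2))                           # 50 candidate weight vectors
for _ in range(30):
    ens = eki_step(ens, forward, y, gamma=0.1)
print(np.round(ens.mean(axis=0), 3))                     # ensemble mean should be close to theta_true

Note that each iteration costs J forward evaluations and some small covariance algebra; no gradients, and hence no storage of intermediate activations, are required.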

