Ensemble Kalman Inversion: A Derivative-Free Technique For Machine Learning Tasks

08/10/2018
by Nikola B. Kovachki, et al.

The standard probabilistic perspective on machine learning gives rise to empirical risk-minimization tasks that are frequently solved by stochastic gradient descent (SGD) and variants thereof. We present a formulation of these tasks as classical inverse or filtering problems and, furthermore, we propose an efficient, gradient-free algorithm for finding a solution to these problems using ensemble Kalman inversion (EKI). Applications of our approach include offline and online supervised learning with deep neural networks, as well as graph-based semi-supervised learning. The essence of the EKI procedure is an ensemble-based approximate gradient descent in which derivatives are replaced by differences from within the ensemble. We suggest several modifications to the basic method, derived from empirically successful heuristics developed in the context of SGD. Numerical results demonstrate wide applicability and robustness of the proposed algorithm.
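To make the "differences from within the ensemble" idea concrete, here is a minimal NumPy sketch of one step of the standard EKI update for an inverse problem y = G(u) + η with η ~ N(0, Γ); each member moves by a Kalman-type gain built entirely from ensemble statistics, so no derivative of G is ever computed. The names (`eki_step`, `forward`, `gamma`) and the toy problem are illustrative assumptions, not the paper's implementation or its specific modifications.

```python
# Sketch of one ensemble Kalman inversion (EKI) step for y = G(u) + eta,
# eta ~ N(0, Gamma). Assumed illustrative code, not the paper's.
import numpy as np

def eki_step(ensemble, forward, y, gamma, rng):
    """Apply one derivative-free EKI update to a (J, d) ensemble."""
    J = ensemble.shape[0]
    G = np.stack([forward(u) for u in ensemble])   # (J, k) forward-model outputs
    dU = ensemble - ensemble.mean(axis=0)          # parameter deviations
    dG = G - G.mean(axis=0)                        # output deviations
    C_uG = dU.T @ dG / J                           # cross-covariance, (d, k)
    C_GG = dG.T @ dG / J                           # output covariance, (k, k)
    # Perturbed observations: each member sees a noisy copy of the data.
    y_pert = y + rng.multivariate_normal(np.zeros(y.size), gamma, size=J)
    # The gain uses only differences within the ensemble -- no gradients of G.
    gain = C_uG @ np.linalg.inv(C_GG + gamma)      # (d, k)
    return ensemble + (y_pert - G) @ gain.T

# Toy usage: recover a 2-parameter u from 3 noisy nonlinear observations.
rng = np.random.default_rng(0)
true_u = np.array([1.0, -2.0])
forward = lambda u: np.array([u[0] + u[1], u[0] * u[1], np.tanh(u[0])])
gamma = 0.01 * np.eye(3)
y = forward(true_u) + rng.multivariate_normal(np.zeros(3), gamma)
ensemble = rng.normal(size=(50, 2))                # J = 50 members
for _ in range(20):
    ensemble = eki_step(ensemble, forward, y, gamma, rng)
print(ensemble.mean(axis=0))                       # should move toward true_u
```

In the machine-learning setting described in the abstract, u would play the role of the network parameters and G the map from parameters to predictions on the training data, with the empirical risk recast as a data-misfit term.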


Related research

02/22/2023 · Subsampling in ensemble Kalman inversion
We consider the Ensemble Kalman Inversion which has been recently introd...

10/26/2020 · Iterative Ensemble Kalman Methods: A Unified Perspective with Some New Variants
This paper provides a unified perspective of iterative ensemble Kalman m...

05/05/2020 · Ensemble Kalman filter for neural network based one-shot inversion
We study the use of novel techniques arising in machine learning for inv...

07/15/2023 · Gradient-free training of neural ODEs for system identification and control using ensemble Kalman inversion
Ensemble Kalman inversion (EKI) is a sequential Monte Carlo method used ...

01/31/2022 · Robust supervised learning with coordinate gradient descent
This paper considers the problem of supervised learning with linear meth...

08/16/2018 · Experiential Robot Learning with Accelerated Neuroevolution
Derivative-based optimization techniques such as Stochastic Gradient Des...

01/08/2020 · SGD with Hardness Weighted Sampling for Distributionally Robust Deep Learning
Distributionally Robust Optimization (DRO) has been proposed as an alter...
