Recurrent Neural Network Training with Convex Loss and Regularization Functions by Extended Kalman Filtering

11/04/2021
by Alberto Bemporad, et al.

We investigate the use of extended Kalman filtering to train recurrent neural networks for data-driven nonlinear, possibly adaptive, model-based control design. We show that the approach can be applied to rather arbitrary convex loss functions and regularization terms on the network parameters. We show that the learning method outperforms stochastic gradient descent in a nonlinear system identification benchmark and in training a linear system with binary outputs. We also explore the use of the algorithm in data-driven nonlinear model predictive control and its relation with disturbance models for offset-free tracking.
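The core idea, treating the network parameters themselves as the state estimated by an extended Kalman filter, can be illustrated on a toy problem. The sketch below is not the paper's algorithm: it fits a two-parameter model y = tanh(w·x + b) by giving the parameters random-walk dynamics and running the standard EKF time/measurement updates; all names and noise settings are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's method): EKF for parameter
# estimation. The EKF "state" is the parameter vector [w, b] of the
# model y = tanh(w*x + b), with random-walk dynamics theta_{k+1} = theta_k.
rng = np.random.default_rng(0)

w_true, b_true = 1.5, -0.5
X = rng.uniform(-2, 2, 200)
Y = np.tanh(w_true * X + b_true) + 0.01 * rng.standard_normal(200)

theta = np.zeros(2)       # parameter estimate [w, b]
P = np.eye(2)             # parameter covariance
Q = 1e-6 * np.eye(2)      # process noise (allows slow parameter drift)
R = 1e-2                  # measurement noise variance

for x, y in zip(X, Y):
    P = P + Q                           # time update (parameters ~ constant)
    z = theta[0] * x + theta[1]
    y_hat = np.tanh(z)
    H = (1 - y_hat**2) * np.array([x, 1.0])  # Jacobian of y_hat w.r.t. theta
    S = H @ P @ H + R                   # innovation variance (scalar output)
    K = P @ H / S                       # Kalman gain
    theta = theta + K * (y - y_hat)     # measurement update
    P = P - np.outer(K, H @ P)          # covariance update: (I - K H) P

print(theta)  # estimate should approach [w_true, b_true]
```

With a squared-error (Gaussian) likelihood this recursion behaves like a recursive Gauss-Newton method; the paper's contribution is extending this viewpoint to general convex losses and regularizers and to recurrent network states.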
