A Basic Recurrent Neural Network Model

12/29/2016
by Fathi M. Salem, et al.

We present a model of a basic recurrent neural network (or bRNN) that includes a separate linear term with a slightly "stable" fixed matrix to guarantee bounded solutions and fast dynamic response. We formulate a state space viewpoint and adapt the constrained optimization Lagrange Multiplier (CLM) technique and the vector Calculus of Variations (CoV) to derive the (stochastic) gradient descent. In this process, one avoids the commonly used repeated application of the circular chain rule and identifies the error back-propagation with the co-state backward dynamic equations. We assert that this bRNN can successfully perform regression tracking of time-series. Moreover, the "vanishing and exploding" gradients are explicitly quantified and explained through the co-state dynamics and the update laws. The adapted CoV framework, in addition, can correctly and in a principled way integrate new loss functions in the network on any variable and for varied goals, e.g., for supervised learning on the outputs and unsupervised learning on the internal (hidden) states.
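The abstract does not reproduce the model equations, so the following is only a minimal NumPy sketch of the general idea as described above: the hidden state is driven by a separate linear term with a fixed, slightly "stable" (contractive) matrix in addition to a standard nonlinear recurrence. The dimensions, parameter names (A, U, V, b, C), and the 0.9 scaling are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Illustrative sketch only (assumed form, not the paper's exact equations):
# the state update adds a fixed, slightly contractive linear term A @ h to a
# learned nonlinear recurrence, so the linear part alone decays geometrically.

rng = np.random.default_rng(0)
n_h, n_x, n_y = 8, 3, 2                      # hidden, input, output sizes (arbitrary)

A = 0.9 * np.eye(n_h)                        # fixed matrix, eigenvalues inside the unit circle
U = 0.1 * rng.standard_normal((n_h, n_h))    # trainable recurrent weights
V = 0.1 * rng.standard_normal((n_h, n_x))    # trainable input weights
b = np.zeros(n_h)                            # trainable bias
C = 0.1 * rng.standard_normal((n_y, n_h))    # trainable linear readout

def step(h, x):
    """One bRNN-style update: fixed linear decay plus a learned nonlinear term."""
    return A @ h + np.tanh(U @ h + V @ x + b)

def run(xs):
    """Roll the state forward over an input sequence and emit linear readouts."""
    h = np.zeros(n_h)
    ys = []
    for x in xs:
        h = step(h, x)
        ys.append(C @ h)
    return np.array(ys)

xs = rng.standard_normal((20, n_x))          # toy input sequence
print(run(xs).shape)                         # (20, 2)
```

Because the eigenvalues of the fixed matrix A lie strictly inside the unit circle, the linear part of the recursion is contractive, which is the intuition behind the abstract's claim of bounded solutions and a well-behaved (quantifiable) gradient flow.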
