Selective Memory Recursive Least Squares: Uniformly Allocated Approximation Capabilities of RBF Neural Networks in Real-Time Learning

11/15/2022
by Yiming Fei, et al.

When performing real-time learning tasks, the radial basis function neural network (RBFNN) is expected to make full use of the training samples so that its learning accuracy and generalization capability are guaranteed. Since the approximation capability of the RBFNN is finite, training methods with forgetting mechanisms, such as forgetting factor recursive least squares (FFRLS) and stochastic gradient descent (SGD), are widely used to maintain the network's ability to learn new knowledge. With these forgetting mechanisms, however, useful knowledge can be lost simply because it was learned long ago, a problem we refer to as the passive knowledge forgetting phenomenon. To address this problem, this paper proposes a real-time training method named selective memory recursive least squares (SMRLS), in which the feature space of the RBFNN is evenly discretized into a finite number of partitions and a synthesized objective function replaces the original objective function of the ordinary recursive least squares (RLS) method. SMRLS features a memorization mechanism that synthesizes the samples within each partition in real time into representative samples uniformly distributed over the feature space, thereby overcoming passive knowledge forgetting and improving the generalization capability of the learned knowledge. Compared with the SGD and FFRLS methods, SMRLS achieves improved learning performance (learning speed, accuracy, and generalization capability), as demonstrated by the corresponding simulation results.
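To make the memorization mechanism concrete, below is a minimal illustrative sketch in Python (NumPy), not the paper's algorithm: the class name SMRLSSketch, the keep-the-latest-sample replacement policy, the ridge regularization term, and the per-step batch solve of the normal equations are all assumptions made for illustration, while the paper instead derives a recursive update law for the synthesized objective. The sketch only shows the core idea: one representative sample per uniform partition of a (here one-dimensional) feature space, so that early samples are retained until a newer sample lands in the same partition.

```python
import numpy as np

class SMRLSSketch:
    """Illustrative sketch of selective-memory least squares for an RBF network.

    The 1-D feature space [lo, hi] is evenly discretized into n_parts
    partitions; each partition stores at most one representative sample
    (here: the most recent one to fall in it). The output weights are then
    fit over this representative buffer, which stays uniformly distributed
    over the feature space regardless of how the inputs revisit regions.
    NOTE: this batch refit is a simplification; the paper uses a recursive
    update of the synthesized objective with constant per-step cost.
    """

    def __init__(self, centers, width, lo, hi, n_parts, reg=1e-6):
        self.centers = np.asarray(centers, dtype=float)  # RBF centers
        self.width = width                               # Gaussian width
        self.lo, self.hi, self.n_parts = lo, hi, n_parts
        self.reg = reg                                   # ridge term (assumption)
        self.buffer = {}                                 # partition index -> (x, y)
        self.w = np.zeros(len(self.centers))             # output weights

    def _phi(self, x):
        # Gaussian RBF feature vector for a scalar input x.
        return np.exp(-((x - self.centers) ** 2) / (2.0 * self.width ** 2))

    def _partition(self, x):
        # Index of the uniform partition containing x.
        k = int((x - self.lo) / (self.hi - self.lo) * self.n_parts)
        return min(max(k, 0), self.n_parts - 1)

    def update(self, x, y):
        # Memorization: the newest sample replaces its partition's sample,
        # so knowledge is overwritten by location, not by age.
        self.buffer[self._partition(x)] = (x, y)
        # Refit the weights on the representative samples.
        Phi = np.array([self._phi(xs) for xs, _ in self.buffer.values()])
        t = np.array([ys for _, ys in self.buffer.values()])
        A = Phi.T @ Phi + self.reg * np.eye(len(self.centers))
        self.w = np.linalg.solve(A, Phi.T @ t)

    def predict(self, x):
        return self._phi(x) @ self.w

if __name__ == "__main__":
    net = SMRLSSketch(centers=np.linspace(0.0, 1.0, 20), width=0.08,
                      lo=0.0, hi=1.0, n_parts=40)
    # Inputs sweep back and forth over [0, 1]; with a pure forgetting
    # mechanism, knowledge about regions visited early would decay.
    for t in np.linspace(0.0, 10.0, 2000):
        x = 0.5 + 0.5 * np.sin(t)
        net.update(x, np.sin(2.0 * np.pi * x))
    print(net.predict(0.25))  # should approach sin(pi/2) = 1.0
```

In the paper's formulation the weights are updated recursively as representative samples are replaced, rather than refit from scratch at every step; the batch solve above is used only to keep the sketch short and self-contained.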

Related research

08/08/2023 · Real-Time Progressive Learning: Mutually Reinforcing Learning and Control with Neural-Network-Based Selective Memory
Memory, as the basis of learning, determines the storage, update and for...

09/07/2021 · Revisiting Recursive Least Squares for Training Deep Neural Networks
Recursive least squares (RLS) algorithms were once widely used for train...

03/24/2020 · Finite-Time Analysis of Stochastic Gradient Descent under Markov Randomness
Motivated by broad applications in reinforcement learning and machine le...

11/21/2021 · Learning by Active Forgetting for Neural Networks
Remembering and forgetting mechanisms are two sides of the same coin in ...

02/08/2019 · Stimulating STDP to Exploit Locality for Lifelong Learning without Catastrophic Forgetting
Stochastic gradient descent requires that training samples be drawn from...

12/24/2020 · Mixed-Privacy Forgetting in Deep Networks
We show that the influence of a subset of the training samples can be re...

01/13/2022 · Recursive Least Squares Policy Control with Echo State Network
The echo state network (ESN) is a special type of recurrent neural netwo...
