An Empirical Study of the L2-Boost technique with Echo State Networks

01/02/2015
by Sebastián Basterrech, et al.

A particular case of Recurrent Neural Network (RNN) was introduced at the beginning of the 2000s under the name of Echo State Network (ESN). The ESN model overcomes the main limitations of RNN training while introducing no significant disadvantages, although it presents some well-identified drawbacks when its parameters are not well initialised. The performance of an ESN depends strongly on its internal parameters and on the connectivity pattern of the hidden-hidden weights. Tuning these network parameters can be hard and can affect the accuracy of the models. In this work, we investigate the performance of a specific boosting technique, called L2-Boost, with ESNs as single predictors. L2-Boost has been shown to be an effective tool for combining "weak" predictors in regression problems. In this study, we use an ensemble of randomly initialised ESNs (without controlling their parameters) as the "weak" predictors of the boosting procedure. We evaluate our approach on five well-known time-series benchmark problems and compare it with a baseline approach that averages the predictions of an ensemble of ESNs.
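The idea described above can be sketched in code: L2-Boost fits each new weak learner to the residual of the current ensemble (the negative gradient of the squared loss) and adds a shrunken version of its prediction. The following is a minimal, illustrative sketch, not the paper's implementation; the ESN class, reservoir size, shrinkage factor, and all other hyperparameters are assumptions chosen for readability.

```python
# Hypothetical sketch of L2-Boost with randomly initialised ESNs as weak
# learners. All names and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)


class ESN:
    """Minimal echo state network with a linear ridge-regression readout."""

    def __init__(self, n_in, n_res=50, spectral_radius=0.9, ridge=1e-6):
        # Input and reservoir weights are drawn at random and never tuned,
        # mirroring the role of untuned "weak" predictors in the study.
        self.W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale so the reservoir satisfies the echo state property.
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.ridge = ridge
        self.n_res = n_res

    def _states(self, X):
        # Drive the reservoir with the input sequence, collecting states.
        s = np.zeros(self.n_res)
        states = np.empty((len(X), self.n_res))
        for t, u in enumerate(X):
            s = np.tanh(self.W_in @ u + self.W @ s)
            states[t] = s
        return states

    def fit(self, X, y):
        # Ridge regression from reservoir states to the target.
        S = self._states(X)
        A = S.T @ S + self.ridge * np.eye(self.n_res)
        self.w_out = np.linalg.solve(A, S.T @ y)
        return self

    def predict(self, X):
        return self._states(X) @ self.w_out


def l2_boost(X, y, n_rounds=10, shrinkage=0.5):
    """L2-Boost: each round fits a fresh random ESN to the current residual."""
    pred = np.zeros(len(y))
    models = []
    for _ in range(n_rounds):
        residual = y - pred  # negative gradient of the squared loss
        m = ESN(X.shape[1]).fit(X, residual)
        pred += shrinkage * m.predict(X)
        models.append(m)
    return models, pred
```

The averaging baseline mentioned in the abstract would instead fit every ESN to the original target y and return the mean of their predictions; boosting differs in that later networks only model what the earlier ones missed.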


