Reservoir Computing and its Sensitivity to Symmetry in the Activation Function

09/21/2020
by   Joschka Herteux, et al.

Reservoir computing has repeatedly been shown to be extremely successful in the prediction of nonlinear time series. However, there is as yet no complete understanding of how to properly design a reservoir. We find that the simplest popular setup has a harmful symmetry, which leads to the prediction of what we call the mirror attractor, and we prove this analytically. Similar problems can arise in more general setups, and we use them to explain the success or failure of some designs. The symmetry is a direct consequence of the hyperbolic tangent activation function. We then compare four ways to break the symmetry numerically: a bias in the output, a shift in the input, a quadratic term in the readout, and a mixture of even and odd activation functions. First, we test their susceptibility to the mirror attractor. Second, we evaluate their performance on the task of predicting Lorenz data with the mean shifted to zero. Short-time prediction is measured by the forecast horizon, while the largest Lyapunov exponent and the correlation dimension are used to characterize the climate. Finally, the same analysis is repeated on a combined dataset of the Lorenz attractor and the Halvorsen attractor, which we designed to reveal potential problems with symmetry. We find that all methods except the output bias are able to fully break the symmetry, with input shift and quadratic readout performing best overall.
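The symmetry described above can be checked numerically. The sketch below (a minimal illustration, not the authors' code; reservoir size, weight scaling, and variable names are assumptions) builds a standard echo-state update r_{t+1} = tanh(A r_t + W_in u_t) and verifies that, because tanh is odd, negating both state and input negates the next state. A linear readout then maps the mirrored trajectory to the mirrored output, which is why the mirror attractor is an equally valid prediction, while appending squared reservoir states (the quadratic readout) introduces even features that break the sign symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 50, 3                                  # reservoir and input dimensions (illustrative)
A = rng.normal(size=(N, N)) / np.sqrt(N)      # reservoir coupling matrix
W_in = rng.normal(size=(N, D))                # input weights

def step(r, u):
    # standard echo-state update with tanh activation
    return np.tanh(A @ r + W_in @ u)

r = rng.normal(size=N)
u = rng.normal(size=D)

# tanh is odd, so flipping the sign of state and input flips the next state
assert np.allclose(step(-r, -u), -step(r, u))

# a purely linear readout therefore produces the mirrored output for the
# mirrored trajectory: the mirror attractor solves the same equations
W_out = rng.normal(size=(D, N))
assert np.allclose(W_out @ step(-r, -u), -(W_out @ step(r, u)))

# augmenting the readout with squared states breaks the symmetry, since
# the quadratic features are even under the sign flip
quad = lambda r: np.concatenate([r, r**2])
assert not np.allclose(quad(-step(r, u)), -quad(step(r, u)))
```

The same check explains why an input shift works: adding a constant offset to u before the tanh makes the update no longer odd in (r, u).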
