Deep Randomized Neural Networks

02/27/2020 · by Claudio Gallicchio, et al. · University of Pisa / Sapienza University of Rome

Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed, either in a stochastic or a deterministic fashion. Typical examples of such systems consist of multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Limiting the training algorithms to operate on a reduced set of weights inherently characterizes the class of Randomized Neural Networks with a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is undoubtedly a striking advantage with respect to fully trained architectures. Besides, despite the involved simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and theoretically, allowing to analyze intrinsic properties of neural architectures (e.g. before training of the hidden layers' connections). In recent years, the study of Randomized Neural Networks has been extended towards deep architectures, opening new research directions to the design of effective yet extremely efficient deep learning models in vectorial as well as in more complex data domains. This chapter surveys all the major aspects regarding the design and analysis of Randomized Neural Networks, and some of the key results with respect to their approximation capabilities. In particular, we first introduce the fundamentals of randomized neural models in the context of feed-forward networks (i.e., Random Vector Functional Link and equivalent models) and convolutional filters, before moving to the case of recurrent systems (i.e., Reservoir Computing networks). For both, we focus specifically on recent results in the domain of deep randomized systems, and (for recurrent models) their application to structured domains.




1 Introduction

The relentless pace of success in deep learning over the last few years has been nothing short of extraordinary. After the initial breakthroughs in the ImageNet competition, a popular viewpoint was that deep learning represented a significant shift away from hand-designing features towards learning them from data. However, the majority of researchers today would agree that the shift can more accurately be described as a move towards hand-designing architectural biases in the networks themselves ulyanov2018deep . This, combined with the flexibility of stochastic gradient descent and automatic differentiation, goes a long way towards explaining many of the recent advances in neural networks.

In this chapter, we consider how far we can go by relying almost exclusively on these architectural biases. In particular, we explore recent classes of deep learning models wherein the majority of connections are randomized, or more generally fixed according to some specific heuristic. In the case of shallow networks, the benefits of randomization have been explored numerous times. Among others, we can mention the original perceptron architecture rosenblatt1958perceptron , random vector functional-links pao1992functional ; pao1994learning , stochastic configuration networks wang2017stochastic ; wang2017robust ; wang2018deep , random features for kernel approximations hamid2013compact ; kar2012random ; le2013fastfood ; rahimi2007random ; rahimi2008uniform ; rahimi2009weighted , and reservoir computing jaeger2001echo ; lukovsevivcius2009reservoir . In general, these models trade off a (possibly negligible) part of their accuracy for training processes that can be orders of magnitude faster than those of fully trainable networks. In addition, randomization makes them particularly attractive from a theoretical point of view, and a vast literature exists on their approximation properties.

Differently from previous reviews scardapane2017randomness ; gallicchio2017randomized ; wang2016editorial , in this chapter we focus on recent attempts at extending these ideas to the deep case, where a (possibly very large) number of hidden layers is stacked to obtain multiple intermediate representations. Extending the accuracy/efficiency trade-off to deep architectures is not trivial, but the benefits of being able to do so are vast. As we show in this chapter, several alternatives exist for obtaining extremely fast and accurate randomized deep learning models in a variety of scenarios, especially whenever the dataset is medium or medium-to-large in size. We also comment on a number of intriguing analytical and theoretical properties arising from the study of deep randomized architectures, from their relation to kernel methods and Gaussian processes daniely2016toward , to metric learning giryes2016deep , pruning ramanujan2019s , and so on. Importantly, randomization makes it possible to blend non-differentiable components into the architecture (e.g., Heaviside step functions kawaguchi2018deep ), further extending the toolkit available to deep learning practitioners.

Because we touch on a number of different fields, we do not aim at a comprehensive survey of the literature. Rather, we highlight general ideas and concepts through a careful selection of papers and results, trying to convey the broadest possible perspective. When possible, we also highlight points that in our opinion have been under-explored in the literature, and possible open areas of research. Finally, we consider a variety of types of data, ranging from vectors to images and graph-based datasets.


The rest of the chapter is organized in two broad parts, each further subdivided into two. We start with shallow, feedforward networks in Section 2. Because our focus is on deep models, we only provide basic concepts, with references and pointers to more comprehensive expositions of shallow randomized models where necessary. Building on this, Section 3 describes a selection of topics and papers pertaining to the analysis, design, and implementation of deep randomized feedforward models. Sections 4 and 6 replicate this organization for recurrent models: we first introduce the basic reservoir computing architecture in Section 4 (with a focus on echo state networks), exploring its extension to multiple hidden layers and to structured types of data in Section 6. We conclude with several remarks in Section 8.


We use boldface notation for vectors (e.g., x) and matrices (e.g., W). Subscripts are used to denote a specific unit inside a layer, and superscripts are used to denote a specific layer. An index in brackets is used for time dependency. For example, h_i^l[t] denotes the i-th unit of the l-th layer at time t.

2 Randomization in Feed-forward Neural Networks

As we stated in the introduction, neural networks with a single hidden layer whose connections are fixed (either randomly or otherwise) have a long history in the field, dating back to some of the original works on perceptrons. Random vector functional-links (RVFLs), originally introduced and analyzed in the nineties igelnik1995stochastic ; igelnik1999ensemble ; pao1992functional ; pao1994learning represent the most comprehensive formalization of this idea, with further innovations and applications up to today alhamdoosh2014fast . In this section we provide an overview of their design and approximation capabilities, and refer to scardapane2017randomness for a more thorough overview on their history, and to wang2017stochastic ; wang2017robust ; wang2018deep for further developments in the context of these models.

2.1 Description of the model

Consider a generic function approximation task, where we denote by x the input vector, by y the output (e.g., a binary y ∈ {0, 1} for classification), and by f(x) the model we would like to train. In particular, the basic RVFL model is defined as igelnik1995stochastic :

f(x) = Σ_{i=1}^B β_i h_i(x) ,   (eq:rvfl)

where the functions h_i(·) extract generic (fixed) features, which are linearly combined through the adaptable coefficients β_i. An example are sigmoidal basis expansions with random coefficients w_i and b_i:

h_i(x) = σ(w_i^T x + b_i) .   (eq:sigmoid)
In general, we also consider a constant feature h_0(x) = 1 to add an offset to the model, and we can also include the original input features in the output layer (similar to modern residual connections in deep networks). Assuming that the parameters in eq:sigmoid are all selected beforehand (e.g., by randomization), the final model in eq:rvfl is a linear model f(x) = β^T h(x), where we stack in the two column vectors h(x) and β all feature expansions and output coefficients, respectively. As a result, all the theory of linear regression and classification can be applied almost straightforwardly scardapane2017randomness .
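As a concrete illustration, the feature expansion just described can be sketched in a few lines of numpy (all sizes and distributions below are illustrative choices, not prescriptions from the literature):

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_features(X, W, b):
    """Random sigmoidal basis expansion h(x) = sigma(W x + b).

    W and b are drawn once at initialization and never trained.  A raw
    copy of the inputs (direct link) and a constant bias feature are
    appended, as in the classical RVFL formulation.
    """
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))        # N x B random features
    return np.hstack([H, X, np.ones((X.shape[0], 1))])

d, B = 3, 50                              # input size, number of features
W = rng.normal(size=(B, d))               # fixed random input weights
b = rng.normal(size=B)                    # fixed random biases

X = rng.normal(size=(10, d))              # a small batch of inputs
H = rvfl_features(X, W, b)                # shape: (10, B + d + 1)
```

Only the linear readout applied on top of H is ever trained, which is what makes the model a linear one from the optimization standpoint.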

Approximation capabilities for this class of networks have been studied extensively igelnik1995stochastic ; pao1994learning ; gorban2015approximation ; rudi2016generalization ; rahimi2007random ; rahimi2008uniform . In general, RVFL networks retain the universal approximation properties of fully-trainable neural networks, with an approximation error that decreases in the order O(1/√B), where B is the number of random features. The practical success of the networks depends strongly on the selection of the random coefficients, with recent works exploring this subject at length wang2017stochastic .

2.2 Training the network

We now dwell briefly on the topic of training and optimizing standard RVFL networks. In fact, speed of training (while maintaining good nonlinear approximation capabilities) is one of the major advantages of randomized neural networks and, conversely, preserving this accuracy/efficiency trade-off is one of the major challenges in the design of deeper architectures.

Consider a dataset of desired input/output pairs {(x_i, y_i)}, i = 1, …, N. We initialize the input-to-hidden parameters randomly, and collect the corresponding feature expansions row-wise in a matrix H. While many variants of optimization are feasible scardapane2017randomness , by far the most common technique to train an RVFL net is to formulate the optimization problem as an ℓ2-regularized least squares:

β* = argmin_β ½ ‖Hβ − y‖² + (λ/2) ‖β‖² ,   (eq:least_squares_training)

where y is the vector of targets and λ a free (positive) hyper-parameter. The reason eq:least_squares_training is a popular approach relies on (i) its strong convexity (resulting in a single minimizer), and (ii) the linearity of its gradient. The latter is especially important, since for most medium-sized datasets the problem eq:least_squares_training can be solved immediately as:

β* = (H^T H + λI)^{−1} H^T y ,

where I is the identity matrix of appropriate shape, or alternatively (if the number of feature expansions is much larger than the number of points in the dataset) as:

β* = H^T (HH^T + λI)^{−1} y .

In general, solving the previous expressions has a cost which is cubic in the number of feature expansions or in the number of data points, depending on the specific formulation being chosen. For large-scale problems, many ad-hoc implementations fan2008liblinear and algorithmic advances yuan2012recent are available to solve the problem in a fraction of the cost of a standard stochastic gradient descent. Note how, in both formulations, the term weighted by λ acts as a numerical stabilizer on the diagonal of the matrix being inverted.
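Both closed-form solutions can be checked numerically; the following sketch (with illustrative sizes and random data) verifies that the primal and dual formulations coincide:

```python
import numpy as np

rng = np.random.default_rng(1)

N, B = 200, 40                       # data points, feature expansions
H = rng.normal(size=(N, B))          # feature matrix (rows = examples)
y = rng.normal(size=(N, 1))          # target vector
lam = 0.1                            # regularization hyper-parameter

I_B, I_N = np.eye(B), np.eye(N)

# Primal form: invert a B x B matrix, convenient when B << N.
beta_primal = np.linalg.solve(H.T @ H + lam * I_B, H.T @ y)

# Dual form: invert an N x N matrix, convenient when N << B.
beta_dual = H.T @ np.linalg.solve(H @ H.T + lam * I_N, y)
```

In practice one picks whichever form inverts the smaller matrix, which is the cubic-cost trade-off mentioned above.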

Clearly, a wide range of variants on the basic problem in eq:least_squares_training are possible, almost all of them losing the possibility of a closed-form solution. Of these, we mention two that are relevant to the following. First, when considering binary classification tasks (in which the target variable is constrained as y ∈ {0, 1}), we can reformulate the problem in a logistic regression fashion:

β* = argmin_β − 1^T [ y ⊙ log σ(Hβ) + (1 − y) ⊙ log(1 − σ(Hβ)) ] ,

where σ(·) is the sigmoid operation from eq:sigmoid and ⊙ denotes elementwise multiplication. The use of the sigmoid ensures that the output of the RVFL network can be interpreted as a probability, which can be used in later computations about the confidence of the prediction. Second, replacing the squared ℓ2 norm in eq:least_squares_training with the ℓ1 norm ‖β‖₁ results in sparse weight vectors, which can make the network more efficient bach2012optimization ; cao2014sparse .
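The ℓ1-regularized variant no longer has a closed-form solution, but a simple proximal-gradient loop (ISTA, a standard solver used here purely for illustration) recovers a sparse readout; the problem sizes and the regularization strength below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

N, B = 100, 30
H = rng.normal(size=(N, B))
beta_true = np.zeros(B)
beta_true[[2, 7, 15]] = [1.5, -2.0, 1.0]     # only 3 active features
y = H @ beta_true + 0.01 * rng.normal(size=N)

lam = 0.5                                    # l1 regularization strength
L = np.linalg.norm(H, 2) ** 2                # Lipschitz constant of the gradient
beta = np.zeros(B)
for _ in range(500):                         # ISTA iteration:
    z = beta - H.T @ (H @ beta - y) / L      # gradient step, then
    beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
```

The soft-thresholding step is what zeroes out irrelevant coefficients, yielding the sparsity mentioned above.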

2.3 Additional considerations

Clearly, this is only intended as a very brief introduction to the topic of (shallow) RVFL networks, and we refer to other reviews for a more comprehensive treatment zhang2015comprehensive ; liwang2017 ; scardapane2017randomness ; gallicchio2017randomized . There is a plethora of interesting topics that we skip or only briefly touch upon, including ensembling strategies alhamdoosh2014fast and recent works on selecting the optimal range for the pseudo-random parameters wang2017stochastic . More generally, although we focus on the RVFL terminology, this class of networks has a rich history in which similar ideas have been reintroduced multiple times under different names (see also scardapane2017randomness ), so interesting pointers can be found in the literature on random kernel features rahimi2008uniform , the no-prop training algorithm widrow2013no , and several others. All of these works play on the delicate trade-off between retaining nonlinear approximation capabilities without sacrificing efficiency or, possibly, analytic solutions.

We now turn to the topic of extending these capabilities to the ‘deep’ case. Differently from the fully-trainable case, where stacking several adaptable layers can be easily justified empirically (and does not change the nature of the optimization problem), in the randomized case this is not trivial. Firstly, it is unclear whether simply stacking several randomized layers can improve accuracy at all, or whether it may even distort the original information content of the inputs. Secondly, design strategies going beyond the simple ‘stack’ of layers must remain sufficiently simple and efficient to compete with fully-trainable deep learning solutions (i.e., they must either provide gains in accuracy or orders of magnitude of improved efficiency). In the next section, we review some significant work dealing with these two questions.

3 Deep Random-weights Neural Networks

In this section we collect and organize a series of selected works dealing with the analysis and design of deep randomized networks. This is not built as a comprehensive survey of the state-of-the-art, but rather as a set of pointers to some of the most important ideas and results coming from the recent literature.

3.1 Analyzing randomized deep networks through the lens of kernel methods

To begin with, consider a generic deep randomized network defined as the composition of a representation function g (a stack of one or more layers with random weights) and a linear model trained on top of the representations from g (also called later a readout). This is a relatively straightforward extension of the previous section, where we allow the feature matrix H to be generated by more complex architectures with random weights than a single, fully-connected layer. Irrespective of the accuracy of such a model, an analysis of its theoretical properties is interesting because it corresponds to investigating the behavior of a deep network in a small subspace around its random initialization. In fact, there is a vast literature showing insightful connections of this problem with the study of kernel machines and Gaussian Processes cho2009kernel ; rahimi2009weighted ; mairal2014convolutional ; anselmi2015deep ; daniely2016toward . A general conclusion of all these works is to show that, in the limit of infinite width, deep networks with randomized weights converge to Gaussian processes.

daniely2016toward generalizes most of the previous results to a vast class of representation functions g whose structure can be described by a directed acyclic graph, where a bounded activation function is associated to each node; this class comprises most commonly used feedforward and sequential networks. They show that the skeleton of such a function (i.e., its topological structure with no knowledge of the weights) is univocally associated to a kernel function κ. The representations generated by a single realization of the skeleton, obtained by sampling the weights from a Gaussian distribution with appropriately scaled variance, are in general able to approximate the kernel itself. As a result, with high probability one can find a linear predictor on top of these random representations that approximates all bounded functions in the hypothesis space associated to κ.

A complementary class of results, based on the novel idea of the neural tangent kernel (NTK), can be found in yang2019scaling ; arora2019exact ; these works extend the above ideas more formally to networks with weight tying (e.g., convolutional neural networks), and to neural networks with trained weights.

3.2 The relation between random weights and metric learning

Another interesting class of results was obtained by giryes2016deep and later works, which explored the effect of the randomly initialized representation function g on the metric space in which the data resides, exploiting tools from compressive sensing and dictionary learning. Roughly speaking, if one assumes that points in the input data belonging to separate classes have ‘large’ angles between them (compared to points in the same class), then it is possible to show that g performs an embedding of the data in which these larger angles are shrunk more than the angles between points in the same cluster. By increasing the separation among classes, the embedding obtained by a deep network thus makes the data easier to classify on the basis of their angles.

Differently from works described in the previous section, these results are not valid for any deep randomized network, but only for networks with random Gaussian weights and rectified linear units (ReLU) as activation functions (or similar):

ReLU(x) = max(0, x) .
The previous activation is necessary to make the network sensitive to the angles between inputs, shrinking them proportionally to their magnitude. At the same time, the analysis from giryes2016deep has several interesting practical implications. On the one hand, if the assumptions on the data are correct, it makes it possible to derive bounds relating the intrinsic dimension of the data to the corresponding required size of the training set (see (giryes2016deep, Section V)). More generally, even if the assumptions are not satisfied, this analysis provides a justification for the good performance of deep networks in practice, by assuming that learning the linear projection is equivalent to ‘choosing’ a suitable angle on which to perform the shrinking across classes, instead of using the angle of their principal axis.
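The angle-shrinking behavior of a random Gaussian layer with ReLU activations can be observed directly in a small numerical experiment (the dimensions and the 90-degree test angle are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def angle(u, v):
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

d, m = 20, 5000                     # input size, (large) layer width
x = rng.normal(size=d)
y = rng.normal(size=d)
y -= (y @ x) / (x @ x) * x          # make y orthogonal to x (90 degrees)

W = rng.normal(size=(m, d)) / np.sqrt(d)       # random Gaussian layer
fx = np.maximum(W @ x, 0.0)                    # ReLU embedding of x
fy = np.maximum(W @ y, 0.0)                    # ReLU embedding of y

theta_in = angle(x, y)              # pi/2 by construction
theta_out = angle(fx, fy)           # strictly smaller: the angle shrinks
```

With a large width the shrinkage concentrates around the value predicted by the corresponding arc-cosine kernel, so the effect is easy to observe on a single random draw.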

These results directly lead to considering this class of networks for practical learning purposes. From giryes2016deep : “In fact, for some applications it is possible to use networks with random weights at the first layers for separating the points with distinguishable angles, followed by trained weights at the deeper layers for separating the remaining points.” Extensions and variations of this core concept are considered in more depth in the next sections.

3.3 Deep randomized neural networks as priors

In a very broad sense, understanding the performance of deep randomized networks in practice is akin to understanding how much the spectacular results of deep networks in several domains are due to their architectures (i.e., their architectural biases), and how much can be attributed to the specific training algorithm for selecting the weights.

One key result in this sense was developed in the work on deep image priors ulyanov2018deep . The paper was one of the first to show that a randomly initialized convolutional neural network (CNN) contains enough structural information to act as an efficient prior in many image processing problems. The algorithm they exploit is very simple and can be summarized in a small number of steps. First, they randomly initialize a CNN f_w(z), mapping from a simple latent vector z to the space of images under consideration. Given a noisy starting image x̂ (e.g., an image with occlusions) and a loss term L that depends on the specific task, the parameters of the CNN are optimized on the single image:

w* = argmin_w L(f_w(z), x̂) .

The restored image is then given by f_{w*}(z). This procedure is able to obtain state-of-the-art results on several image restoration tasks ulyanov2018deep . In their words, “Our results go against the common narrative that explain the success of deep learning in image restoration to the ability to learn rather than hand-craft priors; instead, random networks are better hand-crafted priors, and learning builds on this basis.”

Along a similar line, pons2019randomly showed that randomly initialized CNNs perform better on several audio classification problems than some hand-crafted features, especially mel frequency cepstral coefficients (MFCCs), although they remain significantly worse than their fully trained equivalents.

3.4 Towards practical deep randomized networks: relation with pruning

Summarizing the discussion up to this point, we saw how deep randomized networks can be helpful for analyzing several interesting properties of deep networks. From a more practical viewpoint, fully randomized networks can be used in some specific scenarios, either as priors (due to their architectural biases), or as generic feature extractors. The question remains open, however, on whether we can exploit them also as generic learning models.

One of the first works to seriously explore this possibility was rosenfeld2019intriguing . The authors investigated the training of a deep network wherein a large percentage of weights is kept fixed to its original value. They showed that, for modern deep CNNs, it is possible to fix up to nine tenths of the parameters and train only the remaining tenth, obtaining a negligible drop in accuracy in several scenarios. Apart from computational savings, this finding is interesting inasmuch as it allows one to describe a good portion of the neural network only through knowledge of the specific pseudo-random number generator and its initial seed rosenfeld2019intriguing .
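A toy sketch of this training regime, with a small regression network in which a fixed mask freezes roughly 90% of the hidden weights at their random initial values (all sizes and learning rates below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

d, h, N = 10, 200, 500
X = rng.normal(size=(N, d))
y = np.sin(X @ rng.normal(size=d))         # a nonlinear scalar target

W = rng.normal(size=(h, d)) / np.sqrt(d)   # hidden weights (random init)
W_init = W.copy()
mask = rng.random(size=(h, d)) < 0.1       # ~10% of entries are trainable
v = np.zeros(h)                            # linear readout (trainable)

def mse():
    return np.mean((np.tanh(X @ W.T) @ v - y) ** 2)

loss_before = mse()
lr = 0.05
for _ in range(500):                       # plain full-batch gradient descent
    A = np.tanh(X @ W.T)                   # N x h hidden activations
    r = A @ v - y                          # residuals
    v -= lr * A.T @ r / N
    grad_W = ((r[:, None] * v[None, :]) * (1.0 - A ** 2)).T @ X / N
    W -= lr * (mask * grad_W)              # frozen entries never move
loss_after = mse()
```

Only the mask and the seed are needed to reconstruct the frozen part of the network, which is the compression argument made in rosenfeld2019intriguing .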

This line of reasoning also connects to one of the fundamental open research questions in deep learning: the pruning of architectures frankle2018lottery . In particular, even if a posteriori (after training) a large percentage of weights in a deep network is found to be redundant and easily removable, a priori (before training) it is very hard to train small, compact networks. The lottery ticket hypothesis frankle2018lottery is a recent proposal arguing that the success of most deep networks can be attributed to small subsets of weights (tickets), and that the benefit of very large networks lies in having and initializing a very large number of such tickets, increasing the possibility of finding good ones. The hypothesis has generated many follow-ups (e.g., frankle2019lottery ; morcos2019one ), although at the moment its relation with fully randomized networks remains under-explored (with some exceptions, e.g., ramanujan2019s ). In particular, when moving to more structured types of pruning, the lottery ticket hypothesis is found to compare unfavorably with training smaller architectures from scratch liu2018rethinking : “[…] for these pruning methods, what matters more may be the obtained architecture, instead of the preserved weights, despite training the large model is needed to find that target architecture.” In general, this points to the fact that more work on deep randomized networks and their initialization can be beneficial also to the fields of model selection and architecture search. We return to this point in one of the next sections. We refer also to zhang2019all for similar layer-wise analyses.

3.5 Training of deep randomized networks via stacked autoencoders

One way to combine the advantages of randomization with a partial form of training is the use of stacked autoencoders, similar to some prior work in deep learning vincent2010stacked . An autoencoder is a neural network with one or more hidden layers that is trained to map an input x (or a corrupted version thereof) to x itself, learning a suitable intermediate representation internally.

A general recipe to combine autoencoders with RVFL networks is as follows cecotti2016deep :

  1. Initialize a random feature mapping h(x) as in Section 2.1.

  2. Train the readout to map the random features back to the original input x, obtaining a set of weights W (through least-squares or a sparse version of it).

  3. Use (the transpose of) W as the first weight matrix of a separate deep randomized network.

  4. Repeat points (1)-(3) on the embedding generated at point (3).
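The four steps above can be sketched as follows (the layer widths, the sigmoid nonlinearity, and the ridge regularization are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def randomized_ae_layer(X, width, lam=1e-2):
    """One round of the recipe: (1) random projection, (2) least-squares
    reconstruction of the input from the random features, (3) reuse the
    solution (transposed) as the layer's fixed forward weights."""
    W_rand = rng.normal(size=(width, X.shape[1]))     # step 1
    H = sigmoid(X @ W_rand.T)
    # step 2: ridge-regularized least squares, H @ W ~ X
    W = np.linalg.solve(H.T @ H + lam * np.eye(width), H.T @ X)
    return sigmoid(X @ W.T)                           # step 3

X = rng.normal(size=(50, 8))            # toy data: 50 samples, 8 features
E1 = randomized_ae_layer(X, width=20)   # first embedding
E2 = randomized_ae_layer(E1, width=20)  # step 4: repeat on the embedding
```

Each layer is thus fixed after a single closed-form solve, so the whole stack is built without any gradient-based training.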

A more constructive and theoretically grounded approach to the design of deep randomized networks is described in the literature on deep stochastic configuration networks wang2018deep . Because our focus here is on RVFL networks, we refer the interested reader to wang2018deep and papers therein for this separate class of algorithms.

3.6 Semi-random neural networks

Fully-trained and randomized neural networks (what arora2019exact calls strongly-trained and weakly-trained networks, respectively) are only two extremes of a relatively large continuum of models, each with different trade-offs concerning accuracy, training speed, inference cost, and so on. As an example of a model in the middle of this range, we briefly describe here the semi-random architecture proposed in kawaguchi2018deep .

We replace the i-th feature expansion in the basic RVFL model eq:rvfl by:

h_i(x) = σ_s(r_i^T x) · (w_i^T x) ,

where r_i is randomly sampled, w_i is trainable, and the activation function σ_s is defined, for a hyper-parameter s ≥ 0, as:

σ_s(z) = step(z) z^s ,

with step(·) being the Heaviside step function. For example, for s = 0 we obtain linear semi-random features, while for s = 1 we obtain squared semi-random features. Mimicking the matrix notation of Section 2.1, the feature transformation can be written as:

H = σ_s(XR) ⊙ (XW) ,
where ⊙ is the Hadamard (element-wise) product between matrices. While apparently counter-intuitive, this model exhibits a number of remarkable theoretical properties, as analyzed by kawaguchi2018deep . Among other things, a single-hidden-layer semi-random model maintains a single minimum even though the corresponding optimization problem is non-convex, and its extension to more than a single hidden layer enjoys generalization bounds that are significantly better than those of comparable fully-trainable networks with ReLU activation functions kawaguchi2018deep .
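A minimal numpy sketch of these semi-random features (the matrix names mirror the formulas above; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def semi_random_features(X, R, W, s=0):
    """Semi-random layer: the fixed random part R gates the trainable
    part W through sigma_s(z) = step(z) * z**s (elementwise)."""
    Z = X @ R.T                            # random projections (never trained)
    gate = (Z > 0).astype(float) * Z ** s  # sigma_s applied elementwise
    return gate * (X @ W.T)                # Hadamard product with X W^T

d, B, N = 5, 12, 30
R = rng.normal(size=(B, d))                # sampled once, kept fixed
W = rng.normal(size=(B, d))                # trainable parameters
X = rng.normal(size=(N, d))

H0 = semi_random_features(X, R, W, s=0)    # linear semi-random features
H1 = semi_random_features(X, R, W, s=1)    # squared semi-random features
```

Note that the output is linear in the trainable part W, which is the structural property behind the favorable optimization landscape.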

Irrespective of its theoretical and practical capabilities, this model shows the power of smartly combining the two worlds of fully-trainable deep learning and randomized (or semi-randomized) models, a direction that we believe is heavily under-explored at the moment.

3.7 Weight-agnostic neural networks

Up to now, we considered deep networks wherein a majority of the connections are randomized. However, several of the ideas that we discussed can be extended by considering networks with fixed, albeit not randomized, weights. In fact, as we will discuss later, in the reservoir computing field this has become a fruitful research direction. In the feedforward case, we conclude here by showing a single notable result in the case of neural architecture search (NAS), the weight-agnostic neural network gaier2019weight .

NAS is the problem of finding an optimal architecture for a specific task. A single NAS run requires training a large number of models and, as such, it is one of the fields that could benefit the most from advancements in this sense (also from an environmental point of view strubell2019energy ). The idea of weight-agnostic networks is to design a network in which all weights are initialized to the same value, and whose performance is robust to that value. This makes it possible to try a huge number of architectures extremely quickly, obtaining very interesting results in some scenarios gaier2019weight : “Inspired by precocial behaviors evolved in nature, in this work, we develop neural networks with architectures that are naturally capable of performing a given task even when their weight parameters are randomly sampled” gaier2019weight . Note that ideas on randomized networks in NAS also have a long history, dating back to works on recurrent neural networks schmidhuber2001evaluating .

3.8 Final considerations

We conclude this general overview with a small set of final remarks and considerations. Globally, we saw that deep randomized networks have recently attracted a large amount of interest as tools for the analysis and search of deep networks, going to the heart of a historical dichotomy between the importance of a network’s architecture and the selection of its weights. Practically, several ideas and heuristics have been developed to make these randomized neural networks useful in real-world scenarios. All the ideas considered here have historical antecedents. Just to cite an example: “It has long been known that [randomized] convolutional nets have reasonable performance on MNIST and CIFAR-10. [Randomized] nets that are fully-connected instead of convolutional can also be thought of as ‘multi-layer random kitchen sinks’, which also have a long history” arora2019exact .

At the same time, we acknowledge that the performance of randomized networks has not been comparable to that of fully trained networks on truly complex scenarios such as ImageNet. This may be due to an imperfect understanding of their behavior, or it may be a fundamental limitation of this class of models. One possibility for overcoming this limit could be to keep the idea of fixing part of the network, while moving beyond pure randomization. An example of this is the PCANet chan2015pcanet , which we have not discussed in the main text.

While deep RVFL networks show excellent accuracy/efficiency trade-offs on small and medium problems, this trade-off has yet to be thoroughly analyzed for larger problems. Along this line, it would be interesting to evaluate deep RVFL variations on established benchmarks such as Stanford’s DAWN Deep Learning Benchmark.

Finally, part of this criticism can be attributed to the lack of an established codebase for this class of models. This is also an important line of research for the immediate future.

We now turn to the topic of deep randomized recurrent neural networks.

4 Randomization in Dynamical Recurrent Networks

Dynamical recurrent neural models, or simply Recurrent Neural Networks (RNNs) kolen2001field ; mandic2001recurrent , are a widely popular paradigm for processing data that comes in the form of time-series, where each new piece of input information is linked to the previous (and following) ones by a temporal relation. Architecturally, the major difference of RNNs with respect to the feed-forward neural processing systems analyzed so far is the presence of feedback among the hidden layer’s recurrent units. This is a crucial modification that makes it possible to elaborate each input in the context of its predecessors, i.e., it gives a memory to the operation of the system. Roughly speaking, apart from this architectural change, the basic description of the model does not change: a hidden layer (made up here of recurrent units) implements a representation function, whose outcome is tapped by a readout layer of linear units that computes the output function. The overall operation can be described as the composition of the two (as already seen in Section 3). A graphical description of this process is given in Fig. 1.

Figure 1: Schematic representation of the RNN operation on temporal data.

Going a step further into the mathematical description of the representation (hidden layer) component, we can see that its operation can be understood as that of an input-driven dynamical system. The state of such a system is given by the activation of the hidden units, i.e., h(t). The evolution of this state is ruled by a function that can be formulated in several ways. For instance, in continuous-time cases the evolution function is expressed in terms of a set of differential equations, as used, e.g., in the case of spiking neural network models gerstner2002spiking . Here, instead, we refer to the common case of discrete-time dynamical systems that evolve according to an iterated mapping of the form:

h(t) = f(W_in x(t) + Ŵ h(t−1)) ,

where W_in and Ŵ are weight matrices that parametrize the state update function, respectively modulating the impact of the current external input x(t) and that of the previous state of the system h(t−1). Typically, the activation function f comes in the form of a squashing non-linearity, as already examined in Section 2.

The readout often comes in the same linear form mentioned in Section 2, i.e., as a layer of linear units that apply a linear combination of the components of the state vector: $\mathbf{y}(t) = \mathbf{W}_{out}\,\mathbf{x}(t)$, where the elements in $\mathbf{W}_{out}$ are the parameters of the readout layer.
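To make the formulation above concrete, the state update and the linear readout can be sketched in a few lines of numpy. All names (W_in, W_hat, W_out) and dimensions are illustrative, and the readout weights are left random here, since training is not the focus of this passage:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_units, n_out = 1, 100, 1

W_in = rng.uniform(-0.1, 0.1, (n_units, n_in))      # input-to-hidden weights
W_hat = rng.uniform(-0.1, 0.1, (n_units, n_units))  # recurrent weights
W_out = rng.uniform(-1.0, 1.0, (n_out, n_units))    # readout weights (the only trained part)

x = np.zeros(n_units)                               # initial state
for u_t in rng.uniform(-1, 1, 50):
    # x(t) = tanh(W_in u(t) + W_hat x(t-1)): each input is processed in the
    # context of the previous state, which gives the system its memory.
    x = np.tanh(W_in @ np.atleast_1d(u_t) + W_hat @ x)
    y = W_out @ x                                   # linear readout y(t) = W_out x(t)
```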

Training RNN architectures implies gradient propagation across several steps: those corresponding to the length of the time-series on which the hidden layer's architecture is unrolled. It is then easy to see that training algorithms for RNNs face difficulties similar to those encountered when training deep neural networks. A major related downside is that learning is computationally intensive and requires long training times (an aspect partially mitigated by the availability of GPU-accelerated algorithms). As such, also in this context, the use of partially untrained RNN architectures immediately appears very intriguing. While already early works in the neural networks literature pointed out the possible benefits of having untrained dynamical systems as effective neural processing models (see, e.g., albers1996dynamical ), in the last decade a paradigm called Reservoir Computing emerged in the literature, becoming very popular as an efficient alternative to the common fully-trained design of RNNs.

5 Reservoir Computing Neural Networks

Reservoir Computing (RC) lukovsevivcius2009reservoir ; verstraeten2007experimental is a nowadays popular approach for the parsimonious design of RNNs. In the same spirit of the randomized neural network approaches described in Section 2, the basic idea of RC is to limit training to the readout part of the network, leaving the representation part unaltered after initialization. This means that the parameters (i.e., the weights) of the recurrent hidden layer are randomly initialized and then left untrained. This peculiar part of the architecture, responsible for implementing the representation function in Fig. 1, is in this context called the reservoir. The reservoir is typically made up of a large number of non-linear neurons, and its role is essentially to provide a high-dimensional non-linear expansion of the input history into a possibly rich feature space, where the original learning problem can be more easily approached by a simple linear readout layer. This basic RC methodology for fast RNN set-up and training was (almost) contemporarily and independently proposed in the literature under different names and perspectives, among which we mention Echo State Networks (ESNs) jaeger2004harnessing ; jaeger2001echo , usually with discrete-time dynamics, Liquid State Machines (LSMs) maass2002real , in the context of biologically-inspired spiking neural network models, and Fractal Prediction Machines (FPMs) tino2001predicting , originated from the study of contractive iterated function systems and fractals. Here we adopt a formalism and terminology close to the prominently known ESN model.

5.1 Reservoir Initialization

Training of the readout is performed in the same way described in Section 2.2, and as such we are not going to discuss it further in this part. The crucial aspect of RC networks is to guarantee a meaningful randomized initialization of the reservoir parameters, i.e., of the weight values in the matrices $\mathbf{W}_{in}$ and $\hat{\mathbf{W}}$. As we are dealing in this case with the parameters of a dynamical system, special care needs to be devoted to the stability of the resulting dynamics. Indeed, if not properly instantiated, the reservoir system could exhibit undesired behaviors, such as instability or even chaos. If this occurs, then the resulting learning model would likely respond very differently to very similar input time-series, thereby showing very poor generalization abilities. To account for this potential weakness, reservoirs are commonly initialized under stability properties that (in one way or another) ensure that the system dynamics will not fall into undesired regimes when put into operation. Perhaps the most widely known of such properties is the so-called Echo State Property (ESP) yildiz2012re ; jaeger2001echo ; manjunath2013echo . This is a global asymptotic (Lyapunov) stability condition on the input-driven reservoir, and essentially states that the state of the system will progressively forget its initial conditions and will depend solely on the driving input time-series. In formulas, denoting by $F(\mathbf{x}_0, \mathbf{u}_N)$ the final state of the reservoir starting from initial state $\mathbf{x}_0$ and being fed by the $N$-long input time-series $\mathbf{u}_N$, the ESP can be formulated as:

$$\|F(\mathbf{x}_0, \mathbf{u}_N) - F(\mathbf{z}_0, \mathbf{u}_N)\| \to 0 \;\; \text{as } N \to \infty,$$

for every choice of the initial states $\mathbf{x}_0, \mathbf{z}_0$ and of the driving input time-series $\mathbf{u}_N$.
Assuming reservoir neurons with $\tanh$ non-linearity and bounded input spaces, some baseline conditions for reservoir initialization can be derived. Specifically, a sufficient condition originates from seeing the reservoir as a contraction mapping, and requires that $\|\hat{\mathbf{W}}\|_2 < 1$. If this condition is met, then the reservoir will show contractive behavior (and hence stability) for all possible driving inputs. In this regard, it is worth recalling that the analysis of reservoirs as contraction mappings also has interesting connections to the resulting Markovian state space organization, the so-called architectural bias of RNNs tino2004markovian ; tivno2007markovian . Initializing reservoirs under a contractive constraint inherently enables reservoir systems to discriminate among different input histories in a suffix-based way gallicchio2011architectural . Interestingly, this observation explains - at least partially - the surprisingly good performance of reservoirs in many tasks (while at the same time also indicating classes of tasks that are more difficult for RC to tackle). A necessary condition for the ESP assumes an autonomous reservoir (i.e., with no input) and studies its stability around the zero state. The resulting condition is given by $\rho(\hat{\mathbf{W}}) < 1$, where $\rho(\cdot)$ denotes the spectral radius, i.e., the largest among the eigenvalues in modulus. Both conditions are easy to implement; e.g., referring to the necessary one: after random initialization, just divide the recurrent matrix by its spectral radius, and then multiply by the desired one. Although it does not ensure stability in the case of non-null input, the necessary condition on the spectral radius of $\hat{\mathbf{W}}$ is typically the one used in RC applications.
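The spectral-radius initialization recipe can be sketched as follows; the empirical check at the end illustrates the ESP, with two trajectories started from different initial states merging under the same driving input (all names and scalings here are illustrative choices, not prescriptions from the cited works):

```python
import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_reservoir = 1, 100

# Random input and recurrent weights.
W_in = rng.uniform(-0.1, 0.1, (n_reservoir, n_inputs))
W_hat = rng.uniform(-1.0, 1.0, (n_reservoir, n_reservoir))

# Rescale the recurrent matrix to a desired spectral radius (necessary condition: rho < 1):
# divide by the current spectral radius, then multiply by the desired one.
rho_desired = 0.9
W_hat *= rho_desired / max(abs(np.linalg.eigvals(W_hat)))

def run_reservoir(u, x0):
    """Iterate x(t) = tanh(W_in u(t) + W_hat x(t-1)) over the input series u."""
    x = x0
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W_hat @ x)
    return x

# Empirical illustration of the ESP: two different initial states, driven by the
# same input series, end up in (nearly) the same final state.
u = rng.uniform(-1, 1, 200)
x_a = run_reservoir(u, np.zeros(n_reservoir))
x_b = run_reservoir(u, rng.uniform(-1, 1, n_reservoir))
print(np.linalg.norm(x_a - x_b))  # negligible: the reservoir has forgotten its initial state
```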

5.2 Reservoir Richness

Another possible issue with untrained dynamics in RNNs is the potential weakness of the developed temporal representations. Indeed, after contractive initialization, the correlation between the recurrent units' activations could be very high, thereby hampering the richness of the state dynamics. A simple rule of thumb here would prescribe setting the reservoir weights close to the limit of stability, e.g., by setting $\rho(\hat{\mathbf{W}})$ to a value very close to $1$.

Just controlling the value of the spectral radius, however, may not be informative enough about the quality of the developed reservoir dynamics verstraeten2009quantification ; ozturk2007analysis . Therefore, several attempts have been made in the literature to identify quality measures for reservoirs. Notable examples are given by assessing (and trying to maximize) information-theoretic quantities, such as the information storage lizier2012local , the transfer entropy schreiber2000measuring , the average state entropy of the reservoir over time ozturk2007analysis , and the entropy of the individual reservoir neurons' distributions. For instance, maximizing the latter quantity led to the well-known intrinsic plasticity (IP) triesch2005synergies ; schrauwen2008improving unsupervised adaptation algorithms for training reservoirs.

From a perspective closer to the theory of dynamical systems, several works in the literature (see, e.g., legenstein2007makes ; bertschinger2004real ) indicated that input-driven reservoirs operating in a regime close to the boundary between stability and instability show higher-quality dynamics. Such a region is commonly called the edge of stability, edge of criticality, or also - with a slight abuse of terminology - edge of chaos. Relevantly, reservoirs close to such critical behavior tend to show longer short-term memory legenstein2007edge ; boedecker2012information and improved predictive quality on certain tasks livi2017determination ; schrauwen2009computational ; verstraeten2007experimental . While on the one hand it could be questionable to assert that reservoirs should operate close to criticality for every learning task, on the other hand this seems a reasonable initialization condition to consider when nothing is known about the properties of the input-target relation for the task at hand. Furthermore, being able to identify the criticality would be useful to know the actual limits of stable reservoir initialization. While the identification of critical reservoir behavior is still an open research topic in the RC community manjunath2020memory , some (more or less) practical approaches have been introduced in the literature, e.g. relying on the spectrum of local Lyapunov exponents verstraeten2009quantification , recurrence plots bianchi2016investigating , Fisher information livi2017determination and visibility graphs bianchi2017multiplex . Some works have also highlighted the relation between criticality and information-theoretic measures of the reservoir boedecker2012information ; torda2018evaluation .
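As an illustration of one of the practical approaches mentioned above, the following sketch estimates the largest Lyapunov exponent of an input-driven tanh reservoir by averaging the log growth rate of a perturbation evolved through the linearized dynamics. Negative values indicate stability, and values approaching zero suggest operation near the edge of criticality (the whole setup is an illustrative assumption, not an implementation from the cited works):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
W_in = rng.uniform(-0.1, 0.1, (n, 1))
W_hat = rng.uniform(-1, 1, (n, n))
W_hat *= 0.95 / max(abs(np.linalg.eigvals(W_hat)))  # close to the stability limit

u = rng.uniform(-1, 1, 1000)
x = np.zeros(n)
delta = rng.normal(size=n)
delta /= np.linalg.norm(delta)   # unit-norm perturbation of the state
lyap = 0.0
for u_t in u:
    x = np.tanh(W_in[:, 0] * u_t + W_hat @ x)
    # Jacobian of the state map at the current state: diag(1 - x^2) W_hat,
    # since tanh'(a) = 1 - tanh(a)^2.
    J = (1 - x**2)[:, None] * W_hat
    delta = J @ delta
    growth = np.linalg.norm(delta)
    lyap += np.log(growth)
    delta /= growth              # renormalize to avoid under/overflow
lyap /= len(u)
print(lyap)                      # negative for a stable reservoir
```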

Another stream of RC research focuses on the idea of enforcing architectural richness in reservoir systems. Typically, reservoir units are connected following a sparse pattern of connectivity jaeger2001echo where, for instance, each unit is coupled only to a small constant number of others. Besides the original idea that such sparseness would diversify the reservoir units' activations (see, e.g., gallicchio2011architectural for a counterexample), the real advantage is actually the sparsification of the involved reservoir matrices, which can considerably cut down the computational cost of the prediction phase. However, a related common question arising in the community is the following: is it possible to get a reservoir organization that is better than just random? Several literature works seem to give a positive answer to this question, pointing out approaches for effective reservoir set-up. Prominent examples here are given by the initialization of recurrent connections based on a ring topology rodan2011minimum ; strauss2012design , i.e., where all the units in the reservoir are simply connected to form a cycle. This kind of organization implies a number of advantages: the recurrent matrix of the reservoir is highly sparse, the stability of the system is easily controllable, the performance in many tasks is often optimized, and the resulting memorization skills are improved (approaching the theoretical limit in the linear case) tino2019dynamical ; rodan2011minimum ; strauss2012design . Other common instances of constrained reservoir topologies include multi-ring reservoirs (where the recurrent neurons are connected to form more than one cycle), and chain reservoirs (where each recurrent neuron is connected only to the next one). On the one hand, these peculiar reservoir organizations can be studied from the perspective of architectural simplification strauss2012design ; boedecker2009studies ; on the other hand, they can be related to the interesting concept of orthogonality in dynamical neural systems hajnal2006critical ; white2004short ; farkavs2016computational . E.g., ring and multi-ring reservoirs can be seen as a very simple approach to obtain orthogonal recurrent weight matrices.
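A ring reservoir is particularly simple to construct. The sketch below (with illustrative names) builds one and verifies that its spectral radius coincides with the shared weight value, so stability can be controlled directly, without tuning via eigenvalue computations:

```python
import numpy as np

def ring_reservoir(n, weight=0.9):
    """Recurrent matrix of a ring (cycle) reservoir: unit i feeds unit i+1
    (mod n), and all recurrent weights share the same value. The spectral
    radius equals |weight|."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = weight
    return W

W = ring_reservoir(5, weight=0.9)
print(max(abs(np.linalg.eigvals(W))))  # 0.9 (up to floating-point error)
```

Note that setting the weight to 1 turns the matrix into a permutation matrix, i.e., an orthogonal matrix, which connects ring topologies to the orthogonality perspective mentioned above.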

Another way of achieving improved quality reservoirs is to introduce depth in their architectural construction, as described in the following.

6 Deep Reservoir Computing

The basic idea behind the advancements on deep RNN architectures is to develop richer temporal representations that are able to exploit compositionality in time, capturing the multiple levels of temporal abstraction, i.e., the multiple time-scales, present in the data. This has led to great success in a number of human-level applications, e.g. in the fields of speech, music and language processing hermans2013training ; graves2013speech ; el1996hierarchical ; pascanu2013construct . Trying to extend the randomized RC approaches described in Section 5 towards deep architectures is thereby intriguing from multiple viewpoints. First of all, it would enable us to analyze the bias of deep recurrent neural systems (i.e., their capabilities before training of the recurrent connections). Moreover, it would make it possible to design efficient deep neural network methodologies for learning in time-series domains.

The concept of depth in RNN design is sometimes considered questionable. Here we take a perspective similar to that of the authors of pascanu2013construct and observe that, even if the recurrent layer's architecture becomes multi-layered when unrolled in time, all the transitions i) from the input to the recurrent layer, ii) from the recurrent layer to the output, and iii) from the previous state to the current state are indeed shallow. Depth can then be introduced in any of these transitions. Interestingly, some works in the RC literature have attempted to bridge this gap. In particular, the authors of sun2017deep proposed a hybrid architecture where an ESN module is stacked on top of a Deep Belief Network, which introduces depth into the input-to-reservoir transition. On the other hand, the authors of bianchi2018bidirectional proposed an RC model where a bi-directional reservoir system is tapped by a deep readout network, hence introducing depth into the reservoir-to-readout transition. In the rest of this chapter we focus our analysis on the case of deep reservoir-to-reservoir transitions, where multiple reservoir layers are stacked on top of each other. In particular, we keep our focus on the ESN formalism, extended to the multi-layer setting by the Deep Echo State Network (DeepESN) model.

7 Deep Echo State Networks

DeepESNs, introduced in gallicchio2017deep , are RC models whose operation can be described by the composition of a dynamical reservoir and a feed-forward readout. The crucial difference with respect to standard RC is that the dynamical part is a stacked composition of multiple reservoirs, i.e., the reservoir is deep, as illustrated in Fig. 2.

Figure 2: Deep reservoir architecture.

The external input time-series drives the dynamics of the first reservoir in the stack, whose output then excites the dynamics of the second reservoir, and so on until the end of the pipeline. Interestingly, from an architectural point of view this corresponds to a simplification (sparsification) of a single fully-connected reservoir (see gallicchio2017deep ).

From a mathematical perspective, the operation of the deep reservoir can be interpreted as that of a set of nested input-driven dynamical systems. The dynamics of the first reservoir layer are ruled by:

$$\mathbf{x}^{(1)}(t) = \tanh\big(\mathbf{W}_{in}\,\mathbf{u}(t) + \hat{\mathbf{W}}^{(1)}\,\mathbf{x}^{(1)}(t-1)\big),$$

while the evolution of the temporal representations developed in successive layers $l > 1$ is given by:

$$\mathbf{x}^{(l)}(t) = \tanh\big(\mathbf{W}^{(l)}\,\mathbf{x}^{(l-1)}(t) + \hat{\mathbf{W}}^{(l)}\,\mathbf{x}^{(l)}(t-1)\big),$$

where $\mathbf{W}^{(l)}$ denotes the weight matrix for the connections between layer $l-1$ and layer $l$, and $\hat{\mathbf{W}}^{(l)}$ is the recurrent weight matrix for layer $l$. Given such a mathematical formulation, it is possible to derive stability conditions for the ESP of deep RC models. This was achieved in gallicchio2017echo for the more general case of reservoir computing models with leaky integrator units. For the case of standard $\tanh$ neurons considered here, the sufficient condition is given by:

$$\max_{l=1,\ldots,L} \|\hat{\mathbf{W}}^{(l)}\|_2 < 1,$$

while the necessary one reads as follows:

$$\max_{l=1,\ldots,L} \rho(\hat{\mathbf{W}}^{(l)}) < 1.$$
Notice that both conditions generalize (for multi-layered settings) the respective ones for shallow reservoir systems already discussed in Section 5.

As illustrated in Fig. 3, there are two basic settings for the readout computation. In an all-layers setup, the readout is fed by the activations of all the reservoir layers. In a last-layer setup, the readout receives only the activations of the last layer in the stack. In the former case the learner is able to exploit the qualitatively different dynamics developed in the different layers of the recurrent architecture (possibly weighting them in a way suitable for the learning task at hand). In the latter case, the idea is that the stack of reservoirs has enriched the developed representations of the driving input in such a way that the readout operation can now be more effective. Again, training is limited to the connections pointing to the readout, and is performed as discussed in Section 2.2.
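The stacked dynamics and the two readout settings can be sketched as follows (a minimal, hypothetical setup; all names, sizes and scalings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_units, rho=0.9, in_scale=0.5):
    """One reservoir layer: input (or inter-layer) weights, plus a recurrent
    matrix rescaled to the desired spectral radius."""
    W = rng.uniform(-in_scale, in_scale, (n_units, n_in))
    W_hat = rng.uniform(-1, 1, (n_units, n_units))
    W_hat *= rho / max(abs(np.linalg.eigvals(W_hat)))
    return W, W_hat

n_units, n_layers = 50, 3
layers = [init_layer(1 if l == 0 else n_units, n_units) for l in range(n_layers)]

def deep_states(u):
    """Run the stack on a scalar time-series u; return the per-layer final states."""
    states = [np.zeros(n_units) for _ in range(n_layers)]
    for u_t in u:
        drive = np.atleast_1d(u_t)            # the external input drives layer 1
        for l, (W, W_hat) in enumerate(layers):
            states[l] = np.tanh(W @ drive + W_hat @ states[l])
            drive = states[l]                 # layer l drives layer l+1
    return states

states = deep_states(rng.uniform(-1, 1, 100))
last_layer = states[-1]                       # "last-layer" readout input
all_layers = np.concatenate(states)           # "all-layers" readout input
print(last_layer.shape, all_layers.shape)     # (50,) (150,)
```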

Figure 3: Readout settings for DeepESN. Trained connections are only those pointing to the readout (in red).

Interestingly, the structure that is imposed on the organization of the recurrent units in the reservoir is reflected by a corresponding structure in the developed temporal representations. This has been analyzed recently from several points of view, delineating a pool of potential advantages of deep recurrent architectures that are independent of the training algorithms, and shedding light on the architectural bias of deep RNNs.

7.1 Enriched Deep Representations

A first inherent benefit of depth in RNNs is the possibility to develop progressively more abstract representations of the driving input. In the temporal domain this means that different layers are able to focus on different time-scales, and the network as a whole is capable of representing temporal information at multiple time-scales. A first evidence in this sense was given in gallicchio2017deep , where it was shown that the effects of input perturbations last longer in the higher layers of the deep reservoir architecture. This important observation was in line with what was reported in hermans2013training for fully trained deep RNNs, and pointed out the major role played by the layering architectural factor in the emergence of multiple time-scales.

Further evidence of multiple-view representations in untrained RNN systems was given in gallicchio2018design , where it was shown that the different layers in deep reservoirs tend to develop different frequency responses (as emerging through a fast Fourier transform of the reservoir activations). This insight was exploited to develop an automatic algorithm for setting up the depth of untrained deep RNNs. The basic idea was to analyze the behavior of each new reservoir layer in the architecture as a filter, stopping the addition of new layers when the filtering effect becomes negligible. The resulting approach, in conjunction with IP unsupervised adaptation of reservoirs, was shown to be extremely effective in speech and music processing, achieving state-of-the-art results and beating the accuracy of more complex fully-trained gated RNN architectures, while requiring only a fraction of their respective training times gallicchio2018design ; deepcomparison .

Richness of deep reservoir dynamics was also explored in the context of the stability of dynamical systems and local Lyapunov exponents. In this regard, the major achievement is reported in gallicchio2018local , where it was shown, both mathematically and experimentally, that organizing the same number of recurrent units into layers naturally (i.e., under mild conditions) has the effect of pushing the resulting system dynamics closer to the edge of criticality. From a related viewpoint, deep RC settings were found to boost the short-term memory capacity in comparison to equivalent shallow architectures gallicchio2017deep .

More recent works on deep RC highlighted even further the role of certain aspects of the network's architectural construction in the enrichment of the developed dynamics. In this concern, results in gallicchio2019richness pointed out the relevance of a proper scaling of the inter-layer connections, i.e., of the weights in the matrices $\mathbf{W}^{(l)}$, for $l > 1$, in the state update equation for the higher layers. It was found that such scaling has a profound impact on the quality of the dynamics in the higher layers of the network, with larger (resp. smaller) values leading to a higher (resp. lower) average state entropy and number of linearly-uncoupled dynamics. The importance of inter-reservoir connectivity patterns was also pointed out in the context of spiking neural networks in zajzon2018transferring .

7.2 Deep Reservoirs for Structures

In many real-world domains the information under consideration presents forms of aggregation that can be naturally represented by complex forms of data structures, such as trees or graphs. Learning in such structured domains opens entire worlds of application opportunities and at the same time it implies a large number of difficulties. The interested reader is referred to bacciu2019gentle for a gentle introduction to the research field.

Here we briefly summarize the extension of RC models for dealing with trees and graphs. Starting with tree domains, the basic idea is inspired by the original concept of Recursive Neural Networks (RecNNs) sperduti1997supervised ; frasconi1998general , and consists in applying a reservoir system to each node in the input tree, starting from the leaves and ending at the root. The overall process is again seen as a composition of a representation component followed by a readout layer. In this case, the representation component is implemented by the reservoir as a state transition system that operates on discrete tree structures. The nodes in the input tree take the role of the time-steps in the computation of conventional reservoirs, and the states of the children nodes take the role of the previous state. With these concepts in mind, the state (or neural embedding) computed for each node $n$ at layer $l$ can be expressed as:

$$\mathbf{x}^{(l)}(n) = \tanh\Big(\mathbf{W}_{in}^{(l)}\,\mathbf{u}^{(l)}(n) + \hat{\mathbf{W}}^{(l)} \sum_{i=1}^{k} \mathbf{x}^{(l)}(ch_i(n))\Big),$$

where $k$ is the (maximum) number of children of a node, and $\mathbf{x}^{(l)}(ch_i(n))$ is the state computed by layer $l$ for the $i$-th child of node $n$. Note that $\mathbf{u}^{(l)}(n)$ is the input information that drives the state update at the current layer: the (external) input label attached to node $n$ for the first layer, and the state $\mathbf{x}^{(l-1)}(n)$ for node $n$ already computed at the previous layer, for layers $l > 1$.
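A single reservoir layer operating on trees can be sketched as a bottom-up recursion; all names, the tuple-based tree encoding, and the contraction-oriented scaling factor below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_label = 20, 3

# Illustrative weights for one reservoir layer operating on trees.
W_in = rng.uniform(-0.5, 0.5, (n_units, n_label))
W_hat = rng.uniform(-1, 1, (n_units, n_units))
W_hat *= 0.5 / max(abs(np.linalg.eigvals(W_hat)))  # small radius to keep dynamics contractive

def embed(tree):
    """Bottom-up state computation; a tree is encoded as (label, [children]).
    The state of a node depends on its label and on the sum of its children's states."""
    label, children = tree
    children_sum = sum((embed(c) for c in children), np.zeros(n_units))
    return np.tanh(W_in @ np.asarray(label) + W_hat @ children_sum)

# A tiny tree: a root with two leaves, with one-hot labels.
leaf = lambda lab: (lab, [])
tree = ([1.0, 0.0, 0.0], [leaf([0.0, 1.0, 0.0]), leaf([0.0, 0.0, 1.0])])
root_state = embed(tree)  # embedding of the root, to be fed to the readout
print(root_state.shape)   # (20,)
```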

For the case of graphs, the reservoir operation is further generalized, and the embedding computed for each vertex in the input structure becomes a function of the embeddings developed for its neighbors. The state transition of a deep graph reservoir system operating on a vertex $v$ at layer $l$ can be formulated as follows:

$$\mathbf{x}^{(l)}(v) = \tanh\Big(\mathbf{W}_{in}^{(l)}\,\mathbf{u}^{(l)}(v) + \hat{\mathbf{W}}^{(l)} \sum_{v' \in \mathcal{N}(v)} \mathbf{x}^{(l)}(v')\Big),$$

where $\mathcal{N}(v)$ is the neighborhood of $v$ and, as before, $\mathbf{u}^{(l)}(v)$ is the driving input information for vertex $v$ at layer $l$.
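Since graphs may contain cycles, a natural way to realize such a state transition is to iterate a contractive update towards its fixed point; the following sketch (with illustrative names, scalings and data) vectorizes this iteration over all vertices of a small graph:

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_label = 20, 3

W_in = rng.uniform(-0.5, 0.5, (n_units, n_label))
W_hat = rng.uniform(-1, 1, (n_units, n_units))
W_hat *= 0.2 / max(abs(np.linalg.eigvals(W_hat)))  # small radius: contractive updates

def graph_embeddings(labels, adj, n_iter=30):
    """Iterate the state equation over all vertices: row v of adj @ X is the
    sum of the neighbors' states of vertex v. With a contractive update the
    iteration approaches a fixed point, which serves as the embedding."""
    X = np.zeros((len(labels), n_units))
    for _ in range(n_iter):
        X = np.tanh(labels @ W_in.T + adj @ X @ W_hat.T)
    return X

# A triangle graph with one-hot vertex labels (illustrative data).
labels = np.eye(3)
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = graph_embeddings(labels, adj)
print(X.shape)  # (3, 20): one embedding per vertex
```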

The two deep reservoir models expressed by the equations above are based on randomization, as conventional RC approaches, and are formalized respectively in gallicchio2019deep and gallicchio2020fast . The experimental assessment in these papers indicates the great potential of the randomization approach also in dealing with complex data structures, often establishing new state-of-the-art accuracy results on problems in the areas of document processing, cheminformatics and social network analysis.

8 Conclusions

In the face of the huge computational power and strong automatic differentiation capabilities exhibited by most computers and frameworks today, a focus on randomization as a quick alternative to full optimization can seem counter-productive. Yet, in this chapter we hope to have provided sufficient evidence that, despite the breakthroughs of fully-trained deep learning, randomized neural networks remain an area of research with strong promise. From a practical perspective, they can achieve significant accuracy/efficiency trade-offs on most problems, although strong performance on very large-scale problems currently remains difficult. From a theoretical perspective, they are an irreplaceable tool for the analysis of the properties and dynamics of classical neural networks. More importantly, we believe fully-trainable and fully-randomized networks stand at the two extremes of a wide range of interesting architectures, a continuum that is only today starting to be more thoroughly explored. We believe our exposition summarizes some of the most promising lines of research and provides a good entry point into this ever-growing body of literature.


  • (1) Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436.
  • (2) D. Ulyanov, A. Vedaldi, V. Lempitsky, Deep image prior, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9446–9454.

  • (3) F. Rosenblatt, The perceptron: a probabilistic model for information storage and organization in the brain., Psychological review 65 (6) (1958) 386.
  • (4) Y. Pao, Y. Takefji, Functional-link net computing: Theory, System Architecture, and Functionalities, IEEE Computer Journal 25 (5) (1992) 76–79.
  • (5) Y.-H. Pao, G.-H. Park, D. Sobajic, Learning and generalization characteristics of the random vector functional-link net, Neurocomputing 6 (2) (1994) 163–180.
  • (6) D. Wang, M. Li, Stochastic configuration networks: Fundamentals and algorithms, IEEE transactions on cybernetics 47 (10) (2017) 3466–3479.
  • (7) D. Wang, M. Li, Robust stochastic configuration networks with kernel density estimation for uncertain data regression, Information Sciences 412 (2017) 210–222.

  • (8) D. Wang, M. Li, Deep stochastic configuration networks with universal approximation property, in: 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, 2018, pp. 1–8.
  • (9) R. Hamid, Y. Xiao, A. Gittens, D. Decoste, Compact random feature maps, arXiv preprint arXiv:1312.4626.
  • (10) P. Kar, H. Karnick, Random feature maps for dot product kernels, in: 15th International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.

  • (11) Q. Le, T. Sarlós, A. Smola, Fastfood-computing hilbert space expansions in loglinear time, in: 30th International Conference on Machine Learning (ICML), 2013, pp. 244–252.

  • (12) A. Rahimi, B. Recht, Random features for large-scale kernel machines, in: Advances in Neural Information Processing Systems, 2007, pp. 1177–1184.
  • (13) A. Rahimi, B. Recht, Uniform approximation of functions with random bases, in: 46th Annual Allerton Conference on Communication, Control, and Computing, IEEE, 2008, pp. 555–561.
  • (14) A. Rahimi, B. Recht, Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning, in: Advances in Neural Information Processing Systems, 2009, pp. 1313–1320.
  • (15) H. Jaeger, The “echo state” approach to analysing and training recurrent neural networks-with an erratum note, German National Research Center for Information Technology GMD Technical Report 148.
  • (16) M. Lukoševičius, H. Jaeger, Reservoir computing approaches to recurrent neural network training, Computer Science Review 3 (3) (2009) 127–149.
  • (17) S. Scardapane, D. Wang, Randomness in neural networks: an overview, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 7 (2) (2017) e1200.
  • (18) C. Gallicchio, J. D. Martín-Guerrero, A. Micheli, E. Soria-Olivas, Randomized machine learning approaches: Recent developments and challenges., in: ESANN, 2017.
  • (19) D. Wang, Editorial: Randomized algorithms for training neural networks, Information Sciences 364–365 (2016) 126–128.
  • (20) A. Daniely, R. Frostig, Y. Singer, Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity, in: Advances In Neural Information Processing Systems, 2016, pp. 2253–2261.
  • (21) R. Giryes, G. Sapiro, A. M. Bronstein, Deep neural networks with random gaussian weights: A universal classification strategy?, IEEE Transactions on Signal Processing 64 (13) (2016) 3444–3457.
  • (22) V. Ramanujan, M. Wortsman, A. Kembhavi, A. Farhadi, M. Rastegari, What’s hidden in a randomly weighted neural network?, arXiv preprint arXiv:1911.13299.
  • (23) K. Kawaguchi, B. Xie, L. Song, Deep semi-random features for nonlinear function approximation, in: Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • (24) B. Igelnik, Y.-H. Pao, Stochastic choice of basis functions in adaptive function approximation and the functional-link net, IEEE Transactions on Neural Networks 6 (6) (1995) 1320–1329.
  • (25) B. Igelnik, Y.-H. Pao, S. LeClair, C. Shen, The ensemble approach to neural-network learning and generalization, IEEE Transactions on Neural Networks 10 (1) (1999) 19–30.
  • (26) M. Alhamdoosh, D. Wang, Fast decorrelated neural network ensembles with random weights, Information Sciences 264 (2014) 104–117.
  • (27) A. Gorban, I. Tyukin, D. Prokhorov, K. Sofeikov, Approximation with random bases: Pro et Contra, Information Sciences 364–365 (2016) 129–145.
  • (28) A. Rudi, R. Camoriano, L. Rosasco, Generalization Properties of Learning with Random Features, arXiv preprint arXiv:1602.04474.
  • (29) R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, C.-J. Lin, Liblinear: A library for large linear classification, Journal of machine learning research 9 (Aug) (2008) 1871–1874.
  • (30) G.-X. Yuan, C.-H. Ho, C.-J. Lin, Recent advances of large-scale linear classification, Proceedings of the IEEE 100 (9) (2012) 2584–2603.
  • (31) F. Bach, R. Jenatton, J. Mairal, G. Obozinski, Optimization with sparsity-inducing penalties, Foundations and Trends® in Machine Learning 4 (1) (2012) 1–106.
  • (32) F. Cao, Y. Tan, M. Cai, Sparse algorithms of Random Weight Networks and applications, Expert Systems with Applications 41 (5) (2014) 2457–2462.
  • (33) L. Zhang, P. Suganthan, A comprehensive evaluation of random vector functional link networks, Information Sciences 367–368 (2016) 1094–1105.
  • (34) M. Li, D. Wang, Insights into randomized algorithms for neural networks: Practical issues and common pitfalls, Information Sciences 382-383 (3) (2017) 170–178.
  • (35) B. Widrow, A. Greenblatt, Y. Kim, D. Park, The no-prop algorithm: A new learning algorithm for multilayer neural networks, Neural Networks 37 (2013) 182–188.
  • (36) Y. Cho, L. K. Saul, Kernel methods for deep learning, in: Advances in neural information processing systems, 2009, pp. 342–350.
  • (37) J. Mairal, P. Koniusz, Z. Harchaoui, C. Schmid, Convolutional kernel networks, in: Advances in neural information processing systems, 2014, pp. 2627–2635.
  • (38) F. Anselmi, L. Rosasco, C. Tan, T. Poggio, Deep convolutional networks are hierarchical kernel machines, arXiv preprint arXiv:1508.01084.
  • (39) G. Yang, Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation, arXiv preprint arXiv:1902.04760.
  • (40) S. Arora, S. S. Du, W. Hu, Z. Li, R. Salakhutdinov, R. Wang, On exact computation with an infinitely wide neural net, arXiv preprint arXiv:1904.11955.
  • (41) J. Pons, X. Serra, Randomly weighted cnns for (music) audio classification, in: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2019, pp. 336–340.
  • (42) A. Rosenfeld, J. K. Tsotsos, Intriguing properties of randomly weighted networks: Generalizing while learning next to nothing, in: 2019 16th Conference on Computer and Robot Vision (CRV), IEEE, 2019, pp. 9–16.
  • (43) J. Frankle, M. Carbin, The lottery ticket hypothesis: Finding sparse, trainable neural networks, arXiv preprint arXiv:1803.03635.
  • (44) J. Frankle, G. K. Dziugaite, D. M. Roy, M. Carbin, The lottery ticket hypothesis at scale, arXiv preprint arXiv:1903.01611.
  • (45) A. S. Morcos, H. Yu, M. Paganini, Y. Tian, One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers, arXiv preprint arXiv:1906.02773.
  • (46) Z. Liu, M. Sun, T. Zhou, G. Huang, T. Darrell, Rethinking the value of network pruning, arXiv preprint arXiv:1810.05270.
  • (47) C. Zhang, S. Bengio, Y. Singer, Are all layers created equal?, arXiv preprint arXiv:1902.01996.
  • (48) P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P.-A. Manzagol, Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, Journal of Machine Learning Research 11 (Dec) (2010) 3371–3408.

  • (49) H. Cecotti, Deep random vector functional link network for handwritten character recognition, in: 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, 2016, pp. 3628–3633.
  • (50) A. Gaier, D. Ha, Weight agnostic neural networks, arXiv preprint arXiv:1906.04358.
  • (51) E. Strubell, A. Ganesh, A. McCallum, Energy and policy considerations for deep learning in NLP, arXiv preprint arXiv:1906.02243.
  • (52) J. Schmidhuber, S. Hochreiter, Y. Bengio, Evaluating benchmark problems by random guessing, A Field Guide to Dynamical Recurrent Networks (2001) 231–235.
  • (53) T.-H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, Y. Ma, PCANet: A simple deep learning baseline for image classification?, IEEE Transactions on Image Processing 24 (12) (2015) 5017–5032.
  • (54) J. F. Kolen, S. C. Kremer, A field guide to dynamical recurrent networks, John Wiley & Sons, 2001.
  • (55) D. P. Mandic, J. Chambers, Recurrent neural networks for prediction: learning algorithms, architectures and stability, John Wiley & Sons, Inc., 2001.
  • (56) W. Gerstner, W. M. Kistler, Spiking neuron models: Single neurons, populations, plasticity, Cambridge University Press, 2002.
  • (57) D. Albers, J. Sprott, W. Dechert, Dynamical behavior of artificial neural networks with random weights, Intelligent Engineering Systems Through Artificial Neural Networks 6 (1996) 17–22.
  • (58) D. Verstraeten, B. Schrauwen, M. D’Haene, D. Stroobandt, An experimental unification of reservoir computing methods, Neural Networks 20 (3) (2007) 391–403.
  • (59) H. Jaeger, H. Haas, Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication, Science 304 (5667) (2004) 78–80.
  • (60) W. Maass, T. Natschläger, H. Markram, Real-time computing without stable states: A new framework for neural computation based on perturbations, Neural Computation 14 (11) (2002) 2531–2560.
  • (61) P. Tino, G. Dorffner, Predicting the future of discrete sequences from fractal representations of the past, Machine Learning 45 (2) (2001) 187–217.
  • (62) I. Yildiz, H. Jaeger, S. Kiebel, Re-visiting the echo state property, Neural Networks 35 (2012) 1–9.
  • (63) G. Manjunath, H. Jaeger, Echo state property linked to an input: Exploring a fundamental characteristic of recurrent neural networks, Neural Computation 25 (3) (2013) 671–696.
  • (64) P. Tino, M. Cernansky, L. Benuskova, Markovian architectural bias of recurrent neural networks, IEEE Transactions on Neural Networks 15 (1) (2004) 6–15.
  • (65) P. Tiňo, B. Hammer, M. Bodén, Markovian bias of neural-based architectures with feedback connections, in: Perspectives of neural-symbolic integration, Springer, 2007, pp. 95–133.
  • (66) C. Gallicchio, A. Micheli, Architectural and markovian factors of echo state networks, Neural Networks 24 (5) (2011) 440–456.
  • (67) D. Verstraeten, B. Schrauwen, On the quantification of dynamics in reservoir computing, in: International Conference on Artificial Neural Networks, Springer, 2009, pp. 985–994.
  • (68) M. Ozturk, D. Xu, J. Príncipe, Analysis and design of echo state networks, Neural Computation 19 (1) (2007) 111–138.
  • (69) J. T. Lizier, M. Prokopenko, A. Y. Zomaya, Local measures of information storage in complex distributed computation, Information Sciences 208 (2012) 39–54.
  • (70) T. Schreiber, Measuring information transfer, Physical Review Letters 85 (2) (2000) 461.
  • (71) J. Triesch, Synergies between intrinsic and synaptic plasticity in individual model neurons, in: Advances in Neural Information Processing Systems, 2005, pp. 1417–1424.
  • (72) B. Schrauwen, M. Wardermann, D. Verstraeten, J. Steil, D. Stroobandt, Improving reservoirs using intrinsic plasticity, Neurocomputing 71 (7) (2008) 1159–1171.
  • (73) R. Legenstein, W. Maass, What makes a dynamical system computationally powerful, New directions in statistical signal processing: From systems to brain (2007) 127–154.
  • (74) N. Bertschinger, T. Natschläger, Real-time computation at the edge of chaos in recurrent neural networks, Neural Computation 16 (7) (2004) 1413–1436.
  • (75) R. Legenstein, W. Maass, Edge of chaos and prediction of computational performance for neural circuit models, Neural Networks 20 (3) (2007) 323–334.
  • (76) J. Boedecker, O. Obst, J. Lizier, N. Mayer, M. Asada, Information processing in echo state networks at the edge of chaos, Theory in Biosciences 131 (3) (2012) 205–213.
  • (77) L. Livi, F. M. Bianchi, C. Alippi, Determination of the edge of criticality in echo state networks through Fisher information maximization, IEEE Transactions on Neural Networks and Learning Systems 29 (3) (2017) 706–717.
  • (78) B. Schrauwen, L. Büsing, R. A. Legenstein, On computational power and the order-chaos phase transition in reservoir computing, in: Advances in Neural Information Processing Systems, 2009, pp. 1425–1432.
  • (79) G. Manjunath, Memory-loss is fundamental for stability and distinguishes the echo state property threshold in reservoir computing & beyond, arXiv preprint arXiv:2001.00766.
  • (80) F. Bianchi, L. Livi, C. Alippi, Investigating echo state networks dynamics by means of recurrence analysis, arXiv preprint arXiv:1601.07381.
  • (81) F. M. Bianchi, L. Livi, C. Alippi, R. Jenssen, Multiplex visibility graphs to investigate recurrent neural network dynamics, Scientific Reports 7 (2017) 44037.
  • (82) M. Torda, I. Farkas, Evaluation of information-theoretic measures in echo state networks on the edge of stability, in: 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, 2018, pp. 1–6.
  • (83) A. Rodan, P. Tiňo, Minimum complexity echo state network, IEEE Transactions on Neural Networks 22 (1) (2011) 131–144.
  • (84) T. Strauss, W. Wustlich, R. Labahn, Design strategies for weight matrices of echo state networks, Neural Computation 24 (12) (2012) 3246–3276.
  • (85) P. Tino, Dynamical systems as temporal feature spaces, arXiv preprint arXiv:1907.06382.
  • (86) J. Boedecker, O. Obst, N. M. Mayer, M. Asada, Studies on reservoir initialization and dynamics shaping in echo state networks, in: ESANN, 2009.
  • (87) M. A. Hajnal, A. Lőrincz, Critical echo state networks, in: International Conference on Artificial Neural Networks, Springer, 2006, pp. 658–667.
  • (88) O. L. White, D. D. Lee, H. Sompolinsky, Short-term memory in orthogonal neural networks, Physical Review Letters 92 (14) (2004) 148102.
  • (89) I. Farkaš, R. Bosák, P. Gergel’, Computational analysis of memory capacity in echo state networks, Neural Networks 83 (2016) 109–120.
  • (90) M. Hermans, B. Schrauwen, Training and analysing deep recurrent neural networks, in: Advances in Neural Information Processing Systems, 2013, pp. 190–198.
  • (91) A. Graves, A.-r. Mohamed, G. Hinton, Speech recognition with deep recurrent neural networks, in: 2013 IEEE international conference on acoustics, speech and signal processing, IEEE, 2013, pp. 6645–6649.
  • (92) S. El Hihi, Y. Bengio, Hierarchical recurrent neural networks for long-term dependencies, in: Advances in Neural Information Processing Systems, 1996, pp. 493–499.
  • (93) R. Pascanu, C. Gulcehre, K. Cho, Y. Bengio, How to construct deep recurrent neural networks, arXiv preprint arXiv:1312.6026.
  • (94) X. Sun, T. Li, Q. Li, Y. Huang, Y. Li, Deep belief echo-state network and its application to time series prediction, Knowledge-Based Systems 130 (2017) 17–29.
  • (95) F. M. Bianchi, S. Scardapane, S. Lokse, R. Jenssen, Bidirectional deep-readout echo state networks, in: Proceedings of the 26th European Symposium on Artificial Neural Networks (ESANN), 2018, pp. 425–430.
  • (96) C. Gallicchio, A. Micheli, L. Pedrelli, Deep reservoir computing: a critical experimental analysis, Neurocomputing 268 (2017) 87–99.
  • (97) C. Gallicchio, A. Micheli, Echo state property of deep reservoir computing networks, Cognitive Computation 9 (3) (2017) 337–350.
  • (98) C. Gallicchio, A. Micheli, L. Pedrelli, Design of deep echo state networks, Neural Networks 108 (2018) 33–47.
  • (99) C. Gallicchio, A. Micheli, L. Pedrelli, Comparison between DeepESNs and gated RNNs on multivariate time-series prediction, in: ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2019, pp. 619–624.
  • (100) C. Gallicchio, A. Micheli, L. Silvestri, Local Lyapunov exponents of deep echo state networks, Neurocomputing 298 (2018) 34–45.
  • (101) C. Gallicchio, A. Micheli, Richness of deep echo state network dynamics, in: International Work-Conference on Artificial Neural Networks, Springer, 2019, pp. 480–491.
  • (102) B. Zajzon, R. Duarte, A. Morrison, Transferring state representations in hierarchical spiking neural networks, in: 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, 2018, pp. 1–9.
  • (103) D. Bacciu, F. Errica, A. Micheli, M. Podda, A gentle introduction to deep learning for graphs, arXiv preprint arXiv:1912.12693.
  • (104) A. Sperduti, A. Starita, Supervised neural networks for the classification of structures, IEEE Transactions on Neural Networks 8 (3) (1997) 714–735.
  • (105) P. Frasconi, M. Gori, A. Sperduti, A general framework for adaptive processing of data structures, IEEE Transactions on Neural Networks 9 (5) (1998) 768–786.
  • (106) C. Gallicchio, A. Micheli, Deep reservoir neural networks for trees, Information Sciences 480 (2019) 174–193.
  • (107) C. Gallicchio, A. Micheli, Fast and deep graph neural networks, Proceedings of AAAI 2020. arXiv preprint arXiv:1911.08941.