Deep Randomized Neural Networks

02/27/2020
by Claudio Gallicchio et al.

Randomized Neural Networks explore the behavior of neural systems in which the majority of connections are fixed, either stochastically or deterministically. Typical examples are multi-layered architectures where the connections to the hidden layer(s) are left untrained after initialization. Restricting the training algorithm to a reduced set of weights endows the class of Randomized Neural Networks with a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is a striking advantage over fully trained architectures. Moreover, despite these simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and in theory, making it possible to analyze intrinsic properties of neural architectures (e.g., before the hidden layers' connections are trained). In recent years, the study of Randomized Neural Networks has been extended to deep architectures, opening new research directions for the design of effective yet extremely efficient deep learning models, in vectorial as well as in more complex data domains. This chapter surveys the major aspects of the design and analysis of Randomized Neural Networks, along with key results on their approximation capabilities. In particular, we first introduce the fundamentals of randomized neural models in the context of feed-forward networks (i.e., Random Vector Functional Link and equivalent models) and convolutional filters, before moving to recurrent systems (i.e., Reservoir Computing networks). For both, we focus on recent results on deep randomized systems and, for recurrent models, their application to structured domains.
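To make the feed-forward case concrete, here is a minimal sketch (not the paper's code) of a Random Vector Functional Link-style network in NumPy: the input-to-hidden weights are drawn at random and left untrained, the random hidden features are concatenated with direct input-to-output links, and only the linear readout is fit in closed form. The toy regression task, hidden layer size, and ridge coefficient are hypothetical choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical): learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Random, untrained hidden layer: weights and biases are fixed at initialization.
n_hidden = 100
W_in = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)  # random hidden representation

# RVFL-style direct links: concatenate hidden features with the raw inputs.
D = np.hstack([H, X])

# Only the linear readout is trained, via ridge regression in closed form.
lam = 1e-3
W_out = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

y_hat = D @ W_out
print("train MSE:", np.mean((y - y_hat) ** 2))
```

The closed-form readout solve is where the efficiency claim in the abstract comes from: training reduces to a single regularized least-squares problem, with no backpropagation through the hidden layer.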

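The recurrent counterpart can be sketched in the same spirit. Below is a minimal, illustrative Echo State Network-style reservoir: a fixed random recurrent matrix, rescaled so its spectral radius stays below 1 (a common recipe related to the echo state property), drives untrained state dynamics, and once again only a linear readout is trained. The sine-prediction task, reservoir size, washout length, and regularization strength are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 100

# Fixed random input and recurrent weights; rescale the recurrent matrix
# so its spectral radius is below 1.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the untrained reservoir with an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    return np.array(states)

# Toy task (hypothetical): one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
states = run_reservoir(u[:-1])

washout = 100  # discard the initial transient of the state dynamics
S, y = states[washout:], u[1 + washout:, 0]

# As in the feed-forward case, only the linear readout is trained.
lam = 1e-6
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ y)
print("train MSE:", np.mean((S @ W_out - y) ** 2))
```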

Related research

- Feed-forward approximations to dynamic recurrent network architectures (04/21/2017)
- iNNvestigate neural networks! (08/13/2018)
- Sparsity in Reservoir Computing Neural Networks (06/04/2020)
- Improving Deep Neural Network Random Initialization Through Neuronal Rewiring (07/17/2022)
- Hcore-Init: Neural Network Initialization based on Graph Degeneracy (04/16/2020)
- Which Neural Network Architecture matches Human Behavior in Artificial Grammar Learning? (02/13/2019)
- Random Vector Functional Link Neural Network based Ensemble Deep Learning (06/30/2019)
