Understanding Recurrent Neural Networks Using Nonequilibrium Response Theory

06/22/2020
by Soon Hoe Lim, et al.

Recurrent neural networks (RNNs) are brain-inspired models widely used in machine learning for analyzing sequential data. The present work contributes to a deeper understanding of how RNNs process input signals, using response theory from nonequilibrium statistical mechanics. For a class of continuous-time stochastic RNNs (SRNNs) driven by an input signal, we derive a Volterra-type series representation for their output. This representation is interpretable and disentangles the input signal from the SRNN architecture. The kernels of the series are certain recursively defined correlation functions with respect to the unperturbed dynamics, and they completely determine the output. Exploiting connections between this representation and rough paths theory, we identify a universal feature, the response feature, which turns out to be the signature of the tensor product of the input signal and a natural support basis. In particular, we show that SRNNs can be viewed as kernel machines operating on a reproducing kernel Hilbert space associated with the response feature.
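To make the setup concrete, here is a minimal sketch, assuming a generic SRNN of the form dh_t = (-h_t + W tanh(h_t) + U u(t)) dt + sigma dB_t with linear readout y_t = C h_t. The specific drift, diffusion, and parameter names (W, U, C, sigma) are illustrative assumptions, not the paper's exact model. The code simulates such a network with an Euler-Maruyama step and, separately, computes a depth-2 path signature, the kind of iterated-integral object underlying the response feature.

import numpy as np

def simulate_srnn(u, dt=1e-2, dim_h=32, sigma=0.1, seed=0):
    # Euler-Maruyama simulation of the assumed SRNN
    #   dh = (-h + W tanh(h) + U u(t)) dt + sigma dB,   y = C h.
    rng = np.random.default_rng(seed)
    T, dim_in = u.shape
    W = rng.normal(scale=1.0 / np.sqrt(dim_h), size=(dim_h, dim_h))
    U = rng.normal(size=(dim_h, dim_in))
    C = rng.normal(scale=1.0 / np.sqrt(dim_h), size=(1, dim_h))
    h = np.zeros(dim_h)
    y = np.zeros((T, 1))
    for t in range(T):
        drift = -h + W @ np.tanh(h) + U @ u[t]
        h = h + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=dim_h)
        y[t] = C @ h  # linear readout of the hidden state
    return y

def signature_depth2(path):
    # Depth-2 path signature via left-point Riemann sums:
    # level 1 collects the total increments, level 2 the iterated
    # integrals S[i, j] ~ sum_t (sum_{s<t} dx[s, i]) * dx[t, j].
    dx = np.diff(path, axis=0)
    level1 = dx.sum(axis=0)
    before = np.cumsum(dx, axis=0) - dx  # increments strictly before step t
    level2 = before.T @ dx
    return level1, level2

# Example: drive the SRNN with a sinusoidal input path and compute the
# signature of the time-augmented input.
ts = np.arange(0.0, 10.0, 1e-2)
u = np.sin(ts)[:, None]
y = simulate_srnn(u)
s1, s2 = signature_depth2(np.column_stack([ts, u[:, 0]]))

In the picture described by the abstract, each term of the Volterra series pairs an iterated-integral feature of the input of this kind with a correlation-function kernel computed from the unperturbed dynamics.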


Related research

06/02/2021 - Framing RNN as a kernel method: A neural ODE approach
Building on the interpretation of a recurrent neural network (RNN) as a ...

12/09/2020 - Scalable Neural Tangent Kernel of Recurrent Architectures
Kernels derived from deep neural networks (DNNs) in the infinite-width p...

02/04/2019 - Can SGD Learn Recurrent Neural Networks with Provable Generalization?
Recurrent Neural Networks (RNNs) are among the most popular models in se...

08/06/2021 - Path classification by stochastic linear recurrent neural networks
We investigate the functioning of a classifying biological neural networ...

06/07/2019 - Recurrent Kernel Networks
Substring kernels are classical tools for representing biological sequen...

06/02/2021 - Transformers are Deep Infinite-Dimensional Non-Mercer Binary Kernel Machines
Despite their ubiquity in core AI fields like natural language processin...

12/23/2019 - Learning functionals via LSTM neural networks for predicting vessel dynamics in extreme sea states
Predicting motions of vessels in extreme sea states represents one of th...
