Stability of Internal States in Recurrent Neural Networks Trained on Regular Languages

06/18/2020
by Christian Oliva, et al.

We provide an empirical study of the stability of recurrent neural networks trained to recognize regular languages. When a small amount of noise is introduced into the activation function, the neurons in the recurrent layer tend to saturate in order to compensate for the variability. In this saturated regime, analysis of the network activations reveals a set of clusters that resemble discrete states in a finite state machine. We show that the transitions between these states in response to input symbols are deterministic and stable. The networks display stable behavior for arbitrarily long strings, and when random perturbations are applied to any of the states, they are able to recover and their evolution converges back to the original clusters. This observation reinforces the interpretation of the networks as finite automata, with neurons or groups of neurons encoding specific and meaningful input patterns.
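The mechanism the abstract describes (noise injected into the activation during training drives tanh units into saturation, after which the hidden states fall into a small set of FSM-like clusters) can be illustrated in a few lines. The sketch below is not the authors' implementation: the NoisyRNNCell class, the noise_std parameter, the untrained random weights, the two-symbol alphabet, and the sign-based discretization standing in for the paper's cluster analysis are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class NoisyRNNCell:
    """Elman-style cell with Gaussian noise added to the pre-activation
    (hypothetical sketch, not the paper's architecture)."""
    def __init__(self, n_in, n_hidden, noise_std=0.1):
        self.W_x = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W_h = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)
        self.noise_std = noise_std

    def step(self, h, x, train=True):
        pre = self.W_h @ h + self.W_x @ x + self.b
        if train:
            # Noise in the activation pushes units toward the saturated
            # ends of tanh, where small perturbations are absorbed.
            pre = pre + rng.normal(0.0, self.noise_std, pre.shape)
        return np.tanh(pre)

# Run many random strings over a two-symbol alphabet and collect the
# hidden states visited along the way.
cell = NoisyRNNCell(n_in=2, n_hidden=8)
states = []
for _ in range(200):
    h = np.zeros(8)
    for _ in range(int(rng.integers(5, 20))):
        x = np.eye(2)[rng.integers(2)]  # one-hot symbol from {a, b}
        h = cell.step(h, x, train=False)
        states.append(h.copy())
states = np.array(states)

# In the saturated regime, activations sit near +/-1, so taking the sign
# is a crude stand-in for the clustering analysis: each distinct sign
# pattern plays the role of one discrete automaton state.
patterns = np.unique(np.sign(states), axis=0)
print("distinct saturated activation patterns:", len(patterns))
```

In the paper's setting the weights come from training on a regular language rather than random initialization; with trained weights, the sign patterns would collapse to the handful of clusters corresponding to the states of the recognizing automaton, and transitions between them would track the machine's transition function.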


Related Research

05/17/2020  Separation of Memory and Processing in Dual Recurrent Neural Networks
We explore a neural network architecture that stacks a recurrent layer a...

11/20/2020  Low-Dimensional Manifolds Support Multiplexed Integrations in Recurrent Neural Networks
We study the learning dynamics and the representations emerging in Recur...

10/06/2022  A Step Towards Uncovering The Structure of Multistable Neural Networks
We study the structure of multistable recurrent neural networks. The act...

10/28/2017  Inducing Regular Grammars Using Recurrent Neural Networks
Grammar induction is the task of learning a grammar from a set of exampl...

02/02/2021  Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages
We analyze the computational power of discrete-time recurrent neural net...

10/31/2021  Minimum Description Length Recurrent Neural Networks
We train neural networks to optimize a Minimum Description Length score,...

11/15/2017  The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations
In order for neural networks to learn complex languages or grammars, the...
