On Scrambling Phenomena for Randomly Initialized Recurrent Networks

10/11/2022
by Vaggos Chatziafratis, et al.
University of California, Irvine
University of California, Santa Cruz
Columbia University

Recurrent Neural Networks (RNNs) frequently exhibit complicated dynamics, and their sensitivity to the initialization process often renders them notoriously hard to train. Recent works have shed light on such phenomena by analyzing when exploding or vanishing gradients may occur, either of which is detrimental to training dynamics. In this paper, we point to a formal connection between RNNs and chaotic dynamical systems and prove a qualitatively stronger phenomenon about RNNs than what exploding gradients seem to suggest. Our main result proves that under standard initialization (e.g., He, Xavier), RNNs will exhibit Li-Yorke chaos with constant probability, independent of the network's width. This explains the experimentally observed phenomenon of scrambling, under which trajectories of nearby points may appear to be arbitrarily close during some timesteps, yet will be far away in future timesteps. In stark contrast to their feedforward counterparts, we show that chaotic behavior in RNNs is preserved under small perturbations and that their expressive power remains exponential in the number of feedback iterations. Our technical arguments rely on viewing RNNs as random walks under non-linear activations, and on studying the existence of certain types of higher-order fixed points, called periodic points, that lead to phase transitions from order to chaos.
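As an informal illustration of the scrambling phenomenon described above, the following minimal sketch (not taken from the paper; the width, horizon, and perturbation size are arbitrary choices) iterates a tanh RNN with Xavier-style random recurrent weights from two nearby hidden states and records the distance between the trajectories. Under chaotic behavior the distance neither shrinks to zero nor grows monotonically; the two trajectories repeatedly come close together and drift apart again.

# Sketch: scrambling in a randomly initialized tanh RNN (assumed setup, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

width = 64                     # hidden dimension (arbitrary choice)
steps = 200                    # number of feedback iterations
# Xavier-style initialization for a square recurrent matrix: std = sqrt(1/width).
W = rng.normal(0.0, np.sqrt(1.0 / width), size=(width, width))

def step(h):
    # One recurrent update h_{t+1} = tanh(W h_t), with no external input.
    return np.tanh(W @ h)

h1 = rng.normal(size=width)
h2 = h1 + 1e-8 * rng.normal(size=width)   # nearby initial condition

distances = []
for _ in range(steps):
    h1, h2 = step(h1), step(h2)
    distances.append(np.linalg.norm(h1 - h2))

# Under scrambling, the gap between the trajectories fluctuates: it can be
# tiny at some timesteps and large at later ones, rather than converging.
print(np.round(distances[::20], 4))

Because Xavier initialization places the recurrent matrix near the edge of chaos, the separation may build up slowly for some random seeds; using a larger standard deviation such as sqrt(2.0 / width) (He-style initialization) typically makes the divergence and re-approach of the two trajectories more pronounced.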


Related research

01/19/2021
Implicit Bias of Linear RNNs
Contemporary wisdom based on empirical studies suggests that standard re...

05/31/2019
Improved memory in recurrent neural networks with sequential non-normal dynamics
Training recurrent neural networks (RNNs) is a hard problem due to degen...

10/19/2021
Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem
Given a target function f, how large must a neural network be in order t...

10/14/2021
How to train RNNs on chaotic data?
Recurrent neural networks (RNNs) are wide-spread machine learning tools ...

10/25/2022
Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Network
Overparameterization in deep learning typically refers to settings where...

12/20/2022
Empirical Analysis of Limits for Memory Distance in Recurrent Neural Networks
Common to all different kinds of recurrent neural networks (RNNs) is the...