Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks

02/27/2019
by Joshua J. Michalenko, et al.

We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language. Specifically, we train an RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language. Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an abstraction obtained by clustering small sets of MDFA states into "superstates". A qualitative analysis reveals that the abstraction often has a simple interpretation. Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure.
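To make the setup concrete, below is a minimal sketch of the experiment the abstract describes. It is not the authors' code, and it makes illustrative assumptions: the regular language is "strings over {a, b} with an even number of a's" (whose MDFA has two states), the RNN is a small PyTorch GRU trained as a binary acceptor, and the decoding function is a linear classifier from hidden states to MDFA states. (The paper's decoder targets clustered "superstates" of the MDFA; with only two states that clustering is trivial, so the linear decoder stands in for it.)

```python
# Minimal sketch, not the paper's implementation. Assumes the parity
# language (even number of 'a's), a GRU acceptor, and a linear decoder.
import random
import torch
import torch.nn as nn

VOCAB = {"a": 0, "b": 1}

def mdfa_states(s):
    """Ground-truth MDFA trajectory: state 0 = even #a's (accepting), 1 = odd."""
    q, traj = 0, []
    for ch in s:
        q = 1 - q if ch == "a" else q
        traj.append(q)
    return traj

def sample(n=12):
    """Random string plus its membership label (1.0 if the MDFA accepts)."""
    s = "".join(random.choice("ab") for _ in range(n))
    return s, float(mdfa_states(s)[-1] == 0)

class RNNAcceptor(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.emb = nn.Embedding(2, 8)
        self.rnn = nn.GRU(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))           # h: (batch, seq, hidden)
        return self.out(h[:, -1]).squeeze(-1), h

model = RNNAcceptor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# 1) Train the RNN on positive and negative examples of the language.
for step in range(1000):
    strs, labels = zip(*(sample() for _ in range(32)))
    x = torch.tensor([[VOCAB[c] for c in s] for s in strs])
    y = torch.tensor(labels)
    logits, _ = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Collect hidden states with the MDFA state active at each time step.
with torch.no_grad():
    strs = [sample()[0] for _ in range(200)]
    x = torch.tensor([[VOCAB[c] for c in s] for s in strs])
    _, h = model(x)
    H = h.reshape(-1, h.shape[-1])                            # (N, hidden)
    Q = torch.tensor([q for s in strs for q in mdfa_states(s)])  # (N,)

# 3) Fit the decoding function (here: linear) and report how often it
#    recovers the automaton's state from the RNN's hidden state.
dec = nn.Linear(H.shape[1], 2)
dopt = torch.optim.Adam(dec.parameters(), lr=1e-2)
for step in range(300):
    dloss = nn.functional.cross_entropy(dec(H), Q)
    dopt.zero_grad(); dloss.backward(); dopt.step()
with torch.no_grad():
    acc = (dec(H).argmax(-1) == Q).float().mean().item()
print("decoding accuracy:", acc)
```

On this toy language, decoding accuracy well above chance would mirror the paper's finding that RNN hidden states align with (abstractions of) MDFA states.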

Related research

01/20/2021: Synthesizing Context-free Grammars from Recurrent Neural Networks (Extended Version)
We present an algorithm for extracting a subclass of the context free gr...

10/16/2021: Learning Continuous Chaotic Attractors with a Reservoir Computer
Neural systems are well known for their ability to learn and store infor...

10/25/2018: Learning with Interpretable Structure from RNN
In structure learning, the output is generally a structure that is used ...

06/29/2023: On the Relationship Between RNN Hidden State Vectors and Semantic Ground Truth
We examine the assumption that the hidden-state vectors of recurrent neu...

06/12/2020: A Formal Language Approach to Explaining RNNs
This paper presents LEXR, a framework for explaining the decision making...

11/29/2018: Learning Finite State Representations of Recurrent Policy Networks
Recurrent neural networks (RNNs) are an effective representation of cont...

05/03/2020: Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example
The ability to store and manipulate information is a hallmark of computa...
