Inducing Regular Grammars Using Recurrent Neural Networks

10/28/2017
by Mor Cohen, et al.

Grammar induction is the task of learning a grammar from a set of examples. Recently, neural networks have been shown to be powerful learning machines that can identify patterns in streams of data. In this work we investigate their effectiveness in inducing a regular grammar from data, without making any assumptions about the grammar. We train a recurrent neural network to distinguish between strings that belong to a regular language and strings that do not, and use an algorithm to extract the learned finite-state automaton from the trained network. We apply this method to several regular languages and find unexpected results regarding the connections between the network's states, which may be regarded as evidence of generalization.
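
As a concrete illustration of this pipeline, the sketch below trains a small GRU acceptor and then extracts an automaton by clustering the visited hidden states, in the spirit of classic hidden-state quantization. Everything in it is an assumption made for illustration: the paper does not specify a framework, architecture, or extraction algorithm, and the toy target language (strings over {a, b} containing an even number of a's, whose minimal DFA has two states) stands in for the languages studied.

```python
# Sketch of the two-step procedure: (1) train an RNN to accept/reject
# strings of a regular language, (2) quantize its hidden states to read
# off a finite-state automaton. The target language, GRU architecture,
# and all hyperparameters are illustrative assumptions, not paper details.
import random

import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

VOCAB = {"a": 0, "b": 1}
HIDDEN = 16

def sample(n_strings=2000, max_len=12):
    """Random strings labeled 1.0 iff they contain an even number of 'a's."""
    data = []
    for _ in range(n_strings):
        s = "".join(random.choice("ab") for _ in range(random.randint(1, max_len)))
        data.append((s, 1.0 if s.count("a") % 2 == 0 else 0.0))
    return data

class Acceptor(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(len(VOCAB), 8)
        self.rnn = nn.GRU(8, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, 1)

    def forward(self, s):
        x = torch.tensor([[VOCAB[c] for c in s]])
        states, _ = self.rnn(self.emb(x))          # (1, len(s), HIDDEN)
        accept = torch.sigmoid(self.out(states[0, -1]))
        return accept, states[0]                   # acceptance score, per-step states

model = Acceptor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(3):                                 # tiny training budget; a sketch
    for s, y in sample():
        pred, _ = model(s)
        loss = nn.functional.binary_cross_entropy(pred, torch.tensor([y]))
        opt.zero_grad()
        loss.backward()
        opt.step()

# Extraction: cluster all visited hidden states into k discrete states, then
# record, for each (state, symbol) pair, which state the network moves to.
with torch.no_grad():
    strings = [s for s, _ in sample(300)]
    visited = torch.cat([model(s)[1] for s in strings]).numpy()
kmeans = KMeans(n_clusters=4, n_init=10).fit(visited)

# The GRU starts from the all-zero hidden state; find its cluster.
start = int(kmeans.predict(np.zeros((1, HIDDEN), dtype=visited.dtype))[0])
transitions = {}
with torch.no_grad():
    for s in strings:
        prev = start
        for sym, cur in zip(s, kmeans.predict(model(s)[1].numpy())):
            transitions[(prev, sym)] = int(cur)    # later strings overwrite conflicts;
            prev = int(cur)                        # a real extractor would vote/merge
print("start state:", start)
print("transition table:", transitions)           # the extracted automaton
```

If training succeeds, the extracted table should collapse onto the two-state parity automaton; a faithful reimplementation would substitute the paper's own extraction algorithm for the generic k-means step used here.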


