A Neural Model for Regular Grammar Induction

09/23/2022
by Peter Belcak, et al.

Grammatical inference is a classical problem in computational learning theory and a topic of wider influence in natural language processing. We treat grammars as a model of computation and propose a novel neural approach to induction of regular grammars from positive and negative examples. Our model is fully explainable, its intermediate results are directly interpretable as partial parses, and it can be used to learn arbitrary regular grammars when provided with sufficient data. We find that our method consistently attains high recall and precision scores across a range of tests of varying complexity.
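To make the setting concrete, the sketch below illustrates the task the abstract describes: a learner is given positive and negative example strings and must produce a regular grammar, which can then be scored by precision and recall on labelled examples. The DFA encoding, the toy language (ab)*, and the accepts helper are assumptions introduced here purely for illustration; they are not the paper's neural model.

```python
# Illustrative sketch of the regular-grammar-induction setting (not the paper's model).
# A candidate regular grammar is represented as a DFA; precision and recall are
# computed from its acceptance decisions on labelled example strings.

def accepts(dfa, string):
    """Return True if the DFA accepts the string, False otherwise."""
    state = dfa["start"]
    for symbol in string:
        state = dfa["delta"].get((state, symbol))
        if state is None:  # no transition defined: reject
            return False
    return state in dfa["accept"]

# Hypothetical induced grammar, here a DFA for the toy language (ab)*.
induced = {
    "start": 0,
    "accept": {0},
    "delta": {(0, "a"): 1, (1, "b"): 0},
}

# Labelled examples: positives belong to the target language, negatives do not.
positives = ["", "ab", "abab", "ababab"]
negatives = ["a", "b", "ba", "aab", "abb"]

# Score the induced grammar against the labelled examples.
tp = sum(accepts(induced, s) for s in positives)
fp = sum(accepts(induced, s) for s in negatives)
fn = len(positives) - tp
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")
```

With this toy grammar every positive example is accepted and every negative example rejected, so precision and recall are both 1.0; an induced grammar that over- or under-generalises would lower one or both scores.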

Related research

Bayesian Inference of Regular Expressions from Human-Generated Example Strings (05/22/2018)
Inducing Regular Grammars Using Recurrent Neural Networks (10/28/2017)
Matroids Hitting Sets and Unsupervised Dependency Grammar Induction (05/24/2017)
One Model for the Learning of Language (11/16/2017)
Neural Generation of Regular Expressions from Natural Language with Minimal Domain Knowledge (08/09/2016)
Neural Bi-Lexicalized PCFG Induction (05/31/2021)
Discovering Textual Structures: Generative Grammar Induction using Template Trees (09/09/2020)
