RL-GRIT: Reinforcement Learning for Grammar Inference

05/17/2021
by Walt Woods, et al.

When working to understand usage of a data format, examples of the data format are often more representative than the format's specification. For example, two different applications might use very different JSON representations, or two PDF-writing applications might make use of very different areas of the PDF specification to realize the same rendered content. The complexity arising from these distinct origins can lead to large, difficult-to-understand attack surfaces, presenting a security concern when considering both exfiltration and data schizophrenia. Grammar inference can aid in describing the practical language generator behind examples of a data format. However, most grammar inference research focuses on natural language, not data formats, and fails to support crucial features such as type recursion. We propose a novel set of mechanisms for grammar inference, RL-GRIT, and apply them to understanding de facto data formats. After reviewing existing grammar inference solutions, it was determined that a new, more flexible scaffold could be found in Reinforcement Learning (RL). Within this work, we lay out the many algorithmic changes required to adapt RL from its traditional, sequential-time environment to the highly interdependent environment of parsing. The result is an algorithm which can demonstrably learn recursive control structures in simple data formats, and can extract meaningful structure from fragments of the PDF format. Whereas prior work in grammar inference focused on either regular languages or constituency parsing, we show that RL can be used to surpass the expressiveness of both classes, and offers a clear path to learning context-sensitive languages. The proposed algorithm can serve as a building block for understanding the ecosystems of de facto data formats.
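The abstract does not spell out RL-GRIT's action space or reward, so the sketch below is only a rough, hedged illustration of the general framing it describes: treating parse-rewriting steps as actions in an RL environment rather than as steps in a fixed sequential-time process. The merge-based action space, the epsilon-greedy selection, and the length-reduction reward are assumptions made for illustration, not the authors' algorithm.

```python
# Illustrative sketch only: parse-rewriting steps as RL actions.
# Assumptions (not from the paper): actions merge adjacent token pairs into
# fresh nonterminals, and the per-step reward is the reduction in sequence
# length, a crude stand-in for "grammar compactness".
import random
from collections import Counter


def candidate_pairs(tokens):
    """Adjacent token pairs that could be merged into a new nonterminal."""
    return Counter(zip(tokens, tokens[1:]))


def merge(tokens, pair, symbol):
    """Replace every occurrence of an adjacent pair with a fresh symbol."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(symbol)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out


def episode(example, steps=5, epsilon=0.3, rng=random):
    """One RL-style episode: pick merge actions with epsilon-greedy exploration.

    A real system would learn a policy over states; here the "policy" just
    prefers the most frequent adjacent pair, with occasional random actions.
    """
    tokens = list(example)
    rules, total_reward = {}, 0
    for step in range(steps):
        pairs = candidate_pairs(tokens)
        if not pairs:
            break
        if rng.random() < epsilon:
            pair = rng.choice(list(pairs))
        else:
            pair = pairs.most_common(1)[0][0]
        symbol = f"N{step}"
        new_tokens = merge(tokens, pair, symbol)
        total_reward += len(tokens) - len(new_tokens)
        rules[symbol] = pair
        tokens = new_tokens
    return rules, tokens, total_reward


if __name__ == "__main__":
    rules, compressed, reward = episode("[[a],[a,[a]]]")
    print("learned rules:", rules)
    print("compressed form:", compressed)
    print("total reward:", reward)
```

Running the example on a small bracketed string shows repeated substructure being folded into reusable symbols, which is the kind of recursive structure the paper reports learning in simple data formats; the actual method's state, action, and reward design differ from this toy.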
