Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis

05/11/2018
by Rudy Bunel, et al.

Program synthesis is the task of automatically generating a program consistent with a specification. Recent years have seen the proposal of a number of neural approaches for program synthesis, many of which adopt a sequence generation paradigm similar to neural machine translation, in which sequence-to-sequence models are trained to maximize the likelihood of known reference programs. While achieving impressive results, this strategy has two key limitations. First, it ignores Program Aliasing: the fact that many different programs may satisfy a given specification (especially with incomplete specifications such as a few input-output examples). By maximizing the likelihood of only a single reference program, it penalizes many semantically correct programs, which can adversely affect the synthesizer's performance. Second, this strategy overlooks the fact that programs have a strict syntax that can be efficiently checked. To address the first limitation, we perform reinforcement learning on top of a supervised model with an objective that explicitly maximizes the likelihood of generating semantically correct programs. To address the second limitation, we introduce a training procedure that directly maximizes the probability of generating syntactically correct programs that fulfill the specification. We show that our contributions lead to improved accuracy of the models, especially in cases where the training data is limited.
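The two ideas in the abstract — rewarding semantic correctness rather than a single reference program, and masking syntactically invalid tokens during decoding — can be illustrated with a minimal sketch. The code below is not the paper's Karel model; it is a toy REINFORCE loop over a hypothetical three-token DSL (`inc`, `dec`, `end`) invented for illustration, where the reward is 1 whenever a sampled program satisfies all input-output examples. Note that several aliased programs (e.g. `inc inc end` and `inc inc inc dec`) all earn full reward, which is exactly the Program Aliasing situation that maximum-likelihood training on one reference program would penalize. A toy syntax rule (the empty program is invalid) is enforced by masking the `end` logit at position 0.

```python
import math
import random

TOKENS = ["inc", "dec", "end"]     # hypothetical toy DSL, not the paper's Karel language
MAX_LEN = 4
EXAMPLES = [(0, 2), (3, 5), (-1, 1)]  # incomplete spec: consistent with f(x) = x + 2

def run(prog, x):
    """Execute a token sequence on an integer input; 'end' halts execution."""
    for tok in prog:
        if tok == "end":
            break
        x += 1 if tok == "inc" else -1
    return x

def reward(prog):
    """Semantic reward: 1 iff the program is consistent with every example.
    Any aliased correct program gets full reward, unlike MLE on one reference."""
    return 1.0 if all(run(prog, i) == o for i, o in EXAMPLES) else 0.0

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def masked(logits, pos):
    """Toy syntax check: the empty program ('end' first) is invalid, so its
    logit is masked out, and the model can only emit syntactically valid code."""
    out = list(logits)
    if pos == 0:
        out[TOKENS.index("end")] = -1e9
    return out

def sample(policy, rng):
    prog = []
    for pos in range(MAX_LEN):
        probs = softmax(masked(policy[pos], pos))
        tok = rng.choices(TOKENS, weights=probs)[0]
        prog.append(tok)
        if tok == "end":
            break
    return prog

def greedy(policy):
    prog = []
    for pos in range(MAX_LEN):
        probs = softmax(masked(policy[pos], pos))
        tok = TOKENS[probs.index(max(probs))]
        prog.append(tok)
        if tok == "end":
            break
    return prog

def train(episodes=5000, lr=0.5, seed=0):
    """REINFORCE with a moving-average baseline: push up the log-probability
    of sampled programs in proportion to (reward - baseline)."""
    rng = random.Random(seed)
    policy = [[0.0] * len(TOKENS) for _ in range(MAX_LEN)]  # per-position logits
    baseline = 0.0
    for _ in range(episodes):
        prog = sample(policy, rng)
        r = reward(prog)
        baseline = 0.99 * baseline + 0.01 * r
        adv = r - baseline
        for pos, tok in enumerate(prog):
            probs = softmax(masked(policy[pos], pos))
            for i in range(len(TOKENS)):
                # gradient of log-softmax w.r.t. logits: onehot(tok) - probs
                grad = (1.0 if TOKENS[i] == tok else 0.0) - probs[i]
                policy[pos][i] += lr * adv * grad
    return policy
```

In a usage pass such as `prog = greedy(train())`, the greedily decoded program should be consistent with all three examples, and the syntax mask guarantees it never starts with `end`. The paper's actual setup replaces this tabular policy with a sequence-to-sequence network pretrained by supervised learning, but the shape of the objective — expected semantic reward over sampled programs, with invalid tokens pruned by a syntax checker — is the same.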

