Memory Augmented Policy Optimization for Program Synthesis with Generalization

07/06/2018 · Chen Liang, et al. · Google, Mosaix, Tel Aviv University

This paper presents Memory Augmented Policy Optimization (MAPO): a novel policy optimization formulation that incorporates a memory buffer of promising trajectories to reduce the variance of policy gradient estimates for deterministic environments with discrete actions. The formulation expresses the expected return objective as a weighted sum of two terms: an expectation over a memory of high-reward trajectories, and a separate expectation over the trajectories outside the memory. We propose 3 techniques to make an efficient training algorithm for MAPO: (1) distributed sampling from inside and outside the memory with an actor-learner architecture; (2) a marginal likelihood constraint over the memory to accelerate training; (3) systematic exploration to discover high-reward trajectories. MAPO improves the sample efficiency and robustness of policy gradient, especially on tasks with sparse rewards. We evaluate MAPO on weakly supervised program synthesis from natural language, with an emphasis on generalization. On the WikiTableQuestions benchmark, we improve the state-of-the-art by 2.5%. On the WikiSQL benchmark, MAPO achieves an accuracy of 74.9%, outperforming several strong baselines with full supervision. Our code is open sourced at https://github.com/crazydonkey200/neural-symbolic-machines.

1 Introduction

There has been a recent surge of interest in applying policy gradient methods to various application domains including program synthesis liang2017nsm ; guu2017language ; zhong2017seq2sql ; bunel2018leveraging , dialogue generation li2016deep ; dasdialog2017 , architecture search zoph2016 ; zoph2017 , game playing silver2017mastering ; mnih2016asynchronous and continuous control peters2006 ; trpo2015 . Simple policy gradient methods like REINFORCE Williams92simplestatistical use Monte Carlo samples from the current policy to perform an on-policy optimization of the expected return objective. This often leads to unstable learning dynamics and poor sample efficiency, sometimes even underperforming random search rs2018 .

The difficulty of gradient based policy optimization stems from a few sources: (1) policy gradient estimates have a large variance; (2) samples from a randomly initialized policy often attain small rewards, resulting in slow training progress in the initial phase (cold start); (3) random policy samples do not explore the search space efficiently and systematically. These issues can be especially prohibitive in applications such as program synthesis and robotics andrychowicz2017hindsight where the search space is large and the rewards are delayed and sparse. In such domains, a high reward is only achieved after a long sequence of correct actions. For instance, in program synthesis, only a few programs in the large combinatorial space of programs may correspond to the correct functional form. Unfortunately, relying on policy samples to explore the space often leads to forgetting a high reward trajectory unless it is re-sampled frequently liang2017nsm ; pqt2018 .

Learning through reflection on past experiences (“experience replay”) is a promising direction to improve data efficiency and learning stability. It has recently been widely adopted in various deep RL algorithms, but its theoretical analysis and empirical comparison are still lacking. As a result, defining the optimal strategy for prioritizing and sampling from past experiences remains an open question. There have been various attempts to incorporate off-policy samples within the policy gradient framework to improve the sample efficiency of the REINFORCE and actor-critic algorithms (e.g.,  degris2012 ; wang2016sample ; ppo2017 ; espeholt2018impala ). Most of these approaches utilize samples from an old policy through (truncated) importance sampling to obtain a low variance, but biased, estimate of the gradients. Previous work has aimed to incorporate a replay buffer into policy gradient in the general RL setting of stochastic dynamics and possibly continuous actions. By contrast, we focus on deterministic environments with discrete actions and develop an unbiased policy gradient estimator with low variance (Figure 1).

This paper presents MAPO: a simple and novel way to incorporate a memory buffer of promising trajectories within the policy gradient framework. We express the expected return objective as a weighted sum of an expectation over the trajectories inside the memory buffer and a separate expectation over the unknown trajectories outside of the buffer. The gradient estimates are unbiased and attain lower variance. Because high-reward trajectories remain in the memory, it is not possible to forget them. To make an efficient algorithm for MAPO, we propose 3 techniques: (1) memory weight clipping to accelerate and stabilize training; (2) systematic exploration of the search space to efficiently discover high-reward trajectories; (3) distributed sampling from inside and outside of the memory buffer to scale up training.

We assess the effectiveness of MAPO on weakly supervised program synthesis from natural language (see Section 2). Program synthesis presents a unique opportunity to study generalization in the context of policy optimization, besides being an important real-world application. On the challenging WikiTableQuestions pasupat2015tables benchmark, MAPO achieves an accuracy of 46.3% on the test set (with an ensemble of 10 models; see Table 3), significantly outperforming the previous state-of-the-art of 43.7% zhang2017macro . Interestingly, on the WikiSQL zhong2017seq2sql benchmark, MAPO achieves an accuracy of 74.9% without the supervision of gold programs, outperforming several strong fully supervised baselines.

2 The Problem of Weakly Supervised Contextual Program Synthesis

Year Venue Position Event Time
2001 Hungary 2nd 400m 47.12
2003 Finland 1st 400m 46.69
2005 Germany 11th 400m 46.62
2007 Thailand 1st relay 182.05
2008 China 7th relay 180.32
Table 1: x: Where did the last 1st place finish occur? y: Thailand

Consider the problem of learning to map a natural language question to a structured query in a programming language such as SQL (e.g., zhong2017seq2sql ), or converting a textual problem description into a piece of source code as in programming competitions (e.g.,  balog2017deepcoder ). We call these problems contextual program synthesis and aim at tackling them in a weakly supervised setting – i.e., no correct action sequence, which would correspond to a gold program, is given as part of the training data, and training needs to solve the hard problem of exploring a large program space. Table 1 shows an example question-answer pair. The model needs to first discover the programs that can generate the correct answer in a given context, and then learn to generalize to new contexts.

We formulate the problem of weakly supervised contextual program synthesis as follows: generate a program a by using a parametric function, â = f(x; θ), where θ denotes the model parameters. The quality of a program â is measured by a scoring or reward function R(â | x, y). The reward function may evaluate a program by executing it on a real environment and comparing the output against the correct answer y. For example, it is natural to define a binary reward that is 1 when the output equals the answer and 0 otherwise. We assume that the context x includes both a natural language input and an environment, for example an interpreter or a database, on which the program will be executed. Given a dataset of context-answer pairs, {(x_i, y_i)}_{i=1}^N, the goal is to find optimal parameters θ that parameterize the mapping x → a with maximum empirical return on a held-out test set.
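For instance, under a binary reward of this kind the reward computation is simply an exact-match check on the execution result. A minimal sketch (our own illustration; execute_program stands in for the interpreter or database mentioned above):

def binary_reward(program, context, answer, execute_program):
    # 1.0 if executing the program in the given context yields the answer, else 0.0.
    return 1.0 if execute_program(program, context) == answer else 0.0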

One can think of the problem of contextual program synthesis as an instance of reinforcement learning (RL) with sparse terminal rewards and deterministic transitions, for which generalization plays a key role. There have been some recent attempts in the RL community to study generalization to unseen initial conditions (e.g. rajeswaran2017towards ; gotta2018gotta ). However, most prior work aims to maximize empirical return on the training environment arcade2013 ; gym2016 . The problem of contextual program synthesis presents a natural application of RL for which generalization is the main concern.

3 Optimization of Expected Return via Policy Gradients

To learn a mapping from a context x to a program a, we optimize the parameters θ of a conditional distribution π_θ(a | x) that assigns a probability to each program given the context. That is, π_θ(· | x) is a distribution over the countable set A of all possible programs, so π_θ(a | x) ≥ 0 for all a ∈ A and Σ_{a∈A} π_θ(a | x) = 1. Then, to synthesize a program for a novel context, one finds the most likely program under the distribution π_θ via exact or approximate inference, â ≈ argmax_{a∈A} π_θ(a | x).

Autoregressive models present a tractable family of distributions that estimate the probability of a sequence of tokens, one token at a time, often from left to right. To handle variable sequence lengths, one includes a special end-of-sequence token at the end of each sequence. We express the probability of a program a given x as π_θ(a | x) = Π_t π_θ(a_t | a_{<t}, x), where a_{<t} = (a_1, …, a_{t−1}) denotes a prefix of the program a. One often uses a recurrent neural network (e.g. hochreiter1997long ) to predict the probability of each token given the prefix and the context.
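The factorization above can be made concrete with a short sketch. The following is our own illustration (not the released code), where next_token_probs is an assumed interface that would be backed by the LSTM decoder:

import math

def sequence_log_prob(next_token_probs, tokens, context):
    # Chain rule: log pi(a | x) = sum_t log pi(a_t | a_<t, x).
    # next_token_probs(prefix, context) -> dict mapping each candidate token to its
    # conditional probability (assumed interface, not the paper's actual API).
    log_p, prefix = 0.0, ()
    for tok in tokens:                      # the sequence ends with an end-of-sequence token
        log_p += math.log(next_token_probs(prefix, context)[tok])
        prefix = prefix + (tok,)
    return log_p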

In the absence of ground truth programs, policy gradient techniques present a way to optimize the parameters of a stochastic policy π_θ via optimization of expected return. Given a training dataset of context-answer pairs, {(x_i, y_i)}_{i=1}^N, the objective is expressed as O_ER(θ) = Σ_i E_{a∼π_θ(a|x_i)} [R(a | x_i, y_i)]. The reward function R(a | x, y) evaluates a complete program a, based on the context x and the correct answer y. These assumptions characterize the problem of program synthesis well, but they also apply to many other discrete optimization and structured prediction domains.

Simplified notation. In what follows, we simplify the notation by dropping the dependence of the policy and the reward on x and y. We write π_θ(a) instead of π_θ(a | x) and R(a) instead of R(a | x, y) to make the formulation less cluttered, but the equations hold in the general case.

We express the expected return objective in the simplified notation as,

O_ER(θ) = Σ_{a∈A} π_θ(a) R(a) = E_{a∼π_θ(a)} [R(a)]    (1)

The REINFORCE Williams92simplestatistical algorithm presents an elegant and convenient way to estimate the gradient of the expected return (1) using Monte Carlo (MC) samples. Using K trajectories a^(1), …, a^(K) sampled i.i.d. from the current policy π_θ, the gradient estimate can be expressed as,

∇_θ O_ER(θ) ≈ (1/K) Σ_{k=1}^K ∇_θ log π_θ(a^(k)) [R(a^(k)) − b]    (2)

where a baseline b is subtracted from the returns to reduce the variance of the gradient estimates. This formulation enables direct optimization of O_ER via MC sampling from an unknown search space, which also serves the purpose of exploration. To improve such exploration behavior, one often includes the entropy of the policy as an additional term inside the objective to prevent early convergence. However, the key limitation of the formulation stems from the difficulty of estimating the gradients accurately using only a few fresh samples.
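To make the estimator in (2) concrete, here is a minimal sketch of forming the REINFORCE estimate from K sampled programs. It is our own illustration under assumed interfaces (policy.sample, policy.grad_log_prob returning a numpy array, and a reward_fn callable), not the paper's implementation:

def reinforce_gradient(policy, reward_fn, context, answer, K=32, baseline=0.0):
    # Monte Carlo estimate of eq. (2): average of (R - b) * grad log pi over K samples.
    grad = 0.0
    for _ in range(K):
        a = policy.sample(context)                       # a ~ pi_theta(. | context)
        advantage = reward_fn(a, context, answer) - baseline
        grad = grad + advantage * policy.grad_log_prob(a, context) / K
    return grad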

4 MAPO: Memory Augmented Policy Optimization

We consider an RL environment with a finite number of discrete actions, deterministic transitions, and deterministic terminal returns. In other words, the set of all possible action trajectories is countable, even though possibly infinite, and re-evaluating the return of a trajectory twice results in the same value. These assumptions characterize the problem of program synthesis well, but also apply to many structured prediction problems ross2011reduction ; nowozin2011structured and combinatorial optimization domains (e.g.,  bello2016neural ).

To reduce the variance in gradient estimation and prevent forgetting high-reward trajectories, we introduce a memory buffer, which saves a set of promising trajectories denoted B. Previous works liang2017nsm ; abolafia2018neural ; wu2016gnmt utilized a memory buffer by adopting a training objective similar to

O_AUG(θ) = E_{a∼π_θ(a)} [R(a)] + λ Σ_{a∈B} log π_θ(a)    (3)

which combines the expected return objective with a maximum likelihood objective over the memory buffer B, weighted by a constant λ. This training objective no longer directly optimizes the expected return, because the second term introduces bias into the gradient. When the trajectories in B are not gold trajectories but high-reward trajectories collected during exploration, uniformly maximizing the likelihood of each trajectory in B can be problematic. For example, in program synthesis there can be spurious programs pasupat2016inferring that get the right answer, and thus receive a high reward, for a wrong reason, e.g., using 2 + 2 to answer the question “what is two times two”. Maximizing the likelihood of those high-reward but spurious programs will bias the gradient during training.

We aim to utilize the memory buffer in a principled way. Our key insight is that one can re-express the expected return objective as a weighted sum of two terms: an expectation over the trajectories inside the memory buffer, and a separate expectation over the trajectories outside the buffer,

O_ER(θ) = Σ_{a∈B} π_θ(a) R(a) + Σ_{a∈A∖B} π_θ(a) R(a)    (4)
         = π_B E_{a∼π_θ^+(a)} [R(a)] + (1 − π_B) E_{a∼π_θ^-(a)} [R(a)]    (5)

where A∖B denotes the set of trajectories not included in the memory buffer, π_B = Σ_{a∈B} π_θ(a) denotes the total probability of the trajectories in the buffer, and π_θ^+ and π_θ^- denote the normalized probability distributions inside and outside of the buffer,

π_θ^+(a) = π_θ(a) / π_B if a ∈ B, and 0 otherwise;    π_θ^-(a) = π_θ(a) / (1 − π_B) if a ∉ B, and 0 otherwise.    (6)

The policy gradient can be expressed as,

∇_θ O_ER(θ) = π_B E_{a∼π_θ^+(a)} [∇_θ log π_θ(a) R(a)] + (1 − π_B) E_{a∼π_θ^-(a)} [∇_θ log π_θ(a) R(a)]    (7)

The second expectation can be estimated by sampling from π_θ^-, which can be done through rejection sampling: sample from π_θ and reject the sample if it is in B. If the memory buffer contains only a small number of trajectories, the first expectation can be computed exactly by enumerating all the trajectories in the buffer. The variance in gradient estimation is reduced because we get an exact estimate of the first expectation while sampling from a smaller stochastic space of measure 1 − π_B. If the memory buffer contains a large number of trajectories, the first expectation can also be approximated by sampling. We then get a stratified sampling estimator of the gradient: the trajectories inside and outside the memory buffer are two mutually exclusive and collectively exhaustive strata, and the variance reduction still holds. The weights for the first and second expectations are π_B and 1 − π_B respectively. We call π_B the memory weight.
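The following sketch illustrates the estimator in (7), enumerating a small buffer exactly and using rejection sampling for the outside term. It is our own illustration under assumed interfaces (policy.prob, policy.grad_log_prob returning a numpy array, policy.sample, reward_fn), not the released implementation:

def mapo_gradient(policy, reward_fn, context, answer, buffer, n_outside=8):
    # First stratum: exact expectation over the buffer, weighted by pi_B.
    # Since pi_B * pi_theta^+(a) = pi_theta(a), no renormalization is needed here.
    pi_b = sum(policy.prob(a, context) for a in buffer)
    grad = 0.0
    for a in buffer:
        grad = grad + policy.prob(a, context) * reward_fn(a, context, answer) \
               * policy.grad_log_prob(a, context)
    # Second stratum: rejection-sample programs outside the buffer, weighted by (1 - pi_B).
    outside = []
    while len(outside) < n_outside:
        a = policy.sample(context)
        if a not in buffer:                     # reject samples that fall inside the buffer
            outside.append(a)
    for a in outside:
        grad = grad + (1.0 - pi_b) * reward_fn(a, context, answer) \
               * policy.grad_log_prob(a, context) / n_outside
    return grad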

Figure 1: Overview of MAPO compared with experience replay using importance sampling.

In the following, we present the techniques that make MAPO an efficient algorithm.

4.1 Memory Weight Clipping

Policy gradient methods usually suffer from a cold start problem. A key observation is that a “bad” policy, one that achieves low expected return, will assign small probabilities to the high-reward trajectories, which in turn causes them to be ignored during gradient estimation. So it is hard to improve from a random initialization, i.e., the cold start problem, or to recover from a bad update, i.e., the brittleness problem. Ideally, we want to force the policy gradient estimates to pay at least some attention to the high-reward trajectories. Therefore, we adopt a clipping mechanism over the memory weight π_B, which ensures that the memory weight is greater than or equal to a threshold α, i.e. we use the clipped weight π_B^c = max(π_B, α). The new gradient estimate is,

∇_θ O_ER(θ) ≈ π_B^c E_{a∼π_θ^+(a)} [∇_θ log π_θ(a) R(a)] + (1 − π_B^c) E_{a∼π_θ^-(a)} [∇_θ log π_θ(a) R(a)]    (8)

where π_B^c = max(π_B, α) is the clipped memory weight. At the beginning of training, the clipping is active and introduces a bias, but it accelerates and stabilizes training. Once the policy is off the ground, the memory weights are almost never clipped, given that they are naturally larger than α, and the gradients are no longer biased. See Section 5.4 for an empirical analysis of the clipping.
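A small sketch of the clipping step (our illustration; alpha = 0.1 is an arbitrary value for the example, not the paper's tuned threshold):

def clipped_weights(pi_b, alpha=0.1):
    # Weights of the two strata in eq. (8): force at least alpha of the
    # gradient weight onto the buffer stratum early in training.
    w_in = max(pi_b, alpha)        # clipped memory weight pi_B^c
    return w_in, 1.0 - w_in        # weights for inside / outside the buffer

# Early in training the buffer probability is tiny and gets clipped:
print(clipped_weights(0.001))      # (0.1, 0.9)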

4.2 Systematic Exploration

Input: context x, policy π, set of fully explored sub-sequences B^e, set of high-reward sequences B
Initialize: empty sequence a_{0:0}
while true do
     V ← {a | a_{0:t−1} ∥ a ∉ B^e}                      ⊳ tokens leading to unexplored sequences
     if V = ∅ then
          B^e ← B^e + a_{0:t−1}                          ⊳ the prefix is fully explored
          break
     sample a_t ∼ π^V(a | a_{0:t−1})                     ⊳ π restricted to V and renormalized
     a_{0:t} ← a_{0:t−1} ∥ a_t
     if a_t = EOS then
          if R(a_{0:t}) > 0 then
               B ← B + a_{0:t}                           ⊳ save the high-reward sequence
          B^e ← B^e + a_{0:t}                            ⊳ mark the full sequence as explored
          break
Algorithm 1 Systematic Exploration

To discover high-reward trajectories for the memory buffer B, we need to efficiently explore the search space. Exploration using policy samples suffers from repeated samples, which are a waste of computation in deterministic environments. So we propose systematic exploration to improve efficiency. More specifically, we keep a set B^e of fully explored partial sequences, which can be efficiently implemented using a Bloom filter, and use it to enforce a policy to only take actions that lead to unexplored sequences. Using a Bloom filter, we can store billions of sequences in B^e with only several gigabytes of memory. The pseudo code of this approach is shown in Algorithm 1, and a code sketch follows. We warm start the memory buffer using systematic exploration from a random policy, as it can be trivially parallelized. In parallel to training, we continue the systematic exploration with the current policy to discover new high-reward trajectories.
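A minimal sketch of one episode of systematic exploration (our illustration; it uses an ordinary Python set where the paper uses a Bloom filter, and assumes hypothetical next_token_probs and reward_fn interfaces):

import random

def systematic_explore(context, next_token_probs, reward_fn, explored, buffer, eos="<EOS>"):
    # explored: set of fully explored (partial) sequences, a stand-in for the Bloom filter.
    # buffer:   set of high-reward programs discovered so far.
    prefix = ()
    while True:
        probs = next_token_probs(prefix, context)
        # Only consider tokens that lead to sequences not yet fully explored.
        valid = {t: p for t, p in probs.items() if prefix + (t,) not in explored}
        if not valid:
            explored.add(prefix)              # this prefix is now fully explored
            return
        tokens, weights = zip(*valid.items())
        tok = random.choices(tokens, weights=weights)[0]
        prefix = prefix + (tok,)
        if tok == eos:
            if reward_fn(prefix, context) > 0:
                buffer.add(prefix)            # keep the high-reward program
            explored.add(prefix)              # never sample this full program again
            return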

4.3 Distributed Sampling

Input: data {(x_i, y_i)}_{i=1}^N, memories {(B_i, B^e_i)}_{i=1}^N, clipping threshold α, sync period M
repeat                                                   ⊳ for all actors
     Initialize training batch D ← ∅
     Get a batch of inputs C
     for (x_i, y_i, B^e_i, B_i) ∈ C do
          Algorithm1(x_i, π_θ^old, B^e_i, B_i)            ⊳ systematic exploration with the stale policy
          Sample a_i^+ ∼ π_θ^old over B_i
          w_i^+ ← max(π_θ^old(B_i), α)                    ⊳ clipped memory weight
          D ← D + {(a_i^+, R(a_i^+), w_i^+)}
          Sample a_i ∼ π_θ^old(a | x_i)
          if a_i ∉ B_i then                               ⊳ rejection sampling
               w_i ← 1 − w_i^+
               D ← D + {(a_i, R(a_i), w_i)}
     Push D to training queue
until converge or early stop
repeat                                                   ⊳ for the learner
     Get a batch D from training queue
     for (a_i, R(a_i), w_i) ∈ D do
          dθ ← dθ + w_i R(a_i) ∇_θ log π_θ(a_i)
     update θ using dθ
     π_θ^old ← π_θ once every M batches
until converge or early stop
Output: final parameters θ
Algorithm 2 MAPO

An exact computation of the first expectation of (5) requires an enumeration over the memory buffer. The cost of the gradient computation grows linearly w.r.t. the number of trajectories in the buffer, so it can be prohibitively slow when the buffer contains a large number of trajectories. Alternatively, we can approximate the first expectation by sampling. As mentioned above, this can be viewed as stratified sampling and the variance is still reduced. Although the cost of the gradient computation now grows linearly w.r.t. the number of samples instead of the total number of trajectories in the buffer, the cost of sampling still grows linearly w.r.t. the size of the memory buffer, because we need to compute the probability of each trajectory under the current model.

A key insight is that if the bottleneck is in sampling, the cost can be distributed through an actor-learner architecture similar to espeholt2018impala . See Supplementary Material D for a figure depicting the actor-learner architecture. Each actor uses its copy of the model to sample trajectories from inside the memory buffer through renormalization (π_θ^+ in (6)), and uses rejection sampling to pick trajectories from outside the memory (π_θ^- in (6)). It also computes the weights for these trajectories using the model. These trajectories and their weights are then pushed to a queue of samples. The learner fetches samples from the queue and uses them to compute gradient estimates and update the parameters. By distributing the cost of sampling to a set of actors, training can be accelerated almost linearly w.r.t. the number of actors. In our experiments, we obtained a roughly 20x speedup from distributed sampling with 30 actors.
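A bare-bones sketch of the actor-learner split with a shared sample queue (our illustration using Python multiprocessing; the actual system is a distributed setup with 30 CPU actors and a GPU learner, see Supplementary D). make_batch and apply_gradients are assumed callables, not part of the released code:

from multiprocessing import Process, Queue

def actor(sample_queue, make_batch):
    # Repeatedly build training samples with a (stale) copy of the policy and push them.
    while True:
        sample_queue.put(make_batch())

def learner(sample_queue, apply_gradients, num_steps):
    # Consume batches from the queue and update the parameters.
    for _ in range(num_steps):
        batch = sample_queue.get()            # blocks until an actor has produced a batch
        apply_gradients(batch)

# Usage sketch: several actors feed one learner through a shared queue.
# q = Queue(maxsize=64)
# for _ in range(30):
#     Process(target=actor, args=(q, make_batch), daemon=True).start()
# learner(q, apply_gradients, num_steps=25000)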

4.4 Final Algorithm

The final training procedure is summarized in Algorithm 2. As mentioned above, we adopt the actor-learner architecture for distributed training: multiple actors collect training samples asynchronously and one learner updates the parameters based on the training samples. Each actor interacts with a set of environments to generate new trajectories. For efficiency, an actor uses a stale policy π_θ^old, which is often a few steps behind the policy of the learner and is synchronized periodically. To apply MAPO, each actor also maintains a memory buffer B to save the high-reward trajectories. To prepare training samples for the learner, the actor picks samples from inside B and also performs rejection sampling with on-policy samples, both according to the actor's policy π_θ^old. We then use the actor policy to compute the clipped weight max(π_B, α) for the samples in the memory buffer, and use 1 − max(π_B, α) for samples outside of the buffer. These samples are pushed to a queue, and the learner reads from the queue to compute gradients and update the parameters.

5 Experiments

We evaluate MAPO on two program synthesis from natural language (also known as semantic parsing) benchmarks, WikiTableQuestions and WikiSQL, which require generating programs to query and process data from tables in order to answer natural language questions. We first compare MAPO to four common baselines, and ablate systematic exploration and memory weight clipping to show their utility. Then we compare MAPO to the state-of-the-art on these two benchmarks. On WikiTableQuestions, MAPO is the first RL-based approach that significantly outperforms the previous state-of-the-art. On WikiSQL, MAPO trained with weak supervision (question-answer pairs) outperforms several strong models trained with full supervision (question-program pairs).

5.1 Experimental setup

Datasets. WikiTableQuestions  pasupat2015tables contains tables extracted from Wikipedia and question-answer pairs about the tables; see Table 1 for an example. There are 2,108 tables and 18,496 question-answer pairs split into train/dev/test sets. We follow the construction in pasupat2015tables for converting a table into a directed graph that can be queried: rows and cells are converted to graph nodes, while column names become labeled directed edges. For the questions, we use string match to identify phrases that appear in the table. We also identify numbers and dates using the CoreNLP annotation released with the dataset. The task is challenging in several aspects. First, the tables are taken from Wikipedia and cover a wide range of topics. Second, at test time, new tables that contain unseen column names appear. Third, the table contents are not normalized as in knowledge bases like Freebase, so there is noise and ambiguity in the table annotation. Last, the semantics are more complex compared to previous datasets like WebQuestionsSP yih2016webquestionssp : answering requires multi-step reasoning using a large set of functions, including comparisons, superlatives, aggregations, and arithmetic operations pasupat2015tables . See Supplementary Material A for more details about the functions.

WikiSQL zhong2017seq2sql is a recent large-scale dataset on learning natural language interfaces for databases. It also uses tables extracted from Wikipedia, but is much larger and is annotated with programs (SQL). There are 24,241 tables and 80,654 question-program pairs split into train/dev/test sets. Compared to WikiTableQuestions, the semantics are simpler because the SQL queries use fewer operators (column selection, aggregation, and conditions). We perform similar preprocessing as for WikiTableQuestions. Most state-of-the-art models rely on question-program pairs for supervised training, while we only use the question-answer pairs for weakly supervised training.

Model architecture. We adopt the Neural Symbolic Machines framework liang2017nsm , which combines (1) a neural “programmer”, a seq2seq model augmented with a key-variable memory that translates a natural language utterance into a program as a sequence of tokens, and (2) a symbolic “computer”, a Lisp interpreter that implements a domain-specific language with built-in functions and provides code assistance by eliminating syntactically or semantically invalid choices.

For the Lisp interpreter, we added functions according to zhang2017macro ; Neelakantan2016LearningAN for the WikiTableQuestions experiments, and used the subset of functions equivalent to column selection, aggregation, and conditions for WikiSQL. See Supplementary Material A for more details about the functions used.

We implemented the seq2seq model augmented with key-variable memory from liang2017nsm in TensorFlow abadi2016tensorflow . Some minor differences are: (1) we used a bi-directional LSTM for the encoder; (2) we used a two-layer LSTM with skip-connections in both the encoder and decoder. GloVe Pennington2014GloveGV embeddings are used for the embedding layer in the encoder and also to create embeddings for column names by averaging the embeddings of the words in a name. Following Neelakantan2016LearningAN ; krishnamurthy2017neural , we also add a binary feature in each step of the encoder indicating whether the word is found in the table, and an integer feature for a column name counting how many of the words in the column name appear in the question. For the WikiTableQuestions dataset, we use the CoreNLP annotation of numbers and dates released with the dataset. For the WikiSQL dataset, only numbers are used, so we use a simple parser to identify and parse the numbers in the questions, and the tables are already preprocessed. The tokens of the numbers and dates are anonymized as two special tokens <NUM> and <DATE>. The hidden size of the LSTM is . We keep the GloVe embeddings fixed during training, but project them to dimensions using a trainable linear transformation. The same architecture is used for both datasets.
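As an illustration of the input featurization described above (our own sketch, not the released TensorFlow code; glove is assumed to be a dictionary mapping words to vectors):

import numpy as np

def column_name_embedding(column_name, glove, dim=300):
    # Embed a column name by averaging the GloVe vectors of its words.
    vecs = [glove[w] for w in column_name.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def encoder_features(question_tokens, table_words, column_names):
    # Per-token binary "appears in the table" feature, and per-column counts of
    # how many words of the column name appear in the question.
    in_table = [1.0 if tok in table_words else 0.0 for tok in question_tokens]
    overlap = [sum(w in question_tokens for w in name.lower().split())
               for name in column_names]
    return in_table, overlap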

Training Details. We first apply systematic exploration using a random policy to discover high-reward programs and warm start the memory buffer of each example. For WikiTableQuestions, we generated 50k programs per example using systematic exploration with pruning rules inspired by the grammars from zhang2017macro (see Supplementary E). We apply 0.2 dropout on both encoder and decoder. Each batch includes samples from 25 examples. For the experiments on WikiSQL, we generated 1k programs per example due to computational constraints. Because the dataset is much larger, we don't use any regularization, and each batch includes samples from 125 examples. We use distributed sampling for WikiTableQuestions. For WikiSQL, due to computational constraints, we truncate each memory buffer to the top 5 programs and then enumerate all 5 programs for training. For both experiments, the samples outside the memory buffer are drawn using rejection sampling from 1 on-policy sample per example. At inference time, we apply beam search of size 5. We evaluate the model periodically on the dev set to select the best model. We apply a distributed actor-learner architecture for training: the actors use CPUs to generate new trajectories and push the samples into a queue, and the learner reads batches of data from the queue and uses a GPU to accelerate training (see Supplementary D). We use the Adam optimizer for training; the learning rate and the other hyperparameters are tuned on the dev set. We train the model for 25k steps on WikiTableQuestions and 15k steps on WikiSQL.

Figure 2: Comparison of the dev set accuracy curves of MAPO and 3 baselines. Results on WikiTableQuestions are on the left and results on WikiSQL are on the right. Each curve is the average of 5 runs with a bar of one standard deviation. The horizontal axis (training steps) is in log scale.

5.2 Comparison to baselines

We first compare MAPO against the following baselines using the same neural architecture.
REINFORCE: We use on-policy samples to estimate the gradient of expected return as in (2), not utilizing any form of memory buffer.
MML: Maximum Marginal Likelihood maximizes the marginal probability of the memory buffer, O_MML(θ) = Σ_i log Σ_{a∈B_i} π_θ(a | x_i). Assuming binary rewards and assuming that the memory buffer B_i contains almost all of the trajectories with a reward of 1, MML optimizes the marginal probability of generating a rewarding program. Note that under these assumptions, the expected return can be expressed as O_ER(θ) ≈ Σ_i Σ_{a∈B_i} π_θ(a | x_i). Comparing the two objectives, we can see that MML maximizes the product of the marginal probabilities, whereas expected return maximizes their sum. More discussion of these two objectives can be found in guu2017language ; norouzi2016reward ; roux2016tighter .
Hard EM: The Expectation-Maximization algorithm is commonly used to optimize the marginal likelihood in the presence of latent variables. Hard EM uses, for each example, the sample with the highest probability, a_i* = argmax_{a∈B_i} π_θ(a | x_i), to approximate the gradient of O_MML.
IML: Iterative Maximum Likelihood training liang2017nsm ; abolafia2018neural uniformly maximizes the likelihood of all the trajectories with the highest rewards, i.e. it optimizes Σ_i Σ_{a∈B_i} log π_θ(a | x_i).

Because the memory buffer is too large to enumerate, we use samples from the buffer to approximate the gradient for MML and IML, and use the sample with the highest π_θ(a | x_i) for Hard EM.
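To make the product-vs-sum distinction above concrete, here is a small illustration with made-up probabilities (our own example, assuming binary rewards and buffers that hold the rewarded programs):

import math

def expected_return_over_buffers(buffer_probs):
    # ER under the assumptions above: sum of per-example marginal probabilities.
    return sum(sum(probs) for probs in buffer_probs)

def marginal_log_likelihood(buffer_probs):
    # MML: log of the product of marginals, i.e. sum of log-marginals.
    return sum(math.log(sum(probs)) for probs in buffer_probs)

# Two examples, each listing pi_theta of the programs in its buffer.
buffers = [[0.30, 0.10], [0.01]]
print(expected_return_over_buffers(buffers))   # 0.41
print(marginal_log_likelihood(buffers))        # log(0.4) + log(0.01) ≈ -5.52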

We show the results in Table 2 and the dev accuracy curves in Figure 2. Removing systematic exploration or memory weight clipping significantly weakens MAPO, because high-reward trajectories are either not found or easily forgotten. REINFORCE barely learns anything because, starting from a random policy, most samples result in a reward of zero. MML and Hard EM converge faster, but the learned models underperform MAPO, which suggests that the expected return is a better objective. IML runs faster because it randomly samples from the buffer, but its objective is prone to spurious programs.

WikiTable WikiSQL
REINFORCE
MML (Soft EM)
Hard EM
IML
MAPO
MAPO w/o SE
MAPO w/o MWC
Table 2: Ablation study for Systematic Exploration (SE) and Memory Weight Clipping (MWC). We report mean accuracy %, and its standard deviation on dev sets based on 5 runs.
E.S. Dev. Test
Pasupat & Liang (2015) pasupat2015tables - 37.0 37.1
Neelakantan et al. (2017) Neelakantan2016LearningAN 1 34.1 34.2
Neelakantan et al. (2017) Neelakantan2016LearningAN 15 37.5 37.7
Haug et al. (2017) haug2018NeuralMR 1 - 34.8
Haug et al. (2017) haug2018NeuralMR 15 - 38.7
Zhang et al. (2017) zhang2017macro - 40.4 43.7
MAPO 1 42.7
MAPO (mean of 5 runs) -
MAPO (std of 5 runs) -
MAPO (ensembled) 10 - 46.3
Table 3: Results on WikiTableQuestions. E.S. is the ensemble size, when applicable.
Fully supervised Dev. Test
Zhong et al. (2017) zhong2017seq2sql 60.8 59.4
Wang et al. (2017) wang2017pointing 67.1 66.8
Xu et al. (2017) xu2018sqlnet 69.8 68.0
Huang et al. (2018) huang2018NaturalLT 68.3 68.0
Yu et al. (2018) Yu2018TypeSQLKT 74.5 73.5
Sun et al. (2018) Sun2018SemanticPW 75.1 74.6
Dong & Lapata (2018) Dong2018CoarsetoFineDF 79.0 78.5
Weakly supervised Dev. Test
MAPO 72.4 72.6
MAPO (mean of 5 runs)
MAPO (std of 5 runs)
MAPO (ensemble of 10) - 74.9
Table 4: Results on WikiSQL. Unlike other methods, MAPO only uses weak supervision.

5.3 Comparison to state-of-the-art

On WikiTableQuestions (Table 3), MAPO is the first RL-based approach that significantly outperforms the previous state-of-the-art, by 2.6%. Unlike previous work, MAPO does not require manual feature engineering or additional human annotation. (Krishnamurthy et al. krishnamurthy2017neural achieved 45.9 accuracy when trained on data collected with dynamic programming and pruned with more human annotations pasupat2016denotations ; mudrakarta2018training .) On WikiSQL (Table 4), even though MAPO does not exploit ground truth programs (weak supervision), it is able to outperform many strong baselines trained using programs (full supervision). The techniques introduced in other models could be incorporated to further improve the result of MAPO, but we leave that as future work. We also qualitatively analyzed a trained model and found that it can generate fairly complex programs; see Supplementary Material B for examples of generated programs. We select the best model from 5 runs based on validation accuracy and report its test accuracy. We also report the mean accuracy and standard deviation based on 5 runs, given the variance caused by the non-linear optimization procedure, although these statistics are not available for the other models.

5.4 Analysis of Memory Weight Clipping

In this subsection, we present an analysis of the bias introduced by memory weight clipping. We define the clipping fraction as the percentage of examples for which the clipping is active; in other words, it is the percentage of examples with a non-empty memory buffer for which π_B < α. It is also the fraction of examples whose gradient computation is biased by the clipping, so the higher the value, the more bias, and the gradient is unbiased when the clipping fraction is zero. In Figure 3, one can observe that the clipping fraction approaches zero towards the end of training and is negatively correlated with the training accuracy. In the experiments, we found that a fixed clipping threshold works well, but we could also gradually decrease the clipping threshold to completely remove the bias.
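A small sketch of how the clipping fraction can be measured (our illustration; buffer_probs holds π_B for each training example with a non-empty buffer):

def clipping_fraction(buffer_probs, alpha=0.1):
    # Fraction of examples whose memory weight gets clipped (pi_B < alpha).
    clipped = sum(1 for p in buffer_probs if p < alpha)
    return clipped / max(len(buffer_probs), 1)

# Example: three training examples with buffer probabilities under the current policy.
print(clipping_fraction([0.02, 0.5, 0.9], alpha=0.1))   # 0.333...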

6 Related work

Figure 3: The clipping fraction and training accuracy w.r.t the training steps (log scale).

Program synthesis & semantic parsing. There has been a surge of recent interest in applying reinforcement learning to program synthesis bunel2018leveraging ; abolafia2018neural ; zaremba2015reinforcement ; nachum2017bridging and combinatorial optimization zoph2016neural ; bello2016neural . Different from these efforts, we focus on contextualized program synthesis, where generalization to new contexts is important. Semantic parsing zelle96geoquery ; zettlemoyer05ccg ; liang11dcs maps natural language to executable symbolic representations. Training semantic parsers through weak supervision is challenging because the model must interact with a symbolic interpreter through non-differentiable operations to search over a large space of programs berant2013semantic ; liang2017nsm . Previous work guu2017language ; Neelakantan2016LearningAN reports negative results when applying simple policy gradient methods like REINFORCE Williams92simplestatistical , which highlights the difficulty of exploration and optimization when applying RL techniques. MAPO takes advantage of the discrete and deterministic nature of program synthesis and significantly improves upon REINFORCE.

Experience replay. An experience replay buffer lin1992self enables the storage and reuse of past experiences to improve the sample efficiency of RL algorithms. Prioritized experience replay schaul2016prioritized prioritizes replays based on temporal-difference error for more efficient optimization. Hindsight experience replay andrychowicz2017hindsight incorporates goals into replays to deal with sparse rewards. MAPO also uses past experiences to tackle sparse reward problems, by storing and reusing high-reward trajectories, similar to liang2017nsm ; oh2018self . However, previous work liang2017nsm assigns a fixed weight to the trajectories, which introduces bias into the policy gradient estimates. More importantly, the policy is often trained equally on trajectories that have the same reward, which is prone to spurious programs. By contrast, MAPO uses the trajectories in a principled way to obtain an unbiased, low-variance gradient estimate.

Variance reduction. Policy optimization via gradient descent is challenging because of (1) the large variance in gradient estimates and (2) small gradients in the initial phase of training. Prior variance reduction approaches wu2018variance ; Williams92simplestatistical ; liu2017sample ; grathwohl2017backpropagation mainly rely on control variate techniques, e.g., by introducing a critic model konda2000actor ; mnih2016asynchronous ; ppo2017 . MAPO takes a different approach and reformulates the gradient as a combination of expectations inside and outside a memory buffer. Standard solutions to the small gradient problem involve supervised pretraining silver2016mastering ; hester2017demonstrations ; ranzato2015sequence or using supervised data to generate rewarding samples norouzi2016reward ; ding2017cold , which cannot be applied when supervised data are not available. MAPO reduces the variance by sampling from a smaller stochastic space or through stratified sampling, and it accelerates and stabilizes training by clipping the weight of the memory buffer.

Exploration. Recently there has been a lot of work on improving exploration pathak2017curiosity ; tang2017count ; houthooft2016vime by introducing additional rewards based on information gain or pseudo counts. For program synthesis balog2017deepcoder ; Neelakantan2016LearningAN ; bunel2018leveraging , the search space is enumerable and deterministic. Therefore, we propose to conduct systematic exploration, which ensures that only novel trajectories are generated.

7 Conclusion

We present memory augmented policy optimization (MAPO), which incorporates a memory buffer of promising trajectories to reduce the variance of policy gradients. We propose 3 techniques to enable an efficient algorithm for MAPO: (1) memory weight clipping to accelerate and stabilize training; (2) systematic exploration to efficiently discover high-reward trajectories; (3) distributed sampling from inside and outside the memory buffer to scale up training. MAPO is evaluated on real-world program synthesis from natural language (semantic parsing) tasks. On WikiTableQuestions, MAPO is the first RL approach that significantly outperforms the previous state-of-the-art; on WikiSQL, MAPO trained with only weak supervision outperforms several strong baselines trained with full supervision.

Acknowledgments

We would like to thank Dan Abolafia, Ankur Taly, Thanapon Noraset, Arvind Neelakantan, Wenyun Zuo, Chenchen Pan and Mia Liang for helpful discussions. Jonathan Berant was partially supported by The Israel Science Foundation grant 942/16.

References

Appendix A Domain Specific Language

We adopt a Lisp-like domain specific language with certain built-in functions. A program is a list of expressions (c_1, …, c_N), where each expression is either the special token “EOS”, indicating the end of the program, or a list of tokens enclosed by parentheses, “(F A_1 A_2 …)”, where F is a function which takes as input arguments of specific types. Table 5 defines the arguments, return value and semantics of each function. In the table domain, the environment consists of rows and columns. The value of a table cell can be a number, a date time, or a string, so we also categorize the columns into number columns, date time columns and string columns depending on the type of the cell values in the column.

Function Arguments Returns Description
(hop v p) v: a list of rows. a list of cells. Select the given column of the
p: a column. given rows.
(argmax v p) v: a list of rows. a list of rows. From the given rows, select the
(argmin v p) p: a number or date ones with the largest / smallest
column. value in the given column.
(filter v q p) v: a list of rows. a list of rows. From the given rows, select the ones
(filter v q p) q: a number or date. whose given column has certain
(filter v q p) p: a number or date order relation with the given value.
(filter v q p) column.
(filter v q p)
(filter v q p)
(filter v q p) v: a list of rows. a list of rows. From the given rows, select the
(filter v q p) q: a string. ones whose given column contain
p: a string column. / do not contain the given string.
(first v) v: a list of rows. a row. From the given rows, select the one
(last v) with the smallest / largest index.
(previous v) v: a row. a row. Select the row that is above
(next v) / below the given row.
(count v) v: a list of rows. a number. Count the number of given rows.
(max v p) v: a list of rows. a number. Compute the maximum / minimum
(min v p) p: a number column. / average / sum of the given column
(average v p) in the given rows.
(sum v p)
(mode v p) v: a list of rows. a cell. Get the most common value of the
p: a column. given column in the given rows.
(same_as v p) v: a row. a list of rows. Get the rows whose given column is
p: a column. the same as the given row.
(diff v0 v1 p) v0: a row. a number. Compute the difference in the given
v1: a row. column of the given two rows.
p: a number column.
Table 5: Functions used in the experiments.

In the WikiTableQuestions experiments, we used all the functions in the table. In the WikiSQL experiments, because the semantics of the questions are simpler, we used a subset of the functions (hop, the filter variants, count, maximum, minimum, average and sum). We created the functions according to [67, 34]. (The only function we have added to capture some complex semantics is the same_as function, but it only appears in a small fraction of the generated programs, so even if we remove it, the significance of the difference in Table 3 will not change.)
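For illustration, here is a minimal sketch of an interpreter for a few of the functions in Table 5 (our own simplification, representing a table as a list of row dicts; it is not the paper's Lisp interpreter):

def hop(rows, col):
    # (hop v p): select the given column of the given rows.
    return [row[col] for row in rows]

def argmax_rows(rows, col):
    # (argmax v p): the rows with the largest value in the given column.
    best = max(row[col] for row in rows)
    return [row for row in rows if row[col] == best]

def count_rows(rows):
    # (count v): the number of given rows.
    return len(rows)

# Example on the table from Table 1: "Where did the last 1st place finish occur?"
table = [
    {"year": 2001, "venue": "Hungary",  "position": "2nd", "event": "400m"},
    {"year": 2003, "venue": "Finland",  "position": "1st", "event": "400m"},
    {"year": 2007, "venue": "Thailand", "position": "1st", "event": "relay"},
    {"year": 2008, "venue": "China",    "position": "7th", "event": "relay"},
]
first_places = [r for r in table if r["position"] == "1st"]   # a filter-style step
print(hop(argmax_rows(first_places, "year"), "venue"))        # ['Thailand']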

Appendix B Examples of Generated Programs

The following table shows examples of several types of programs generated by a trained model.

Statement Comment
Superlative
nt-13901: the most points were scored by which player?
(argmax all_rows r.points-num) Sort all rows by column ‘points’ and take the first row.
(hop v0 r.player-str) Output the value of column ‘player’ for the rows in v0.
Difference
nt-457: how many more passengers flew to los angeles than to saskatoon?
(filter all_rows [’saskatoon’] r.city-str) Find the row with ‘saskatoon’ matched in column ‘city’.
(filter all_rows [‘los angeles’] r.city-str) Find the row with ‘los angeles’ matched in column ‘city’.
(diff v1 v0 r.passengers-num) Calculate the difference of the values
in the column ‘passenger’ of v0 and v1.
Before / After
nt-10832: which nation is before peru?
(filter all_rows [‘peru’] r.nation-str) Find the row with ‘peru’ matched in ‘nation’ column.
(previous v0) Find the row before v0.
(hop v1 r.nation-str) Output the value of column ‘nation’ of v1.
Compare & Count
nt-647: in how many games did sri lanka score at least 2 goals?
(filter all_rows [2] r.score-num) Select the rows whose value in the ‘score’ column >= 2.
(count v0) Count the number of rows in v0.
Exclusion
nt-1133: other than william stuart price, which other businessman was born in tulsa?
(filter all_rows [‘tulsa’] r.hometown-str) Find rows with ‘tulsa’ matched in column ‘hometown’.
(filter v0 [‘william stuart price’] r.name-str) Drop rows with ‘william stuart price’ matched in the
value of column ‘name’.
(hop v1 r.name-str) Output the value of column ‘name’ of v1.
Table 6: Example programs generated by a trained model.

Appendix C Analysis of Sampling from Inside and Outside Memory Buffer

In the following we give a theoretical analysis of the distributed sampling approach. For the purpose of the analysis, we assume binary rewards and exhaustive exploration, i.e., the buffer B contains all the high-reward samples and A∖B contains only non-rewarded samples. The analysis provides general guidance on how samples should be allocated to the two strata and on whether to use a baseline, so that the variance of the gradient estimate is minimized. In our work, we take the simpler approach of not using a baseline and leave the empirical investigation to future work.

C.1 Variance: Baseline vs. No Baseline

Here we compare the strategies with and without a baseline based on the variance of their gradient estimates. We thank Wenyun Zuo for the suggestion on approximating the variances.

Assume σ_+^2 and σ_-^2 are the variances of the gradient of the log likelihood inside and outside the buffer. If we don't use a baseline, the trajectories outside the buffer contribute zero gradient (their reward is 0), so the optimal sampling strategy is to only sample from B. With n samples, the variance of the gradient estimate is

Var_nb = π_B^2 σ_+^2 / n    (9)

If we use a baseline b = π_B and apply the optimal sample allocation (Section C.2), splitting the n samples equally between the two strata, then the variance of the gradient estimate is

Var_b = 2 π_B^2 (1 − π_B)^2 (σ_+^2 + σ_-^2) / n    (10)

We can show that both of these estimators reduce the variance of the gradient estimate. To compare the two, note that the ratio of the variances with and without a baseline is

Var_b / Var_nb = 2 (1 − π_B)^2 (1 + σ_-^2 / σ_+^2)    (11)

So using a baseline provides lower variance when π_B is close to 1, which roughly corresponds to the later stage of training, and when σ_-^2 is not much larger than σ_+^2; it is better to not use a baseline when π_B is not close to 1.0 or when σ_-^2 is much larger than σ_+^2.

C.2 Optimal Sample Allocation

Given that we want to apply stratified sampling to estimate the gradient of REINFORCE with a baseline as in (2), here we show that the optimal sampling strategy is to allocate the same number of samples to B and A∖B.

Assume that the gradient of the log likelihood has similar variance on B and A∖B:

σ_+^2 ≈ σ_-^2 = σ^2    (12)

With the baseline set to the expected return, b = π_B (which, under the binary-reward assumption, equals the total probability of the rewarded trajectories), the variances of the per-sample gradient terms on B and A∖B are:

Var_+ = (1 − π_B)^2 σ^2,    Var_- = π_B^2 σ^2    (13)

When performing stratified sampling, the optimal sample allocation is to let the number of samples drawn from a stratum be proportional to its probability mass times the standard deviation within the stratum. In other words, the more probability mass and the more variance a stratum has, the more samples we should draw from it. So the ratio of the number of samples allocated to the two strata under the optimal sample allocation is

n_+ / n_- = (π_B √Var_+) / ((1 − π_B) √Var_-)    (14)

Using equation (13), we can see that

n_+ / n_- = (π_B (1 − π_B) σ) / ((1 − π_B) π_B σ) = 1    (15)

So the optimal strategy is to allocate the same number of samples to B and A∖B.

Appendix D Distributed Actor-Learner Architecture

Figure 4: Distributed actor-learner architecture.

Using 30 CPUs, each running one actor, and 2 GPUs, one for training and one for evaluating on dev set, the experiment finishes in about 3 hours on WikiTableQuestions and about 7 hours on WikiSQL.

Appendix E Pruning Rules for Random Exploration on WikiTableQuestions

The pruning rules are inspired by the grammar of [67]. They can be seen as trigger words or POS tags for a subset of the functions: a function in the subset is only allowed when at least one of its trigger words or tags appears in the sentence, while the functions not included in the subset are not constrained. Also note that these rules are only used during random exploration; during training and evaluation, the rules are not applied. A small code sketch follows Table 7.

Function Triggers
count how, many, total, number
filter not, other, besides
first first, top
last last, bottom
argmin JJR, JJS, RBR, RBS, top, first, bottom, last
argmax JJR, JJS, RBR, RBS, top, first, bottom, last
sum all, combine, total
average average
maximum JJR, JJS, RBR, RBS
minimum JJR, JJS, RBR, RBS
mode most
previous next, previous, after, before, above, below
next next, previous, after, before, above, below
same same
diff difference, more, than
filter RBR, JJR, more, than, least, above, after
filter RBR, JJR, less, than, most, below, before, under
filter RBR, JJR, more, than, least, above, after
filter RBR, JJR, less, than, most, below, before, under
Table 7: Pruning rules used during random exploration on WikiTableQuestions.
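A small sketch of how such trigger rules could gate functions during random exploration (our own illustration; TRIGGERS holds a subset of the entries in Table 7, and the POS tags are assumed to come from a tagger such as CoreNLP):

TRIGGERS = {
    "count":   {"how", "many", "total", "number"},
    "average": {"average"},
    "argmax":  {"JJR", "JJS", "RBR", "RBS", "top", "first", "bottom", "last"},
}

def allowed_functions(question_tokens, pos_tags, all_functions):
    # Functions without trigger rules are always allowed; the others require
    # at least one trigger word or POS tag to appear in the question.
    evidence = set(question_tokens) | set(pos_tags)
    return [f for f in all_functions
            if f not in TRIGGERS or TRIGGERS[f] & evidence]

# "how many games ..." licenses count; argmax is pruned without a superlative.
print(allowed_functions(["how", "many", "games"], ["WRB", "JJ", "NNS"],
                        ["count", "argmax", "hop"]))       # ['count', 'hop']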