Weakly-supervised Semantic Parsing with Abstract Examples

11/14/2017 · by Omer Goldman, et al.

Semantic parsers translate language utterances to programs, but are often trained from utterance-denotation pairs only. Consequently, parsers must overcome the problem of spuriousness at training time, where an incorrect program found at search time accidentally leads to a correct denotation. We propose that in small well-typed domains, we can semi-automatically generate an abstract representation for examples that facilitates information sharing across examples. This alleviates spuriousness, as the probability of randomly obtaining a correct answer from a program decreases across multiple examples. We test our approach on CNLVR, a challenging visual reasoning dataset, where spuriousness is central because denotations are either TRUE or FALSE, and thus random programs have high probability of leading to a correct denotation. We develop the first semantic parser for this task and reach 83.5% accuracy, a 15.7% absolute accuracy improvement over the best result reported so far.


1 Introduction

The goal of semantic parsing is to map language utterances to executable programs. Early work on statistical learning of semantic parsers utilized supervised learning, where training examples included pairs of language utterances and programs Zelle and Mooney (1996); Kate et al. (2005); Zettlemoyer and Collins (2005, 2007). However, collecting such training examples at scale quickly turned out to be difficult, because expert annotators who are familiar with formal languages are required. This has led to a body of work on weakly-supervised semantic parsing Clarke et al. (2010); Liang et al. (2011); Krishnamurthy and Mitchell (2012); Kwiatkowski et al. (2013); Berant et al. (2013); Cai and Yates (2013); Artzi and Zettlemoyer (2013). In this setup, training examples correspond to utterance-denotation pairs, where a denotation is the result of executing a program against the environment (see Fig. 1). Naturally, collecting denotations is much easier, because it can be performed by non-experts.

Figure 1: Overview of our visual reasoning setup for the CNLVR dataset. Given an image rendered from a KB k and an utterance x, our goal is to parse x to a program z that results in the correct denotation y. Our training data includes (x, k, y) triplets.

Training semantic parsers from denotations rather than programs complicates training in two ways: (a) Search: the algorithm must learn to search through the huge space of programs at training time in order to find the correct program; this is a difficult search problem due to the combinatorial nature of the search space. (b) Spuriousness: incorrect programs can lead to correct denotations, and thus the learner can go astray based on these programs. Of the two problems, spuriousness has attracted relatively little attention Pasupat and Liang (2016); Guu et al. (2017).

Recently, the Cornell Natural Language for Visual Reasoning corpus (CNLVR) was released Suhr et al. (2017), presenting an opportunity to better investigate the problem of spuriousness. In this task, an image containing boxes with objects of various shapes, colors and sizes is shown. Each image is paired with a complex natural language statement, and the goal is to determine whether the statement is true or false (Fig. 1). The task comes in two flavors: in one the input is the image (pixels), and in the other it is the knowledge-base (KB) from which the image was synthesized. Given the KB, it is natural to view CNLVR as a semantic parsing problem: our goal is to translate language utterances into programs that are executed against the KB to determine their correctness Johnson et al. (2017b); Hu et al. (2017). Because there are only two return values, it is easy to generate programs that execute to the right denotation, making spuriousness a major problem compared to previous datasets.

x: “There are exactly 3 yellow squares touching the wall.”
z: Equal(3, Count(Filter(ALL_ITEMS, λx.And(And(IsYellow(x), IsSquare(x)), IsTouchingWall(x)))))
x̄: “There are C-QuantMod C-Num C-Color C-Shape touching the wall.”
z̄: C-QuantMod(C-Num, Count(Filter(ALL_ITEMS, λx.And(And(IsC-Color(x), IsC-Shape(x)), IsTouchingWall(x)))))
Table 1: An example of an utterance-program pair (x, z) and its abstract counterpart (x̄, z̄).

In this paper, we present the first semantic parser for CNLVR. Semantic parsing can be coarsely divided into a lexical task (i.e., mapping words and phrases to program constants) and a structural task (i.e., mapping language composition to program composition operators). Our core insight is that in closed worlds with clear semantic types, like spatial and visual reasoning, we can manually construct a small lexicon that clusters language tokens and program constants, and create a partially abstract representation for utterances and programs (Table 1) in which the lexical problem is substantially reduced. This scenario is ubiquitous in many semantic parsing applications such as calendar, restaurant reservation, and housing applications: the formal language has a compact semantic schema and a well-defined typing system, and there are canonical ways to express many program constants.

We show that with abstract representations we can share information across examples and better tackle the search and spuriousness challenges. By pulling together different examples that share the same abstract representation, we can identify programs that obtain high reward across multiple examples, thus reducing the problem of spuriousness. This can also be done at search time, by augmenting the search state with partial programs that have been shown to be useful in earlier iterations. Moreover, we can annotate a small number of abstract utterance-program pairs, and automatically generate training examples, that will be used to warm-start our model to an initialization point in which search is able to find correct programs.

We develop a formal language for visual reasoning, inspired by Johnson et al. (2017b), and train a semantic parser over that language from weak supervision, showing that abstract examples substantially improve parser accuracy. Our parser obtains an accuracy of 82.5%, a 14.7% absolute accuracy improvement compared to state-of-the-art. All our code is publicly available at https://github.com/udiNaveh/nlvr_tau_nlp_final_proj.

2 Setup

Problem Statement

Given a training set of N examples {(x_i, k_i, y_i)}_{i=1}^N, where x_i is an utterance, k_i is a KB describing objects in an image, and y_i denotes whether the utterance is true or false in the KB, our goal is to learn a semantic parser that maps a new utterance x to a program z such that when z is executed against the corresponding KB k, it yields the correct denotation y (see Fig. 1).

Programming language

x: “There is a small yellow item not touching any wall.”
z: Exist(Filter(ALL_ITEMS, λx.And(And(IsYellow(x), IsSmall(x)), Not(IsTouchingWall(x, Side.Any)))))
x: “One tower has a yellow base.”
z: GreaterEqual(1, Count(Filter(ALL_ITEMS, λx.And(IsYellow(x), IsBottom(x)))))
Table 2: Examples of utterance-program pairs. Commas and parentheses are provided for readability only.

The original KBs in CNLVR describe an image as a set of objects, where each object has a color, shape, size and location in absolute coordinates. We define a programming language over the KB that is more amenable to spatial reasoning, inspired by work on the CLEVR dataset Johnson et al. (2017b). This programming language provides access to functions that allow us to check the size, shape, and color of an object, to check whether it is touching a wall, to obtain sets of items that are above and below a certain set of items, etc. (We leave the problem of learning the programming language functions from the original KB for future work.) More formally, a program is a sequence of tokens describing a possibly recursive sequence of function applications in prefix notation. Each token is either a function with fixed arity (all functions have either one or two arguments), a constant, a variable, or a λ term used to define Boolean functions. Functions, constants and variables have one of the following atomic types: Int, Bool, Item, Size, Shape, Color, Side (sides of a box in the image); or a composite type Set(?) or Func(?,?). Valid programs have return type Bool. Tables 1 and 2 provide examples of utterances and their correct programs. The supplementary material provides a full description of all program tokens, their arguments and return types.
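To make the semantics concrete, the following is a minimal Python sketch of how such a typed language can be executed against a KB. The KB encoding and the function names (`all_items`, `filt`, `exist`, `count`, `is_yellow`, `is_square`) are our own illustration, not the authors' implementation:

```python
# A toy KB (illustrative encoding): each object has a color, shape and size.
KB = [
    {"color": "yellow", "shape": "square", "size": "small"},
    {"color": "blue", "shape": "circle", "size": "big"},
]

def all_items(kb):          # ALL_ITEMS : Set(Item)
    return list(kb)

def filt(items, pred):      # Filter : Set(Item) x Func(Item, Bool) -> Set(Item)
    return [o for o in items if pred(o)]

def exist(items):           # Exist : Set(Item) -> Bool
    return len(items) > 0

def count(items):           # Count : Set(Item) -> Int
    return len(items)

is_yellow = lambda o: o["color"] == "yellow"   # IsYellow : Func(Item, Bool)
is_square = lambda o: o["shape"] == "square"   # IsSquare : Func(Item, Bool)

# Exist(Filter(ALL_ITEMS, λx.And(IsYellow(x), IsSquare(x))))
denotation = exist(filt(all_items(KB), lambda o: is_yellow(o) and is_square(o)))
print(denotation)  # True
```

As required by the language definition, the outermost function returns a Bool, which serves as the denotation.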

Unlike CLEVR, CNLVR requires substantial set-theoretic reasoning (utterances refer to various aspects of sets of items in one of the three boxes in the image), which required extending the language described by Johnson et al. (2017b) to include set operators and lambda abstraction. We manually analyzed 100 examples sampled from the training data and estimate that roughly 95% of the utterances in the training data can be expressed with this programming language.

3 Model

We base our model on the semantic parser of Guu et al. (2017). In their work, they used an encoder-decoder architecture Sutskever et al. (2014) to define a distribution p(z|x). The utterance x is encoded using a bi-directional LSTM Hochreiter and Schmidhuber (1997) that creates a contextualized representation for every utterance token, and the decoder is a feed-forward network combined with an attention mechanism over the encoder outputs Bahdanau et al. (2015). The feed-forward decoder takes as input the last K tokens that were decoded.

More formally, the probability of a program is the product of the probabilities of its tokens given the history: p(z|x) = ∏_t p(z_t | x, z_1, …, z_{t-1}), and the probability of a decoded token is computed as follows. First, a Bi-LSTM encoder converts the input sequence of utterance embeddings into a sequence of forward and backward states (h_1^F, …, h_{|x|}^F) and (h_1^B, …, h_{|x|}^B). The utterance representation is x̂ = [h_{|x|}^F ; h_1^B]. Then decoding produces the program token-by-token:

p(z_t | x, z_1, …, z_{t-1}) ∝ exp(φ_{z_t}^T W [x̂ ; v_t ; h_t]),

where φ_{z_t} is an embedding for program token z_t, v_t is a bag-of-words vector for the tokens in x, h_t is a history vector of size K holding the last K decoded tokens, the matrices W are learned parameters (along with the LSTM parameters and embedding matrices), and ';' denotes concatenation.

Search:

Searching through the large space of programs is a fundamental challenge in semantic parsing. To combat this challenge we apply several techniques. First, we use beam search at decoding time and when training from weak supervision (see Sec. 4), similar to prior work Liang et al. (2017); Guu et al. (2017). At each decoding step we maintain a beam of program prefixes of length t, expand them exhaustively to prefixes of length t+1, and keep the k program prefixes with highest model probability.
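A single step of this beam search can be sketched as follows; `expand` and `score` stand in for the decoder's vocabulary and log-probability, and are toy assumptions here:

```python
import heapq
import math

def beam_step(prefixes, expand, score, k):
    """One decoding step: exhaustively expand each prefix of length t to
    length t+1 and keep the top-k continuations by model (log-)score."""
    candidates = [p + (tok,) for p in prefixes for tok in expand(p)]
    return heapq.nlargest(k, candidates, key=score)

# Toy stand-ins for the decoder: a two-token vocabulary whose score
# favors 'a'-heavy prefixes.
expand = lambda p: ["a", "b"]
score = lambda p: sum(math.log(0.7) if t == "a" else math.log(0.3) for t in p)

beam = [()]
for _ in range(3):
    beam = beam_step(beam, expand, score, k=2)
print(beam[0])  # ('a', 'a', 'a')
```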

Figure 2: An example of the state of the type stack while decoding a program z for an utterance x.

Second, we utilize the semantic typing system to only construct programs that are syntactically valid, and substantially prune the program search space (similar to the type constraints of Krishnamurthy et al. (2017), Xiao et al. (2016), and Liang et al. (2017)). We maintain a stack that keeps track of the expected semantic type at each decoding step. The stack is initialized with the type Bool. Then, at each decoding step, only tokens that return the semantic type at the top of the stack are allowed, the stack is popped, and if the decoded token is a function, the semantic types of its arguments are pushed to the stack. This dramatically reduces the search space and guarantees that only syntactically valid programs will be produced. Fig. 2 illustrates the state of the stack when decoding a program for an input utterance.
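The type-stack constraint can be sketched in a few lines of Python. The token signatures below are illustrative (they mirror the tokens in Tables 1 and 2), not the full language:

```python
# Illustrative token signatures: token -> (return type, argument types).
SIGNATURES = {
    "Exist":     ("Bool", ["Set"]),
    "Equal":     ("Bool", ["Int", "Int"]),
    "Count":     ("Int",  ["Set"]),
    "Filter":    ("Set",  ["Set", "Func"]),
    "ALL_ITEMS": ("Set",  []),
    "IsYellow":  ("Func", []),
    "3":         ("Int",  []),
}

def valid_tokens(stack):
    """Only tokens whose return type matches the top of the type stack."""
    return [t for t, (ret, _) in SIGNATURES.items() if stack and ret == stack[-1]]

def push_token(stack, token):
    """Pop the satisfied type; push argument types (reversed, so the first
    argument is expected next in prefix order)."""
    _, args = SIGNATURES[token]
    return stack[:-1] + list(reversed(args))

stack, program = ["Bool"], []
for token in ["Exist", "Filter", "ALL_ITEMS", "IsYellow"]:
    assert token in valid_tokens(stack)  # the constraint admits this token
    program.append(token)
    stack = push_token(stack, token)
print(program, stack)  # full program decoded, stack empty
```

Decoding terminates exactly when the stack is empty, which is what guarantees syntactic validity.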

Given the constraints on valid programs, our model is defined as:

p(z_t | x, z_1, …, z_{t-1}) ∝ s(z_t, z_1, …, z_{t-1}) · exp(φ_{z_t}^T W [x̂ ; v_t ; h_t]),

where s(z_t, z_1, …, z_{t-1}) indicates whether a certain program token is valid given the program prefix.

Discriminative re-ranking:

The above model is locally normalized, providing a distribution for every decoded token, and thus might suffer from the label bias problem Andor et al. (2016); Lafferty et al. (2001). Thus, we add a globally-normalized re-ranker that scores all programs in the final beam produced by the decoder. Our globally-normalized model is:

p_g(z|x) ∝ exp(s(x, z)),

and is normalized over all programs in the beam. The scoring function s(x, z) is a neural network with an architecture identical to the locally-normalized model, except that (a) it feeds the decoder with the candidate program z and does not generate it, and (b) the last hidden state is fed to a feed-forward network whose output is s(x, z). Our final ranking score is the product of the locally-normalized and globally-normalized probabilities, p(z|x) · p_g(z|x).
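The key point of the re-ranker, normalization over the final beam only rather than over all programs, can be sketched as a softmax over per-program scores (the scoring function here is a toy stand-in for the network's output):

```python
import math

def rerank(beam, score):
    """Globally normalize scores with a softmax over the beam only; `score`
    is a stand-in for the re-ranker network's per-program output."""
    scores = [score(z) for z in beam]
    m = max(scores)                      # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

beam = ["z1", "z2", "z3"]
score = {"z1": 2.0, "z2": 1.0, "z3": 0.0}.get
probs = rerank(beam, score)
print(probs)  # softmax of [2, 1, 0]; sums to 1 over the beam
```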

4 Training

We now describe our basic method for training from weak supervision, which we extend in Sec. 5 using abstract examples. To use weak supervision, we treat the program z as a latent variable that is approximately marginalized. To describe the objective, define R(z, k, y) to be one if executing program z on KB k results in denotation y, and zero otherwise. The objective is then to maximize p(y|x), given by:

p(y|x) = Σ_{z ∈ Z} p(z|x) R(z, k, y) ≈ Σ_{z ∈ B} p(z|x) R(z, k, y),

where Z is the space of all programs and B ⊆ Z are the programs found by beam search.
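A sketch of this approximate marginalization: the reward R zeroes out programs with wrong denotations, and the surviving beam probabilities are summed. `execute` is a toy stand-in for the program executor:

```python
import math

def reward(z, kb, y, execute):
    """R(z, k, y): 1 if executing z on kb yields denotation y, else 0."""
    return 1.0 if execute(z, kb) == y else 0.0

def approx_log_marginal(beam_probs, beam_programs, kb, y, execute):
    """Approximate log p(y|x): sum p(z|x) over beam programs whose
    execution matches the denotation."""
    total = sum(p * reward(z, kb, y, execute)
                for p, z in zip(beam_probs, beam_programs))
    return math.log(total) if total > 0 else float("-inf")

# Toy setting: "programs" are constants and execution just returns them.
execute = lambda z, kb: z
probs, programs = [0.5, 0.3, 0.2], [True, False, True]
log_py = approx_log_marginal(probs, programs, kb=None, y=True, execute=execute)
print(log_py)  # log(0.5 + 0.2)
```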

In most semantic parsers there will be relatively few programs z that generate the correct denotation y. However, in CNLVR y is binary, and so spuriousness is a central problem. To alleviate it, we utilize a property of CNLVR: the same utterance appears 4 times with 4 different images. (We used the KBs in CNLVR, for which there are 4 KBs per utterance. When working over pixels there are 24 images per utterance, as 6 images were generated from each KB.) If a program is spurious, it is likely to yield the wrong denotation on at least one of those 4 images.

Thus, we re-define each training example to be (x, {k_j}_{j=1}^4, {y_j}_{j=1}^4), where each utterance x is paired with 4 different KBs and the denotations of the utterance with respect to these KBs. We then maximize p({y_j}_{j=1}^4 | x) by maximizing the objective above, except that R = 1 iff the denotation of z is correct for all four KBs. This dramatically reduces the problem of spuriousness, as the chance of randomly obtaining a correct denotation goes down from 1/2 to (1/2)^4 = 1/16. This is reminiscent of Pasupat and Liang (2016), where random permutations of Wikipedia tables were shown to crowdsourcing workers to eliminate spurious programs.
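The reward tying can be sketched as follows; `execute` is again a toy stand-in, and the probability arithmetic mirrors the 1/2 → 1/16 argument above:

```python
# Sketch of reward tying: a program is rewarded only if its denotation is
# correct on all four KBs paired with the utterance. `execute` is a toy
# stand-in that ignores the KB and returns the "program" itself.
execute = lambda z, kb: z

def tied_reward(program, kbs, denotations):
    """1.0 iff `program` yields the correct denotation on every KB."""
    return 1.0 if all(execute(program, k) == y
                      for k, y in zip(kbs, denotations)) else 0.0

# A random TRUE/FALSE program is right on one KB with probability 1/2,
# but on four KBs with probability (1/2)**4 = 1/16.
print(0.5 ** 4)  # 0.0625

# A constant-True "program" loses the tied reward once any denotation is False.
print(tied_reward(True, [1, 2, 3, 4], [True, True, False, True]))  # 0.0
```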

We train the discriminative re-ranker analogously, by maximizing the probability of programs with a correct denotation.

This basic training method fails for CNLVR (see Sec. 6), due to the difficulties of search and spuriousness. Thus, we turn to learning from abstract examples, which substantially reduce these problems.

5 Learning from Abstract Examples

Utterance   Program    Cluster      #
“yellow”    IsYellow   C-Color      3
“big”       IsBig      C-Size       3
“square”    IsSquare   C-Shape      4
“3”         3          C-Num        2
“exactly”   EqualInt   C-QuantMod   5
“top”       Side.Top   C-Location   2
“above”     GetAbove   C-SpaceRel   6
Total: 25
Table 3: Example mappings from utterance tokens to program tokens for the seven clusters used in the abstract representation. The rightmost column counts the number of mappings in each cluster, for a total of 25 mappings.

The main premise of this work is that in closed, well-typed domains such as visual reasoning, the main challenge is handling language compositionality, since questions may have a complex and nested structure. Conversely, the problem of mapping lexical items to functions and constants in the programming language can be substantially alleviated by taking advantage of the compact KB schema and typing system, and utilizing a small lexicon that maps prevalent lexical items into typed program constants. Thus, if we abstract away from the actual utterance into a partially abstract representation, we can combat the search and spuriousness challenges as we can generalize better across examples in small datasets.

Consider the utterances:

  1. “There are exactly 3 yellow squares touching the wall.”

  2. “There are at least 2 blue circles touching the wall.”

While the surface forms of these utterances are different, at an abstract level they are similar and it would be useful to leverage this similarity.

We therefore define an abstract representation for utterances and logical forms that is suitable for spatial reasoning. We define seven abstract clusters (see Table 3) that correspond to the main semantic types in our domain. Then, we associate each cluster with a small lexicon that contains language-program token pairs associated with this cluster. These mappings represent the canonical ways in which program constants are expressed in natural language. Table 3 shows the seven clusters we use, with an example for an utterance-program token pair from the cluster, and the number of mappings in each cluster. In total, 25 mappings are used to define abstract representations.
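Utterance abstraction itself is a straightforward token-to-cluster replacement. The lexicon below is a small illustrative fragment in the spirit of Table 3 (it assumes tokens are already lowercased and normalized, as in our pre-processing):

```python
# Illustrative lexicon fragment: utterance token -> cluster label.
LEXICON = {
    "yellow": "C-Color", "blue": "C-Color",
    "squares": "C-Shape", "circles": "C-Shape",
    "3": "C-Num", "2": "C-Num",
    "exactly": "C-QuantMod",
}

def abstract_utterance(tokens):
    """Replace every token covered by the lexicon with its cluster label."""
    return [LEXICON.get(t, t) for t in tokens]

u1 = "there are exactly 3 yellow squares touching the wall".split()
u2 = "there are exactly 2 blue circles touching the wall".split()
# Different surface forms, identical abstract representation:
print(abstract_utterance(u1))
print(abstract_utterance(u1) == abstract_utterance(u2))  # True
```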

As we show next, abstract examples can be used to improve the process of training semantic parsers. Specifically, in sections 5.1-5.3, we use abstract examples in several ways, from generating new training data to improving search accuracy. The combined effect of these approaches is quite dramatic, as our evaluation demonstrates.

5.1 High Coverage via Abstract Examples

We begin by demonstrating that abstraction leads to rather effective coverage of the types of questions asked in a dataset. Namely, that many questions in the data correspond to a small set of abstract examples. We created abstract representations for all 3,163 utterances in the training examples by mapping utterance tokens to their cluster label, and then counted how many distinct abstract utterances exist. We found that as few as 200 abstract utterances cover roughly half of the training examples in the original training set.

The above suggests that knowing how to answer a small set of abstract questions may already yield a reasonable baseline. To test this baseline, we constructed a “rule-based” parser as follows. We manually annotated 106 abstract utterances with their corresponding abstract program (including alignment between abstract tokens in the utterance and program). For example, Table 1 shows the abstract utterance and program for the utterance “There are exactly 3 yellow squares touching the wall”. Note that the utterance “There are at least 2 blue circles touching the wall” will be mapped to the same abstract utterance and program.

Given this set of manual annotations, our rule-based semantic parser operates as follows. Given an utterance x, create its abstract representation x̄. If it exactly matches one of the manually annotated abstract utterances, map it to its corresponding abstract program z̄. Replace the abstract program tokens with concrete program tokens based on the alignment with the utterance tokens, and obtain a final program z. If x̄ does not match any annotated abstract utterance, return True, the majority label. The rule-based parser will fail for examples not covered by the manual annotation. However, it already provides a reasonable baseline (see Table 4). As shown next, manual annotations can also be used for generating new training data.
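The rule-based parser can be sketched as follows. The lexicon entries, the single annotated abstract pair, and the assumption that each cluster appears at most once per utterance are all simplifications for illustration:

```python
# Illustrative lexicon: utterance token -> (cluster, program token).
LEXICON = {
    "exactly": ("C-QuantMod", "EqualInt"),
    "3": ("C-Num", "3"), "2": ("C-Num", "2"),
    "yellow": ("C-Color", "IsYellow"), "blue": ("C-Color", "IsBlue"),
    "squares": ("C-Shape", "IsSquare"), "circles": ("C-Shape", "IsCircle"),
}

# One manually annotated abstract pair (the real parser has 106 of these).
ABSTRACT_PAIRS = {
    ("there", "are", "C-QuantMod", "C-Num", "C-Color", "C-Shape",
     "touching", "the", "wall"):
        ["C-QuantMod", "C-Num", "Count", "Filter", "ALL_ITEMS",
         "And", "C-Color", "C-Shape", "IsTouchingWall"],
}

def rule_based_parse(tokens):
    abstract, alignment = [], {}
    for t in tokens:
        if t in LEXICON:
            cluster, prog_tok = LEXICON[t]
            abstract.append(cluster)
            alignment[cluster] = prog_tok  # assumes one token per cluster
        else:
            abstract.append(t)
    template = ABSTRACT_PAIRS.get(tuple(abstract))
    if template is None:
        return "True"  # fall back to the majority label
    return [alignment.get(tok, tok) for tok in template]

print(rule_based_parse("there are exactly 3 yellow squares touching the wall".split()))
```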

5.2 Data Augmentation

While the rule-based semantic parser has high precision and gauges the amount of structural variance in the data, it cannot generalize beyond observed examples. However, we can automatically generate non-abstract utterance-program pairs from the manually annotated abstract pairs and train a semantic parser with strong supervision that can potentially generalize better. E.g., consider the utterance “There are exactly 3 yellow squares touching the wall”, whose abstract representation is given in Table 1. It is clear that we can use this abstract pair to generate a program for a new utterance “There are exactly 3 blue squares touching the wall”. This program will be identical to the program of the first utterance, with IsBlue replacing IsYellow.

More generally, we can sample any abstract example and instantiate the abstract clusters that appear in it by sampling pairs of utterance-program tokens for each abstract cluster. Formally, this is equivalent to a synchronous context-free grammar Chiang (2005) that has a rule for generating each manually-annotated abstract utterance-program pair, and rules for synchronously generating utterance and program tokens from the seven clusters.
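Sampling from this synchronous grammar can be sketched as below; the cluster inventories and the single abstract pair are illustrative, and a real generator would iterate over all 106 annotated pairs:

```python
import random

# One annotated abstract pair (illustrative) and small cluster inventories
# of (utterance token, program token) mappings.
CLUSTERS = {
    "C-Num":   [("3", "3"), ("2", "2")],
    "C-Color": [("yellow", "IsYellow"), ("blue", "IsBlue")],
    "C-Shape": [("squares", "IsSquare"), ("circles", "IsCircle")],
}
ABSTRACT_UTT = ["there", "are", "exactly", "C-Num", "C-Color", "C-Shape"]
ABSTRACT_PRG = ["EqualInt", "C-Num", "Count", "Filter", "ALL_ITEMS",
                "And", "C-Color", "C-Shape"]

def instantiate(rng):
    """Sample one concrete (utterance, program) pair: each cluster slot is
    instantiated synchronously on both sides, as in an SCFG rule."""
    choice = {c: rng.choice(opts) for c, opts in CLUSTERS.items()}
    utt = [choice[t][0] if t in choice else t for t in ABSTRACT_UTT]
    prg = [choice[t][1] if t in choice else t for t in ABSTRACT_PRG]
    return utt, prg

utt, prg = instantiate(random.Random(0))
print(" ".join(utt))
print(prg)
```

Because both sides read from the same sampled `choice`, the generated utterance and program are always consistent (e.g., "yellow" in the utterance implies IsYellow in the program).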

We generated 6,158 examples using this method and trained a standard sequence-to-sequence parser by maximizing log p(z|x) in the model above. Although these are generated from a small set of 106 abstract utterances, they can be used to learn a model with higher coverage and accuracy compared to the rule-based parser, as our evaluation demonstrates. (Training a parser directly over the 106 abstract examples results in poor performance due to the small number of examples.)

The resulting parser can be used as a standalone semantic parser. However, it can also be used as an initialization point for the weakly-supervised semantic parser. As we observe in Sec. 6, this results in further improvement in accuracy.

5.3 Caching Abstract Examples

1: procedure Decode(x, C, D)
2:   // C is a map whose key is an abstract utterance and whose value is a pair of a list of abstract programs and their average rewards. D is an integer.
3:   x̄ ← abstract utterance of x
4:   A ← the D abstract programs in C[x̄] with top average reward
5:   B_1 ← compute beam of programs of length 1
6:   for t = 2 … T do          // decode with cache
7:     B_t ← construct beam from B_{t−1}
8:     A_t ← prefixes of length t of the programs in A
9:     B_t.add(A_t)
10:  for z ∈ B_T do            // update cache
11:    update rewards in C[x̄] using R(z)
12:  return B_T ∪ A
Algorithm 1: Decoding with an Abstract Cache
Figure 3: A visualization of the caching mechanism. At each decoding step, prefixes of high-reward abstract programs are added to the beam from the cache.

We now describe a caching mechanism that uses abstract examples to combat search and spuriousness when training from weak supervision. As shown in Sec. 5.1, many utterances are identical at the abstract level. Thus, a natural idea is to keep track at training time of abstract utterance-program pairs that resulted in a correct denotation, and use this information to direct the search procedure.

Concretely, we construct a cache C that maps abstract utterances to all abstract programs that were decoded by the model, and tracks the average reward obtained for each of those programs. For every utterance x, after obtaining the final beam of programs, we add to the cache all abstract utterance-program pairs (x̄, z̄) and update their average reward (Alg. 1, line 10). To construct an abstract example from an utterance-program pair (x, z) in the beam, we perform the following procedure. First, we create x̄ by replacing utterance tokens with their cluster label, as in the rule-based semantic parser. Then, we go over every program token in z, and replace it with an abstract cluster if the utterance contains a token that is mapped to this program token according to the mappings from Table 3. This also provides an alignment from abstract program tokens to abstract utterance tokens that is necessary when utilizing the cache.

We propose two variants for taking advantage of the cache C. Both are shown in Algorithm 1.
1. Full program retrieval (Alg. 1, line 12): Given an utterance x, construct its abstract utterance x̄, retrieve the top abstract programs from the cache, compute the de-abstracted programs z using the alignment from program tokens to utterance tokens, and add these programs to the final beam.
2. Program prefix retrieval (Alg. 1, line 9): Here, we additionally add prefixes of abstract programs to the beam, to further guide the search process. At each step t, let B_t be the beam of decoded programs at step t. For every retrieved abstract program we add the de-abstracted prefix of length t to B_t and expand the beam accordingly. This allows the parser to potentially construct new programs that are not in the cache already. This approach combats both spuriousness and the search challenge, because we add promising program prefixes to the beam that might have fallen off it earlier. Fig. 3 visualizes the caching mechanism.
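The cache itself can be sketched as a small class that keeps running average rewards per abstract program and returns the top entries for retrieval (the data structures are our own illustration):

```python
from collections import defaultdict

class AbstractCache:
    """Maps an abstract utterance to decoded abstract programs with a
    running average reward; retrieval returns the top-scoring programs."""
    def __init__(self, top_c=2):
        self.top_c = top_c
        self.stats = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))

    def update(self, abs_utt, abs_prog, reward):
        s = self.stats[abs_utt][abs_prog]
        s[0] += reward        # cumulative reward
        s[1] += 1             # number of observations

    def retrieve(self, abs_utt):
        progs = self.stats.get(abs_utt, {})
        ranked = sorted(progs.items(),
                        key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
        return [p for p, _ in ranked[: self.top_c]]

cache = AbstractCache(top_c=1)
u = ("there", "are", "C-QuantMod", "C-Num", "C-Color", "C-Shape")
cache.update(u, ("EqualInt", "C-Num", "Count"), reward=1.0)
cache.update(u, ("Exist", "Filter"), reward=0.0)
print(cache.retrieve(u))  # [('EqualInt', 'C-Num', 'Count')]
```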

A high-level overview of our entire approach for utilizing abstract examples at training time for both data augmentation and model training is given in Fig. 4.

Figure 4: An overview of our approach for utilizing abstract examples for data augmentation and model training.

6 Experimental Evaluation

Model and Training Parameters

The Bi-LSTM state dimension, the beam size, and the number of cached programs used in Algorithm 1 are fixed hyper-parameters. The decoder has one hidden layer that takes the last 4 decoded tokens as input, as well as the encoder states. Token embeddings are of dimension 12. Word embeddings are initialized from CBOW Mikolov et al. (2013) trained on the training data, and are then optimized end-to-end. In the weakly-supervised parser we encourage exploration with meritocratic gradient updates Guu et al. (2017), and we warm-start the parameters with the supervised parser, as mentioned above. For optimization, Adam Kingma and Ba (2014) is used.

Pre-processing

Because the number of utterances is relatively small for training a neural model, we take the following steps to reduce sparsity. We lowercase all utterance tokens, and also use their lemmatized form. We also use spelling correction to replace words that contain typos. After pre-processing we replace every word that occurs less than 5 times with an UNK symbol.
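The sparsity-reducing steps (minus lemmatization and spelling correction, which need external tools) can be sketched as:

```python
from collections import Counter

def preprocess(corpus, min_count=5):
    """Lowercase and replace rare words with UNK (lemmatization and
    spelling correction, mentioned above, are omitted here)."""
    tokenized = [s.lower().split() for s in corpus]
    counts = Counter(t for sent in tokenized for t in sent)
    return [[t if counts[t] >= min_count else "UNK" for t in sent]
            for sent in tokenized]

corpus = ["There is a yellow square"] * 5 + ["There is a chartreuse square"]
processed = preprocess(corpus)
print(processed[-1])  # ['there', 'is', 'a', 'UNK', 'square']
```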

Evaluation

We evaluate on the public development and test sets of CNLVR, as well as on the hidden test set. The standard evaluation metric is accuracy, i.e., how many examples are correctly classified. In addition, we report consistency, which is the proportion of utterances for which the decoded program has the correct denotation for all 4 images/KBs. It captures whether a model consistently produces a correct answer.
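The two metrics can be sketched as follows, with predictions grouped by utterance (4 per utterance, one per paired KB):

```python
def accuracy(preds, golds):
    """Fraction of individual examples classified correctly."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def consistency(preds_by_utt, golds_by_utt):
    """Fraction of utterances whose program is correct on all paired KBs."""
    ok = sum(all(p == g for p, g in zip(ps, gs))
             for ps, gs in zip(preds_by_utt, golds_by_utt))
    return ok / len(golds_by_utt)

# Two utterances, four KBs each; the first has one wrong prediction.
preds = [[True, True, False, True], [True, True, True, True]]
golds = [[True, True, True, True], [True, True, True, True]]
flat_p = [p for u in preds for p in u]
flat_g = [g for u in golds for g in u]
print(accuracy(flat_p, flat_g))   # 0.875
print(consistency(preds, golds))  # 0.5
```

Consistency is the stricter metric: one wrong KB out of four costs the whole utterance.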

Baselines

We compare our models to the Majority baseline that picks the majority class (True in our case). We also compare to the state-of-the-art model reported by Suhr et al. (2017) when taking the KB as input, which is a maximum entropy classifier (MaxEnt). For our models, we evaluate the following variants of our approach:


  • Rule: The rule-based parser from Sec. 5.1.

  • Sup.: The supervised semantic parser trained on the augmented data as in Sec. 5.2.

  • WeakSup.: Our full weakly-supervised semantic parser that uses abstract examples.

  • +Disc: We add a discriminative re-ranker (Sec. 3) for both Sup. and WeakSup.

Main results

Model        Dev.         Test-P       Test-H
             Acc.   Con.  Acc.   Con.  Acc.   Con.
Majority     55.3   -     56.2   -     55.4   -
MaxEnt       68.0   -     67.7   -     67.8   -
Rule         66.0   29.2  66.3   32.7  -      -
Sup.         67.7   36.7  66.9   38.3  -      -
Sup.+Disc    77.7   52.4  76.6   51.8  -      -
WeakSup.     84.3   66.3  81.7   60.1  -      -
W.+Disc      85.7   67.4  84.0   65.0  82.5   63.9
Table 4: Results on the development, public test (Test-P) and hidden test (Test-H) sets. For each model, we report both accuracy and consistency.

Table 4 describes our main results. Our weakly-supervised semantic parser with re-ranking (W.+Disc) obtains 84.0% accuracy and 65.0% consistency on the public test set, and 82.5% accuracy and 63.9% consistency on the hidden one, improving accuracy by 14.7 points compared to state-of-the-art. The accuracy of the rule-based parser (Rule) is only about 2 points below MaxEnt, showing that a semantic parsing approach is very suitable for this task. The supervised parser obtains better performance (especially in consistency), and with re-ranking reaches 76.6% accuracy, showing that generalizing from generated examples is better than memorizing manually-defined patterns. Our weakly-supervised parser significantly improves over Sup., reaching an accuracy of 81.7% before re-ranking and 84.0% after re-ranking (on the public test set). Consistency results show an even crisper trend of improvement across the models.

6.1 Analysis

Model                Dev.
                     Acc.   Con.
Randomer             53.2   7.1
Abstraction          58.2   17.6
DataAugmentation     71.4   41.2
BeamCache            77.2   56.1
EveryStepBeamCache   82.3   62.2
OneExampleReward     58.2   11.2
Table 5: Results of ablations of our main model on the development set. The nature of each ablated model is explained in the body of the paper.

We analyze our results by running multiple ablations of our best model W.+Disc on the development set.

To examine the overall impact of our procedure, we trained a weakly-supervised parser from scratch without pre-training a supervised parser or using a cache, which amounts to a re-implementation of the Randomer algorithm Guu et al. (2017). We find that the algorithm is unable to bootstrap in this challenging setup and obtains very low performance. Next, we examined the importance of abstract examples, by pre-training only on examples that were manually annotated (utterances that match the abstract patterns), but with no data augmentation or use of a cache (Abstraction). This results in performance that is similar to the Majority baseline.

To further examine the importance of abstraction, we decoupled the two contributions, training once with a cache but without data augmentation for pre-training (DataAugmentation), and again with pre-training over the augmented data, but without the cache (BeamCache). We found that the former improves by a few points over the MaxEnt baseline, and the latter performs comparably to the supervised parser, that is, we are still unable to improve learning by training from denotations.

Lastly, we use a beam cache without line 9 in Alg. 1 (EveryStepBeamCache). This already results in good performance, substantially higher than Sup., but is still 3.4 points worse than our best performing model on the development set.

Orthogonally, to analyze the importance of tying the reward of all four examples that share an utterance, we trained a model without this tying, where the reward is 1 iff the denotation is correct (OneExampleReward). We find that spuriousness becomes a major issue and weakly-supervised learning fails.

Error Analysis

We sampled 50 consistent and 50 inconsistent programs from the development set to analyze the weaknesses of our model. By and large, errors correspond to utterances that are more complex syntactically and semantically. In about half of the errors an object was described by two or more modifying clauses: “there is a box with a yellow circle and three blue items”; or nesting occurred: “one of the gray boxes has exactly three objects one of which is a circle”. In these cases the model either ignored one of the conditions, resulting in a program equivalent to “there is a box with three blue items” for the first case, or applied composition operators wrongly, outputting an equivalent to “one of the gray boxes has exactly three circles” for the second case. However, in some cases the parser succeeds on such examples, and we found that 12% of the sampled utterances that were parsed correctly had a similar complex structure. Other, less frequent reasons for failure were problems with cardinality interpretation, i.e., “there are 2” parsed as “exactly 2” instead of “at least 2”; applying conditions to items rather than sets, e.g., “there are 2 boxes with a triangle closely touching a corner” parsed as “there are 2 triangles closely touching a corner”; and utterances with questionable phrasing, e.g., “there is a tower that has three the same blocks color”.

Other insights are that the algorithm tended to give higher average probability to the top-ranked program when it is correct than when it is incorrect, indicating that probabilities are correlated with confidence. In addition, sentence length is not predictive of whether the model will succeed: the average utterance length is similar when the model is correct and when it errs.

We also note that the model was successful with sentences that deal with spatial relations, but struggled with sentences that refer to the size of shapes. This is due to the data distribution, which includes many examples of the former case and fewer examples of the latter.

7 Related Work

Training semantic parsers from denotations has been one of the most popular training schemes for scaling semantic parsers since the beginning of the decade. Early work focused on traditional log-linear models Clarke et al. (2010); Liang et al. (2011); Kwiatkowski et al. (2013), but recently denotations have been used to train neural semantic parsers Liang et al. (2017); Krishnamurthy et al. (2017); Rabinovich et al. (2017); Cheng et al. (2017).

Visual reasoning has attracted considerable attention, with datasets such as VQA Antol et al. (2015) and CLEVR Johnson et al. (2017a). The advantage of CNLVR is that language utterances are both natural and compositional. Treating visual reasoning as an end-to-end semantic parsing problem has been previously done on CLEVR Hu et al. (2017); Johnson et al. (2017b).

Our method for generating training data resembles the data-recombination ideas of Jia and Liang (2016), where examples are generated automatically by replacing entities with their categories.

While spuriousness is central to semantic parsing when denotations are not very informative, relatively little work has tackled it explicitly. Pasupat and Liang (2015) used manual rules to prune unlikely programs on the WikiTableQuestions dataset, and later utilized crowdsourcing Pasupat and Liang (2016) to eliminate spurious programs. Guu et al. (2017) proposed Randomer, a method for increasing exploration and handling spuriousness by adding randomness to beam search and proposing a “meritocratic” weighting scheme for gradients. In our work we found that random exploration during beam search did not improve results, while meritocratic updates slightly improved performance.
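To make the meritocratic idea concrete, the sketch below shows one way such gradient weights can be computed; this is a minimal illustration in the spirit of Guu et al. (2017), not their implementation, and the function name and interface are our own.

```python
import math

def meritocratic_weights(logprobs, rewards, beta=0.5):
    """Weight each candidate program's gradient by p(z)^beta * reward,
    renormalized over the reward-earning (consistent) programs.

    beta = 1 recovers MML-style weighting (proportional to model
    probability); beta = 0 weights all consistent programs uniformly,
    which keeps a high-probability spurious program from dominating.
    """
    scores = [math.exp(beta * lp) * r for lp, r in zip(logprobs, rewards)]
    total = sum(scores)
    return [s / total if total > 0 else 0.0 for s in scores]

# Three beam candidates: two consistent (reward 1), one inconsistent.
weights = meritocratic_weights([-0.1, -2.0, -3.0], rewards=[1, 1, 0])
# The inconsistent program gets weight 0, and beta < 1 damps the
# advantage of the high-probability candidate over the low-probability one.
```

Lowering beta spreads the update more evenly across consistent programs, which is why it can help when the highest-probability consistent program is spurious.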

8 Discussion

In this work we presented the first semantic parser for the CNLVR dataset, taking structured representations as input. Our main insight is that in closed, well-typed domains we can generate abstract examples that can help combat the difficulties of training a parser from delayed supervision. First, we use abstract examples to semi-automatically generate utterance-program pairs that help warm-start our parameters, thereby reducing the difficult search challenge of finding correct programs with random parameters. Second, we focus on an abstract representation of examples, which allows us to tackle spuriousness and alleviate search, by sharing information about promising programs between different examples. Our approach dramatically improves performance on CNLVR, establishing a new state-of-the-art.
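The abstraction step itself can be sketched as a simple lexicon lookup. The category names and lexicon entries below are hypothetical placeholders; the paper's actual lexicon and abstract vocabulary differ.

```python
# Hypothetical sketch of mapping utterance tokens to abstract categories
# via a high-precision lexicon, so that distinct concrete utterances
# collapse to one shared abstract example.
LEXICON = {
    "yellow": "C_COLOR", "blue": "C_COLOR", "black": "C_COLOR",
    "circle": "C_SHAPE", "triangle": "C_SHAPE", "square": "C_SHAPE",
    "2": "C_NUM", "3": "C_NUM",
}

def abstract(utterance):
    """Replace lexicon words with their categories; other tokens pass through."""
    return " ".join(LEXICON.get(tok, tok) for tok in utterance.split())

print(abstract("there is a yellow circle"))
print(abstract("there is a blue triangle"))
# both print: there is a C_COLOR C_SHAPE
```

Because both concrete utterances map to the same abstract form, statistics about which programs are promising for that form can be shared across examples, which is what makes a spurious program unlikely to be consistent for the whole abstract cluster.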

In this paper, we used a manually-built high-precision lexicon to construct abstract examples. This is suitable for well-typed domains, which are ubiquitous in the virtual assistant use case. In future work we plan to extend this work and automatically learn such a lexicon. This can reduce manual effort and scale to larger domains where there is substantial variability on the language side.

Acknowledgements

This research was partially supported by The Israel Science Foundation grant 942/16.

References

  • Andor et al. (2016) D. Andor, C. Alberti, D. Weiss, A. Severyn, A. Presta, K. Ganchev, S. Petrov, and M. Collins. 2016. Globally normalized transition-based neural networks. arXiv preprint arXiv:1603.06042 .
  • Antol et al. (2015) S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. 2015. VQA: Visual question answering. In International Conference on Computer Vision (ICCV). pages 2425–2433.
  • Artzi and Zettlemoyer (2013) Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL) 1:49–62.
  • Bahdanau et al. (2015) D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR).
  • Berant et al. (2013) J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).
  • Cai and Yates (2013) Q. Cai and A. Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Association for Computational Linguistics (ACL).
  • Cheng et al. (2017) J. Cheng, S. Reddy, V. Saraswat, and M. Lapata. 2017. Learning structured natural language representations for semantic parsing. In Association for Computational Linguistics (ACL).
  • Chiang (2005) D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Association for Computational Linguistics (ACL). pages 263–270.
  • Clarke et al. (2010) J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL). pages 18–27.
  • Guu et al. (2017) K. Guu, P. Pasupat, E. Z. Liu, and P. Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Association for Computational Linguistics (ACL).
  • Hochreiter and Schmidhuber (1997) S. Hochreiter and J. Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780.
  • Hu et al. (2017) R. Hu, J. Andreas, M. Rohrbach, T. Darrell, and K. Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In International Conference on Computer Vision (ICCV).
  • Jia and Liang (2016) R. Jia and P. Liang. 2016. Data recombination for neural semantic parsing. In Association for Computational Linguistics (ACL).
  • Johnson et al. (2017a) J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. Girshick. 2017a. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Computer Vision and Pattern Recognition (CVPR).
  • Johnson et al. (2017b) J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman, L. Fei-Fei, C. L. Zitnick, and R. Girshick. 2017b. Inferring and executing programs for visual reasoning. In International Conference on Computer Vision (ICCV).
  • Kate et al. (2005) R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Association for the Advancement of Artificial Intelligence (AAAI). pages 1062–1068.
  • Kingma and Ba (2014) D. Kingma and J. Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .
  • Krishnamurthy et al. (2017) J. Krishnamurthy, P. Dasigi, and M. Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Empirical Methods in Natural Language Processing (EMNLP).
  • Krishnamurthy and Mitchell (2012) J. Krishnamurthy and T. Mitchell. 2012. Weakly supervised training of semantic parsers. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). pages 754–765.
  • Kwiatkowski et al. (2013) T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Empirical Methods in Natural Language Processing (EMNLP).
  • Lafferty et al. (2001) J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In International Conference on Machine Learning (ICML). pages 282–289.
  • Liang et al. (2017) C. Liang, J. Berant, Q. Le, K. D. Forbus, and N. Lao. 2017. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In Association for Computational Linguistics (ACL).
  • Liang et al. (2011) P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). pages 590–599.
  • Mikolov et al. (2013) T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • Pasupat and Liang (2015) P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL).
  • Pasupat and Liang (2016) P. Pasupat and P. Liang. 2016. Inferring logical forms from denotations. In Association for Computational Linguistics (ACL).
  • Rabinovich et al. (2017) M. Rabinovich, M. Stern, and D. Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Association for Computational Linguistics (ACL).
  • Suhr et al. (2017) A. Suhr, M. Lewis, J. Yeh, and Y. Artzi. 2017. A corpus of natural language for visual reasoning. In Association for Computational Linguistics (ACL).
  • Sutskever et al. (2014) I. Sutskever, O. Vinyals, and Q. V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS). pages 3104–3112.
  • Xiao et al. (2016) C. Xiao, M. Dymetman, and C. Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Association for Computational Linguistics (ACL).
  • Zelle and Mooney (1996) M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI). pages 1050–1055.
  • Zettlemoyer and Collins (2005) L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI). pages 658–666.
  • Zettlemoyer and Collins (2007) L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). pages 678–687.