TISP
Type-driven Incremental Semantic Parser
Semantic parsing has made significant progress, but most current semantic parsers are extremely slow (CKY-based) and rather primitive in representation. We introduce three new techniques to tackle these problems. First, we design the first linear-time, incremental, shift-reduce-style semantic parsing algorithm, which is more efficient than conventional cubic-time bottom-up semantic parsers. Second, our parser, being type-driven instead of syntax-driven, uses type checking to decide the direction of reduction, which eliminates the need for a syntactic grammar such as CCG. Third, to fully exploit the power of type-driven semantic parsing beyond simple types (such as entities and truth values), we borrow from programming language theory the concepts of subtype polymorphism and parametric polymorphism to enrich the type system and better guide parsing. Our system learns very accurate parses in the GeoQuery, Jobs, and Atis domains.
Most existing semantic parsing efforts employ a CKY-style bottom-up parsing strategy to generate a meaning representation in simply typed lambda calculus Zettlemoyer and Collins (2005); Lu and Ng (2011) or its variants Wong and Mooney (2007); Liang et al. (2011). Although these works led to fairly accurate semantic parsers, there are two major drawbacks: efficiency and expressiveness.
First, as a large body of research in syntactic parsing Nivre (2008); Zhang and Clark (2011) has shown, compared to cubic-time CKY-style parsing, incremental parsing achieves comparable accuracy while being linear-time, which means orders of magnitude faster in practice. We therefore introduce the first incremental parsing algorithm for semantic parsing. More interestingly, unlike syntactic parsing, our incremental semantic parsing algorithm, being strictly type-driven, directly employs type checking to determine the direction of function application on the fly, thus reducing the search space and eliminating the need for a syntactic grammar such as CCG, which explicitly encodes the direction of function application.
However, to fully exploit the power of type-driven incremental parsing, we need a more sophisticated type system than simply typed lambda calculus. We argue that it is beneficial to incorporate an explicit subtype hierarchy, such that ambiguous terms can be grounded based on context in a more explicit and declarative fashion. Compare the following two phrases:
the mayor of New York?
the capital of New York?
If we know that mayor is a function from city to person, then the first New York can only be of type city; similarly, knowing that capital maps states to cities disambiguates the second New York to be of type state. This cannot be done in a simple type system with just entities and booleans.
Now let us consider a more complex question which will be our running example in this paper:
What is the capital of the largest state by area?
Since we know capital takes a state as input, we expect "the largest state by area" to return a state. But does largest always return a state type? Notice that it is polymorphic; for example, largest city by population, or largest lake by perimeter. So there is no unique type for largest: its return type depends on the type of its first argument (city, state, or lake). This observation motivates us to introduce the powerful mechanism of parametric polymorphism from programming languages into the type system for natural language. For example, we can define the type of largest to be a template

largest : ('a → t) → ('a → i) → 'a
where 'a is a type variable that can match any type (for formal details see Section 3).
Just like in functional programming languages such as ML or Haskell, type variables can be bound to a real type (or a range of types) during function application, using the technique of type inference. In the above example, when largest is applied to city, the type variable 'a is bound to type city (or a subtype of it), so that largest eventually returns a city.
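As a minimal illustration (our own Python sketch, not part of Tisp; all names here are hypothetical), a polymorphic type template can be instantiated by binding its type variable at application time:

```python
# A type is either a base-type string ("city", "t", "i", ...), a type
# variable ("'a"), or a (domain, range) pair for a function type.
def instantiate(template, binding):
    """Replace every bound type variable in `template` per `binding`."""
    if isinstance(template, str):
        return binding.get(template, template)
    dom, rng = template
    return (instantiate(dom, binding), instantiate(rng, binding))

# largest : ('a -> t) -> ('a -> i) -> 'a
largest = (("'a", "t"), (("'a", "i"), "'a"))

# Applying largest to a city-typed domain function binds 'a to city:
print(instantiate(largest, {"'a": "city"}))
# (('city', 't'), (('city', 'i'), 'city')), i.e. (city->t) -> (city->i) -> city
```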
We make the following contributions:
We design a linear-time incremental semantic parsing algorithm (Section 2), which is much more efficient than the majority of existing semantic parsers, which are cubic-time and CKY-based.
We introduce parametric polymorphism into natural language semantics (Section 3), along with proper treatment of subtype polymorphism, and implement Hindley-Milner style type inference (Pierce, 2005, Chap. 10) during parsing (Section 3.2).¹

¹ There are three kinds of polymorphism in programming languages: parametric (e.g., C++ templates), subtyping, and ad-hoc (e.g., operator overloading). See (Pierce, 2002, Chap. 15) for details.
We adapt the latent-variable max-violation perceptron training from machine translation Yu et al. (2013), which is a perfect fit for semantic parsing due to its huge search space (Section 4).

Experiments on the GeoQuery, Jobs, and Atis domains show close to state-of-the-art performance and demonstrate the advantage of a powerful type system.
We start with the simplest meaning representation (MR), untyped lambda calculus, and then introduce typing and the incremental parsing algorithm for it. Later in Section 3, we add subtyping and type polymorphism to enrich the system.
The untyped MR for the running example is:

Q: What is the capital of the largest state by area?
MR: (capital (argmax state size))
Note that the binary function argmax is a higher-order function that takes two other functions as input: the first argument is a "domain" function that defines the set to search over, and the second argument is an "evaluation" function that returns an integer for each element in that domain. In other words, (argmax f g) returns the element x, among those satisfying f(x), that maximizes g(x).
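To make this higher-order behavior concrete, here is a direct Python analogue of argmax (our own toy example; the universe and the area figures are merely illustrative):

```python
# argmax(f, g): among the elements x satisfying the "domain" function f,
# return the one that maximizes the "evaluation" function g.
def argmax_mr(domain, evaluate, universe):
    return max((x for x in universe if domain(x)), key=evaluate)

AREA = {"texas": 268_596, "california": 163_695, "montana": 147_040}

largest_state = argmax_mr(lambda x: x in AREA,   # "domain" function, type e -> t
                          lambda x: AREA[x],     # "evaluation" function, type e -> i
                          universe=AREA)         # iterating a dict yields its keys
print(largest_state)                             # texas
```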
The simply typed lambda calculus Heim and Kratzer (1998); Lu and Ng (2011) augments the system with types, including base types (entities e, truth values t, and numbers i) and function types (e.g., e→t). So the function capital is of type e→e, state is of type e→t, and size is of type e→i. The argmax function is of type (e→t)→(e→i)→e.² The simply typed MR is now written as

(capital (argmax state size)) : e

² Note that the type notation is always curried, i.e., we represent a binary function as a unary function that returns another unary function. Also, the type notation is always right-associative, so (e→t)→((e→i)→e) is also written as (e→t)→(e→i)→e.
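The curried notation is easy to mirror in code. Below is a small sketch (ours, not the paper's implementation) of simple types and the type of a single function application:

```python
E, T, I = "e", "t", "i"                    # base types: entities, booleans, ints

def fn(*ts):
    """fn(a, b, c) builds the right-associative curried type a -> (b -> c)."""
    return ts[0] if len(ts) == 1 else (ts[0], fn(*ts[1:]))

def apply_type(func_type, arg_type):
    """Type of (f x), or None if the application does not type-check."""
    if isinstance(func_type, tuple) and func_type[0] == arg_type:
        return func_type[1]
    return None

capital = fn(E, E)                         # e -> e
state   = fn(E, T)                         # e -> t
size    = fn(E, I)                         # e -> i
argmax  = fn(fn(E, T), fn(E, I), E)        # (e->t) -> (e->i) -> e

print(apply_type(argmax, state))           # (('e','i'), 'e'), i.e. (e->i) -> e
print(apply_type(capital, argmax))         # None: argmax is not of type e
```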
| step | action | stack after action | queue |
|---|---|---|---|
| 0 | - |  | what … |
| 1–3 | skip |  | capital … |
| 4 | shift | capital: e→e | of … |
| 7 | shift | capital: e→e, argmax: (e→t)→(e→i)→e | state … |
| 8 | shift | capital: e→e, argmax: (e→t)→(e→i)→e, state: e→t | by … |
| 9 | reduce | capital: e→e, (argmax state): (e→i)→e | by … |
| 11 | shift | capital: e→e, (argmax state): (e→i)→e, size: e→i | ? |
| 12 | reduce | capital: e→e, (argmax state size): e | ? |
| 13 | reduce | (capital (argmax state size)): e | ? |
(a) type-driven incremental parsing with simple types (entities e, truth values t, and integers i); see Section 2.
| step | action | stack after action | queue | typing |
|---|---|---|---|---|
| 0 | - |  | what … |  |
| 1–3 | skip |  | capital … |  |
| 4 | shift | capital: st→ct | of … |  |
| 7 | shift | capital: st→ct, argmax: ('a→t)→('a→i)→'a | state … |  |
| 8 | shift | capital: st→ct, argmax: ('a→t)→('a→i)→'a, state: st→t | by … |  |
| 9 | reduce | capital: st→ct, (argmax state): (st→i)→st | by … | binding: 'a ↦ st |
| 11 | shift | capital: st→ct, (argmax state): (st→i)→st, size: lo→i | ? |  |
| 12 | reduce | capital: st→ct, (argmax state size): st | ? | lo→i ≤ st→i |
| 13 | reduce | (capital (argmax state size)): ct | ? |  |
(b) type-driven incremental parsing with subtyping (e.g., st ≤ lo) and type polymorphism (e.g., type variable 'a); see Section 3.2.
We use the above running example to explain our type-driven incremental semantic parsing algorithm. Figure 1 (a) illustrates the full derivation.
Similar to a standard shift-reduce parser, we maintain a stack and a queue. The queue contains words to be parsed, while the stack contains subexpressions of the final MR, each of which is a valid typed lambda expression. At each step, the parser chooses to shift or reduce, but unlike a standard shift-reduce parser, there is also a third action, skip, which skips a semantically vacuous word (e.g., "the", "of", "is", etc.). For example, the first three words of the example question, "What is the …", are all skipped (steps 1–3 in Figure 1 (a)).
The parser then shifts the next word, "capital", from the queue to the stack. But unlike incremental syntactic parsing, where the word itself is moved onto the stack, here we need to find a grounded predicate in the GeoQuery domain for the current word. In this example we find the predicate

capital : e→e

and put it on the stack (step 4).
Next, the words "of the" are skipped (steps 5–6). Then for the word "largest", we shift the predicate

argmax : (e→t)→(e→i)→e

onto the stack (step 7), which becomes

capital: e→e, argmax: (e→t)→(e→i)→e
At this step we have two expressions on the stack and we could attempt a reduce. But type checking fails: for a left reduce, argmax expects an argument (its "domain" function) of type e→t, which is different from capital's type e→e, and the same holds for a right reduce.
So we have to shift again. This time, for the word "state", we shift the predicate

state : e→t

onto the stack, which becomes:

capital: e→e, argmax: (e→t)→(e→i)→e, state: e→t
At this step we can finally perform a reduce action, since the top two expressions on the stack pass the type checking for rightward function application (a partial application): argmax expects an e→t argument, which is exactly the type of state. So we conduct a right reduce, applying argmax to state, and the resulting expression is

(argmax state) : (e→i)→e

while the stack becomes (step 9)

capital: e→e, (argmax state): (e→i)→e
Now if we want to continue reducing, neither left nor right reduction type-checks, so we have to shift again.
So we move on to shift the final word "area", with the grounded predicate from the GeoQuery database

size : e→i

and the stack becomes (step 11):

capital: e→e, (argmax state): (e→i)→e, size: e→i
Now we can do a right reduce supported by type checking (step 12), leaving

capital: e→e, (argmax state size): e
followed by another, final, right reduce (step 13):

(capital (argmax state size)) : e
Here we can see the novelty of our shift-reduce parser: its decisions are largely driven by the type system. When we attempt a reduce, at most one of the two reduce actions (left, right) is possible thanks to type checking, and when neither is allowed, we have to shift (or skip). This observation suggests that our incremental parser is more deterministic than syntactic incremental parsers, which at each step face a three-way decision (shift, left-reduce, right-reduce). We also note that this type-checking mechanism, inspired by the classical type-driven theory in linguistics Heim and Kratzer (1998), eliminates the need for an explicit encoding of direction as in CCG, which makes our formalism much simpler than the synchronous syntactic-semantic ones in most other semantic parsing efforts Zettlemoyer and Collins (2005, 2007); Wong and Mooney (2007).
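The whole loop can be summarized in a few lines of Python (a schematic sketch with an eager reduce policy and a toy lexicon, repeating the type helpers from the earlier sketch; the actual parser scores skip/shift/reduce actions with a learned model and beam search):

```python
E, T, I = "e", "t", "i"
def fn(*ts):                               # right-associative curried type
    return ts[0] if len(ts) == 1 else (ts[0], fn(*ts[1:]))
def apply_type(f, a):                      # type of (f a), or None
    return f[1] if isinstance(f, tuple) and f[0] == a else None

# Toy lexicon: vacuous words map to None (skip), others to typed predicates.
LEXICON = {"what": None, "is": None, "the": None, "of": None, "by": None, "?": None,
           "capital": ("capital", fn(E, E)),
           "largest": ("argmax", fn(fn(E, T), fn(E, I), E)),
           "state":   ("state", fn(E, T)),
           "area":    ("size", fn(E, I))}

def parse(words):
    stack = []
    for w in words:
        if LEXICON[w] is None:             # skip semantically vacuous words
            continue
        stack.append(LEXICON[w])           # shift the grounded predicate
        while len(stack) >= 2:             # reduce while type checking allows
            (e1, t1), (e2, t2) = stack[-2], stack[-1]
            if apply_type(t1, t2):         # left expression is the function
                stack[-2:] = [("(%s %s)" % (e1, e2), apply_type(t1, t2))]
            elif apply_type(t2, t1):       # right expression is the function
                stack[-2:] = [("(%s %s)" % (e2, e1), apply_type(t2, t1))]
            else:
                break                      # neither reduce type-checks: shift
    return stack

print(parse("what is the capital of the largest state by area ?".split()))
# [('(capital ((argmax state) size))', 'e')]
```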
As a side note, besides function application, a reduce can also occur when the top two expressions on the stack can be combined to represent a more specific meaning, which we call a union.
For example, when parsing the phrase "major city", the top two expressions on the stack are

major: e→t, city: e→t

We can combine the two expressions using the conjunction predicate and, since their types match, and get

λx. (and (major x) (city x)) : e→t

where and has type t→t→t, taking two booleans and returning one (again, using currying notation).
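A union reduce is equally easy to sketch (a hypothetical helper, continuing the toy representation above):

```python
# Union reduce: two predicates of the same type e -> t combine into a
# conjunction of that type; "and" itself has type t -> t -> t (curried).
def union(top2):
    (e1, t1), (e2, t2) = top2
    assert t1 == t2 == ("e", "t"), "union requires two matching e->t predicates"
    return ("(lambda x (and (%s x) (%s x)))" % (e1, e2), t1)

print(union([("major", ("e", "t")), ("city", ("e", "t"))]))
# ('(lambda x (and (major x) (city x)))', ('e', 't'))
```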
As mentioned in Section 1, the simply typed lambda calculus representation cannot distinguish between Mississippi the river and Mississippi the state, since both have the same type e. Furthermore, the function capital can currently apply to any entity, for example capital(boston), which should have been disallowed by the type checker. So we need a more sophisticated type system that helps ground terms to real-world entities, and this refined type system will in turn help type-driven parsing.
We first augment the meaning representation with a type hierarchy, which is domain specific. For example, Figure 2 shows a (slightly simplified) version of the type hierarchy for the GeoQuery domain. Here the root type ⊤ has a subtype of locations, lo, which consists of two different kinds of locations: administrative units (au), including states (st) and cities (ct), and natural units (nu), including rivers (rv) and lakes (lk). We use ≤ to denote the (transitive, reflexive, and antisymmetric) subtyping relation between types; for example, in GeoQuery we have st ≤ au, au ≤ lo, and τ ≤ ⊤ for any type τ.
In addition, we have an integer type i derived directly from the root type ⊤. The boolean type t does not belong to the type hierarchy, because it does not represent semantics from the task domain.
Each constant in the GeoQuery domain is well typed. For example, there are states (mississippi: st), cities (boston: ct), rivers (mississippi: rv), and lakes (tahoe: lk). Note that a name like mississippi appears twice, for two different entities. The fact that we can distinguish them by type is a crucial advantage of a typed semantic formalism.
Similarly, each predicate is also typed. For example, we can query the length of a river, len: rv→i, or the population of some administrative unit, population: au→i. Notice that population can be applied to both states and cities, since they are subtypes of administrative unit, i.e., st ≤ au and ct ≤ au. This is because, as in Java and C++, a function that expects an argument of type α can always take an argument of another type α′ which is a subtype of α. More formally:
(λx.e : α→β)(a : α′) = e[a/x] : β,  provided α′ ≤ α   (1)
where e[a/x] means substituting all occurrences of variable x in expression e with expression a. For example, we can query whether two locations are adjacent using next_to: lo→lo→t, and by the same rule the next_to function can be applied to two states, or to a river and a city, etc.
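The hierarchy and the subsumption check are straightforward to encode (our own sketch; the type names follow Figure 2):

```python
# GeoQuery hierarchy as a child -> parent map, rooted at the top type.
PARENT = {"lo": "top", "i": "top", "au": "lo", "nu": "lo",
          "st": "au", "ct": "au", "rv": "nu", "lk": "nu"}

def subtype(a, b):
    """Reflexive-transitive subtype test: walk from a toward the root."""
    while a != b and a in PARENT:
        a = PARENT[a]
    return a == b

# population : au -> i accepts both states and cities by rule (1):
assert subtype("st", "au") and subtype("ct", "au")
# next_to : lo -> lo -> t accepts any two locations:
assert subtype("rv", "lo") and subtype("ct", "lo")
assert not subtype("i", "lo")
```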
The above type system works smoothly for first-order functions (i.e., predicates taking atomic-type arguments), but the situation with higher-order functions (i.e., predicates that take functions as input) is more involved. What is the type of argmax? One possibility is to define it to be as general as possible, as in the simply typed version (and many conventional semantic parsers):

argmax : (lo→t)→(lo→i)→lo
But this actually no longer works for our sophisticated type system for the following reason.
Intuitively, remember that capital: st→ct is now a function that takes a state as input, so the return type of argmax must be a state or a subtype of state, rather than lo, which is a supertype of st. But we cannot simply replace lo by st, since argmax can also be applied in other scenarios such as "the largest city" or "the longest river". In other words, argmax is a polymorphic function, and to assign it a correct type we have to introduce type variables (widely used in functional programming languages such as Haskell and ML, and also in C++ templates). We define

argmax : ('a→t)→('a→i)→'a
where the type variable 'a is a placeholder for "any type".
Before we move on, there is an important consequence of polymorphism worth mentioning here. For unary predicates such as city(·) and state(·) that characterize their argument, we define their argument types to be the required type, i.e., city: ct→t and state: st→t. This might look a little odd, since everything in the domain of such a function is always mapped to true; i.e., state(x) is either undefined or true, and never false, for any x. This is different from classical simply typed Montague semantics Heim and Kratzer (1998), which defines such predicates as type e→t so that, e.g., state(boston) returns false. The reason for our design is, again, subtyping and polymorphism: capital takes a state-type input, so argmax must return a state, and therefore its first argument, the state function, must have type st→t so that the matched type variable 'a will be bound to st. This more refined design also helps prune unnecessary argument matchings using type checking.
We modify the previous incremental parsing algorithm with simple types (Section 2) to accommodate subtyping and polymorphic types. Figure 1 (b) shows the derivation of the running example using the new parsing algorithm. Below we focus on the differences brought by the new algorithm.
In step 4, instead of capital: e→e, we shift the predicate

capital : st→ct
and in step 7, we shift the polymorphic expression for "largest":

argmax : ('a→t)→('a→i)→'a
After the shift in step 8, the stack becomes

capital: st→ct, argmax: ('a→t)→('a→i)→'a, state: st→t
At step 9, in order to apply argmax to state, we simply bind the type variable 'a to type st, i.e.,

'a ↦ st

which results in

capital: st→ct, (argmax state): (st→i)→st
After the shift in step 11, the stack becomes:

capital: st→ct, (argmax state): (st→i)→st, size: lo→i
Can we still apply a right reduce here? According to the subtyping rule (Eq. 1), we want

lo→i ≤ st→i

to hold, knowing that st ≤ lo. Luckily, there is a rule about function types in type theory that fits exactly here:
α′ ≤ α and β ≤ β′  ⟹  (α→β) ≤ (α′→β′)   (2)
which states that the input side is reversed (contravariant). This might look counterintuitive at first glance, but the intuition is that it is safe to allow the function size of type lo→i to be used in a context where type st→i is expected: in that context, the argument passed to size will be of state type (st), which is a subtype of the location type (lo) that size expects, and which therefore will not surprise size. See the classical type theory textbook (Pierce, 2002, Chap. 15.2) for details, and Figure 1 (b) for the full derivation.
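Putting Eq. 2 and type variables together gives exactly the check performed at step 12, where size: lo→i is accepted in a position expecting st→i. Below is a sketch (ours, not the Tisp code; it repeats the subtype helper above and treats any string starting with a quote as a type variable):

```python
PARENT = {"lo": "top", "i": "top", "au": "lo", "nu": "lo",
          "st": "au", "ct": "au", "rv": "nu", "lk": "nu"}

def subtype(a, b):                              # reflexive-transitive base test
    while a != b and a in PARENT:
        a = PARENT[a]
    return a == b

def subtype_poly(a, b, binding):
    """Does a <= b hold, binding type variables (e.g. "'a") as needed?"""
    a = binding.get(a, a) if isinstance(a, str) else a
    b = binding.get(b, b) if isinstance(b, str) else b
    for var, other in ((a, b), (b, a)):
        if isinstance(var, str) and var.startswith("'"):
            binding[var] = other                # bind the variable, succeed
            return True
    if isinstance(a, tuple) and isinstance(b, tuple):
        (a_dom, a_rng), (b_dom, b_rng) = a, b
        return (subtype_poly(b_dom, a_dom, binding)       # input: contravariant
                and subtype_poly(a_rng, b_rng, binding))  # output: covariant
    return isinstance(a, str) and isinstance(b, str) and subtype(a, b)

binding = {}
# Step 9: applying argmax to state (st->t <= 'a->t) binds 'a to st:
assert subtype_poly(("st", "t"), ("'a", "t"), binding) and binding["'a"] == "st"
# Step 12: size : lo->i is usable where st->i is expected, since st <= lo:
assert subtype_poly(("lo", "i"), ("st", "i"), binding)
```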
We follow the latent-variable violation-fixing perceptron framework Huang et al. (2012); Yu et al. (2013) for training.
The key challenge in training is that, for each question, there might be many different unknown derivations that lead to its annotated MR, which is known as spurious ambiguity. In our type-driven incremental semantic parsing task, spurious ambiguity arises from how the expression templates are chosen and grounded during shift steps, and from the different reduce orders that lead to the same result. We treat this unknown information as a latent variable.
More formally, we denote by D(x) the set of all partial and full parsing derivations for an input sentence x, and by mr(d) the MR yielded by a full derivation d. Then we define the set of (partial and full) reference derivations as

good(x, y) ≜ { d ∈ D(x) | d is, or is a prefix of, a full derivation d′ with mr(d′) = y }

Those "bad" partial and full derivations that do not lead to the annotated MR are then

bad(x, y) ≜ D(x) \ good(x, y)

At step i, the best reference partial derivation is

d⁺_i(x, y) = argmax_{d ∈ good_i(x, y)} w · Φ(x, d)   (3)

while the Viterbi partial derivation is

d⁻_i(x, y) = argmax_{d ∈ bad_i(x, y)} w · Φ(x, d)   (4)

where good_i and bad_i denote the subsets of i-step derivations, Φ(x, d) is the defined feature set for derivation d, and w is the weight vector.
In practice, computing Eq. 4 exactly is intractable, and we resort to beam search.
We use forced decoding to retrieve the reference derivations for each question/MR pair in Eq. 3.
Unlike syntactic incremental parsing, where forced decoding can be done in polynomial time Goldberg et al. (2014), we do not have an algorithm for efficient forced decoding. We instead apply exponential-time brute-force search to compute the reference derivations, pruning based on predicate application orders.
However, this requires more computation than we can afford. In practice we use multi-pass forced decoding: first we run the brute-force search with a time limit; then we train a perceptron on the successfully decoded reference derivations, and use it to decode the remaining questions with a large beam. Reference derivations newly discovered this way are added to the next round of training.
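For concreteness, here is a schematic of a single max-violation update (our own sketch following Huang et al. (2012); Yu et al. (2013); `phi`, `viterbi`, and `reference` are hypothetical stand-ins for the feature map and the per-step best bad/reference partial derivations found by beam search):

```python
import numpy as np

def max_violation_update(w, x, viterbi, reference, phi, rate=1.0):
    """One perceptron update at the step with the largest violation.

    viterbi[i] / reference[i]: highest-scoring "bad" and reference partial
    derivations of i+1 steps (Eqs. 3-4); phi(x, d) returns a feature vector.
    """
    def violation(i):                  # bad-prefix score minus good-prefix score
        return w.dot(phi(x, viterbi[i])) - w.dot(phi(x, reference[i]))
    i_star = max(range(len(viterbi)), key=violation)
    if violation(i_star) > 0:          # model prefers a bad prefix: fix it
        w = w + rate * (phi(x, reference[i_star]) - phi(x, viterbi[i_star]))
    return w
```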
We implement our type-driven incremental semantic parser (Tisp) in Python, and evaluate both its speed and accuracy on the GeoQuery, Jobs, and Atis datasets.
Our feature design is inspired by the very effective Word-Edge features in syntactic parsing Charniak and Johnson (2005) and MT He et al. (2008). From each parsing state, we collect atomic features including the types and the leftmost and rightmost words of the spans of the top 3 MR expressions on the stack, the top 3 words on the queue, the grounded predicate names, and the ID of the expression template used in the shift action.
To ease overfitting caused by feature sparsity, we assign different budgets to different kinds of features and only generate feature combinations within a budget limit. We get 84 combined feature templates in total.
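Schematically (a hypothetical sketch; the real budgets, costs, and atomic templates differ), the budgeted combination looks like:

```python
from itertools import combinations

# Hypothetical per-template costs; only combinations whose total cost stays
# within the budget become combined feature templates.
ATOMIC_COST = {"stack0.type": 1, "stack0.leftword": 2, "stack0.rightword": 2,
               "queue0.word": 1, "shift.template_id": 1, "predicate.name": 2}

def combined_templates(costs, budget):
    out = []
    for r in (2, 3):                              # pairs and triples of atoms
        for combo in combinations(sorted(costs), r):
            if sum(costs[f] for f in combo) <= budget:
                out.append(combo)
    return out

templates = combined_templates(ATOMIC_COST, budget=4)
print(len(templates))                             # number of combined templates
```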
We first evaluate Tisp on the GeoQuery dataset.
Following the scheme of Zettlemoyer and Collins (2007), we use the first 600 sentences of Geo880 as the training set and the remaining 280 sentences as the test set.
Note that we do not have a separate development set, due to the relatively small size of Geo880. To find the best number of training iterations, we run 10-fold cross-validation over the training set, and choose to train for 20 iterations before evaluating.
We use two-pass forced decoding. In the initial brute-force pass we set the time limit to 1,200 seconds, and find reference derivations for 530 of the 600 training sentences, a coverage of 88.3%. In the second pass we set the beam size to 16,384 and cover 581 sentences (96.8%).
During training and evaluation, we use a very small beam size of 16, which gives us very fast decoding. In serial mode, our parser takes s to decode the 280 sentences (2,147 words) in the test set, which means s per sentence, or s per word.
We compare our accuracy with existing methods in Table 1. Given that all the other methods use CKY-style parsing, our method strikes a good balance between accuracy and speed.
| System | Geo P | Geo R | Geo F1 | Jobs P | Jobs R | Jobs F1 | Atis P | Atis R | Atis F1 |
|---|---|---|---|---|---|---|---|---|---|
| Z&C'05 | 96.3 | 79.3 | 87.0 | 97.3 | 79.3 | 87.4 | - | - | - |
| Z&C'07 | 91.6 | 86.1 | 88.8 | - | - | - | 85.8 | 84.6 | 85.2 |
| UBL | 94.1 | 85.0 | 89.3 | - | - | - | 72.1 | 71.4 | 71.7 |
| FUBL | 88.6 | 88.6 | 88.6 | - | - | - | 82.8 | 82.8 | 82.8 |
| Tisp (simple type) | 89.7 | 86.8 | 88.2 | 76.4 | 76.4 | 76.4 | - | - | - |
| Tisp | 92.9 | 88.9 | 90.9 | 85.0 | 85.0 | 85.0 | 84.7 | 84.2 | 84.4 |
| λ-WASP | 92.0 | 86.6 | 89.2 | - | - | - | - | - | - |
In addition, to assess the contribution of our type system, we train a parser with only simple types (Table 1). In this setting, the predicates have only the primitive types of location lo, integer i, and boolean t, while the constants keep their types. The parser still has a type system, but it is weaker than the polymorphic one. Its accuracy is lower than that of the standard one, mostly because the weaker type system cannot help prune wrong applications such as capital(boston).
The Jobs domain contains descriptions of required and desired qualifications of a job. The qualifications include programming language, years of experience, academic degree, application area, platform, title of the job, etc. We show a simplified version of the type hierarchy for Jobs in Figure 3.
Following the split of Zettlemoyer and Collins (2005), we use 500 sentences as the training set and 140 sentences as the test set.
Table 1 shows that our algorithm achieves significantly higher recall than the existing method of Zettlemoyer and Collins (2005), although our precision is not as high as theirs. This is because our method successfully parses a far larger fraction of the questions in the dataset.
We also evaluate Tisp on the Atis dataset (Table 1). Atis contains more than 5,000 examples and is much larger than GeoQuery and Jobs. Our method achieves comparable performance on this dataset. Due to space constraints, we do not show its type hierarchy here.
Zettlemoyer and Collins (2005) introduce a type hierarchy to semantic parsing and parse with typed lambda calculus combined with CCG. However, simply introducing subtyped predicates without polymorphism causes type-checking failures when handling higher-order functions, as shown in Section 3. Furthermore, our system, being type-driven, relies almost completely on the types of MR expressions to guide parsing (except for some simple POS-tag triggers), while their system is heavily CCG-based and syntax-driven.
Kwiatkowski et al. (2013) use "on-the-fly" matching to fetch the most likely predicate in the dataset for an MR subexpression. Their matching happens at the end of parsing and is constrained by the type of the subexpression. We do matching and parsing jointly; both are constrained by, and affect, the typing. This is closer to how humans do semantic parsing: we parse part of the sentence, bind that part to some specific meaning, and continue parsing using the grounded meaning.
Wong and Mooney (2007) also use type information to reduce unnecessary tree joinings in decoding. However, their types are static, while our type system is stronger in that we can infer types through polymorphism, which gives us better search quality in decoding.
We have presented an incremental semantic parser that is guided by a powerful type system with subtyping and parametric polymorphism. This polymorphism greatly reduces the number of templates and effectively prunes the search space during parsing. Our parser is competitive with state-of-the-art accuracies but, being linear-time, is orders of magnitude faster than CKY-based parsers in theory and in practice.
For future work, we would like to explore weakly supervised learning that learns from question-answer pairs instead of question-MR pairs, where datasets are larger and Tisp should benefit even more.