
A Generate-Validate Approach to Answering Questions about Qualitative Relationships

Qualitative relationships describe how increasing or decreasing one property (e.g. altitude) affects another (e.g. temperature). They are an important aspect of natural language question answering and are crucial for building chatbots or voice agents where one may enquire about qualitative relationships. Recently a dataset about question answering involving qualitative relationships has been proposed, and a few approaches to answer such questions have been explored, at the heart of which lies a semantic parser that converts the natural language input to a suitable logical form. A problem with existing semantic parsers is that they try to directly convert the input sentences to a logical form. Since the output language varies with each application, it forces the semantic parser to learn almost everything from scratch. In this paper, we show that instead of using a semantic parser to produce the logical form, if we apply the generate-validate framework, i.e. generate a natural language description of the logical form and validate whether the natural language description follows from the input text, we get a better scope for transfer learning, and our method outperforms the state-of-the-art by a large margin of 7.93%.



1 Introduction and Motivation

The importance of natural language question answering (NLQA) has grown considerably in recent years. It is not only used in benchmarking various NLP tasks and their combinations, but some NLQA challenges, such as the Winograd Schema Challenge Levesque et al. (2012) and Aristo Clark (2015), have been proposed for benchmarking progress in AI as a whole. In terms of applications, NLQA plays an important role in human-computer interaction via speech and text, and the recent surge in chatbot development, deployment, and usage has further increased its importance.

In various natural language question answering domains, applications, and challenge corpora one often encounters textual content and questions about qualitative relationships. For example, a developer building a chatbot for a company dealing with windows and curtains would need it to be able to answer questions such as: “Will a larger window make the room warmer?”, and “Will a white curtain in the window make the room cooler?”. Similarly, in the Aristo Clark (2015) corpus there are several items that involve qualitative relationships. An example from that corpus is as follows:

In a large forest with many animals, there are only a small number of bears. Which of these most likely limits the population of bears in the forest?
(A) supply of food
(B) type of tree
(C) predation by carnivores
(D) amount of suitable shelter

Considering the importance of being able to answer questions about qualitative relationships in an NLQA setting, recently the QUAREL corpus Tafjord et al. (2018) has been proposed. Table 1 shows some examples from the QUAREL corpus.

I: A boomerang thrown into a windy sky heats up quite a bit, but one thrown into a calm sky stays about the same temperature. Which surface puts the least amount of friction on the boomerang? (A) windy sky (B) calm sky
II: Tank the kitten learned from trial and error that carpet is rougher then skin. When he scratches his claws over carpet it generates ________ then when he scratches his claws over skin (A) more heat (B) less heat
III: The propeller on Kate’s boat moved slower in the ocean compared to the river. This means the propeller heated up less in the (A) ocean (B) river
IV: Juan is injured in a car accident, which necessitates a hospital stay where he is unable to maintain the strength in his arm. Juan notices that his throwing arm feels extremely frail compared to the level of strength it had when he was healthy. If Juan decides to throw a ball with his friend, when will his throw travel less distance? (A) When Juan’s arm is healthy (B) When Juan’s arm is weak after the hospital stay.
Table 1: Example problems from the QUAREL corpus

Our goal in this paper is to develop a method for answering questions about qualitative relationships, especially with respect to the QUAREL dataset. There are several challenges associated with question answering in this domain. First, it requires reasoning with external knowledge about qualitative relations. Although a small knowledge base related to QUAREL has been provided by the QUAREL authors, which we refer to as QRKB (Qualitative Relations Knowledge Base), incorporating that knowledge into the question answering process is a challenge. Second, as pointed out in Tafjord et al. (2018), direct IR based methods and word association based methods do not do well in this domain. That is because neither of them properly captures reasoning with external knowledge. A Knowledge Representation and Reasoning (KR&R) based approach, which can use reasoning modules from the qualitative reasoning literature Bobrow (2012); Weld and De Kleer (2013), can be employed. For example, problem II from Table 1 can be translated to the following tuple: (qrel(friction, higher, carpet), qrel(heat, higher, carpet), qrel(heat, lower, carpet)). (This is for illustration purposes; it is not exactly the same as the logical form that QUASP or QUASP+ translates to.) The first component of the tuple, qrel(friction, higher, carpet), denotes the given fact, i.e. “friction is more on carpet”. The second component denotes the claim corresponding to option A, i.e. “more heat is generated on carpet”, and the third component captures the claim corresponding to option B, which is “less heat is generated on carpet”. The reasoning module, using the qualitative knowledge that more friction results in more heat, can then decide that option A is true. However, such an approach requires accurate semantic parsing of the text and the question, and that is a big challenge.
Nevertheless, the authors of QUAREL provide annotations that can facilitate a limited semantic parsing and use that to develop a type constrained neural semantic parser (QUASP) which together with delexicalization results in their best performing system (QUASP+).

Our approach aims to address the drawbacks of using a traditional semantic parser for obtaining the logical representation. Existing semantic parsers are trained to translate natural language sentences into an application-specific logical representation. Before training, the semantic parsers have some prior knowledge of the input (natural) language, which is normally captured by word vectors, existing knowledge bases such as WordNet or ConceptNet, or parse trees. The target language, however, is a complete unknown. The model must learn the meaning of the symbols in the target language (i.e. the association between the symbols in the target vocabulary and the ones in the input vocabulary) and how to combine these symbols given the input sentence solely from the annotated training data. These expectations naturally increase the demand for more annotated data, and these models often suffer if some of the symbols from the output vocabulary do not appear in the training dataset but appear in the test set.

To address these challenges we apply the generate-validate framework Mitra et al. (2019) which promotes the following idea:

If a reasoning algorithm requires facts to be given in a logical form and the application developer has natural language texts at hand, then instead of employing a semantic parser to convert the text to suitable logical facts, generate a natural language description of the logical fact and validate if the text entails the natural language description.

Thus instead of generating the logical form from the input problem as is done in Tafjord et al. (2018), we ‘roughly iterate’ over the space of possible logical forms, generate a natural language description for each logical form, validate (score) each of those natural language descriptions using multiple “textual entailment” calls and then finally use those scores to detect the correct answer choice. Since the space of possible logical forms can be quite big, instead of performing a brute-force search we perform an efficient search, which we describe later in Section 3.

Our contributions in this paper are as follows: (1) We show how to apply the generate-validate framework to solve the qualitative word problems from QUAREL; (2) We show through experiments that an existing Natural Language Inference dataset, namely SNLI, and pre-trained models like BERT can significantly boost the performance on QUAREL when, instead of directly generating the logical form, semantic parsing is done through generate-validate. Our method obtains an accuracy of 76.63%, which is 7.93% better than the QUASP+ model and 20.53% better than the QUASP model. We believe that this work will motivate fellow researchers to think differently about semantic parsing and will aid in the development of new models that have a generate-validate architecture at their core and are powered by transfer learning.

2 Background

2.1 The QUAREL dataset

The QUAREL dataset Tafjord et al. (2018) has 2771 annotated multiple choice story questions. Table 1 shows some sample questions from the QUAREL dataset. Each question in the QUAREL dataset has annotation in the form of logical forms and world literals which we show here for items I and II of Table 1:

Annotation for Problem I:
Logical Form: (qval(heat, high, world1), qval(heat, low, world2)) → (qrel(friction, lower, world1) ; qrel(friction, lower, world2))
Literals: world1 : “windy sky”, world2 : “calm sky”
Annotation for Problem II:
Logical Form: qrel(smoothness, lower, world1) → (qrel(heat, higher, world1) ; qrel(heat, lower, world1))
Literals: world1 : “carpet”, world2 : “skin”

The two examples show two types of logical forms. Syntactically, the logical forms have two parts: the setup part that describes the set of explicitly given facts and the answer choice part that gives two claims, one for option A (hereafter claimA) and another for option B (hereafter claimB). The setup part and the answer choice part are separated by the ‘→’ symbol, whereas ‘;’ separates the two claims inside the answer choice part.

Both the claims and the given facts are represented by the two predicates, qrel and qval. In the first example the setup part provides two facts: qval(heat, high, world1) and qval(heat, low, world2), which should be read as: heat is high in world1 and heat is low in world2. The claimA is qrel(friction, lower, world1), which should be read as friction is lower in world1 compared to the other world, whereas claimB is qrel(friction, lower, world2), which represents friction is lower in world2 compared to the other world. Here, world1 and world2 are two special symbols which refer to “windy sky” and “calm sky” respectively. This information is given through the world literal annotation. Each logical form in QUAREL has at most two worlds; however, the meaning of the worlds, i.e. world1 and world2, changes with each problem. Both the predicates qrel and qval have three arguments. The first one is a qualitative property, the second one is called direction, which could be either low or high, and the third one is the special variable world, which also takes two values, world1 or world2. In this work, we treat qrel and qval uniformly and the same natural language description is generated for both of them, as there are only two worlds and thus the ‘absolute’ (qval) and the ‘relative’ (qrel) descriptions are equivalent.

The QRKB of QUAREL has the following 19 qualitative properties: friction, speed, distance, smoothness, heat, loudness, brightness, apparentSize, time, weight, strength, mass, flexibility, exerciseIntensity, acceleration, thickness, gravity, breakability, and amountSweat. The QRKB has 25 qualitative relations about pairs of these properties. These relations use the predicates q+ and q-. Some example relations are: q-(friction, speed), and q+(friction, heat). Intuitively, q-(X, Y) means that the amount of X is inversely proportional to the amount of Y and q+(X, Y) means that the amount of X is proportional to the amount of Y. All possible relation pairs are precomputed and stored in the QRKB.
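The shape of the QRKB and the precomputation of relation pairs can be sketched as follows. This is an illustration only: q-(friction, speed) and q+(friction, heat) are relations stated above, while q+(speed, distance) and the sign-composition closure are our assumptions about how the pairs might be precomputed.

```python
# A minimal sketch of a QRKB-style knowledge base. The predicates q+ and q-
# are encoded as signs: +1 for q+(X, Y) and -1 for q-(X, Y).
# Only a few of the 25 QUAREL relations are shown; q+(speed, distance) is an
# assumed example, not quoted from the paper.
BASE_RELATIONS = {
    ("friction", "speed"): -1,   # q-(friction, speed)
    ("friction", "heat"): +1,    # q+(friction, heat)
    ("speed", "distance"): +1,   # assumed: q+(speed, distance)
}

def close_relations(base):
    """Precompute all derivable property pairs.

    Relations are treated as symmetric (q+(X, Y) implies q+(Y, X)) and
    compose by sign multiplication: q-(friction, speed) together with
    q+(speed, distance) yields q-(friction, distance).
    """
    rels = dict(base)
    rels.update({(y, x): s for (x, y), s in base.items()})
    changed = True
    while changed:
        changed = False
        for (x, y), s1 in list(rels.items()):
            for (y2, z), s2 in list(rels.items()):
                if y == y2 and x != z and (x, z) not in rels:
                    rels[(x, z)] = s1 * s2
                    changed = True
    return rels

QRKB = close_relations(BASE_RELATIONS)
```

With this closure, `QRKB[("friction", "distance")]` evaluates to -1, i.e. more friction implies less distance.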

2.2 Textual Entailment and NLI

As briefly mentioned in Section 1, our approach uses Textual Entailment Dagan et al. (2013) and Natural Language Inference Bowman et al. (2015) models. Natural language inference (NLI) is the task of determining the truth value of a natural language text, called the hypothesis, given another piece of text called the premise. The list of possible truth values includes entailment, contradiction and neutral. Entailment means the hypothesis must be true if the premise is true. Contradiction indicates that the hypothesis can never be true if the premise is true. Neutral pertains to the scenario where the hypothesis can be both true and false, as the premise does not provide enough information. Textual Entailment is a binary version of the NLI task, where one has to decide if the truth value is entailment or not. Table 2 shows some examples.

Recently, several large scale NLI datasets have been developed. One of them is SNLI Bowman et al. (2015), which we use in this work. Any NLI dataset can be converted to a textual entailment dataset by replacing the contradiction and neutral labels with the not-entailment label. Among the recent NLI models, the two most popular are BERT Devlin et al. (2018) and ESIM Chen et al. (2016), which we use in our implementation.

premise: Tank the kitten learned from trial and error that carpet is rougher then skin.
hypothesis: Carpet is less smooth.
label: entailment.
premise: Tank the kitten learned from trial and error that carpet is rougher then skin.
hypothesis: skin is less smooth.
label: not-entailment.
Table 2: Example premise-hypothesis pairs with annotated labels.
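Converting an NLI dataset into the binary textual entailment form used here is mechanical; a minimal sketch (label strings follow the SNLI convention):

```python
def to_binary(nli_label: str) -> str:
    """Collapse the 3-way NLI labels into the 2-way textual entailment
    labels: contradiction and neutral both become not-entailment."""
    return "entailment" if nli_label == "entailment" else "not-entailment"

def convert_dataset(examples):
    """Apply the label collapse to (premise, hypothesis, label) triples."""
    return [(p, h, to_binary(y)) for (p, h, y) in examples]
```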

3 Proposed approach

A qualitative problem in QUAREL is a sequence of sentences followed by two option choices. Let T denote the sequence of sentences and O_A and O_B be the two answer choices. The last sentence in T is a question and is denoted by Q. For example, for problem I in Table 1, T = “A boomerang thrown into a windy sky heats up quite a bit, but one thrown into a calm sky stays about the same temperature. Which surface puts the least amount of friction on the boomerang?”, O_A = “windy sky”, O_B = “calm sky” and Q = “Which surface puts the least amount of friction on the boomerang?”. Given such a problem P, the task is to decide whether O_A or O_B is the better answer choice. Our algorithm, namely the generate-validate qualitative problem solver (gvQPS), has three key steps, namely generate, validate and inference, which are discussed in this section.

Step 1: Generate

Given P, a set H of hypotheses such as “windy sky has more friction” is created using templates such as “X has more friction”. Our algorithm uses a total of 46 manually authored templates. Each template has only one variable, which is substituted by the noun phrases in the T, O_A, and O_B parts to create the set H.

Table 4 shows the templates. Each template pertains to a predicate qval(property, direction, world), where property is a qualitative property from QUAREL, direction is either high or low, and world is a variable representing the textual description of the world. All the properties except speed and distance have two templates, one for direction = high and another for direction = low. The two properties speed and distance, however, have more than two templates, to capture different senses.

For the example II from Table 1, there are a total of 10 noun phrases (according to the Spacy constituency parser), namely “heat”, “trial and error”, “claws”, “kitten”, “carpet”, “skin”, “tank kitten”, “error”, “tank”, “trial”. Thus the set H contains a total of 460 (= 46 × 10) hypotheses. Among these, the ones related to friction and high are as follows: heat has more friction, trial and error has more friction, kitten has more friction, claws has more friction, carpet has more friction, skin has more friction, tank kitten has more friction, error has more friction, tank has more friction, trial has more friction.
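The generate step can be sketched as a cross product of templates and noun phrases. This is a minimal illustration: the dictionary below contains only a few entries of Table 4, and the noun phrases are passed in directly rather than extracted with Spacy.

```python
# A few (property, direction) -> template entries from Table 4; "X" is the
# single variable substituted by a noun phrase.
TEMPLATES = {
    ("friction", "high"): ["X has more friction"],
    ("friction", "low"): ["X has less friction"],
    ("heat", "high"): ["more heat is generated on X"],
    ("heat", "low"): ["small amount of heat is generated on X"],
    ("speed", "high"): ["X is fast", "moves fast through X"],
}

def generate_hypotheses(noun_phrases):
    """Instantiate every template with every noun phrase."""
    hypotheses = []
    for templates in TEMPLATES.values():
        for template in templates:
            for np in noun_phrases:
                hypotheses.append(template.replace("X", np))
    return hypotheses

nps = ["carpet", "skin"]
H = generate_hypotheses(nps)
# 6 templates x 2 noun phrases -> 12 hypotheses
```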

Step 2: Validate

Recall that the logical form has three parts: the given facts, the claimA and the claimB, all of which are represented by the qrel or qval predicate. In step 1 the system generated the set H of natural language descriptions of all possible grounded predicates, some of which are the given facts, the claimA or the claimB. The goal of step 2 is to precisely identify which statement from H is claimA, which statement pertains to claimB and which statements represent the given facts. To do this, the system scores the statements in H using two different Textual Entailment functions. Let s_f(h), s_A(h) and s_B(h) respectively denote the score for a hypothesis h to be a given fact, the claimA and the claimB. These scores are then computed as follows:

s_f(h) = TE_f(T, h)
s_A(h) = TE_o(Q_A, h)
s_B(h) = TE_o(Q_B, h)

Here, Q_A and Q_B respectively denote the concatenation of Q, “(option)”, O_A and of Q, “(option)”, O_B, and TE_f and TE_o are the two different Textual Entailment functions. TE_f and TE_o might have the same architecture but they are trained on different datasets and take different inputs. For the example II from Table 1, which has the logical representation qrel(smoothness, lower, world1) → (qrel(heat, higher, world1) ; qrel(heat, lower, world1)), we expect the textual entailment functions to produce the following scores for the sample inputs of Table 3.

Table 3: Example of expected scores and sample inputs. To save space we do not show the arguments T and Q, which take the following values: T = “Tank the kitten learned from trial and error that carpet is rougher then skin. When he scratches his claws over carpet it generates ________ then when he scratches his claws over skin”, Q = “When he scratches his claws over carpet it generates ________ then when he scratches his claws over skin”, O_A = “more heat”, O_B = “less heat”.
(Property, Direction) Template(s)
(Friction, high) X has more friction
(Friction, low) X has less friction
(Smoothness, high) X is more smooth
(Smoothness, low) X is less smooth
(Heat, high) more heat is generated on X
(Heat, low) small amount of heat is generated on X
(Loudness, high) X sounds louder
(Loudness, low) X sounds softer
(Brightness, high) X shines more
(Brightness, low) X looks dim
(apparentSize, high) X appears big
(apparentSize, low) X appears small
(Speed, high) X is fast
moves fast through X
(Speed, low) X is slow
moves slowly through X
(time, high) X takes more time
(time, low) X takes less time
(weight, high) X has more weight
(weight, low) X has less weight
(acceleration, high) acceleration is more for X
(acceleration, low) acceleration is less for X
(strength, high) X has more strength
(strength, low) X has little strength
(distance, high) travelled more on X
X is far
X travelled more
X threw the object far
(distance, low) travelled less on X
X is near
X travelled less
X could not throw the object far
(thickness, high) X is thicker
(thickness, low) X is thin
(mass, high) X has more mass
(mass, low) X has less mass
(gravity, high) X has stronger gravity
(gravity, low) X has weaker gravity
(flexibility, high) X is more flexible
(flexibility, low) X is less flexible
(breakability, high) X is more likely to break
(breakability, low) X is less likely to break
(amountSweat, high) X is sweating more
(amountSweat, low) X is sweating less
(exerciseIntensity, high) X is exercising more
(exerciseIntensity, low) X is almost idle
Table 4: Associated templates for each qualitative property.

Step 3: Answer Generation

In this step, the system computes the final answer using the scores that were computed in step 2. Let h_A and h_B be the hypotheses in H which have respectively the highest s_A and the highest s_B score. The answer is option A if s_f(h_A) is more than s_f(h_B); otherwise the answer is option B. Here, we assume that TE_f will learn to capture the qualitative relationships. For example, if it assigns a high score to the hypothesis “skin has less friction”, it will also assign a high score to the hypothesis “less heat is generated on skin”.

4 Textual Entailment Dataset Generation

Our algorithm uses two textual entailment functions, namely TE_f and TE_o, both of which need to be trained. In this section we describe the process that generates labeled premise-hypothesis pairs from the QUAREL annotations.

4.1 Dataset for TE_o

Let qrel(p, d_A, w_A) (or qval(p, d_A, w_A)) be the claimA and qrel(p, d_B, w_B) (or qval(p, d_B, w_B)) be the claimB as per the associated logical form, where p is a qualitative property, d_A and d_B are directions and w_A and w_B are world literals. We use this information to create the following annotated premise-hypothesis pairs (we use E to denote entailment and NE to denote not-entailment):

  1. premise = Q_A, hypothesis = generate(p, d_A, w_A) and label = E

  2. premise = Q_B, hypothesis = generate(p, d_B, w_B) and label = E

  3. premise = Q_A, hypothesis = generate(p, d_B, w_B) and label = NE

  4. premise = Q_B, hypothesis = generate(p, d_A, w_A) and label = NE

  5. If w_A = w_B, premise = Q_A, hypothesis = generate(p, d_A, other(w_A)) and label = NE

  6. If w_A = w_B, premise = Q_B, hypothesis = generate(p, d_B, other(w_B)) and label = NE

  7. premise = Q_A, hypothesis = generate(p, d, n) and label = NE, where n ∈ N and d ∈ {high, low}

  8. premise = Q_B, hypothesis = generate(p, d, n) and label = NE, where n ∈ N and d ∈ {high, low}

  9. premise = Q_A, hypothesis = generate(p′, d_A, w_A) and label = NE, where p′ ≠ p

  10. premise = Q_B, hypothesis = generate(p′, d_B, w_B) and label = NE, where p′ ≠ p

Here, generate(p, d, w) denotes the string that is created for the given input of the type (qualitative property, direction, world_literal) using the templates in Table 4; other(w) returns the only member of the set {world1_literal, world2_literal} \ {w}, and N is the set of noun phrases from the problem P which do not have any word overlap with either world1_literal or world2_literal. For the problem II in Table 1, world1_literal = “carpet”, world2_literal = “skin” and the noun phrases are “heat”, “trial and error”, “claws”, “kitten”, “carpet”, “skin”, “tank kitten”, “error”, “tank”, “trial”. Thus the set N contains the following elements: “heat”, “trial and error”, “claws”, “kitten”, “tank kitten”, “error”, “tank”, “trial”.
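Part of this pair generation can be sketched as follows. This is a partial illustration covering only the positive pairs, the swapped-claim negatives and the noun-phrase negatives; the function name make_option_pairs and the generate signature are our notation, not the paper's.

```python
def make_option_pairs(q_a, q_b, prop, claim_a, claim_b, distractor_nps, generate):
    """Build a subset of the labeled premise-hypothesis pairs for the
    option entailment function: each option's premise entails its own
    claim, does not entail the other claim, and does not entail claims
    about unrelated noun phrases. claim_a and claim_b are
    (direction, world_literal) tuples."""
    pairs = [
        (q_a, generate(prop, *claim_a), "entailment"),
        (q_b, generate(prop, *claim_b), "entailment"),
        (q_a, generate(prop, *claim_b), "not-entailment"),
        (q_b, generate(prop, *claim_a), "not-entailment"),
    ]
    # Claims about other noun phrases are never entailed by either option.
    for np_ in distractor_nps:
        for direction in ("high", "low"):
            hyp = generate(prop, direction, np_)
            pairs.append((q_a, hyp, "not-entailment"))
            pairs.append((q_b, hyp, "not-entailment"))
    return pairs
```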

4.2 Dataset for TE_f

Similar to the dataset for TE_o, we create the following annotated premise-hypothesis pairs for each given fact qval(p, d, w) (or qrel(p, d, w)), again writing E for entailment and NE for not-entailment:

  1. premise = T, hypothesis = generate(p, d, w) and label = E

  2. premise = T, hypothesis = generate(p, d̄, w) and label = NE, where d̄ is the direction opposite to d

  3. premise = T, hypothesis = generate(p, d, other(w)) and label = NE

  4. premise = T, hypothesis = generate(p, d′, n) and label = NE, for all n ∈ N and d′ ∈ {high, low}

  5. premise = T, hypothesis = generate(p′, d′, w) and label = NE, for all properties p′ where none of q+(p, p′), q-(p, p′), q+(p′, p), q-(p′, p) is in QRKB, d′ is either high or low, and w ∈ {world1_literal, world2_literal}.

However, unlike the dataset for TE_o, we also create the following annotated premise-hypothesis pairs for each given fact using QRKB:

  1. premise = T, hypothesis = generate(p′, d, w) and label = E, for all properties p′ such that q+(p, p′) ∈ QRKB.

  2. premise = T, hypothesis = generate(p′, d̄, w) and label = E, for all properties p′ such that q-(p, p′) ∈ QRKB.

  3. premise = T, hypothesis = generate(p′, d, w) and label = NE, for all properties p′ such that q-(p, p′) ∈ QRKB.

  4. premise = T, hypothesis = generate(p′, d̄, w) and label = NE, for all properties p′ such that q+(p, p′) ∈ QRKB.

Let D_o^train, D_o^dev and D_o^test respectively denote the datasets that are created for TE_o from the train, dev and test splits of the QUAREL dataset. Similarly, let D_f^train, D_f^dev and D_f^test denote the datasets that are created for TE_f from the train, dev and test splits of the QUAREL dataset. Note that, to make the datasets balanced, the pairs with the under-represented label are oversampled. We also use the two-class version of the SNLI dataset to further increase the dataset size.

5 Related Work

Our work is related to both the works in semantic parsing Zelle and Mooney (1996); Kwiatkowski et al. (2011); Berant et al. (2013); Krishnamurthy et al. (2017); Reddy et al. (2014) and question answering using semantic parsing Lev et al. (2004); Berant et al. (2014); Mitra et al. (2019).

The problem of QUAREL is quite similar to the word math problems Hosseini et al. (2014); Kushman et al. (2014) in the sense that both are story problems and use semantic parsing to translate the input problem to a suitable representation.

Our work is also related to the work in Mitra et al. (2019), which uses the generate-validate framework to answer questions with respect to life cycle text. Mitra et al. (2019) uses the generate-validate framework to verify “given facts”. In particular, it shows how rules can be used to infer new information over raw text without using a semantic parser to create a structured knowledge base. However, the work in Mitra et al. (2019) uses a semantic parser to translate the question into one of the predefined forms. In our work, we use generate-validate for both question understanding and “given fact” understanding.

The work of Tafjord et al. (2018) is the most related to ours. Tafjord et al. (2018) proposes two models for QUAREL. One uses a state-of-the-art semantic parser Krishnamurthy et al. (2017) to convert the input problem to the desired logical representation. They call this model QUASP, which obtains an accuracy of 56.1%. The other model, called QUASP+, uses a delexicalization step before giving the input to the semantic parser. The delexicalization step identifies the value(s) of world1_literal and world2_literal and then replaces all the occurrences of those strings in the text by the symbols “world1” and “world2”. The modified input is then passed to the semantic parser. The delexicalization helps the semantic parser by giving explicit pointers to world1 and world2, which results in an accuracy of 68.7%. Our model does not use such preprocessing and still performs significantly better than the QUASP+ model.

6 Experimental Evaluation

We use the notation M_D to denote that the textual entailment model in use is M, which can be either ESIM or BERT, and that the model M is trained on the dataset D, which can be any of the following: D_o, D_o ∪ SNLI, D_f, D_f ∪ SNLI. Correspondingly, there are a total of 4 possible values for TE_o, namely ESIM_{D_o}, ESIM_{D_o ∪ SNLI}, BERT_{D_o} and BERT_{D_o ∪ SNLI}. Similarly, there are a total of 4 possible values for TE_f, namely ESIM_{D_f}, ESIM_{D_f ∪ SNLI}, BERT_{D_f} and BERT_{D_f ∪ SNLI}. Table 5 shows the results of our algorithm for all these 4 × 4 = 16 combinations.
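The 16 evaluated combinations can be enumerated as a product over model-dataset pairs (a sketch; the strings below are illustrative labels, not model objects):

```python
from itertools import product

MODELS = ["ESIM", "BERT"]
OPTION_DATASETS = ["D_o", "D_o+SNLI"]   # training data choices for TE_o
FACT_DATASETS = ["D_f", "D_f+SNLI"]     # training data choices for TE_f

def evaluation_grid():
    """Enumerate the 16 (TE_o, TE_f) combinations reported in Table 5."""
    te_o_choices = [f"{m}({d})" for m, d in product(MODELS, OPTION_DATASETS)]
    te_f_choices = [f"{m}({d})" for m, d in product(MODELS, FACT_DATASETS)]
    return [(o, f) for o, f in product(te_o_choices, te_f_choices)]
```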

Dev(%) Test(%)
67.27 71.2
62.23 69.12
66.54 69.57
59.71 67.39
67.99 71.56
67.62 69.38
62.95 69.2
68.35 67.93
68.34 67.21
59.35 66.49
66.55 66.3
58.63 64.3
73.38 76.63
72.66 75.36
70.50 73.55
73.02 70.29
Table 5: Accuracy on the dev and test sets of QUAREL for the various choices of TE_o and TE_f.
  • The best performance is achieved when BERT trained on D_o is used as TE_o and ESIM trained on D_f ∪ SNLI is used as TE_f. We refer to this combination as gvQPSB+E. Its performance is substantially higher than that of the combination of ESIM trained on D_o and ESIM trained on D_f, which shows the boost offered by BERT and SNLI.

  • The accuracy normally drops when the SNLI dataset is used in the training of the TE_o function, irrespective of the model, on both the dev and test sets. We speculate that this happens because the premises in SNLI are proper sentences, whereas the premises in D_o are options appended to questions and thus have a different distribution.

  • ESIM models perform consistently better than BERT models as TE_f, irrespective of the training dataset, on both the dev and test sets.

Table 6 compares our best performing method with other approaches. As shown in Table 6, our model provides an improvement of 7.93% over the previous state-of-the-art QUASP+.

Model Accuracy(%)
IR 48.6
PMI 50.5
QUASP 56.1
QUASP+ 68.7
gvQPSB+E 76.63
Table 6: Comparing our best performing model with existing solvers of QUAREL.

Error Analysis

Our best model, gvQPSB+E, still fails on some problems. The majority of the errors occur due to errors in TE_f. The following examples show two such errors, with the identified h_A and h_B and the relevant hypotheses scored by TE_f.

Error Example I: “Nell has very thick hair; Lynn’s hair is much thinner. Whose hair is stronger? (A) Nell (B) Lynn”
h_A: (strength, high, ‘Nell’); h_B: (strength, high, ‘Lynn’s hair’)
Sample TE_f scores: “lynn ’s hair has more strength” is scored higher than “nell has more strength”.
Error Example II: “David noticed that it was harder to push his snow blower on snowy pavement than on dry pavement. This is because the dry pavement has (A) more friction or (B) less friction”
h_A: (friction, high, ‘dry pavement’); h_B: (friction, low, ‘dry pavement’)
Sample TE_f scores: “dry pavement has more friction” is scored higher than “dry pavement has less friction”.

As seen in the examples above, h_A and h_B have been identified correctly in both cases; however, TE_f predicts wrongly, which results in an error.

7 Conclusion

Semantic parsing has been quite useful in solving problems that require sophisticated reasoning, such as math word problems, logic puzzles, qualitative word problems, question answering over databases and query understanding, and has been extensively used in many applications. However, traditional semantic parsers have certain drawbacks which can potentially be addressed with the generate-validate framework. In this work, we have shown how to successfully apply the generate-validate framework to solve qualitative word problems and have shown the opportunities for transfer learning that are available in this framework. Our future work is to apply the generate-validate framework to other applications which use semantic parsing. Our work also connects the popular task of Natural Language Inference to the applications of semantic parsing, and any improvement in Natural Language Inference models will naturally improve the performance of our models.


  • Berant et al. (2013) Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544.
  • Berant et al. (2014) Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D Manning. 2014. Modeling biological processes for reading comprehension. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1499–1510.
  • Bobrow (2012) Daniel G Bobrow. 2012. Qualitative reasoning about physical systems, volume 24. Elsevier.
  • Bowman et al. (2015) Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.
  • Chen et al. (2016) Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2016. Enhanced lstm for natural language inference. arXiv preprint arXiv:1609.06038.
  • Clark (2015) Peter Clark. 2015. Elementary school science and math tests as a driver for ai: take the aristo challenge! In Twenty-Seventh IAAI Conference.
  • Dagan et al. (2013) Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing textual entailment: Models and applications. Synthesis Lectures on Human Language Technologies, 6(4):1–220.
  • Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • Hosseini et al. (2014) Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 523–533.
  • Krishnamurthy et al. (2017) Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526.
  • Kushman et al. (2014) Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 271–281.
  • Kwiatkowski et al. (2011) Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in ccg grammar induction for semantic parsing. In Proceedings of the conference on empirical methods in natural language processing, pages 1512–1523. Association for Computational Linguistics.
  • Lev et al. (2004) Iddo Lev, Bill MacCartney, Christopher Manning, and Roger Levy. 2004. Solving logic puzzles: From robust processing to precise semantics. In Proceedings of the 2nd Workshop on Text Meaning and Interpretation.
  • Levesque et al. (2012) Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning.
  • Mitra et al. (2019) Arindam Mitra, Peter Clark, Oyvind Tafjord, and Chitta Baral. 2019. Declarative question answering over knowledge bases containing natural language text with answer set programming. In AAAI 2019.
  • Reddy et al. (2014) Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics, 2:377–392.
  • Tafjord et al. (2018) Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal. 2018. Quarel: A dataset and models for answering questions about qualitative relationships. arXiv preprint arXiv:1811.08048.
  • Weld and De Kleer (2013) Daniel S Weld and Johan De Kleer. 2013. Readings in qualitative reasoning about physical systems. Morgan Kaufmann.
  • Zelle and Mooney (1996) John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the national conference on artificial intelligence, pages 1050–1055.