Exploring Neural Models for Parsing Natural Language into First-Order Logic

02/16/2020 ∙ by Hrituraj Singh, et al.

Semantic parsing is the task of obtaining machine-interpretable representations from natural language text. We consider one such formal representation, First-Order Logic (FOL), and explore the capability of neural models in parsing English sentences to FOL. We model FOL parsing as a sequence-to-sequence mapping task: a given natural language sentence is encoded into an intermediate representation by an LSTM, and a decoder then sequentially generates the predicates of the corresponding FOL formula. We improve the standard encoder-decoder model by introducing a variable-alignment mechanism that enables it to align variables across predicates in the predicted FOL. We further show the effectiveness of predicting the category of FOL entity (Unary, Binary, Variables and Scoped Entities) at each decoder step, as an auxiliary task, in improving the consistency of the generated FOL. We perform rigorous evaluations and extensive ablations. We also aim to release our code as well as a large-scale FOL dataset along with our models to aid further research in logic-based parsing and inference in NLP.




1 Introduction

Semantic parsing aims at mapping natural language to structured meaning representations. This enables a machine to understand unstructured text better, which is central to many tasks requiring natural language understanding such as question answering Berant et al. (2013); Pasupat and Liang (2015), robot navigation MacMahon et al. (2006); Artzi and Zettlemoyer (2013), database querying Zelle and Mooney (1996), etc. For question answering, a natural language question is converted to formal semantics, which facilitates interaction with a knowledge base (such as Freebase Bollacker et al. (2008)) for retrieving concise answers Furbach et al. (2010). Such representations can be used to specify instructions to robots Artzi and Zettlemoyer (2013) or conversational agents Artzi and Zettlemoyer (2011) for executing desired action(s) in an environment. Similarly, natural language queries are transformed into executable database programming language instructions (such as SQL) to retrieve or generate correct results in a database Sun et al. (2018); Zhong et al. (2017).

A variety of logical forms and meaning representations have been proposed for text. These include graph-based formalisms Banarescu et al. (2013); Abend and Rappoport (2013); Oepen et al. (2014); Kollar et al. (2018), where text is represented as a typed graph: entities and action events are represented as nodes, with labeled edges depicting the relations between them. A semantic dependency tree Oepen et al. (2014) is a directed graph depicting the syntactic structure of a sentence in the form of modifier relations between its words. AMR (Abstract Meaning Representation) graphs Banarescu et al. (2013) use variables to annotate nodes following the neo-Davidsonian style Davidson (1969). Lambda Dependency-based Compositional Semantics (λ-DCS) Liang (2013) was proposed as a formal language adapting Dependency-Based Compositional Semantics Liang et al. (2013), borrowing the expressiveness of lambda calculus Barendregt et al. (1984) while aiming to remove the explicit use of variables.

In this work, we focus on first-order logic (FOL) Smullyan (2012) as the language formalism for text. FOL represents entities and actions in natural language through quantified variables and consists of functions (called predicates) which take variables as arguments. The predicates attach semantics to variables and express relations between objects Blackburn (2005). For instance, a simple sentence "a man is eating" can be represented through FOL as

∃A ∃B (man(A) ∧ eat(B) ∧ agent(B, A))

Advanced natural language concepts, as in the sentence "the man and woman are seated facing each other", can be expressed as

∃C ∃D (man(C) ∧ woman(C) ∧ seat(D) ∧ theme(D, C) ∧ ¬∃E (thing(E) ∧ ¬face(C, E)))

where "man" and "woman" are represented together through the shared variable C, and "facing each other" is represented by negating the existence of a thing E for which "C is not facing E" holds true.

The success of learning-based neural approaches in NLP tasks like machine translation Cho et al. (2014); Sutskever et al. (2014); Vaswani et al. (2017), paraphrase generation Prakash et al. (2016); Gupta et al. (2018), dialog modeling Vinyals and Le (2015); Kottur et al. (2017), machine comprehension Wang et al. (2017) and logical inference Kim et al. (2019) has motivated their use for semantic parsing as well Kočiskỳ et al. (2016); Buys and Blunsom (2017); Cheng et al. (2017); Liu et al. (2018); Li et al. (2018). Many such works use the encoder-decoder framework to model parsing as a sequence transduction task. Since they were designed for solving specific tasks like question answering, such methods Jia and Liang (2016); Dong and Lapata (2016) have mainly focused on confined logical formalisms for specific domains such as flight reservation and restaurant booking Wang et al. (2015), capturing limited vocabulary and semantic concepts.

In this paper, we aim at developing a general-purpose open-domain neural first-order logic parser for natural language sentences to examine the capabilities of such models. We train our model on a large corpus of text-FOL pairs obtained for sentences in the SNLI dataset Bowman et al. (2015) through the C&C parser Clark and Curran (2007) and Boxer Bos (2008) (https://github.com/valeriobasile/candcapi), discussed later in detail. Apart from meaning depiction, parsing sentences to FOL would enable neural models to capture complex relationships between entities, resulting in richer embeddings which might be useful in several other NLP tasks. Such an examination would help understand the challenges in generating FOL through neural approaches owing to the complexities of its representation. Since this is one of the first such explorations for FOL, we treat the popular sequence-to-sequence model coupled with an attention mechanism Bahdanau et al. (2014) as our baseline. We propose to disentangle the prediction of different types of FOL syntactic entities (unary and binary predicates, variables, etc.) while parsing sentences, and show improvements from performing category-type prediction as an auxiliary task. We further show major improvements by explicitly constraining the decoder to align variables across unary and binary predicates. This forces the model to maintain consistency while expressing standalone entity attributes and the relations between them.

Our contributions can be enumerated as: 1) We explore and develop an open-domain neural semantic parser to parse natural language sentences to FOL using the Seq2Seq framework; 2) We propose disentangled FOL entity-type prediction along with FOL parsing under multi-task learning, and FOL variable alignment through a decoder alignment mechanism, and we perform extensive ablation studies to establish the improvements registered; 3) We also aim to release our code, models and the large-scale dataset of sentence-FOL mappings to aid further research in FOL-based NLP.

2 Background

Text to FOL Conversion : In this section, we give a brief overview of the syntactic-semantic analysis pipeline used for obtaining the mappings data through Boxer Bos (2008), based on Combinatory Categorial Grammar (CCG) Steedman and Baldridge (2011) and Discourse Representation Theory (DRT) Kamp et al. (2011). CCG is a phrase-level grammar which defines rules for generating constituency-based structures. CCG comprises syntactically typed lexical items such that each item is a lambda expression, and uses combinatorial logic (lambda calculus) to combine them through the application of combinators. The CCG derivation guides semantic composition to obtain Discourse Representation Structures (DRS) from CCG parses. A DRS comprises discourse referents and conditions defined on them, which can be recursive. DRS is capable of representing varied linguistic phenomena such as anaphora, presupposition, tense and aspect. These DRSs are compatible with and can be converted to FOL through a set of syntactic transformations Bunt et al. (2001). Formally, predicates in FOL are atomic formulas that are combined through logical connectives, namely logical and (∧) and logical or (∨), and quantifiers. In general, a predicate is an n-ary function of variables. There are two types of quantifiers: universal (∀), which specifies that the sub-formula within its scope is true for all instances of the variable, and existential (∃), which asserts the existence of at least one instance, represented by a variable, under which the sub-formula holds true. For example, "All humans eat" can be represented as

∀A (human(A) → eat(A))
Following the generalized De Morgan's law Johnstone (1979), universal quantifiers can be represented through existential quantification and negation (¬), preserving the semantics:

∀A P(A) ≡ ¬∃A ¬P(A)

so that "All humans eat" becomes ¬∃A (human(A) ∧ ¬eat(A)).
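The identity above is easy to sanity-check over any finite domain. The following self-contained sketch (illustrative, not from the paper) verifies that universal quantification agrees with the negated existential quantification of the negated predicate:

```python
# Check the generalized De Morgan identity over a finite domain:
#   forall x: P(x)   <=>   not exists x: not P(x)

def forall(domain, pred):
    return all(pred(x) for x in domain)

def not_exists_not(domain, pred):
    return not any(not pred(x) for x in domain)

domain = range(10)
predicates = [lambda x: x < 10, lambda x: x % 2 == 0, lambda x: x > 3]
for pred in predicates:
    assert forall(domain, pred) == not_exists_not(domain, pred)
print("equivalence holds on all tested predicates")
```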

Output and Mapping Format : Given a text sentence, we obtain the following FOL output.

Sentence : “three women are traveling by foot”

Output FOL : fol(1,some(A,some(B,some(C,and(r1by(B,A),and(n1foot(A),and(r1agent(B,C),and(v1travel(B),and(n1woman(C),some(D,and(card(C,D),and(c3number(D),n1numeral(D)))))))))))))

Here, the predicates are prefixed with POS tags Wilks and Stevenson (1998) and relation types. Since the output FOL comprises existential quantifiers and conjunctions of atomic formulas only, we convert it into an equivalent mapping as a sequence of predicates, argument variables and scoping symbols (such as "fol(", ")", "not(") and train our models to predict this sequence. We arrange scope symbols in accordance with their nesting level (the topmost appearing first in the sequence), with the further ordering that entities that are part of the same scope are arranged as a sequence of unary predicates, followed by binary predicates, followed by other nested scoped entities.

Equivalent Mapping : fol( n1foot A v1travel B n1woman C c3number D n1numeral D r1by B A r1agent B C card C D )
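As an illustration (a hypothetical preprocessing sketch, not the paper's released code), the conversion from Boxer's nested FOL term to the flat mapping can be written as follows: existential quantifiers ("some") and conjunctions ("and") are dropped, and within each scope unary predicates are emitted first, then binary predicates, then nested scoped entities such as "not(":

```python
import re

def parse(term):
    """Parse e.g. 'fol(1,some(A,and(p(A),q(A))))' into a (name, args) tree."""
    tokens = re.findall(r"[A-Za-z0-9]+|[(),]", term)
    pos = 0
    def node():
        nonlocal pos
        name = tokens[pos]; pos += 1
        args = []
        if pos < len(tokens) and tokens[pos] == "(":
            pos += 1
            while tokens[pos] != ")":
                args.append(node())
                if tokens[pos] == ",":
                    pos += 1
            pos += 1
        return (name, args)
    return node()

def flatten(formula):
    unary, binary, scoped = [], [], []
    def walk(node):
        name, args = node
        if name in ("some", "and"):          # drop quantifiers and conjunctions
            for a in args:
                if a[1]:                     # skip bare variable arguments
                    walk(a)
        elif name == "not":                  # a nested scoped entity
            scoped.append(["not("] + flatten(args[0]) + [")"])
        else:                                # a predicate over variables
            (unary if len(args) == 1 else binary).append(
                [name] + [a[0] for a in args])
    walk(formula)
    return [tok for group in unary + binary + scoped for tok in group]

def to_mapping(fol_string):
    name, args = parse(fol_string)
    assert name == "fol"                     # args[0] is the sentence index
    return ["fol("] + flatten(args[-1]) + [")"]

fol_str = ("fol(1,some(A,some(B,some(C,and(r1by(B,A),and(n1foot(A),"
           "and(r1agent(B,C),and(v1travel(B),and(n1woman(C),some(D,"
           "and(card(C,D),and(c3number(D),n1numeral(D)" + ")" * 13)
print(" ".join(to_mapping(fol_str)))
# fol( n1foot A v1travel B n1woman C c3number D n1numeral D r1by B A r1agent B C card C D )
```

Running this on the example FOL reproduces the equivalent mapping shown above.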

3 Proposed Approach

We model parsing a given sentence into FOL as a sequence-to-sequence transduction problem. Our parser generates a token in the output FOL representation in a sequential manner by greedily sampling it from a probability distribution conditioned on the input sentence and the previously generated tokens. Our input consists of a sequence of tokens x = (x_1, ..., x_n) which get encoded into hidden contextual representations by an Encoder. The Decoder, then, generates an output sequence of tokens y = (y_1, ..., y_m).


3.1 Encoder

Our Encoder is a bidirectional LSTM (biLSTM) which encodes a sequence of input tokens (x_1, ..., x_n) into a sequence of hidden states (h_1, ..., h_n) that capture contextual information from the input, which is eventually used by the decoder to produce the output FOL sequence. The biLSTM block takes word embeddings (e_1, ..., e_n) for the input tokens as input and processes them to calculate the contextual representations

h_i = [h_i^fwd ; h_i^bwd]

where [ ; ] denotes the concatenation operation and h_i^fwd, h_i^bwd refer to the forward and backward hidden states of the biLSTM.
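A minimal PyTorch sketch of this encoder (the hyperparameter values here are illustrative placeholders, not the paper's settings):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """biLSTM encoder: each h_i concatenates forward and backward states."""
    def __init__(self, vocab_size, emb_dim=50, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, token_ids):               # (batch, seq_len)
        emb = self.embed(token_ids)             # (batch, seq_len, emb_dim)
        hidden, _ = self.lstm(emb)              # (batch, seq_len, 2*hidden_dim)
        return hidden

enc = Encoder(vocab_size=100)
h = enc(torch.randint(0, 100, (2, 7)))
print(h.shape)  # torch.Size([2, 7, 128])
```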

3.2 Decoder

The Decoder consists of an LSTM which uses the outputs of the encoder, along with the previously decoded outputs provided to it as embeddings, to generate a sequence of hidden states (s_1, ..., s_m):

s_t = LSTM(s_{t-1}, emb(y_{t-1}))
Attention Bahdanau et al. (2014) has now become ubiquitous in sequence-to-sequence models; we consider it to be part of our baseline model. Following Bahdanau et al. (2014), we calculate the weights for encoder-decoder attention using the decoder states s_t as queries and the encoder states h_i as both keys and values:

α_ti = softmax_i(score(s_t, h_i))

Figure 1: Overview of our architecture showing separate heads (red), category prediction (orange) and alignment mechanism (green and pink). Input to Decoder LSTM (blue) depicts the output of last step being fed at next step. Red arrow between Attention Layer and Decoder depicts standard encoder-decoder attention.

The encoder-context vector c_t is obtained by taking a weighted sum of the encoder's hidden states:

c_t = Σ_i α_ti h_i


The hidden state of the decoder s_t, along with the encoder-context vector c_t, is used to predict the final output token at step t:

p(y_t | y_{<t}, x) = softmax(W_o [s_t ; c_t])

where W_o ∈ R^{V×(d_s+d_c)} is the output head, d_s is the dimension of the decoder hidden vector and d_c is the dimension of the context vector.
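The attention and output-head computation above can be sketched in PyTorch as follows (an additive, Bahdanau-style scoring function is assumed; the class name and dimensions are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionHead(nn.Module):
    """Encoder-decoder attention plus output head W_o."""
    def __init__(self, dec_dim, enc_dim, vocab_size, attn_dim=64):
        super().__init__()
        self.W_q = nn.Linear(dec_dim, attn_dim, bias=False)  # projects queries s_t
        self.W_k = nn.Linear(enc_dim, attn_dim, bias=False)  # projects keys h_i
        self.v = nn.Linear(attn_dim, 1, bias=False)
        self.W_o = nn.Linear(dec_dim + enc_dim, vocab_size)  # output head

    def forward(self, s_t, enc_h):
        # s_t: (batch, dec_dim); enc_h: (batch, src_len, enc_dim)
        scores = self.v(torch.tanh(self.W_q(s_t).unsqueeze(1) + self.W_k(enc_h)))
        alpha = F.softmax(scores.squeeze(-1), dim=-1)              # attention weights
        context = torch.bmm(alpha.unsqueeze(1), enc_h).squeeze(1)  # c_t
        probs = F.softmax(self.W_o(torch.cat([s_t, context], dim=-1)), dim=-1)
        return probs, alpha

attn = AttentionHead(dec_dim=64, enc_dim=128, vocab_size=50)
p, alpha = attn(torch.randn(2, 64), torch.randn(2, 7, 128))
print(p.shape, alpha.shape)  # torch.Size([2, 50]) torch.Size([2, 7])
```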

We train the model on the standard cross-entropy objective while adopting the teacher-forcing methodology, i.e., giving the decoder its inputs from the ground truth instead of from the previously decoded tokens during training:

L_CE = - Σ_t log p(y*_t | y*_{<t}, x; θ)

where y*_t is the target token from the ground truth at step t and θ refers to the trainable model parameters.

3.2.1 Separate Heads

The output tokens in an FOL sequence do not all belong to the same token category, unlike the majority of sequence-to-sequence translation problems, which process words. In particular, the output tokens in an FOL sequence can be divided into four major types: Unary Predicates (U), Binary Predicates (B), Variables (V), and Scoped Entities (S). We create separate vocabularies of sizes V_U, V_B, V_V and V_S for each category. Apart from variables, which have one-hot embeddings, all other types of output tokens have dense embeddings. This is because a token of the variable category does not possess semantic meaning that is shared across all sequences from the output distribution; such tokens are defined in the context of a single FOL sequence only. We represent them through one-hot embeddings to ensure independence between them.

Building on the above motivation, we use five different heads on top of the decoder LSTM. While one head decides what type of token is being generated at a given decoding step, the other heads decode the probabilities of the different types of tokens:

p(y_t^k | y_{<t}, x) = softmax(W_k [s_t ; c_t]),  k ∈ {U, B, V, S}

where W_k ∈ R^{V_k×(d_s+d_c)}. We also treat the categories themselves as words in a vocabulary of size four, and therefore predict the category z_t with a separate head:

p(z_t | y_{<t}, x) = softmax(W_z [s_t ; c_t])
We, thus, train the model on an additional auxiliary task of predicting the type of the token being generated at each step. Hence, the cross-entropy objective to decode the correct type at all steps becomes

L_aux = - Σ_t log p(z*_t | y*_{<t}, x; θ, φ)

where z*_t is the target type (from the ground truth) of the token to be predicted at step t, and φ refers to the additional decoder parameters introduced in the model. Thus, our overall objective is now a sum of both the cross-entropy and auxiliary objectives:

L = L_CE + L_aux
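The five heads and the combined objective can be sketched as follows (vocabulary sizes are placeholders; for brevity the heads read only the decoder state, whereas the full model concatenates the encoder context as well):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparateHeads(nn.Module):
    """One category head plus four per-category token heads."""
    CATEGORIES = ("unary", "binary", "variable", "scope")

    def __init__(self, dec_dim, vocab_sizes):
        super().__init__()
        self.cat_head = nn.Linear(dec_dim, len(self.CATEGORIES))
        self.heads = nn.ModuleDict(
            {c: nn.Linear(dec_dim, vocab_sizes[c]) for c in self.CATEGORIES})

    def forward(self, s_t):
        return self.cat_head(s_t), {c: head(s_t) for c, head in self.heads.items()}

model = SeparateHeads(64, {"unary": 1000, "binary": 500, "variable": 26, "scope": 4})
s_t = torch.randn(3, 64)
cat_logits, tok_logits = model(s_t)

# Overall objective for one decoding batch: token cross-entropy (taken from
# the head of each example's gold category) plus the auxiliary category loss.
cat_target = torch.tensor([2, 0, 1])
tok_target = torch.tensor([5, 17, 3])
token_loss = torch.stack([
    F.cross_entropy(tok_logits[SeparateHeads.CATEGORIES[c]][i:i + 1],
                    tok_target[i:i + 1])
    for i, c in enumerate(cat_target.tolist())]).mean()
loss = token_loss + F.cross_entropy(cat_logits, cat_target)
print(cat_logits.shape, tok_logits["variable"].shape)
```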
3.3 Decoder Self Attention

One of the key challenges for the model is to identify the relationships between the variables it generates. A variable that is an argument in a binary predicate should be aligned with the same variable used as an argument in a unary predicate previously. One way to achieve such alignment is through decoder self-attention, which is an extension of the regular encoder-decoder attention. In this case, the queries, keys and values are all decoder hidden states s_t. We therefore determine a decoder-context vector d_t along with the encoder-context vector c_t. However, one key difference between encoder-decoder attention and decoder self-attention is that while encoder-decoder attention can be applied over the whole input, decoder self-attention can only be applied over the hidden states that have been decoded so far. Just like the encoder context, the decoder context is calculated by taking a weighted sum of the decoder hidden states:

d_t = Σ_{j<t} β_tj s_j,   β_tj = softmax_j(score(s_t, s_j))

The linear head on the decoder now uses both the encoder and decoder contexts along with the decoder hidden state to generate the final output:

p(y_t | y_{<t}, x) = softmax(W_o [s_t ; c_t ; d_t])

3.3.1 Alignment Mechanism

Through decoder self-attention, the model does not receive any explicit signal on alignment and relies only on the cross-entropy objective to identify such relations between different variables. In order to provide an explicit signal to the model, we introduce the Alignment Mechanism.

At each variable-decoding step, along with decoding the type of the token (i.e., variable), a linear classifier decides whether this variable is aligned with a previously decoded token/variable or is an entirely new variable being generated at this step:

p(a_t) = softmax(W_a s_t)

where W_a ∈ R^{2×d_s}. Depending on this decision by the classifier, an alignment mechanism similar to decoder self-attention performs the relational mapping between a previously decoded variable and the variable which is currently being decoded. This mapping is performed only for variables and not for any other category. All the previously generated hidden states of the decoder are linearly projected into a different space before calculating the position of the token with which the variable is aligned; the projection is performed to reduce the interference of alignment-mechanism training with the encoder-decoder attention:

s'_j = W_p s_j
The probability that a particular previous step j aligns with the current decoding step t is calculated with an attention-like formulation:

γ_tj = softmax_j(s_t · s'_j)
For every category of tokens other than variables, the decoder heads remain the same. However, for variables, we first calculate the aligned hidden state value as

s~_t = Σ_{j<t} γ_tj s_j
The output is, then, calculated as

p(y_t^V | y_{<t}, x) = softmax(W_V [s~_t ; c_t])
In order to provide an explicit signal during training, we train the decision classifier and the alignment attention on the target decisions and alignment positions with a cross-entropy objective

L_align = - Σ_t (log p(a*_t) + log γ_{t,j*_t})
where a*_t and j*_t refer to the ground-truth decision and alignment-position values, and ψ refers to the additional parameters introduced by the alignment mechanism. Therefore, our overall loss becomes

L = L_CE + L_aux + L_align
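A PyTorch sketch of this mechanism (the dot-product scoring and dimension names are illustrative assumptions, not the paper's exact parameterization):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariableAligner(nn.Module):
    """Binary 'new vs. aligned' decision plus attention over linearly
    projected past decoder states."""
    def __init__(self, dec_dim):
        super().__init__()
        self.decide = nn.Linear(dec_dim, 2)         # W_a: new vs. aligned
        self.project = nn.Linear(dec_dim, dec_dim)  # W_p: separate space, limits
                                                    # interference with enc-dec attention

    def forward(self, s_t, past_states):
        # s_t: (batch, dec_dim); past_states: (batch, t-1, dec_dim)
        decision = F.softmax(self.decide(s_t), dim=-1)
        proj = self.project(past_states)                              # s'_j
        gamma = F.softmax(
            torch.bmm(proj, s_t.unsqueeze(-1)).squeeze(-1), dim=-1)   # alignment probs
        aligned = torch.bmm(gamma.unsqueeze(1), past_states).squeeze(1)  # s~_t
        return decision, gamma, aligned

aligner = VariableAligner(dec_dim=64)
decision, gamma, aligned = aligner(torch.randn(2, 64), torch.randn(2, 5, 64))
print(decision.shape, gamma.shape, aligned.shape)
```

At training time, `decision` and `gamma` would be supervised with the gold decisions and positions via cross-entropy, as in the loss above.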
4 Experiments

4.1 Dataset

We collated a subset of the SNLI corpus Bowman et al. (2015) by extracting sentences from both the premise and the hypothesis for a limited number of examples. Eliminating duplicates, we prepared two versions of the dataset (refer to Section 2), Small and Large, to examine whether the proposed improvements remain consistent even on small data. The smaller version contains 138,346 training instances while the larger one contains 255,501. We used the development and test sets of SNLI as provided but eliminated duplicates, resulting in an evaluation set of 10,691 instances and a test set of 10,633 instances.

4.2 Implementation

We used the PyTorch library (https://pytorch.org) for implementing an auto-differentiable graph of our computations. All the models were trained with the Adam optimizer Kingma and Ba (2014) with a decayed learning rate. We use the same embedding size for the encoder and decoder embeddings in the baseline model. In our separate-heads model, the encoder embedding size remained the same; on the decoder side, however, unary and binary predicates have an embedding size of 100 each, while variable and type embeddings are one-hot, with the number of dimensions equal to their respective vocabulary sizes. Scoped entities, being few in number, were encoded with an embedding size of 50. Our final input embedding is a concatenation of the unary, binary, variable, scope and type embeddings. All dense embeddings are randomly initialized and trained from scratch (we experimented with GloVe embeddings, but they did not give further improvements).

4.3 Results and Discussion

4.3.1 Evaluation Framework

We evaluate the different models by estimating the accuracy of a complete match between the gold-standard FOL and the predicted output. Due to the complex nature of the task, it is unlikely that the model generates exactly the same FOL. To mitigate this, we also evaluate the degree of partial match between two FOLs, following the intuition behind D-match and Smatch Cai and Knight (2013), which are widely used to evaluate DRGs and AMR graphs. We align two FOLs in a bottom-up manner, beginning with variables. For aligning two variables, it is required that the corresponding predicates' names (in which they appear as arguments) and argument positions match. Subsequently, while aligning two predicates, we check that their arguments are aligned and their names are the same. We continue the same process to align nested scope symbols ("not(" etc.): given an expected scoped entity, we determine the predicted scoped entity having maximum alignment with it based on the count of other aligned predicates and scoped entities contained inside them. Given an FOL, we decompose it into related pairs (p, q) such that q appears inside the scope of p; for instance, a variable that is an argument of a predicate, or a predicate appearing inside a scoped entity. Consequently, we estimate the number of pairs in the expected FOL that can be matched with pairs in the predicted FOL, under the constraint that the corresponding entities in the pairs must be aligned. We select the alignment with the maximum number of matches and report precision, recall and F1 over pair matching as evaluation criteria, along with overall FOL accuracy.
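A toy sketch of the pair-matching scores (the real metric first computes the bottom-up alignment described above; here, for brevity, the gold and predicted (outer, inner) scope pairs are compared directly):

```python
# Precision/recall/F1 over matched (outer, inner) scope pairs.

def pair_scores(gold_pairs, pred_pairs):
    matched = len(set(gold_pairs) & set(pred_pairs))
    precision = matched / len(pred_pairs) if pred_pairs else 0.0
    recall = matched / len(gold_pairs) if gold_pairs else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [("fol", "n1woman"), ("n1woman", "C"), ("v1travel", "B")]
pred = [("fol", "n1woman"), ("n1woman", "C"), ("v1travel", "A")]
print(pair_scores(gold, pred))  # precision = recall = F1 = 2/3
```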

4.3.2 Comparison with Baseline and Ablation Studies

We conduct a range of experiments and evaluations on different models. We show our results on both the development and test sets in Tables 1, 2, 3 and 4. Our Vanilla (Baseline) model consists of a biLSTM encoder and a plain LSTM decoder as described in Section 3.2, coupled with an encoder-decoder attention mechanism. Performing disentanglement, our Separate Heads model uses different linear heads on top of the LSTM decoder for the different categories of tokens, as discussed in Section 3.2.1. Our final proposed model, Separate Heads + Align, adds our alignment mechanism on top of Separate Heads, utilising the disentangled variable-prediction mechanism coupled with an alignment mechanism to effectively identify the relationships between variables in binary predicates and their unary counterparts. We also conduct ablations on the Vanilla and Separate Heads models by incrementally adding both decoder self-attention and the alignment mechanism.

Evidently, our final model Separate Heads + Align convincingly outperforms all the described models, improving over the baseline by about 7 F-1 points on the test set (Tables 1 and 2). Decoder self-attention, even though it improves the Vanilla model, does not provide any improvements when used with Separate Heads. This can be attributed to its inability to incorporate decoder-level information, which probably becomes factorized automatically during training through the use of separate heads. It improves over Vanilla by a good margin but still only matches or remains inferior to the standalone Separate Heads model. The Align mechanism provides a large boost to the Separate Heads model, improving it by about 5 F-1 points. However, performance deteriorates when it is used with the Vanilla model, since the ability to align variables, which we find critical to its working, vanishes in this setup. We further note that increasing the size of the training data increases performance uniformly, with our final model achieving the best F-1 of 73.28 and an overall accuracy of 63.24 on the test set (Table 2).

Model Precision Recall F-1 Accuracy
Vanilla (Baseline) 65.48 65.15 65.31 59.26
+ Self Attention 67.86 66.70 67.28 60.74
+ Align Mechanism 62.13 61.06 61.59 56.98
Separate Heads 68.48 66.81 67.64 60.77
+ Self Attention 67.11 65.87 66.48 60.02
Separate Heads + Align 73.68 72.17 72.92 63.26
Table 1: Results showing overall accuracy and F1-Scores of different models (trained on Large dataset) on development dataset
Model Precision Recall F-1 Accuracy
Vanilla (Baseline) 66.43 65.84 66.13 60.45
+ Self Attention 68.56 67.23 67.89 61.18
+ Align Mechanism 61.74 60.60 61.17 56.79
Separate Heads 69.18 67.63 68.40 61.14
+ Self Attention 66.92 65.69 66.30 60.12
Separate Heads + Align 74.05 72.53 73.28 63.24
Table 2: Results showing overall accuracy and F1-Scores of different models (trained on Large dataset) on Test dataset
Model Precision Recall F-1 Accuracy
Vanilla (Baseline) 59.54 58.14 58.83 52.99
Separate Heads 62.00 60.58 61.28 55.20
Separate Heads + Align 65.75 64.36 65.05 55.30
Table 3: Results showing overall accuracy and F1-Scores of different models (trained on Small Dataset) on development dataset
Model Precision Recall F-1 Accuracy
Vanilla (Baseline) 59.87 58.41 59.13 52.92
Separate Heads 62.56 61.10 61.82 55.94
Separate Heads + Align 66.45 65.05 65.74 56.10
Table 4: Results showing overall accuracy and F1-Scores of different models (trained on Small Dataset) on test dataset

4.3.3 Analysis

We perform additional experiments to analyse the results observed. We conduct two sets of analysis - Variation of F-1 score with input length and Perturbed training to establish the robustness of our proposed method.

Figure 2: Variation of output F-1 Score with input length on Test Dataset

Variation with Input Length: Evidently, our proposed models are much more robust to increases in input-sentence length, as shown in Fig. 2. This can be attributed to several factors: increased model capacity, the ability to process different categories of output tokens separately (giving better long-range dependencies), and less confusion in generating many variables across the FOL owing to better alignment over the sequence.

Model Precision Recall F-1 Accuracy
Vanilla(Baseline) 63.75 62.32 63.03 57.95
Separate Heads 67.39 65.59 66.48 61.44
Separate Heads + Align 72.90 71.95 72.42 63.14
Table 5: Results on Test set showing both accuracy and F1-Scores with perturbed training

Perturbed Training: It has been noticed in the literature (Jia and Liang (2017); Niven and Kao (2019)) that neural models sometimes exploit trivial patterns in inputs/outputs, yielding deceptively improved results. One such pattern could be the co-occurrence of variables like A and B with specific unary and binary predicates. In order to disturb such patterns, we randomly permute these variables in the ground truth during training. Our baseline model indeed shows a significant drop in results (Table 5). On the other hand, our two main models do not show such a large drop, demonstrating their robustness to such perturbations.
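The perturbation can be sketched as a consistent random renaming of variables in each ground-truth token sequence (an illustrative implementation, not the paper's code):

```python
import random

# Consistently rename the variables in a ground-truth FOL token sequence so
# the model cannot latch onto spurious patterns such as variable A always
# co-occurring with certain predicates.

def permute_variables(tokens, variables=("A", "B", "C", "D"), seed=None):
    rng = random.Random(seed)
    shuffled = list(variables)
    rng.shuffle(shuffled)
    mapping = dict(zip(variables, shuffled))   # a random bijection on names
    return [mapping.get(tok, tok) for tok in tokens]

seq = "fol( n1foot A v1travel B n1woman C r1by B A )".split()
print(permute_variables(seq, seed=0))
```

Because the renaming is a bijection applied uniformly over the sequence, the logical structure of the FOL is unchanged; only the surface variable names move.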

5 Related Work

Early semantic parsers were mostly rule-based Johnson (1984); Woods (1973); Thompson et al. (1969), using grammar systems Waltz (1978); Hendrix et al. (1978), employing shallow pattern matching Johnson (1984), or using parse trees to generate a database query language Woods (1973). These were succeeded by data-driven learning techniques which use language data paired with meaning representations and can be broadly classified into statistical methods Thompson (2003); Zettlemoyer and Collins (2012); Zelle and Mooney (1996); Kwiatkowski et al. (2010) and neural approaches Kočiskỳ et al. (2016); Dong and Lapata (2016); Jia and Liang (2016); Buys and Blunsom (2017); Cheng et al. (2017); Liu et al. (2018); Li et al. (2018). Zettlemoyer and Collins (2012) proposed to use sentences and their lambda-calculus expressions to learn a log-linear model through probabilistic CCG to rank different parses for a given sentence using simple features such as the count of lexical entries. Additionally, captioned videos have been used to perform visually grounded semantic parsing Ross et al. (2018). Feedback-based semantic parsing has been used to facilitate continuous improvement in parse quality through conversations Artzi and Zettlemoyer (2011) and user interaction Iyer et al. (2017); Lawrence and Riezler (2018).

Neural approaches alleviate the need for manually defined lexicons and can further be categorized, based on the structure of the parse, into sequential parse prediction Jia and Liang (2016); Kočiskỳ et al. (2016) and graph-structure decoding, which tailors the network architecture to the syntactic structure of the meaning representation Yin and Neubig (2017); Alvarez-Melis and Jaakkola (2016); Rabinovich et al. (2017). Dong and Lapata (2016) proposed SEQ2TREE to generate domain-specific hierarchical logical forms by introducing a parenthesis token and parent connections to recursively generate sub-trees. Rabinovich et al. (2017) introduced a dynamic decoder whose components are composed depending on the generated tree parse. Liu et al. (2018) parse DRSs using dedicated hierarchical decoders that generate the partial structure first, before the semantic content. We instead make the model disambiguate syntactic types (unary and binary predicates, variables, scope symbols) by performing category-type prediction as an auxiliary task and using separate prediction heads. Constrained decoding using target-language syntax and grammar rules has also been explored Yin and Neubig (2017); Xiao et al. (2016). The copy mechanism Gu et al. (2016) has been used to facilitate the generation of out-of-vocabulary entities through encoder attention Jia and Liang (2016). However, our variable-alignment mechanism is different, since it constrains the model to align binary-predicate arguments with previously generated unary structures (alignment happening at the decoder level) through an explicit loss on whether to align and where to align.

6 Conclusion

In this work, we examined the capability of neural models on the task of parsing natural language sentences into First-Order Logic. We proposed disentangling the representations of different token categories while generating the FOL output, and used category prediction as an auxiliary task. We utilized this token factorization to build an alignment mechanism which effectively captures the relationships between variables across different predicates in the FOL. Our analysis showed the difficulties faced by neural networks in modeling FOL and ways to tackle them. We also experimented with perturbing the inputs in order to examine the robustness of the different proposed models. In a bid to promote further research in the area, we aim to release our code as well as our data publicly.