Cross-lingual Semantic Parsing

by Sheng Zhang, et al.
Johns Hopkins University

We introduce the task of cross-lingual semantic parsing: mapping content provided in a source language into a meaning representation based on a target language. We present: (1) a meaning representation designed to allow systems to target varying levels of structural complexity (shallow to deep analysis), (2) an evaluation metric to measure the similarity between system output and reference meaning representations, (3) an end-to-end model with a novel copy mechanism that supports intrasentential coreference, and (4) an evaluation dataset where experiments show our model outperforms strong baselines by at least 1.18 F1 score.




1 Introduction

We are concerned here with representing the semantics of multiple natural languages in a single meaning representation. Renewed interest in meaning representations has led to a surge of proposed new frameworks, e.g., GMB Basile et al. (2012), AMR Banarescu et al. (2013), UCCA Abend and Rappoport (2013), and UDS White et al. (2016), as well as further calls to attend to existing representations, e.g., Episodic Logic (EL) Schubert and Hwang (2000); Schubert (2000); Hwang and Schubert (1994); Schubert (2014), or Discourse Representation Theory (DRT) Kamp (1981); Heim (1988).

Many of these efforts are limited to the analysis of English, with a number of exceptions, e.g., recent efforts by Bos et al. (2017), ongoing efforts in Minimal Recursion Semantics (MRS) Copestake et al. (1995), and multilingual FrameNet annotation and parsing Fung and Chen (2004); Padó and Lapata (2005), among others. For many languages, semantic analysis cannot be performed directly, owing to a lack of training data. While there is active work in the community focused on rapid construction of resources for low-resource languages Strassel and Tracey (2016), it remains an expensive and perhaps infeasible solution to assume in-language annotated resources for developing semantic parsing technologies. In contrast, bitext is easier to obtain: it often arises without researcher involvement (for example, owing to a government decree), and even when not available, it may be easier to find bilingual speakers who can translate a text than to find experts who will create in-language semantic annotations. In addition, we are simply further along in being able to automatically understand English than other languages, a result of the long-standing bias in investment toward English-rooted resources.

Figure 1: Input and output of cross-lingual semantic parsing. The reference translation for the input is “In Biloxi, 30 people were reported dead in one block of flats which was hit by a storm surge”.

Therefore, we propose the task of cross-lingual semantic parsing, which aims at transducing a sentence in the source language (e.g., the Chinese sentence in Fig. 1) into a meaning representation derived from English examples, via bitext. Our contributions are four-fold:

(1) We present a meaning representation, which allows systems to target varying levels of structural complexity (shallow to deep analysis).

(2) We design an evaluation metric to measure the similarity between system output and reference meaning representations.

(3) We propose an encoder-decoder model to learn end-to-end cross-lingual semantic parsing. With a copying mechanism, the model is able to solve intra-sentential coreference explicitly.

(4) We release the first evaluation dataset for cross-lingual semantic parsing. Experiments show that our proposed model achieves an F1 score of 38.38, outperforming several strong baselines.

2 Related Work

Our work synthesizes two strands of research, meaning representation and cross-lingual learning.

The meaning representation targeted in this work is akin to that of Hobbs (2003), but our eventual goal is to transduce texts from arbitrary human languages into a “…broad, language-like, inference-enabling [semantic representation] in the spirit of Montague…” Schubert (2015). Unlike efforts such as those by Schubert and colleagues that directly target such a representation, we are pursuing a strategy that incrementally increases the complexity of the target representation in accordance with our ability to fashion models capable of producing it. (E.g., in Fig. 1 we recognize “by a storm surge” as an initial structural unit, with multiple potential analyses, which may be further refined based on the capabilities of a given cross-lingual semantic parser.) Embracing underspecification in the name of tractability is exemplified by MRS Copestake et al. (2005), the so-called slacker semantics Copestake (2009), and we draw inspiration from that work. Representations such as AMR Banarescu et al. (2013) also make use of underspecification, but usually this is only implicit: certain aspects of meaning are simply not annotated. Unlike AMR, but akin to decisions made in PropBank Palmer et al. (2005) (which forms the majority of the AMR ontological backbone), we target a representation with a close correspondence to natural language syntax. Unlike interlingua approaches Mitamura et al. (1991); Dorr and Habash (2002), which map the source language into an intermediate representation and then map that into the target language, we are not concerned with generating text from the meaning representation. Substantial prior work on meaning representations exists, including HPSG-based representations Copestake et al. (2005), CCG-based representations Steedman (2000); Baldridge and Kruijff (2002); Bos et al. (2004), and Universal Dependencies-based representations White et al. (2016); Reddy et al. (2017). See Schubert (2015); Abend and Rappoport (2017) for further discussion.

Cross-lingual learning has previously been applied to various NLP tasks. Yarowsky et al. (2001); Padó and Lapata (2009); Faruqui and Kumar (2015), among others, focused on projecting existing annotations on source-language text to the target language. Zeman and Resnik (2008); Ganchev et al. (2009); McDonald et al. (2011); Naseem et al. (2012), inter alia, enabled model transfer by sharing features or model parameters across languages. Sudo et al. (2004); Zhang et al. (2017a) worked on cross-lingual information extraction and demonstrated the advantages of end-to-end learning approaches. In this work, we explore end-to-end learning for cross-lingual semantic parsing, as discussed in Section 6.

Figure 2: Meaning representations.

3 Meaning Representation

The goal of cross-lingual semantic parsing is to provide a meaning representation which can be used for various types of deep and shallow analysis on the target language side. Many meaning representations are potentially suitable for this goal, e.g., AMR Banarescu et al. (2013), UCCA Abend and Rappoport (2013), UDS White et al. (2016), and UDepLambda Reddy et al. (2017). In this work, we choose the representation used as a scaffold by UDS, namely the PredPatt meaning representation. Other meaning representations may also be feasible.

PredPatt is a framework which defines a set of patterns for shallow semantic parsing. The reasons for choosing the PredPatt meaning representation are three-fold: (1) Compatibility: The PredPatt meaning representation relates to Robust Minimal Recursion Semantics (RMRS) Copestake (2007), aiming for a maximal degree of semantic compatibility. With such a meaning representation, shallow analysis, such as predicate-argument extraction Zhang et al. (2017a), can be regarded as producing a semantics which is underspecified and reusable with respect to deeper analysis, such as lexical semantics and inference White et al. (2016). (2) Robustness and Speed: The patterns defined in PredPatt for producing this meaning representation are non-lexical and linguistically well-founded, and PredPatt has been shown to be fast and accurate enough to process large volumes of text Zhang et al. (2017b). (3) Cross-lingual validity: The patterns in PredPatt are purely based on Universal Dependencies, which is designed to be cross-linguistically consistent.

In the following sections, we describe three forms of PredPatt meaning representation (Fig. 2). They are created for different purposes, and are inter-convertible. In this work, the graph representation is used for evaluation, and the linearized representation is used for learning cross-lingual semantic parsing.

3.1 Flat Representation

The non-recursive or “flat” representation can be viewed as a Parsons-style Parsons (1990) and underspecified version of neo-Davidsonianized RMRS Copestake (2007). As shown in Fig. 2(a), the flat representation is a tuple ⟨P, A⟩, where P is a bag of predicates that are all maximally unary, and A is a bag of arguments represented by separate binary relations.

Predicate: Predicates in the PredPatt representation are referred to as complex predicates: they are open-class predicates represented in the target language. Scope and lexical information in the predicates are left unresolved, yet can be recovered incrementally in deep semantic parsing. From the perspective of RMRS, complex predicates are conjunctions of underspecified elementary predications Copestake et al. (2005) where handles are ignored, but syntactic properties from Universal Dependencies are retained. For instance, in Fig. 2(a), the subscript “h” in the predicate “were reported” indicates that “reported” is the syntactic head of the predicate.

Argument Relation: The Parsons-style flat representation makes arguments first-class predications. Using this style allows incremental addition of arguments, which is useful in shallow semantics, where the arity of open-class predicates and the argument indexation are underspecified. They can be recovered when a lexicon becomes available in deep analysis Dowty (1989); Copestake (2007).
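To make the flat representation concrete, the sketch below encodes a fragment of the Fig. 2(a) example as plain Python data. The predicate and argument strings are illustrative stand-ins of our own, not actual PredPatt output.

```python
# A sketch of the Parsons-style flat representation as plain Python data.
# Predicate and argument names here are illustrative, not PredPatt output.

# P: a bag of maximally unary predicates, each applied to an event variable.
predicates = [
    ("were_reported(h:reported)", "e1"),   # "h:" marks the syntactic head
    ("hit(h:hit)", "e2"),
]

# A: arguments as separate binary relations between an event variable and an
# argument variable, so arguments can be added incrementally.
arguments = [
    ("arg", "e1", "x1"),   # e1's argument x1 ("30 people")
    ("arg", "e2", "x2"),   # e2's argument x2 ("one block of flats")
    ("arg", "e2", "x3"),   # e2's argument x3 ("a storm surge")
]

def arity(event, args):
    """The arity of an open-class predicate stays underspecified; it is
    recoverable later by counting the event's argument relations."""
    return sum(1 for rel, e, x in args if e == event)
```

The point of the separate binary relations is visible in `arity`: adding a new argument never requires rewriting an existing predication.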

3.2 Graph Representation

The graph representation, as shown in Fig. 2(b), is developed to improve ease of readability, parser evaluation, and integration with lexical semantics. The structure of the graph representation is a triple ⟨V, I, A⟩: a set of variables V, a mapping I from each variable to its instance in the target language (the dotted arrows in Fig. 2(b)), and a mapping A from each pair of related variables to their argument relation (the solid arrows in Fig. 2(b)). The graph representation can be viewed as an underspecified version of Dependency Minimal Recursion Semantics (DMRS) Copestake (2009) due to the underspecification of scope. Different from DMRS, the graph representation can be linked cleanly to the syntax of Universal Dependencies in PredPatt.

3.3 Linearized Representation

The linearized representation aims to facilitate learning of cross-lingual semantic parsing. Recently, parsers based on recurrent neural networks that make use of linearized representations have achieved state-of-the-art performance in constituency parsing Vinyals et al. (2015), logical form prediction Dong and Lapata (2016); Jia and Liang (2016), cross-lingual open information extraction Zhang et al. (2017a), and AMR parsing Barzdins and Gosko (2016); Peng et al. (2017). An example of the PredPatt linearized representation is shown in Fig. 2(c): starting at the root node of the dependency tree (i.e., “reported”), we take an in-order traversal of its spanning tree. As the tree is expanded, brackets are inserted to denote the beginning or end of a predicate span, and parentheses are inserted to denote the beginning or end of an argument span. The subscript “h” indicates the syntactic head of each span. Intra-sentential coreference occurs when an argument refers to one of its preceding nodes; in that case we replace the argument with a special placeholder symbol and add a coreference link between the placeholder and its antecedent. Such a linearized representation can be viewed as a sequence of tokens accompanied by a list of coreference links. Brackets, parentheses, syntactic heads, and the placeholder symbol are all treated as tokens in this representation.
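The linearization scheme above can be sketched as follows. This is our own simplification: flat predicate/argument spans rather than a full in-order tree traversal, with "*" standing in for the coreference placeholder and "_h" marking heads.

```python
# A minimal sketch of the linearized representation (our simplification):
# "[" / "]" bound a predicate span, "(" / ")" bound an argument span,
# "_h" marks the syntactic head of a span, and a repeated argument is
# replaced by the placeholder "*" plus a coreference link.

def linearize(predicates):
    """predicates: list of (pred_tokens, pred_head, arg_spans), where each
    arg_span is (arg_tokens, arg_head). Returns the token sequence and a
    list of (placeholder_position, antecedent_head_position) links."""
    out, corefs, seen = [], [], {}
    for pred_tokens, pred_head, args in predicates:
        out.append("[")
        for tok in pred_tokens:
            out.append(tok + "_h" if tok == pred_head else tok)
        for arg_tokens, arg_head in args:
            out.append("(")
            key = tuple(arg_tokens)
            if key in seen:                      # intra-sentential coreference
                corefs.append((len(out), seen[key]))
                out.append("*")
            else:
                for tok in arg_tokens:
                    pos = len(out)
                    out.append(tok + "_h" if tok == arg_head else tok)
                    if tok == arg_head:
                        seen[key] = pos          # remember the antecedent head
            out.append(")")
        out.append("]")
    return out, corefs
```

On a toy input where the argument "30 people" occurs under two predicates, the second occurrence is emitted as "*" with a link back to the position of "people_h".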

4 Evaluation Metric

In cross-lingual semantic parsing, meaning representation for the target language can be represented in three forms as shown in Fig. 2. Evaluation of such forms is crucial to the development of algorithms for cross-lingual semantic parsing. However, there is no method directly available for evaluation. Related methods come from semantic parsing, whose results are mainly evaluated in three ways: (1) task correctness Tang and Mooney (2001), which evaluates on a specific NLP task that uses the parsing results; (2) whole-parse correctness Zettlemoyer and Collins (2005), which counts the number of parsing results that are completely correct; and (3) Smatch Cai and Knight (2013), which computes the similarity between two semantic structures.

Nevertheless, in cross-lingual semantic parsing, where instances of predicates are represented in the target language, we need an evaluation metric that can be used regardless of specific tasks or domains, and that can differentiate parsing results based not only on structural similarity but also on the similarity of predicate instances. We design an evaluation metric S that computes the similarity between two graph representations as shown in Fig. 2(b). It is parameterized by two functions: Sim_inst scores the similarity between two instances, and Sim_rel scores the similarity between two argument relations. Both scores are normalized to [0, 1].

As described in Section 3.2, a graph representation consists of three types of information ⟨V, I, A⟩. For two graphs G_1 = ⟨V_1, I_1, A_1⟩ and G_2 = ⟨V_2, I_2, A_2⟩, we define the score S to measure the similarity of G_2 against G_1:

S(G_2, G_1) = max_m [ Σ_{v ∈ V_2} Sim_inst(I_2(v), I_1(m(v))) + Σ_{(u,v) ∈ dom(A_2)} Sim_rel(A_2(u, v), A_1(m(u), m(v))) ]

where m is a mapping from variables in G_2 to variables in G_1, and dom(A_2) is the domain of A_2, i.e., all argument edges in G_2. S computes the highest similarity score among all possible mappings m.

The precision and recall, taking G_2 as system output and G_1 as reference, are computed by

P = S(G_2, G_1) / (N_2 + E_2),    R = S(G_2, G_1) / (N_1 + E_1)

where N_k is the number of instances in G_k and E_k is the number of argument relations in G_k.

In this work, we set Sim_inst = Bleu Papineni et al. (2002) and Sim_rel = δ, the Kronecker delta. Bleu is a widely used metric in machine translation, and here it gives partial credit to instance similarity in S. (Future work could consider, e.g., a modified Bleu that considers Levenshtein distance between tokens for more robust partial scoring in the face of transliteration errors.) Finding an optimal variable mapping that yields the highest similarity score is NP-complete. We instead adopt the strategy used in Smatch Cai and Knight (2013): a hill-climbing search with smart initialization plus 4 random restarts, which has been shown to give the best trade-off between accuracy and speed. Smatch itself can be considered a special case of S in which Sim_inst = δ and Sim_rel = δ. We show an example of evaluating two similar graphs using S in the supplemental material.
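Under the Kronecker-delta setting, the metric can be sketched with an exhaustive search over variable mappings; this is exact only for tiny graphs (the paper uses hill-climbing instead), and the graph encoding and names below are our own.

```python
from itertools import permutations

def kronecker(a, b):
    return 1.0 if a == b else 0.0

def s_score(G2, G1, sim_inst=kronecker, sim_rel=kronecker):
    """Similarity of graph G2 against G1, maximized over variable mappings.
    A graph is (instances, relations): instances maps var -> string, and
    relations maps (var, var) -> relation label. Exhaustive search over
    mappings is exponential; this sketch is only for tiny graphs."""
    v2, v1 = list(G2[0]), list(G1[0])
    best = 0.0
    # try every injective mapping from G2's variables into G1's
    for target in permutations(v1, min(len(v2), len(v1))):
        m = dict(zip(v2, target))
        score = sum(sim_inst(G2[0][v], G1[0][m[v]]) for v in m)
        score += sum(sim_rel(r, G1[1].get((m[u], m[v])))
                     for (u, v), r in G2[1].items() if u in m and v in m)
        best = max(best, score)
    return best

def prf(G2, G1):
    """Precision, recall, and F, normalizing by system (G2) and
    reference (G1) graph sizes respectively."""
    s = s_score(G2, G1)
    p = s / (len(G2[0]) + len(G2[1]))
    r = s / (len(G1[0]) + len(G1[1]))
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

With the default Kronecker-delta similarities this behaves like a Smatch-style exact match; passing a string-similarity function such as Bleu as `sim_inst` yields the partial credit described above.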

5 Task

We formulate the task of cross-lingual semantic parsing as a joint problem of sequence-to-sequence learning and coreference resolution. The input is a sentence in the source language, e.g., the Chinese sentence in Fig. 1. The output is a linearized meaning representation as shown in Fig. 2(c): it contains a sequence of tokens y in the target language as well as a coreference assignment for each coreference placeholder in y.

Formally, let the input be a sequence of tokens x = (x_1, …, x_n), and let the output be a sequence of tokens y = (y_1, …, y_m) together with a list of coreference assignments c = (c_1, …, c_m), where c_t is the coreference assignment for y_t. The set of possible assignments for c_t is {ε, y_1, …, y_{t−1}}: a dummy antecedent ε and all preceding tokens. The dummy antecedent ε represents the scenario where y_t is not a coreference placeholder and should be assigned none of the preceding tokens. Here n is the length of the input sentence and m is the length of the output sequence.

6 Model

The goal for cross-lingual semantic parsing is to learn a conditional probability distribution P(y, c | x) whose most likely configuration, given the input sentence, outputs the true linearized meaning representation. While the standard encoder-decoder framework shows state-of-the-art performance in sequence-to-sequence learning Vinyals et al. (2015); Jia and Liang (2016); Barzdins and Gosko (2016), it cannot directly resolve intra-sentential coreference in cross-lingual semantic parsing. To this end, we propose an encoder-decoder architecture equipped with a copying mechanism. As illustrated in Fig. 3, the encoder transforms the input sequence into hidden states; the decoder reads the hidden states, and then at each time step decides whether to generate a token or copy a preceding token.

Figure 3: Illustration of the model architecture. At the current decoding step, the decoder takes the token “[” as input, and decides to copy the preceding head token “block” via the copying mechanism, instead of generating a token via the attention mechanism.

6.1 Encoder

The encoder employs a bidirectional recurrent neural network Schuster and Paliwal (1997) to encode the input x (for simplicity, we use x_i and y_t to denote both tokens and their word embeddings) into a sequence of hidden states h = (h_1, …, h_n). Each hidden state h_i is a concatenation of a left-to-right hidden state hf_i and a right-to-left hidden state hb_i:

h_i = [hf_i ; hb_i],   hf_i = f_fwd(x_i, hf_{i−1}),   hb_i = f_bwd(x_i, hb_{i+1})    (1)

where f_fwd and f_bwd are L-layer stacked LSTM units Hochreiter and Schmidhuber (1997). The encoder hidden states are zero-initialized.
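A minimal numpy sketch of the bidirectional encoder, with a plain tanh RNN cell standing in for the paper's stacked LSTM units; shapes and parameter names are illustrative.

```python
import numpy as np

def rnn_step(W, U, b, x, h):
    """One step of a plain tanh RNN cell (a stand-in for the stacked
    LSTM units used in the paper)."""
    return np.tanh(W @ x + U @ h + b)

def bi_encode(X, params):
    """Encode a sequence of embeddings X (list of d-dim vectors) into
    hidden states h_i = [forward_i ; backward_i], concatenating a
    left-to-right and a right-to-left pass, both zero-initialized."""
    Wf, Uf, bf, Wb, Ub, bb = params
    k = bf.shape[0]
    fwd, bwd = [], []
    h = np.zeros(k)
    for x in X:                        # left-to-right pass
        h = rnn_step(Wf, Uf, bf, x, h)
        fwd.append(h)
    h = np.zeros(k)
    for x in reversed(X):              # right-to-left pass
        h = rnn_step(Wb, Ub, bb, x, h)
        bwd.append(h)
    bwd.reverse()                      # realign with input order
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```

Each output state has twice the cell width, since it concatenates the two directional states, matching Equation 1 in structure.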

6.2 Copying-Enabled Decoder

Given the encoder hidden states, the decoder predicts the meaning representation according to the conditional probability P(y, c | x), which can be decomposed as a product of the decoding probabilities at each time step t:

P(y, c | x) = ∏_{t=1}^{m} P(y_t, c_t | y_{<t}, c_{<t}, x)    (2)

where y_{<t} and c_{<t} are the preceding tokens and coreference assignments. We omit y_{<t}, c_{<t}, and x from the notation when the context is unambiguous. The decoding probability at each time step is defined as

P(y_t, c_t) = P_copy(ε) · P_gen(y_t)   if c_t = ε,   P(y_t, c_t) = P_copy(c_t)   otherwise    (3)

where P_gen is the generating probability and P_copy is the copying probability over coreference assignments. If the dummy antecedent ε is assigned to c_t, the decoder generates a token for y_t; otherwise the decoder copies a token from the preceding tokens.

Generation: If the decoder decides to generate a token at time step t, the probability distribution of the generated token is defined as

P_gen(y_t) = softmax(g(s_t, a_t))    (4)

where g is a two-layer feed-forward neural network over the decoder hidden state s_t and the context vector a_t. The decoder hidden state is computed by

s_t = rnn(y_{t−1}, s_{t−1})    (5)

where rnn is a recurrent neural network using L-layer stacked LSTMs, and s_0 is initialized with the last encoder left-to-right hidden state hf_n. The context vector a_t is computed by the attention mechanism Bahdanau et al. (2014); Luong et al. (2015), as illustrated in Fig. 3:

α_{ti} ∝ exp(s_t^T W_a h_i + b_a),   a_t = Σ_i α_{ti} h_i    (6)

where W_a is a transform matrix and b_a is a bias.
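The attention step can be sketched as follows. The transform matrix and bias follow the description above, while the bilinear form of the score function is our assumption about the elided equation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_context(s_t, H, W, b):
    """Score each encoder state h_i against the decoder state s_t with a
    bilinear form (our assumed score function), normalize with softmax,
    and return the weighted sum of encoder states as the context vector."""
    scores = np.array([s_t @ W @ h_i + b for h_i in H])
    alpha = softmax(scores)
    return alpha @ H, alpha
```

The attention weights form a distribution over input positions, and the context vector lives in the encoder-state space.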

Copying Mechanism: If the decoder at time step t decides to copy a token from the preceding tokens, as shown in Fig. 3, the probability of y_t being a copy of the preceding token y_j is defined as

P_copy(c_t = y_j) = exp(score(y_j, y_t)) / Σ_{y′ ∈ Y(t)} exp(score(y′, y_t))    (7)

where Y(t) = {ε, y_1, …, y_{t−1}} is the set of possible coreference assignments for c_t defined in Section 5, and score(y_j, y_t) is a pairwise score for a coreference link between y_j and y_t. There are three terms in this pairwise coreference score, akin to Lee et al. (2017): (1) whether y_t should be a copy of a preceding token, (2) whether y_j should be a candidate source of such a copy, and (3) whether y_j is an antecedent of y_t:

score(y_j, y_t) = s_c(y_t) + s_s(y_j) + s_a(y_j, y_t),   score(ε, y_t) = 0    (8)

Here s_c(y_t) is a unary score for y_t being a copy of a preceding token, s_s(y_j) is a unary score for y_j being a candidate source of such a copy, and s_a(y_j, y_t) is a pairwise score for y_j being an antecedent of y_t.

Figure 4: Coreference scoring architecture in the copy mechanism, between a preceding token y_j and the currently considered token y_t.

Fig. 4 shows the details of the scoring architecture in the copy mechanism. At the core of the three factors are vector representations r_t for each token y_t, described in detail in the following section. Given the currently considered token y_t and a preceding token y_j, the scoring functions above are computed via standard feed-forward neural networks:

s_c(y_t) = w_c · ffnn_c(r_t),   s_s(y_j) = w_s · ffnn_s(r_j),   s_a(y_j, y_t) = w_a · ffnn_a([r_j, r_t, r_j ∘ r_t])    (9)

where · denotes the dot product, ∘ denotes element-wise multiplication, and ffnn denotes a two-layer feed-forward neural network over its input. The input of ffnn_a is a concatenation of the vector representations r_j and r_t, and their explicit element-wise similarity r_j ∘ r_t.
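A sketch of the copy distribution over coreference assignments, with the dummy antecedent fixed at score 0 and the three-term pairwise score from the text applied to token representations r (defined in the following section). Parameter shapes and the tanh nonlinearity are our illustrative choices.

```python
import numpy as np

def ffnn(Ws, x):
    """Two-layer feed-forward network: tanh hidden layer, linear output
    projected to a scalar (an illustrative stand-in for the paper's ffnn)."""
    W1, W2 = Ws
    return W2 @ np.tanh(W1 @ x)

def copy_distribution(r_t, preceding, nets):
    """Softmax over coreference assignments for the current token: the dummy
    antecedent epsilon scores 0, and each preceding token y_j scores
    s_c(y_t) + s_s(y_j) + s_a(y_j, y_t)."""
    net_c, net_s, net_a = nets
    scores = [0.0]                                 # dummy antecedent epsilon
    s_c = float(ffnn(net_c, r_t))                  # y_t wants to be a copy
    for r_j in preceding:
        s_s = float(ffnn(net_s, r_j))              # y_j is a candidate source
        pair = np.concatenate([r_j, r_t, r_j * r_t])
        s_a = float(ffnn(net_a, pair))             # y_j is an antecedent of y_t
        scores.append(s_c + s_s + s_a)
    scores = np.array(scores)
    e = np.exp(scores - scores.max())
    return e / e.sum()                             # index 0 = "generate"
```

Index 0 of the returned distribution corresponds to the dummy antecedent, i.e., the decision to generate rather than copy.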

Token representations: To accurately predict coreference scores, we consider three types of information in each token representation r_t: (1) the token itself, y_t; (2) on the decoder side, the preceding context y_{<t}; and (3) on the encoder side, the input sequence x.

The lexical information of the token itself is represented by its word embedding y_t. The preceding context is encoded by the decoder rnn in Equation 5; we use the decoder hidden state s_t to represent the preceding context information.

The encoder-side context is also crucial to predicting coreference: if y_j and y_t attend to the same context on the encoder side, they are likely to refer to the same entity. Therefore, we use a context vector a′_t, computed by an attention mechanism, to represent the encoder-side context information for y_t:

a′_t = Σ_i α′_{ti} h_i,   α′_{ti} ∝ exp(s_t^T W′ h_i)    (10)

where h_i is the encoder hidden state computed by Equation 1, s_t is the decoder hidden state computed by Equation 5, and W′ is a transform matrix.

All of the above information is concatenated to produce the final token representation r_t:

r_t = [y_t ; s_t ; a′_t]    (11)


6.3 Learning

In the training objective, we consider both the copying accuracy and the generating accuracy. Given the input sentence x, the output sequence of tokens y, and the coreference assignments c, the objective is to minimize the negative log-likelihood:

L = − Σ_t [ log P_gen(y_t) + λ log P_copy(c_t) ]    (12)

To increase the convergence rate, we pretrain the model by setting λ = 0, optimizing only the generating accuracy. After the model converges, we set λ back to 1 and continue training. Since most tokens in the output are not copied from preceding tokens, and are therefore assigned the dummy antecedent ε, training of the copy mechanism is heavily imbalanced. To alleviate this imbalance, we only consider coreference assignments of syntactic head tokens during optimization.
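A sketch of the weighted objective as we read it, with lambda gating the copy term (0 during pretraining, 1 afterwards). This lambda weighting is our interpretation of the schedule described above, not necessarily the paper's exact equation.

```python
import numpy as np

def nll(gen_probs, copy_probs, lam):
    """Negative log-likelihood with the copy term weighted by lam:
    pretraining uses lam = 0 (generation only); training then continues
    with lam = 1. Inputs are the model probabilities assigned to the gold
    output tokens and the gold coreference assignments."""
    gen_probs = np.asarray(gen_probs)    # P_gen of each gold output token
    copy_probs = np.asarray(copy_probs)  # P_copy of each gold assignment
    return -(np.log(gen_probs).sum() + lam * np.log(copy_probs).sum())
```

With lam = 0 the copy term vanishes, so the pretraining phase reduces to an ordinary sequence-to-sequence likelihood.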

7 Experiments

We now describe the evaluation data, baselines, and experimental results. Hyperparameter settings are reported in the supplemental material.

Data: We choose Chinese as the source language and English as the target language. For testing, we sampled 2,258 sentences from the Universal Dependencies (UD) English Treebank Silveira et al. (2014), which is taken from five genres of web media: weblogs, newsgroups, emails, reviews, and Yahoo answers. We then created PredPatt meaning representations for these sentences based on the gold UD annotations. Meanwhile, the Chinese translations of these sentences were created by crowdworkers on Amazon Mechanical Turk. The test dataset will be released upon publication. For training, we first collected about 1.8M Chinese-English bitext sentence pairs from the GALE project Cohen (2007), then tokenized the Chinese sentences with the Stanford Word Segmenter Chang et al. (2008). We created PredPatt meaning representations for the English sentences based on automatic UD annotations generated by the SyntaxNet parser Andor et al. (2016). We held out 10K training sentences for validation. The dataset statistics are reported in Table 1.

            No. sentences   Source
Train       1,889,172       GALE
Validation  10,000          GALE
Test        2,258           UD Treebank
Table 1: Statistics of the evaluation data.
                   MUC                    B³                     CEAF                  Avg. F
                   Prec.  Rec.   F       Prec.  Rec.   F       Prec.  Rec.   F
Seq2Seq+Copy       95.67  96.43  96.05   98.83  99.14  98.99   98.54  98.32  98.43   97.82
Seq2Seq+Heuristic  85.67  51.45  64.29   97.94  88.23  92.83   84.24  93.68  88.71   81.94
Seq2Seq+Random     10.44  31.13  15.63   37.91  83.89  52.22   62.37  27.69  38.36   35.40
Table 2: Evaluation of coreference results on the test dataset. We force the model to decode the gold target sequence, and only evaluate the algorithms for resolving coreference. The Avg. F is computed by averaging the F scores of MUC, B³, and CEAF.
                   Prec.  Rec.   F
Seq2Seq+Copy       49.72  31.25  38.38
Seq2Seq+Heuristic  46.76  30.88  37.20
Seq2Seq+Random     42.86  30.74  35.80
Pipeline           28.50  20.65  23.95
Table 3: Precision, recall, and F scores under the evaluation metric on the test dataset.

Comparisons: We evaluate four approaches in the experiments: (1) Seq2Seq+Copy is our proposed approach, described in Section 6. (2) Seq2Seq+Heuristic preprocesses the data by replacing each coreference placeholder with the syntactic head of its antecedent. During training and testing, it replaces the copying mechanism with a heuristic that resolves coreference by randomly choosing an antecedent among preceding arguments that share the same syntactic head. (3) Seq2Seq+Random replaces the copying mechanism by randomly choosing an antecedent among all preceding arguments. (4) We also include a Pipeline approach in which Chinese sentences are first translated into English by a neural machine translation system Klein et al. (2017) and then annotated by a UD parser Andor et al. (2016). The final meaning representations of Pipeline are created based on the automatic UD annotations.

Results: Table 3 reports the test set results under the evaluation metric defined in Section 4. Overall, our proposed approach, Seq2Seq+Copy, achieves the best precision, recall, and F. The two baselines, Seq2Seq+Heuristic and Seq2Seq+Random, also achieve reasonable results. These two baselines both employ sequence-to-sequence models to predict meaning representations, and can be considered replicas of state-of-the-art approaches for structured prediction Vinyals et al. (2015); Barzdins and Gosko (2016); Peng et al. (2017). Our proposed approach outperforms these two strong baselines: the copying mechanism, aimed at resolving coreference in cross-lingual semantic parsing, yields gains in both precision and recall. The detailed gains achieved by the copying mechanism are discussed in the following section. In the Pipeline approach, each component is trained independently; during testing, residual errors from each component propagate through the pipeline. As expected, Pipeline shows a significant performance drop compared to the end-to-end learning approaches.

Coreference occurs 5,755 times in the test data. To evaluate the coreference accuracy of the end-to-end learning approaches (Pipeline predicts PredPatt meaning representations from automatic UD annotations, so its coreference accuracy cannot be evaluated separately), we force each approach to generate the gold target sequence and only predict coreference via the copying mechanism, the Heuristic baseline, or the Random baseline. We report the precision, recall, and F for the standard MUC, B³, and CEAF metrics using the official coreference scorer of the CoNLL-2011/2012 shared tasks Pradhan et al. (2014).

Table 2 shows the evaluation results. Since coreference in our setup occurs at the sentence level, the proposed copying mechanism achieves high scores in all three metrics, with an average F of 97.82. The heuristic baseline, which resolves coreference based only on syntactic heads, also achieves a relatively high average F of 81.94. Under the MUC metric, the copying mechanism performs significantly better than the heuristic baseline. The random baseline limits the choice of coreference to preceding syntactic heads, ignoring all other tokens, and achieves scores much lower than the other two approaches in all three metrics.

8 Conclusions

We introduce the task of cross-lingual semantic parsing, which maps content provided in a source language into a meaning representation based on a target language. We present the PredPatt meaning representation as the target semantic interface, an evaluation metric for comparing meaning representations, and a Chinese-English cross-lingual semantic parsing dataset. We propose an end-to-end learning approach with a copying mechanism that outperforms two strong baselines on this task. The meaning representation, evaluation metric, and evaluation dataset provided in this work should benefit the growing interest in meaning representations and cross-lingual applications.


  • Abend and Rappoport (2013) Omri Abend and Ari Rappoport. 2013. Universal conceptual cognitive annotation (UCCA). In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 228–238, Sofia, Bulgaria. Association for Computational Linguistics.
  • Abend and Rappoport (2017) Omri Abend and Ari Rappoport. 2017. The state of the art in semantic representation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 77–89, Vancouver, Canada. Association for Computational Linguistics.
  • Andor et al. (2016) Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2442–2452, Berlin, Germany. Association for Computational Linguistics.
  • Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
  • Baldridge and Kruijff (2002) Jason Baldridge and Geert-Jan Kruijff. 2002. Coupling CCG and hybrid logic dependency semantics. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 319–326, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
  • Banarescu et al. (2013) Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186.
  • Barzdins and Gosko (2016) Guntis Barzdins and Didzis Gosko. 2016. Riga at SemEval-2016 Task 8: Impact of Smatch extensions and character-level neural translation on AMR parsing accuracy. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1143–1147, San Diego, California. Association for Computational Linguistics.
  • Basile et al. (2012) Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. Developing a large semantically annotated corpus. In LREC 2012, Eighth International Conference on Language Resources and Evaluation.
  • Bos et al. (2004) Johan Bos, Stephen Clark, Mark Steedman, James R. Curran, and Julia Hockenmaier. 2004. Wide-coverage semantic representations from a CCG parser. In Proceedings of the 20th International Conference on Computational Linguistics, COLING ’04, Stroudsburg, PA, USA. Association for Computational Linguistics.
  • Bos et al. (2017) Johan Bos, Kilian Evang, Johannes Bjerva, Lasha Abzianidze, Hessel Haagsma, Rik van Noord, Pierre Ludmann, and Duc-Duy Nguyen. 2017. The parallel meaning bank: Towards a multilingual corpus of translations annotated with compositional meaning representations. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 242–247.
  • Cai and Knight (2013) Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics.
  • Chang et al. (2008) Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 224–232, Columbus, Ohio. Association for Computational Linguistics.
  • Cohen (2007) Jordan Cohen. 2007. The GALE project: A description and an update. In 2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), page 237. IEEE.
  • Copestake (2007) Ann Copestake. 2007. Semantic composition with (robust) minimal recursion semantics. In Proceedings of the Workshop on Deep Linguistic Processing, pages 73–80. Association for Computational Linguistics.
  • Copestake (2009) Ann Copestake. 2009. Invited Talk: slacker semantics: Why superficiality, dependency and avoidance of commitment can be the right way to go. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 1–9, Athens, Greece. Association for Computational Linguistics.
  • Copestake et al. (1995) Ann Copestake, Dan Flickinger, Rob Malouf, Susanne Riehemann, and Ivan Sag. 1995. Translation using minimal recursion semantics. In Proceedings of the Sixth International Conference on Theoretical and Methodological Issues in Machine Translation.
  • Copestake et al. (2005) Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation, 3(2-3):281–332.
  • Dong and Lapata (2016) Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33–43, Berlin, Germany. Association for Computational Linguistics.
  • Dorr and Habash (2002) Bonnie Dorr and Nizar Habash. 2002. Interlingua approximation: A generation-heavy approach. In Proceedings of AMTA-2002. University of Chicago Press.
  • Dowty (1989) David R. Dowty. 1989. On the semantic content of the notion of ‘thematic role’. In Properties, Types and Meaning, pages 69–129. Springer.
  • Evang and Bos (2016) Kilian Evang and Johan Bos. 2016. Cross-lingual learning of an open-domain semantic parser. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 579–588. The COLING 2016 Organizing Committee.
  • Faruqui and Kumar (2015) Manaal Faruqui and Shankar Kumar. 2015. Multilingual open relation extraction using cross-lingual projection. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1351–1356, Denver, Colorado. Association for Computational Linguistics.
  • Fung and Chen (2004) Pascale Fung and Benfeng Chen. 2004. BiFrameNet: Bilingual frame semantics resources construction by cross-lingual induction. In Proceedings of the 20th International Conference on Computational Linguistics, pages 931–935.
  • Ganchev et al. (2009) Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 369–377, Suntec, Singapore. Association for Computational Linguistics.
  • Heim (1988) Irene Heim. 1988. The Semantics of Definite and Indefinite Noun Phrases. Ph.D. dissertation. New York and London.
  • Hobbs (2003) Jerry R. Hobbs. 2003. Discourse and Inference.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
  • Hwang and Schubert (1994) Chung Hee Hwang and Lenhart K. Schubert. 1994. Interpreting tense, aspect and time adverbials: A compositional, unified approach. In Temporal Logic, pages 238–264, Berlin, Heidelberg. Springer Berlin Heidelberg.
  • Jia and Liang (2016) Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22, Berlin, Germany. Association for Computational Linguistics.
  • Kamp (1981) Hans Kamp. 1981. A theory of truth and semantic representation. In J. A. G. Groenendijk, T. M. V. Janssen, and M. B. J. Stokhof, editors, Formal Methods in the Study of Language, pages 277–322.
  • Klein et al. (2017) Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint.
  • Lee et al. (2017) Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics.
  • Luong et al. (2015) Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics.
  • McDonald et al. (2011) Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 62–72, Edinburgh, Scotland, UK. Association for Computational Linguistics.
  • Mitamura et al. (1991) Teruko Mitamura, Eric H. Nyberg, and Jaime G. Carbonell. 1991. An efficient interlingua translation system for multi-lingual document production. In Proceedings of Machine Translation Summit III, pages 2–4.
  • Naseem et al. (2012) Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 629–637, Jeju Island, Korea. Association for Computational Linguistics.
  • Padó and Lapata (2005) Sebastian Padó and Mirella Lapata. 2005. Cross-linguistic projection of role-semantic information. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 859–866, Stroudsburg, PA, USA. Association for Computational Linguistics.
  • Padó and Lapata (2009) Sebastian Padó and Mirella Lapata. 2009. Cross-lingual annotation projection of semantic roles. J. Artif. Int. Res., 36(1):307–340.
  • Palmer et al. (2005) Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics, 31(1):71–106.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics.
  • Parsons (1990) Terence Parsons. 1990. Events in the Semantics of English, volume 5. Cambridge, MA: MIT Press.
  • Peng et al. (2017) Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the data sparsity issue in neural AMR parsing. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 366–375, Valencia, Spain. Association for Computational Linguistics.
  • Pradhan et al. (2014) Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 30–35, Baltimore, Maryland. Association for Computational Linguistics.
  • Reddy et al. (2017) Siva Reddy, Oscar Täckström, Slav Petrov, Mark Steedman, and Mirella Lapata. 2017. Universal semantic parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 89–101, Copenhagen, Denmark. Association for Computational Linguistics.
  • Schubert (2014) Lenhart Schubert. 2014. From treebank parses to episodic logic and commonsense inference. In Proceedings of the ACL 2014 Workshop on Semantic Parsing, pages 55–60, Baltimore, MD. Association for Computational Linguistics.
  • Schubert (2015) Lenhart Schubert. 2015. Semantic representation. In Proceedings of the AAAI Conference on Artificial Intelligence.
  • Schubert (2000) Lenhart K. Schubert. 2000. The situations we talk about. In Logic-Based Artificial Intelligence, pages 407–439. Springer.
  • Schubert and Hwang (2000) Lenhart K. Schubert and Chung Hee Hwang. 2000. Episodic Logic meets Little Red Riding Hood: A comprehensive natural representation for language understanding. In Natural Language Processing and Knowledge Representation, pages 111–174. MIT Press, Cambridge, MA, USA.
  • Schuster and Paliwal (1997) Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681.
  • Silveira et al. (2014) Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014).
  • Steedman (2000) Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA, USA.
  • Strassel and Tracey (2016) Stephanie Strassel and Jennifer Tracey. 2016. Lorelei language packs: Data, tools, and resources for technology development in low resource languages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA).
  • Sudo et al. (2004) Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2004. Cross-lingual information extraction system evaluation. In Proceedings of the 20th International Conference on Computational Linguistics, page 882.
  • Tang and Mooney (2001) Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In European Conference on Machine Learning, pages 466–477. Springer.
  • Vinyals et al. (2015) Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773–2781.
  • Wang and Manning (2014) Mengqiu Wang and Christopher D. Manning. 2014. Cross-lingual projected expectation regularization for weakly supervised learning. Transactions of the Association for Computational Linguistics, 2:55–66.
  • White et al. (2016) Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1713–1723, Austin, Texas. Association for Computational Linguistics.
  • Yarowsky et al. (2001) David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research, HLT ’01, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics.
  • Zeman and Resnik (2008) Daniel Zeman and Philip Resnik. 2008. Cross-language parser adaptation between related languages. In Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages.
  • Zettlemoyer and Collins (2005) Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI’05, pages 658–666, Arlington, Virginia, United States. AUAI Press.
  • Zhang et al. (2017a) Sheng Zhang, Kevin Duh, and Benjamin Van Durme. 2017a. MT/IE: Cross-lingual Open Information Extraction with Neural Sequence-to-Sequence Models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 64–70. Association for Computational Linguistics.
  • Zhang et al. (2017b) Sheng Zhang, Rachel Rudinger, and Ben Van Durme. 2017b. An Evaluation of PredPatt and Open IE via Stage 1 Semantic Role Labeling. In Proceedings of the 12th International Conference on Computational Semantics (IWCS), Montpellier, France.