Separating Argument Structure from Logical Structure in AMR

08/04/2019
Johan Bos, University of Groningen

The AMR (Abstract Meaning Representation) formalism for representing the meaning of natural language sentences was not designed to deal with scope and quantifiers. By extending AMR with indices for contexts and formulating constraints on these contexts, a formalism is derived that makes correct predictions for inferences involving negation and bound variables. The attractive core predicate-argument structure of AMR is preserved. The resulting framework is similar to that of Discourse Representation Theory.


1 Introduction

Abstract Meaning Representation (AMR) puts emphasis on argument structure. In these notes I make a proposal to extend AMRs with a logical dimension, in order to correctly capture negation, quantification, and presuppositional phenomena from a formal semantics point of view. It is desirable to investigate such an extension, because (i) it would make a comparison of AMR with other semantic formalisms possible (in particular Discourse Representation Theory); (ii) it would make AMR suitable for performing logical inferences; and (iii) it would be an important step in sharing resources for semantic parsing. The aim is to do this in such a way that existing AMR annotations can be extended relatively easily.

This is not the first proposal for extending AMR to handle scope phenomena (Pustejovsky et al., 2019). I think the contribution of this paper is that this extension is simpler in nature than those proposed in Bos (2016) and Stabler (2017). It bears similarities with the named graphs employed by Crouch and Kalouli (2018).

The original AMRs cannot be used directly for drawing valid inferences. Using the simple conjunction elimination rule (if the conjunction A and B is true, then A is true, and B is true), and assuming that an AMR is interpreted as a conjunction of clauses, AMR will make the right predictions as long as no negation is involved (e.g., it will yield the correct inference “Mary left” from “Mary left yesterday”). But since, in AMR, negation is represented as a predicate rather than an operator that takes scope, it will make wrong predictions for negated sentences (e.g., it will allow the inference “Mary left” from “Mary did not leave”). This is why AMRs need some reformulation before interpretation, and that is exactly what Bos (2016) and Stabler (2017) propose.
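To make the problem concrete, here is a minimal sketch (mine, not part of any of the cited proposals) that reads an AMR as a set of clauses and tests entailment by conjunction elimination; the triple encoding and names are illustrative only:

# A premise entails a hypothesis if the hypothesis' clauses are a
# subset of the premise's clauses (conjunction elimination).
def entails(premise, hypothesis):
    return hypothesis <= premise

# "Mary left yesterday" entails "Mary left": correct.
left_yesterday = {("e", "instance", "leave-01"), ("e", ":ARG0", "x"),
                  ("x", "instance", "Mary"), ("e", ":time", "yesterday")}
left = {("e", "instance", "leave-01"), ("e", ":ARG0", "x"),
        ("x", "instance", "Mary")}
print(entails(left_yesterday, left))   # True

# "Mary did not leave": negation is just another clause, so the same
# subset test still licenses "Mary left" -- the wrong prediction.
did_not_leave = left | {("e", ":polarity", "-")}
print(entails(did_not_leave, left))    # True, but should be False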

Pustejovsky et al. (2019) extend AMR with the possibility of adding explicit scope relations. This extension, however, doesn’t solve a fundamental problem that AMR faces, namely the interpretation of bound variables in quantification. Consider examples such as “every man shaved himself” or “all women want to swim”. In AMR, where quantifiers are expressed as a predicate rather than taking scope, the resulting interpretations for these sentences could be paraphrased as “every man shaved every man” and “all women want all women to swim”, which are not the meanings that the sentences express.

I argue that if we want to fix these problems, AMRs require explicit scope in their representations, following Bos and Abzianidze (2019). In this short paper I propose a method to do this by keeping the underlying predicate-argument structure and adding a second, logical layer (Section 2). In Section 3, I show a list of examples of extended AMRs that demonstrate the approach. Some loose ends are discussed in Section 4.

2 Method

The idea is to extend the AMR with logical structure, by viewing an AMR as having two components: one comprising basic predicate-argument structure (the original AMR), and one consisting of the logical structure (information about logical operators such as negation and the scope they take). This is achieved by:

  1. Viewing AMR as a recursive structure, rather than interpreting it as a graph;

  2. Labelling each (sub-)AMR with an index;

  3. Adding constraints to the indices.

AMRs can be seen as a recursive structure by viewing every slash within an AMR as introducing a sub-AMR. If a (sub-)AMR contains relations, those relations will introduce nested AMRs. A constant is also an AMR in this definition (see Bos, 2016, for details). An AMR (and all its sub-AMRs) will be labeled by decorating the slashes with indices.

Every AMR is augmented by a set of scoping constraints on the labels. This way, a sub-AMR can be viewed as describing a “context”. The constraints state how the contexts relate to each other. They can be the same contexts (=), a negated context (¬), a conditional context (→), or a presuppositional context (≤). Colons are used to denote inclusion: k:¬k′ states that context k contains the condition ¬k′. These labels are similar in spirit to those used in underspecification formalisms as proposed by Reyle (1993), Bos (1996), and Copestake et al. (2005). The treatment of presuppositions is inspired by Venhuizen et al. (2013) and Venhuizen et al. (2018).
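As an illustration, an indexed AMR and its constraints can be captured in a small data structure. The following Python sketch uses my own, hypothetical encoding of the constraint language; it is not part of the proposal itself:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SubAMR:
    var: str        # instance variable, e.g. "e"
    index: int      # context index decorating the slash, e.g. /1/
    concept: str    # predicate, e.g. "smile.v.01"
    relations: List[Tuple[str, "SubAMR"]] = field(default_factory=list)

# Constraints over context indices, encoded as tuples:
#   ("=", k, k')         same context
#   ("<=", k, k')        k is presupposed with respect to k'
#   ("not", k, k')       k:¬k'    (k contains the negation of k')
#   ("if", k, k', k'')   k:k'→k'' (k contains the conditional k'→k'')

# "A woman didn't smile."  (e /1/ smile.v.01 :Agent (x /2/ woman))  2:¬1
amr = SubAMR("e", 1, "smile.v.01", [(":Agent", SubAMR("x", 2, "woman"))])
constraints = [("not", 2, 1)]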

3 Results

Here I illustrate the idea with canonical examples involving existential and universal quantification, definite descriptions, proper names, and negation.

3.1 Existential Quantification

Consider the AMR for “A dog scared a cat.” with a transitive verb and two indefinite noun phrases, where we can identify three sub-AMRs. We index and constrain them as follows:

(e /1/ scare.v.01
        :Stimulus (x /2/ dog.n.01)
        :Experiencer (y /3/ cat.n.01)) 2=1, 3=1

Here there is just one context, shared by all three sub-AMRs. As an equivalent alternative, the following simplified AMR is also possible:

(e /1/ scare.v.01
        :Stimulus (x /1/ dog.n.01)
        :Experiencer (y /1/ cat.n.01))

3.2 Definite Description and Proper Names

A sentence like “The woman smiled.” contains a definite description triggering an existential presupposition. Presuppositions yield new contexts:

(e /1/ smile.v.01
        :Agent (x /2/ woman)) 2≤1

In other words, the definite article triggers a presupposition that there is a woman (2) with respect to context (1). Proper names can be handled similarly:

“John smiled.”

(e /1/ smile.v.01
        :Agent (x /2/ person
                              :Name (y /2/ john))) 2≤1

The existence of a person named “John” is a presupposition for the smiling event.

3.3 Negation

“A woman didn’t smile.”

(e /1/ smile.v.01
        :Agent (x /2/ woman)) 2:¬1

“The woman didn’t smile.”

(e /1/ smile.v.01
        :Agent (x /3/ woman)) 3≤2, 2:¬1

Negation introduces a new (negated) context. This makes the :polarity - relation in AMR obsolete.

3.4 Universal Quantification

“Everyone smiled.”

(e /1/ smile.v.01
        :Agent (x /2/ person)) 3:2→1

“A dog scared every cat.”

(e /1/ scare.v.01
        :Stimulus (x /1/ dog.n.01)
        :Experiencer (y /2/ cat.n.01)) 3:2→1

“Every dog scared every cat.”

(e /1/ scare.v.01
        :Stimulus (x /2/ dog)
        :Experiencer (y /3/ cat)) 5:3→4, 4:2→1

“Every boy revised his paper.”

(e /1/ revise
        :Agent (x /2/ boy)
        :Patient (y /3/ paper
                                 :Creator x)) 2≤3, 3≤1, 4:2→1

4 Discussion

In this section I discuss some loose ends: the issue of inferred labels, the annotation work required to implement the approach, the link with Discourse Representation Theory (Kamp and Reyle, 1993), and the conversion to triples.

4.1 Inferred Labels

In the current proposal, negation and conditionals introduce new indices that are not shown in the AMR itself. These are necessary to ensure a well-formed logical structure. But they are (perhaps) not intuitive, and therefore harder for annotators to supply. It would be useful to investigate whether these labels can be inferred, in such a way that constraints with colons (for negation and conditionals) could be simplified.

4.2 Annotation Work

Existing AMR annotations can be monotonically extended: all “slashes” that occur in AMRs need to be indexed, and constraints need to be added. Given an annotated AMR corpus, this can be done semi-automatically: first add indices automatically (by replacing "/" by "/1/"). Then manually correct cases of negation (search for ":polarity -"), universal quantification, and definite descriptions, for a sample of the corpus. Use machine learning to annotate the rest (or hire or persuade human annotators to do the job).
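The automatic first pass is essentially a one-line rewrite. A sketch, assuming the Penman-style notation used above, where every slash is followed by a space:

import re

amr_text = "(e / smile.v.01 :polarity - :Agent (x / woman))"

# Give every slash the default index 1; negation, universals, and
# definite descriptions then need manual correction.
indexed = re.sub(r"/ ", "/1/ ", amr_text)
print(indexed)  # (e /1/ smile.v.01 :polarity - :Agent (x /1/ woman))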

4.3 From AMR to DRS

A simple way of converting labelled AMRs to DRSs (Discourse Representation Structures) is as follows: replace each (sub-)AMR by a DRS (this DRS contains exactly one discourse referent, one one-place predicate, and zero or more two-place relations). Then merge all DRSs that are indexed with the same index. Then, following the structure expressed by the constraints, deal with negation, universal quantification, and presuppositions. The result is a semantic representation that is nearly equivalent to DRS in terms of expressive power (nearly equivalent, because a DRS doesn’t need to contain a discourse referent, whereas an AMR always contains one). The appendix contains some example translations.
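The first two steps of this conversion are mechanical. A rough sketch, reusing the hypothetical SubAMR encoding from Section 2 (my own simplification; the scope-resolving third step is omitted):

def subtree(sub):
    """Enumerate a sub-AMR and all sub-AMRs nested inside it."""
    yield sub
    for _, child in sub.relations:
        yield from subtree(child)

def to_drs(sub):
    """One discourse referent, one one-place predicate, n two-place relations."""
    conds = ["%s(%s)" % (sub.concept, sub.var)]
    for role, child in sub.relations:
        conds.append("%s(%s,%s)" % (role.lstrip(":"), sub.var, child.var))
    return ({sub.var}, conds)

def merge_by_index(amr):
    """Merge the DRSs of all sub-AMRs that share a context index."""
    contexts = {}
    for sub in subtree(amr):
        refs, conds = to_drs(sub)
        old_refs, old_conds = contexts.get(sub.index, (set(), []))
        contexts[sub.index] = (old_refs | refs, old_conds + conds)
    return contexts

# For the "A woman didn't smile." AMR above, this yields:
#   {1: ({'e'}, ['smile.v.01(e)', 'Agent(e,x)']), 2: ({'x'}, ['woman(x)'])}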

4.4 Triple Format of Logical Structure

AMRs are converted to sets of triples for evaluation purposes. Therefore, a sensible question to ask is how the logical constraints in this proposal are converted to triples. The indexed sub-AMRs each introduce a triple linking an instance with a scope index. Each scoping constraint introduces one or more triples: constraints that involve two indices introduce a single triple, and constraints that involve three indices introduce two triples. Here is an example:

“Nobody smiled.”

(e /1/ smile.v.01
        :Agent (x /2/ person)) 4:2→3, 3:¬1

This will introduce the membership triples ⟨e, IN, 1⟩ and ⟨x, IN, 2⟩, the triple for negation ⟨3, NOT, 1⟩, and the triples for the implication ⟨4, IF, 2⟩ and ⟨2, THEN, 3⟩.
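Under the same hypothetical constraint encoding as in the earlier sketches, this mapping is straightforward; IN, NOT, IF, and THEN are the labels from the example above, while the label for two-index constraints is my own placeholder, since the paper does not show one:

def constraint_triples(constraints):
    triples = []
    for c in constraints:
        if c[0] == "not":            # k:¬k'    ->  <k, NOT, k'>
            triples.append((c[1], "NOT", c[2]))
        elif c[0] == "if":           # k:k'→k'' ->  <k, IF, k'>, <k', THEN, k''>
            triples.append((c[1], "IF", c[2]))
            triples.append((c[2], "THEN", c[3]))
        elif c[0] in ("=", "<="):    # two-index constraints: a single triple
            label = "EQ" if c[0] == "=" else "PRE"   # placeholder labels
            triples.append((c[1], label, c[2]))
    return triples

# "Nobody smiled.": 4:2→3, 3:¬1
print(constraint_triples([("if", 4, 2, 3), ("not", 3, 1)]))
# [(4, 'IF', 2), (2, 'THEN', 3), (3, 'NOT', 1)]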

5 Conclusion

The original AMR notation can be extended by a layer of logical structure that gives a correct interpretation of linguistic phenomena that require scope or quantification. The resulting framework bears strong similarities with Discourse Representation Theory.

References

  • Bos (1996) Bos, J. (1996). Predicate Logic Unplugged. In P. Dekker and M. Stokhof (Eds.), Proceedings of the Tenth Amsterdam Colloquium, ILLC/Dept. of Philosophy, University of Amsterdam, pp. 133–143.
  • Bos (2016) Bos, J. (2016). Expressive power of abstract meaning representations. Computational Linguistics 42(3), 527–535.
  • Bos and Abzianidze (2019) Bos, J. and L. Abzianidze (2019). Thirty musts for meaning banking. In Proceedings of the First International Workshop on Designing Meaning Representations, Florence, Italy, pp. 15–27. Association for Computational Linguistics.
  • Copestake et al. (2005) Copestake, A., D. Flickinger, I. Sag, and C. Pollard (2005). Minimal recursion semantics: An introduction. Journal of Research on Language and Computation 3(2–3), 281–332.
  • Crouch and Kalouli (2018) Crouch, D. and A.-L. Kalouli (2018). Named graphs for semantic representations. In The Seventh Joint Conference on Lexical and Computational Semantics (*SEM 2018), New Orleans, pp. 113–118.
  • Kamp and Reyle (1993) Kamp, H. and U. Reyle (1993). From Discourse to Logic; An Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and DRT. Dordrecht: Kluwer.
  • Pustejovsky et al. (2019) Pustejovsky, J., K. Lai, and N. Xue (2019, August). Modeling quantification and scope in abstract meaning representations. In Proceedings of the First International Workshop on Designing Meaning Representations, Florence, Italy, pp. 28–33. Association for Computational Linguistics.
  • Reyle (1993) Reyle, U. (1993). Dealing with Ambiguities by Underspecification: Construction, Representation and Deduction. Journal of Semantics 10, 123–179.
  • Stabler (2017) Stabler, E. (2017). Reforming AMR. Formal Grammar 2017. Lecture Notes in Computer Science 10686, 72–87.
  • Venhuizen et al. (2018) Venhuizen, N., J. Bos, P. Hendriks, and H. Brouwer (2018). Discourse semantics with information structure. Journal of Semantics 35(1), 127–169.
  • Venhuizen et al. (2013) Venhuizen, N. J., J. Bos, and H. Brouwer (2013, March). Parsimonious semantic representations with projection pointers. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) – Long Papers, Potsdam, Germany, pp. 252–263. Association for Computational Linguistics.

Appendix A Example Translations
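In the translations below, DRSs are written in linear notation: [x,y | c1, c2] is a DRS with discourse referents x and y and conditions c1 and c2; ⊕ denotes DRS merge, ⇒ implication between DRSs, and ¬ negation.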

A.1 “A dog scared every cat.”

(e /1/ scare.v.01
        :Stimulus (x /1/ dog.n.01)
        :Experiencer (y /2/ cat.n.01)) 3:2→1

(1) = [e | scare.v.01(e), Stimulus(e,x), Experiencer(e,y)] ⊕ [x | dog.n.01(x)] = [e,x | scare.v.01(e), Stimulus(e,x), Experiencer(e,y), dog.n.01(x)]

(2) = [y | cat.n.01(y)]

(3) = [ | (2) ⇒ (1)] = [ | [y | cat.n.01(y)] ⇒ [e,x | scare.v.01(e), Stimulus(e,x), Experiencer(e,y), dog.n.01(x)]]

A.2 “Mary didn’t smile.”

(e /1/ smile.v.01
        :Agent (x /2/ person.n.01
                              :Name (n /2/ name.n.01
                                                :Op1 "mary"))) 2≤3, 3:¬1

(1) = [e | smile.v.01(e), Agent(e,x)]

(2) = [x | person.n.01(x), Name(x,n)] ⊕ [n | name.n.01(n), Op1(n,"mary")] = [x,n | person.n.01(x), Name(x,n), name.n.01(n), Op1(n,"mary")]

(3) = [ | ¬(1)] = [ | ¬[e | smile.v.01(e), Agent(e,x)]]