# Event in Compositional Dynamic Semantics

We present a framework which constructs an event-style discourse semantics. The discourse dynamics are encoded in continuation semantics, and various rhetorical relations are embedded in the resulting interpretation of the framework. We assume discourse and sentence are distinct semantic objects that play different roles in meaning evaluation. Moreover, two sets of composition functions, for handling different discourse relations, are introduced. The paper first gives the necessary background and motivation for event and dynamic semantics; then the framework is introduced with detailed examples.


## 1 Event Semantics

The idea of relating verbs to certain events or states can be found throughout the history of philosophy. For example, the simple sentence John cries can be taken to refer to a crying action, in which John is the agent who carries out the action. However, there were no real theoretical foundations for semantic proposals based on events before [5]. In [5], Davidson explained that a certain class of verbs (action verbs) explicitly implies the existence of underlying events; thus there should be an implicit event argument in the linguistic realization of such verbs.

For instance, traditional Montague Semantics provides John cries the interpretation Cry(john), where Cry stands for a 1-place predicate denoting the crying action, and john stands for the individual constant in the model. Davidson's theory assigns the same sentence another interpretation, ∃e.Cry(e, john), where Cry becomes a 2-place predicate taking an event variable and its subject as arguments.

Later on, based on Davidson’s theory of events, Parsons proposed the Neo-Davidsonian event semantics in [19]. In Parsons’ proposal, several modifications were made. First, event participants were added in more detail via thematic roles; second, besides action verbs, state verbs were also associated with an abstract variable; furthermore, the concepts of holding and culmination for events, the decomposition of events into subevents, and the modification of subevents by adverbs were investigated.

As explained in [19], there are various reasons for considering events as implicit arguments in the semantic representation. Three of the most prominent ones are adverbial modifiers, perception verbs, and explicit references to events.

Adverbial modifiers of a natural language sentence usually bear certain logical relations with each other. Two well-known properties are permutation and drop. Take the following sentence as an example:

(1) John buttered the toast slowly, deliberately, in the bathroom, with a knife, at midnight.

Permutation means the truth conditions of the sentence will not change if the order of modifiers is altered, regardless of certain syntactic constraints; drop means that if some modifiers are eliminated from the original context, the new sentence is always logically entailed by the original one. In Parsons' theory, the above sentence is interpreted as:

 ∃e.(Butter(e)∧Ag(e,john)∧Pat(e,toast)∧Slow(e)∧Deliberate(e)∧In(e,bathroom)∧With(e,knife)∧At(e,midnight))

A similar treatment for adjectival modifiers can also be found in the literature. Compared with other semantic proposals, such as increasing the arity of verbs or higher-order logic solutions, the event-based treatment is superior.
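These two properties can be checked mechanically. The following sketch (our own illustration, with hypothetical predicate names) models a Parsons-style interpretation as an unordered set of conjuncts about the event e, from which permutation and drop follow at once:

```python
# Hypothetical Parsons-style conjunct set for the buttering sentence;
# predicate names (Butter, Ag, Pat, ...) are illustrative.
buttering = {
    "Butter(e)", "Ag(e, john)", "Pat(e, toast)",
    "Slow(e)", "Deliberate(e)", "In(e, bathroom)",
    "With(e, knife)", "At(e, midnight)",
}

def entails(premise, conclusion):
    """A conjunction entails any sub-conjunction (conjunction elimination)."""
    return conclusion <= premise

# Permutation: sets are unordered, so reordering the modifiers
# cannot change the interpretation.
assert set(sorted(buttering)) == buttering

# Drop: removing modifiers yields a sentence entailed by the original.
assert entails(buttering, {"Butter(e)", "Ag(e, john)", "Pat(e, toast)"})
print("permutation and drop hold")
```

Note how the arity-increasing alternative would not behave this way: a 5-place Butter predicate would need a separate entailment axiom for every combination of dropped arguments.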

Aside from modifiers, perception verbs form another piece of evidence for applying events in semantic representations. As their name suggests, perception verbs are verbs that express perception, such as see, hear, and feel. The semantics of sentences containing perception verbs is quite different from that of sentences whose sub-clauses are built with a that-construction. For instance, we can interpret see in three different ways (the types "e" and "t" below are the same as in traditional Montague Semantics, while "v" stands for a new type for events):

1. sb. sees sb./sth. (see : e → e → t), e.g., Mary sees John.

2. sb. sees some fact (see : t → e → t), e.g., Mary sees that John flies.

3. sb. sees some event (see : v → e → t), e.g., Mary sees John fly.

As the example shows, the first case just means somebody sees somebody or something. The second indicates that Mary sees a fact, namely the fact that John flies; even if Mary learns it from TV or a newspaper, the sentence is still true. The third sentence, in contrast, is true only if Mary directly perceives the event of John flying with her own eyes.

Furthermore, natural language discourses contain various forms of explicit references (mostly anaphora) to events, for example: John sang on his balcony at midnight. It was horrible.

## 2 Dynamic Semantics & Discourse Relation

### 2.1 Dynamic Semantics

In the 1970s, based on the principle of compositionality, Richard Montague combined First Order Logic, the λ-calculus, and type theory into the first formal natural language semantic system, which could compositionally generate semantic representations. This framework was formalized in [16], [17], and [18]. By convention, it is named Montague Grammar (MG).

However, despite its huge influence in semantic theory, MG was designed to handle single sentence semantics. Later on, some linguistic phenomena, such as anaphora, donkey sentences, and presupposition projection began to draw people’s attention from MG to other approaches, such as dynamic semantics, which has a finer-grained conception of meaning. By way of illustration, we can look at the following “donkey sentence”, which MG fails to explain:

(a) A farmer owns a donkey. He beats it.

(b) * Every farmer owns a donkey. He beats it.

In traditional MG, the meaning of a sentence is represented as its truth conditions, that is, the circumstances in which the sentence is true. In dynamic semantics, however, the meaning of a sentence is its context change potential. In other words, meaning is no longer a static concept; it is viewed as a function that builds new information states out of old ones by updating them with the current sentence. Representative works, which emerged from the 1980s on, include File Change Semantics [10], Discourse Representation Theory (DRT) [12], and Dynamic Predicate Logic (DPL) [6].
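The slogan "meaning as context change potential" can be made concrete with a toy model (ours, not from the cited works): a sentence denotes a function from a context, here a pair of discourse referents and accumulated conditions, to an updated context:

```python
# Toy dynamic meanings: each sentence maps a context to an updated context.
# A context is (referents, conditions); all names are illustrative.

def a_farmer_owns_a_donkey(ctx):
    refs, conds = ctx
    return (refs | {"x", "y"},
            conds + ["farmer(x)", "donkey(y)", "own(x, y)"])

def he_beats_it(ctx):
    refs, conds = ctx
    if not {"x", "y"} <= refs:        # pronouns need accessible referents
        raise ValueError("unresolved anaphora")
    return (refs, conds + ["beat(x, y)"])

ctx = he_beats_it(a_farmer_owns_a_donkey((set(), [])))
print(ctx[1])
# → ['farmer(x)', 'donkey(y)', 'own(x, y)', 'beat(x, y)']
```

The first sentence extends the context with new referents; the second can only be evaluated against a context that already supplies them, which is precisely what a static truth-conditional meaning cannot express.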

### 2.2 A New Approach to Dynamics

Recently, in [8], de Groote introduced a new framework which integrates a notion of context into MG, based only on Church's simply typed λ-calculus. Thus the concept of discourse dynamics can be embedded in traditional MG without the system-specific definitions found in other dynamic frameworks.

In DRT, the problem of extending quantifier scope is tackled by introducing sets of reference markers. These reference markers act both as existential quantifiers and as free variables. Because of their special status, variable renaming is very important when combining DRT with MG. The framework in [8] is superior in this computational respect because variable renaming is already handled by the simply typed λ-calculus. Furthermore, in DRT every new sentence is processed only in the environment of the previous context, whereas [8] proposes to evaluate a sentence with respect to both its left and right contexts, which are abstracted over in its meaning.

In Church's simple type theory, there are only two atomic types: "ι", denoting the type of individuals, and "o", denoting the type of propositions (here we follow the original notation in [8]; there is no great difference between "ι", "o" in Church's notation and "e", "t" in Montague's). The new approach adds one more atomic type, "γ", to express left contexts; thus the notion of dynamic context is realized. Consequently, as the right context can be interpreted as a proposition given its left context, its type should be γ → o. For the same reason, the whole discourse can be interpreted as a proposition given both its left and right contexts. Assuming s and t are respectively the syntactic categories for sentence and discourse, their semantic interpretations are:

 ⟦s⟧=γ→(γ→o)→o,⟦t⟧=γ→(γ→o)→o

In order to conjoin the meanings of sentences to obtain the composed meaning of a discourse, the following formula is proposed:

 ⟦D.S⟧=λeϕ.⟦D⟧e(λe′.⟦S⟧e′ϕ) (1)

in which D is the preceding discourse and S is the sentence currently being processed. The updated right context, λe′.⟦S⟧e′ϕ, possesses the same semantic type as ϕ, namely γ → o, so it retains the potential to update the context. Turning to DRT, if we assume "x1, …, xn" are reference markers and "C1, …, Cm" are conditions, the corresponding λ-term for a general DRS in the new framework should be:

 λeϕ.∃x1⋯xn.(C1∧⋯∧Cm∧ϕe′)

Here "e′" is a left context made of "e" and the variables "x1, x2, x3, ⋯"; its construction depends on the specific structure of the context (for more details see [8]).

To solve the problem of anaphoric reference, [8] introduced a special choice operator, represented by an oracle such as sel. It takes the left context as argument and returns a resolved individual. In order to update the context, another operator "::" is introduced, which adds newly accessible variables to the processed discourse. For instance, the term "x :: e" is mathematically interpreted as the list with head x and tail e. In other words, we can view the list as the discourse referents in DRT.
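Formula 1, the sel oracle, and the "::" constructor can be sketched together in a few lines of Python (a simplification of ours: sentence meanings take a left context and a continuation, contexts are plain lists, and logical formulas are built as strings):

```python
# Sentence meanings have the shape  left_context -> continuation -> formula.

def john_sleeps(e, phi):
    # "j :: e": push the new referent onto the left context.
    return f"sleep(j) ∧ {phi(['j'] + e)}"

def he_snores(e, phi):
    sel = lambda ctx: ctx[0]          # stand-in for de Groote's sel oracle
    return f"snore({sel(e)}) ∧ {phi(e)}"

def compose(D, S):
    """Formula 1:  [[D.S]] = λe φ. [[D]] e (λe'. [[S]] e' φ)."""
    return lambda e, phi: D(e, lambda e2: S(e2, phi))

stop = lambda e: "⊤"                  # trivial right context
print(compose(john_sleeps, he_snores)([], stop))
# → sleep(j) ∧ snore(j) ∧ ⊤
```

The pronoun in the second sentence is resolved against the left context that the first sentence extended, without any variable renaming: the λ-calculus does the bookkeeping, exactly the computational advantage claimed above.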

Finally, let us look at a compositional treatment of the two-sentence discourse below according to the above formalism. The type and semantic interpretation of each lexical entry are presented in the following table:

| Word | Type | Semantic Interpretation |
|-----------|------|-------------------------|
| John/Mary | | |
| she | | |
| kisses | | |
| smiles | | |

(2) John kisses Mary. She smiles.

### 2.3 Discourse Relations & Discourse Structure

Since the emergence of dynamic semantics, researchers have been revising their view of the notion of meaning. On that basis, many researchers working on multiple-sentence semantics have studied an abstract and general concept: discourse structure, in other words rhetorical relations, or coherence relations ([11], [15], [1]). Representative theories include Rhetorical Structure Theory (RST) and Segmented Discourse Representation Theory (SDRT). The idea that discourse has an internal structure comes naturally: intuitively, for a context to appear natural, its constituent sentences should bear a certain coherence with each other, namely discourse relations (DRs). That is also why two random sentences cannot, in general, form a natural context.

It is still an open question to identify all existing DRs, but it is generally agreed that there are two classes: coordinating relations and subordinating relations. The former includes relations like Narration, Background, Result, Parallel, Contrast, etc., while relations such as Elaboration, Topic, Explanation, and Precondition belong to the latter. The distinction between the two classes also has an intuitive basis.

For instance, the function of a sentence relative to its context may be to introduce a new topic or to support and explain an existing one. A sentence of the first kind plays a subordinate role, while sentences of the second kind play coordinate roles together with others that function in a similar way (supporting or explaining). Determining which DRs belong to which class is an even more complicated task; [3] provides some linguistic tests to handle this problem and analyzes some deeper distinctions between the two classes.

The reason we introduced different types of DRs is that we can construct a more specific discourse hierarchy based on them. The hierarchy can aid in the resolution of semantic and pragmatic phenomena such as anaphora. The theoretical foundation of this idea dates back to [20], which states that in a discourse hierarchy, only constituents at accessible nodes can be integrated into the updated discourse structure. By convention, a subordinating DR creates a vertical edge and a coordinating DR a horizontal edge. The accessible nodes are all located on the right frontier of the hierarchy; this is known as the Right Frontier Constraint. For instance, in Figure 1, the nodes on the right frontier stay accessible for further attachments, while the remaining nodes are blocked, which indicates that variables in the blocked nodes cannot be referenced by future anaphora.
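The right frontier update can be sketched as a small procedure (our own simplification of the graph construction): a subordinating attachment extends the frontier downward, while a coordinating attachment replaces its sister on the frontier:

```python
def attach(frontier, node, relation):
    """Update the right frontier (the path from the root to the last node)."""
    if relation == "sub":        # vertical edge: the frontier grows
        return frontier + [node]
    if relation == "coord":      # horizontal edge: the sister is blocked
        return frontier[:-1] + [node]
    raise ValueError(f"unknown relation: {relation}")

frontier = ["pi1"]
frontier = attach(frontier, "pi2", "sub")    # pi2 elaborates pi1
frontier = attach(frontier, "pi3", "coord")  # narration: pi2 drops off
frontier = attach(frontier, "pi4", "sub")    # pi4 elaborates pi3
print(frontier)                              # → ['pi1', 'pi3', 'pi4']
```

After these three attachments, pi2 is no longer on the frontier, so variables introduced there are closed off to later anaphora, which is the accessibility behavior described above.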

So far, it is clear that discourses do have structure. By comparison with other dynamic semantic treatments of phenomena such as pronouns and tense, we can identify the advantages of using DRs. For further illustration, we use the example from [13]:

(a) John had a great evening last night.

(b) He had a great meal.

(c) He ate salmon.

(d) He devoured lots of cheese.

(e) He won a dancing competition.

(f) * It was a beautiful pink.

Traditional dynamic semantic frameworks, such as DRT, will happily accept sentence (f), because there is no universal quantifier or negation to block any variable: the pronoun it in (f) can refer back to meal, salmon, cheese, or competition, and pink would normally describe the salmon, which is on the candidate list. However, (f) does sound unnatural to English-speaking readers. Here discourse structure helps to explain why: if we construct the discourse hierarchy according to the different types of DRs introduced above, we obtain the graph in Figure 2.

Thus, it is clear that (f) cannot be attached to (c), where salmon is located. The relation holding of (c) is Narration, which is of coordinating type, so (c) is blocked for further reference. In addition, many linguistic phenomena other than anaphora can be better explained with discourse structure and the right frontier constraint, such as presupposition projection, definite descriptions, and word sense disambiguation.

## 3 Event in Dynamic Semantics

So far, we have first presented the advantages of using events in semantic analysis over traditional MG, then the motivation for dynamics in discourse semantics, and finally the need for DRs in more subtle semantic processing. In this section, we propose a framework that compositionally constructs event-style discourse semantics, with various DRs and the accessibility constraint (right frontier constraint) embedded. First, we explain how to build representations of single sentences. After that, the meaning construction for discourse, which is based on its component sentences, will be presented.

### 3.1 Event-based Sentential Semantics

As we showed in Section 1, the implicit event argument helps to handle many linguistic phenomena, such as adverbial modification and sentential anaphora (it) resolution. With the notion of thematic relations, the verb predicate takes only one event variable as argument, instead of multiple variables each representing a thematic role. Most current theories describe event semantics only from a philosophical or purely linguistic point of view, and the corresponding semantic representations are provided without concrete computational constructions. That is what our proposal focuses on. Before we introduce our framework, some assumptions need to be specified.

Thematic roles have been used formally in the literature since [9]. However, how many thematic roles are necessary is still an open question. In addition, indicating exactly which part of a sentence corresponds to which thematic role is also a difficult task. In our framework, we only consider the most elementary and most widely accepted set of thematic roles. The roles and their corresponding syntactic categories are listed in the following table:

| Thematic Role | Syntactic Correspondence |
|---------------|--------------------------|
| Agent | Subject |
| Theme | Direct object; subject of "be" |
| Goal | Indirect object, or with "to" |
| Benefactive | Indirect object, or with "for" |
| Instrument | Object of "with"; subject |
| Experiencer | Subject |
| Location/Time | With "in", "at", etc. |

In [19], the author provides a template-based solution to construct semantic representations with events. Sentences are first classified into different cases based on their linguistic properties. Then a unique template is assigned to each case; the number, types, and positions of its arguments are specifically designed for that template. In our proposal, we also use templates, but a much simpler version: the templates only contain the most basic thematic roles for certain verbs, and they are open to modification and enhancement for more complicated cases. For instance, the template for the verb smile only contains an agent role, while the verb kiss has both agent and theme roles.

Furthermore, our proposal generalizes the ontology of event variables: at the current stage we make no distinction between events, states, and processes, for the sake of simplicity (their linguistic differences are described in [19]). So there is just one unique variable, representing the underlying event, state, or process indicated by the verb. Here is a simple example:

(3) John kisses Mary in the plaza.

Under event semantics, the semantic representation of this sentence should be:

 ∃e.(Kiss(e)∧Ag(e,john)∧Pat(e,mary)∧Loc(e,plaza))

where "Ag" stands for Agent, "Pat" for Patient, and "Loc" for Location.

In order to obtain the above representation compositionally, we use one semantic entry per lexical item, plus a compound entry for the prepositional phrase in the plaza. (It is of course possible to break the construction of "in the plaza" down to a more detailed level by providing entries for each word, but we give the compound entry for the whole PP for simplicity.)

Thus, by applying the above entries to one another in a certain order (the function-argument application order can be obtained via shallow syntactic processing), we can compute the semantic representation of the sentence step by step.

At this point, the event variable is not yet bound by an existential quantifier. To realize that, we can simply design an EOS (End Of Sentence) operator, which could be a full stop, exclamation mark, or any other sentence-final punctuation, to which the partial sentence representation is applied.

As a result, the last step yields exactly the existentially quantified representation given above.

In the above solution, the adverbial modifier in the plaza is handled in the manner that is traditional for intersective adjectives (e.g., tall, red). With a similar formalism, any number of intersective adverbial modifiers can be added to the event structure, as long as the corresponding lexical entry is provided for each modifier.
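The whole construction can be mimicked with higher-order functions (a sketch under our own encoding: the entry shapes are reconstructions rather than the paper's exact λ-terms, and formulas are built as strings). The verb λ-abstracts over the event, each modifier conjoins a condition on it, and EOS introduces the existential quantifier:

```python
# Hypothetical lexical entries; a VP meaning is a function of the event name.
kisses = lambda subj, obj: lambda e: (
    f"Kiss({e}) ∧ Ag({e},{subj}) ∧ Pat({e},{obj})")

# Compound entry for the whole PP, as in the text.
in_the_plaza = lambda vp: lambda e: f"{vp(e)} ∧ Loc({e},plaza)"

# End of sentence: bind the event variable existentially.
EOS = lambda s: f"∃e.({s('e')})"

print(EOS(in_the_plaza(kisses("john", "mary"))))
# → ∃e.(Kiss(e) ∧ Ag(e,john) ∧ Pat(e,mary) ∧ Loc(e,plaza))
```

The printed result coincides with the representation given for the sentence above; adding further intersective modifiers amounts to composing more wrappers of the same shape as in_the_plaza before EOS is applied.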

The event variable embedded in the verb is the greatest difference between our framework and MG. From a computational point of view, we need to pass the underlying event variable from the verb to the other modifiers. As a result, we first use the λ-operator to abstract over the event variable in the verb interpretation, and then use the EOS to terminate the evaluation and instantiate the event variable with an existential quantifier. Another framework which compositionally obtains an event-style semantics is [4], which introduces the existential quantifier over the event at the beginning of the interpretation.

### 3.2 Event-based Discourse Semantics

In the previous part, we showed how to compute single sentence semantics with events. In this section we will combine event structure with dynamic semantics, extending our formalism to discourse.

As explained in Section 2.2, [8] expresses dynamics in MG by introducing the concept of left and right contexts. We adopt this idea, inserting the left and right contexts into our semantic representations. Thus we bestow upon our event-based formalism the potential to be updated as in other dynamic systems. To achieve this, we modify the lexical entries of the previous section as follows:

Here, in contrast to the notation used in [8], "a" stands for the left context and "b" stands for the right context. In our logical typing system, we use type "v" for event variables and type "γ" for the left context; the remaining types have their conventional meaning. In an additional departure from the formalism in [8], we assume the left context contains the accessibility information of previous event variables, instead of individual variables. That is why we keep the original interpretations for John and Mary, instead of inserting the corresponding individual constants into the left context list structure. However, the list constructor "::" does have a similar meaning; the only difference from the constructor in [8] is that our "::" takes an event variable and the left context as arguments, whereas the previous one takes an individual variable and the left context. Given the lexical entries above, the semantic representation with a dynamic potential for our example sentence John kisses Mary in the plaza becomes:

Its semantic type also changes accordingly: the representation now abstracts over the left context (type "γ") and the right context (type "γ → o") in addition to the event variable. In order to terminate the semantic processing, we need a new EOS symbol.

Applying it, we obtain the final interpretation.

The left- and right-context arguments in the EOS and the above formula are not variables any more: they are constants of type "γ" and "γ → o", respectively, which have the effect of freezing the left and right contexts. The new representation may not look different from the previous version; that is because, although we embed the dynamic potential into the entries, we are still evaluating single-sentence semantics. The power of dynamics does not show up until the case becomes more complicated. So let us consider the following discourse:

(4) John kisses Mary in the plaza. She smiles.

To handle this discourse, we need to provide two more entries, for she and smiles:

Inspired by [8], the interpretation of she is built around an external selection function. This function is supposed to work over a structured representation of the discourse: we claim that individual variables are defined within the scope of event variables. Thus the resolution of this anaphor must first pick out an event variable and then, through this event, choose the correct individual variable introduced by the preceding discourse.

We also apply a type-raised representation for the NP she, because we need to pass the selection function along for further processing. Similar type-raised versions of John and Mary could also be constructed. After type raising, the only thing that changes is the order of argument application; the resulting logical term is exactly the same.

So, returning to our discourse, we can first obtain the representations of the two sentences independently.

Now the problem is how to combine the two interpretations to yield the discourse semantics. [8] uses Formula 1 to merge sentence interpretations: it takes the previous discourse and the current sentence as input and returns a new piece of updated discourse. As in DRT, no rhetorical relation is involved in [8]. This paper, however, goes one step further, aiming to encode the discourse structure and the event accessibility relations between different sentences.

Hence, we make another assumption here: discourse and sentence are distinct semantic entities; they have different types and meaning evaluations. Every discourse contains certain rhetorical relations, while single sentences should be interpretable without them, because we consider discourse structure a product of sentence composition. Unlike in Formula 1, where discourse and sentence have exactly the same semantic properties, we assign them different types. Following this assumption, event variables are instantiated as existential quantifiers only in discourses, while they remain λ-bound in sentences. By way of illustration, the following are the most general representations for sentence and discourse:

(The left context in these representations is a complicated structure containing the event accessibility relation; further examples below show how it is created from the previous left context and the event variables.)

Please note that the interpretation of a discourse does not only contain the left context, where the accessibility relations are located, but also various rhetorical relations. Those rhetorical relations, as discussed in Section 2.3, can be classified as either subordinating or coordinating. The two classes have completely different effects in shaping the discourse structure graph, which determines the accessibility relations. Here we do not care how many different discourse relations there are (such as Narration, Background, Elaboration, etc.); we just assume that if there is a relation, it belongs to one of the two classes. These rhetorical relations are added only during the meaning-merging process. As a consequence, we propose two sets of composition functions, one for each type of DR.

#### 3.2.1 Subordinating Composition Functions

Based on the right frontier constraint, for discourses and sentences connected by subordinating DRs, all accessible nodes in the previous discourse remain accessible, and the new sentence is inserted as a further accessible node in the updated discourse. For example, in Figure 3, when the new node is attached by a subordinating relation to the node at the bottom of the right frontier, all previously accessible nodes stay accessible together with the new one. Hence, we introduce the composition functions for subordinating DRs as follows:

 ⟦SubBas⟧=λDSab.Da(λa′.∃e.(Sea′b)) (2)

We suppose that every sentence needs to combine with a previous discourse to form a new discourse; this includes the first sentence in the context. However, the first sentence can only be combined with an empty discourse:

 ⟦Empty⟧=λab.ba

which in fact contains no context information at all; it is created purely for computational reasons. That is why we design two composition functions, 2 and 3, namely the basic one and the advanced one, to handle respectively the first-sentence case and all other situations. Now we construct the interpretation of our running example ("John kisses Mary in the plaza. She smiles.") as an illustration of our composition functions. Suppose the two sentences hold a subordinating relation between each other (this is just an assumption: our system does not account for how DRs are determined, we only focus on encoding them). Then, in order to obtain the whole representation, we first combine the first sentence with the empty discourse by the basic function, and then combine the result with the second sentence by the advanced one.

The first step combines the first sentence with the empty discourse via the basic function (we omit some internal thematic structure of the sentences for a clearer view of the logical terms). This step does two things. First, it instantiates the event variable of the first sentence as an existential quantifier. In addition, it inserts the new event argument into the accessible list of the left context. Because the empty discourse does not contain any variable in its left context, the list construction is fairly simple: we just need a naive "push-in" operation.

Suppose the selection function is able to pick the correct event variable out of the accessible list; then our desired DRs and accessibility relations are successfully encoded in the final logical formula. Two more remarks on the subordinating composition functions: 1. no new event variable is created during meaning composition, but all λ-bound event variables are instantiated as existential quantifiers; 2. the composition process does not change the accessibility conditions of the previous discourse; only a new accessible node is added.
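Formula 2 can be exercised directly (a sketch with our own encoding: sentences are modeled as functions of an event name, a left-context list, and a right-context continuation, and for brevity a single function plays the role of both the basic and the advanced variant):

```python
import itertools

fresh = (f"e{i}" for i in itertools.count(1))   # supply of event names

def Empty(a, b):
    """⟦Empty⟧ = λa b. b a : a discourse with no context information."""
    return b(a)

def Sub(D, S):
    """Subordinating composition: λD S a b. D a (λa'. ∃e.(S e a' b))."""
    def new_discourse(a, b):
        def cont(a1):
            e = next(fresh)
            # e is existentially bound and pushed onto the accessible list
            return f"∃{e}.({S(e, [e] + a1, b)})"
        return D(a, cont)
    return new_discourse

def kisses(e, a, b):                 # "John kisses Mary in the plaza"
    return f"Kiss({e},j,m) ∧ {b(a)}"

def smiles(e, a, b):                 # "She smiles": sel inspects the context
    return f"Smile({e}) ∧ sel_from({a}) ∧ {b(a)}"

d = Sub(Sub(Empty, kisses), smiles)
print(d([], lambda a: "⊤"))
```

In the printed formula both event variables end up existentially quantified, and both appear on the accessible list handed to the second sentence, which is exactly the behavior described above for subordinating DRs.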

#### 3.2.2 Coordinating Composition Functions

Again, let us first analyze the effect of coordinating DRs on the accessibility structure. When a new node is added to an existing discourse with a coordinating relation, a horizontal edge is built, as shown in Figure 4.

At the same time, an abstract variable node is created. This is a distinctive property compared to subordinating DRs. We need the new abstract node because in many cases an anaphor can only be resolved with reference to a set of sentences connected by coordinating relations, as in:

(5) Mary stumbled her ankle. She twisted it. John did so too.

For more examples, see [14].

Based on the above analysis, we propose the following composition functions:

 ⟦CoorBas⟧=λDSab.Da(λa′.∃e.(Sea′b)) (4)

Notice that Formula 4 is identical to Formula 2, because both basic composition functions are designed only to handle the first-sentence case, in which we do not need to distinguish between different DRs (indeed, no DR is involved at all). In contrast to Formula 3, the advanced subordinating function, the advanced coordinating function (Formula 5) differs in three main respects. First, apart from instantiating the event variable of the current sentence, an abstract event variable is created; it is directly inserted into the accessible list, because the new abstract node will always be on the right frontier of the updated discourse structure. Moreover, we introduce a new function which takes the current accessible list as argument and deletes those nodes which are no longer accessible in the new discourse; it works in a way similar to the selection function. Finally, the relation function takes three arguments, including the abstract variable; by doing this we can keep track of the relation between abstract variables and their component nodes.

Now let us use our running example again as an illustration. This time we assume the rhetorical relation between the two sentences is of the coordinating kind, so we build the semantic representation with Formulas 4 and 5.

As we can see from the final formula, the new event variable and the abstract variable are both added into the accessible list. The deletion function then eliminates the node made inaccessible by the new horizontal edge, leaving only the new event variable and the abstract variable on the right frontier.
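The coordinating case can be sketched analogously (again our own encoding: the abstract topic node, the deletion of the blocked sister, and the three-place relation predicate are reconstructions of the prose, with hypothetical names):

```python
def block_sister(a):
    """Delete the node blocked by the new horizontal edge (the old top)."""
    return a[1:]

def Coor(D, S, rel="Narration"):
    """Advanced coordinating composition: adds an abstract topic node e_t."""
    def new_discourse(a, b):
        def cont(a1):
            e, e_t = "e2", "e_t"
            acc = [e_t, e] + block_sister(a1)   # e_t joins the frontier
            return (f"∃{e}.∃{e_t}.({rel}({e_t},{a1[0]},{e}) ∧ "
                    f"{S(e, acc, b)})")
        return D(a, cont)
    return new_discourse

# A one-sentence discourse to attach to (hand-built for the demo).
d1 = lambda a, b: f"∃e1.(Kiss(e1,j,m) ∧ {b(['e1'] + a)})"
smiles = lambda e, a, b: f"Smile({e}) ∧ acc={a} ∧ {b(a)}"

print(Coor(d1, smiles)([], lambda a: "⊤"))
```

In the output, e1 no longer appears in the accessible list: only the abstract node e_t and the new event e2 remain on the right frontier, while Narration(e_t,e1,e2) records the link between the abstract variable and its component nodes.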

To test the validity of the proposed system, we have implemented all of the above calculus in the framework of Abstract Categorial Grammars [7].

### 3.3 Comparison with Other Related Works

Recently there have been other semantic frameworks based on discourse structure, DRT, and other dynamic concepts. For example, in [2] the authors expressed SDRT in a non-representational way with dynamic logic. Like the formalism presented in this paper, they use the continuation calculus from [8], where the concepts of left and right contexts are involved in introducing dynamics. However, there are some distinctions between our work and theirs.

First of all, we use an event-style semantics for meaning representation. Consequently, the basic construct to which rhetorical relations apply in our framework is the event, in contrast with the discourse constituent unit (DCU) in [2]. Event-based theory, as an independent branch of formal semantics, has been studied for a long time; many lexical properties (mainly of verbs), such as tense and aspect or the causative/inchoative alternation, have already been investigated in detail, so by using events we can borrow many off-the-shelf results directly. Also, DCUs are not as concretely defined as events: there are cases where a single DCU contains multiple events, for instance "John says he loves Mary. Mary does not believe it.". With DCUs alone, the resolution of it in the second sentence is ambiguous.

In addition, we and [2] make different assumptions about discourse and sentence. Like [8], [2] views discourse and sentence as identical semantic constructs. However, as explained in Section 3.2, we distinguish them as different objects: a single sentence is interpreted independently, without considering any discourse structure, whereas a discourse is not simply a naive composition of its component sentences but their merging, with various DRs added.

Finally, the DRs originate from different sources in the two works. [2] uses key words as DR indicators. For example, in the discourse “A man walked in, then he coughed.”, [2] embeds the corresponding relation in the interpretation of the cue word then. We believe instead that DRs are only revealed when sentence and discourse are combined, and should not appear in sentence interpretations; hence DRs are introduced by the composition functions (Formulas 2, 3, 4 and 5) in our work.
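This division of labor can be sketched schematically. In the toy code below (all names hypothetical, and the composition function is a stand-in for the paper's Formulas 2-5), sentences are interpreted with no DR inside them, and a DR is attached only when a sentence is merged into the discourse:

```python
# Hypothetical sketch: sentence and discourse as distinct objects, with
# discourse relations (DRs) added only by a composition function, never
# by the sentence meanings themselves.

from dataclasses import dataclass, field

@dataclass
class Sentence:
    """Interpreted independently; carries its event but no DR."""
    event: str

@dataclass
class Discourse:
    """The merged sentences plus the DRs relating their events."""
    events: list = field(default_factory=list)
    relations: list = field(default_factory=list)

def compose(d, s, relation):
    """Stand-in for a composition function: merging a sentence into a
    discourse is what introduces the DR between events."""
    if d.events:
        d.relations.append((relation, d.events[-1], s.event))
    d.events.append(s.event)
    return d

d = Discourse()
d = compose(d, Sentence("e1"), "Narration")   # "A man walked in"
d = compose(d, Sentence("e2"), "Narration")   # "then he coughed"
print(d.relations)    # [('Narration', 'e1', 'e2')]
```

Note that `Sentence` has no slot for a relation at all: in this design a DR simply cannot originate inside a sentence interpretation, mirroring the claim made above.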

## 4 Conclusion and Future Work

In this paper, we have represented the accessibility relations of natural language discourse within event semantics. The approach does not depend on any specific logic: all formulas are in the traditional MG style.

We chose an event-based structure because it can handle sentential anaphora resolution, adverbial modifiers, and other semantic phenomena. Moreover, applying dynamics to event semantics, which was originally developed to treat single sentences, may largely extend its power. Accessibility among sentences in a discourse depends on various types of DRs, and these DRs are usually hard to determine. We assume all DRs can be classified into two types, subordinating and coordinating, and we obtain the accessibility relation via the right frontier constraint. On that basis, we encode these DRs and the updating potential of single sentences in a first-order logic system.
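The interaction of the two DR types with the right frontier constraint can be sketched in a few lines. This is an assumed simplification (the function name and the list-based encoding are ours, not the paper's formalization): a subordinating relation keeps its attachment point open on the frontier, while a coordinating relation closes off the constituent it attaches to.

```python
# Sketch of the right frontier constraint under a simplified reading:
# subordinating DRs push a new node while keeping parents accessible;
# coordinating DRs replace the most recent node on the frontier.

def right_frontier(updates):
    """updates: list of (event, 'sub' | 'coord') pairs in discourse order.
    Returns the events currently accessible for anaphora/attachment."""
    frontier = []
    for event, kind in updates:
        if kind == "coord" and frontier:
            frontier.pop()        # coordinating: close the last constituent
        frontier.append(event)    # the new event is always accessible
    return frontier

# e1 --Elaboration(sub)--> e2, then e2 --Narration(coord)--> e3:
# e2 drops off the frontier, e1 and e3 stay accessible.
print(right_frontier([("e1", "sub"), ("e2", "sub"), ("e3", "coord")]))
# ['e1', 'e3']
```

Under this reading, a pronoun in a continuation of the discourse could be resolved against e1 or e3 but not against e2, which is exactly the accessibility pattern the constraint is meant to enforce.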

In our approach, we differentiate discourse and sentence as two distinct semantic objects. DRs are only added during the updating process, which is realized through the set of composition functions. This choice has not only computational but also philosophical justification.

In this paper, we have focused only on representing DRs and accessibility in logical form; how to determine these DRs, and whether they have a more complicated effect than the right frontier constraint, are subjects for future work. Furthermore, since we construct the event structure compositionally, the scope interaction between newly introduced quantifiers and previously existing ones also needs further investigation.

## References

• [1] Asher, N.: Reference to abstract objects in discourse. Springer (1993)
• [2] Asher, N., Pogodalla, S.: Sdrt and continuation semantics. Proceedings of LENLS, Tokyo, Japan VII (2010)
• [3] Asher, N., Vieu, L.: Subordinating and coordinating discourse relations. Lingua 115(4), 591–610 (2005)
• [4] Bos, J.: Towards a large-scale formal semantic lexicon for text processing. In: Chiarcos, C., Eckart de Castilho, R., Stede, M. (eds.) From Form to Meaning: Processing Texts Automatically. Proceedings of the Biennal GSCL Conference 2009. pp. 3–14 (2009)
• [5] Davidson, D.: The logical form of action sentences. In: Rescher, N. (ed.) The Logic of Decision and Action. University of Pittsburgh Press, Pittsburgh (1967)
• [6] Groenendijk, J., Stokhof, M.: Dynamic predicate logic. Linguistics and Philosophy 14(1), 39–100 (1991)
• [7] de Groote, P.: Towards abstract categorial grammars. In: Proceedings of the 39th Annual Meeting on Association for Computational Linguistics. pp. 252–259. Association for Computational Linguistics (2001)
• [8] de Groote, P.: Towards a montagovian account of dynamics. Proceedings of Semantics and Linguistic Theory XVI (2006)
• [9] Gruber, J.S.: Studies in lexical relations. Ph.D. thesis, Massachusetts Institute of Technology. Dept. of Modern Languages (1965)
• [10] Heim, I.: File change semantics and the familiarity theory of definiteness. In: Bäuerle, R., Schwarze, C., von Stechow, A. (eds.) Meaning, Use, and Interpretation of Language, pp. 164–189. Walter de Gruyter, Berlin (1983)
• [11] Hobbs, J.R.: On the coherence and structure of discourse. CSLI, Center for the Study of Language and Information (US) (1985)
• [12] Kamp, H.: A theory of truth and semantic representation. In: Groenendijk, J., Janssen, T., Stokhof, M. (eds.) Formal Methods in the Study of Language, pp. 277–322. Mathematical Centre Tracts 135, Mathematisch Centrum, Amsterdam (1981)
• [13] Lascarides, A., Asher, N.: Temporal interpretation, discourse relations and commonsense entailment. Linguistics and Philosophy 16(5), 437–493 (1993)
• [14] Lascarides, A., Asher, N.: Segmented discourse representation theory: Dynamic semantics with discourse structure. Computing meaning pp. 87–124 (2007)
• [15] Mann, W., Thompson, S.: Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse 8(3), 243–281 (1988)
• [16] Montague, R.: English as a Formal Language. Linguaggi nella società e nella tecnica pp. 189–224 (1970)
• [17] Montague, R.: Universal Grammar. Theoria 36(3), 373–398 (1970)
• [18] Montague, R.: The proper treatment of quantification in ordinary english. In: Hintikka, J., Moravcsik, J., Suppes, P. (eds.) Approaches to Natural Language. Reidel, Dordrecht (1973)
• [19] Parsons, T.: Events in the Semantics of English: A Study in Subatomic Semantics. MIT Press, Cambridge, MA (1991)
• [20] Polanyi, L.: A formal model of the structure of discourse. Journal of Pragmatics 12(5-6), 601–638 (1988)