One Vector is Not Enough: Entity-Augmented Distributional Semantics for Discourse Relations

11/25/2014
by   Yangfeng Ji, et al.
Georgia Institute of Technology

Discourse relations bind smaller linguistic units into coherent texts. However, automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked arguments. A more subtle challenge is that it is not enough to represent the meaning of each argument of a discourse relation, because the relation may depend on links between lower-level components, such as entity mentions. Our solution computes distributional meaning representations by composition up the syntactic parse tree. A key difference from previous work on compositional distributional semantics is that we also compute representations for entity mentions, using a novel downward compositional pass. Discourse relations are predicted from the distributional representations of the arguments, and also of their coreferent entity mentions. The resulting system obtains substantial improvements over the previous state-of-the-art in predicting implicit discourse relations in the Penn Discourse Treebank.

1 Introduction

The high-level organization of text can be characterized in terms of discourse relations between adjacent spans of text [Knott1996, Mann1984, Webber et al.1999]. Identifying these relations has been shown to be relevant to tasks such as summarization [Louis et al.2010a, Yoshida et al.2014], sentiment analysis [Somasundaran et al.2009], and coherence evaluation [Lin et al.2011]. While the Penn Discourse Treebank (PDTB) now provides a large dataset annotated for discourse relations [Prasad et al.2008], the automatic identification of implicit relations is a difficult task, with state-of-the-art performance at roughly 40% [Lin et al.2009].

Figure 1: Distributional representations are computed through composition over the syntactic parse. (a) The distributional representations of burger and hungry are propagated up the parse tree, clarifying the implicit discourse relation between the two arguments. (b) Distributional representations for the coreferent mentions Tina and she are computed from the parent and sibling nodes.

One reason for this poor performance is that predicting implicit discourse relations is a fundamentally semantic task, and the relevant semantics may be difficult to recover from surface level features. For example, consider the implicit discourse relation between the following two sentences (also shown in Figure 1a):

(1) Bob gave Tina the burger. She was hungry.

While a connector like because seems appropriate here, there is little surface information to signal this relationship, unless the model has managed to learn a bilexical relationship between burger and hungry. Learning all such relationships from annotated data — including the relationship of hungry to knish, pierogie, pupusa, etc. — would require far more data than can possibly be annotated.

We address this issue by applying a discriminatively-trained model of compositional distributional semantics to discourse relation classification [Socher et al.2013b, Baroni et al.2014a]. The meaning of each discourse argument is represented as a vector [Turney et al.2010], which is computed through a series of compositional operations over the syntactic parse tree. The discourse relation can then be predicted as a bilinear combination of these vector representations. Both the prediction matrix and the compositional operator are trained in a supervised large-margin framework [Socher et al.2011], ensuring that the learned compositional operation produces semantic representations that are useful for discourse. We show that when combined with a small number of surface features, this approach outperforms prior work on the classification of implicit discourse relations in the PDTB.

Despite these positive results, we argue that purely vector-based representations are insufficiently expressive to capture discourse relations. To see why, consider what happens if we make a tiny change to example (1):

(2) Bob gave Tina the burger. He was hungry.

Figure 2: t-SNE visualization [Van der Maaten and Hinton2008] of word representations in the PDTB corpus.

After changing the subject of the second sentence to Bob, the connective “because” no longer seems appropriate; a contrastive connector like although is preferred. But despite the radical difference in meaning, the distributional representation of the second sentence will be almost unchanged: the syntactic structure remains identical, and the words he and she have very similar word representations (see Figure 2). If we reduce each discourse argument span to a single vector, we cannot possibly capture the ways that discourse relations are signaled by entities and their roles [Cristea et al.1998, Louis et al.2010b]. As Mooney (2014) puts it, “you can’t cram the meaning of a whole %&!$# sentence into a single $&!#* vector!”

We address this issue by computing vector representations not only for each discourse argument, but also for each coreferent entity mention. These representations are meant to capture the role played by the entity in the text, and so they must take the entire span of text into account. We compute entity-role representations using a novel feed-forward compositional model, which combines “upward” and “downward” passes through the syntactic structure, shown in Figure 1b. In the example, the downward representations for Tina and she are computed from a combination of the parent and sibling nodes in the binarized parse tree. Representations for these coreferent mentions are then combined in a bilinear product, and help to predict the implicit discourse relation. In example (2), we resolve he to Bob, and combine their vector representations instead, yielding a different prediction about the discourse relation.

Our overall approach achieves a 3% improvement in accuracy over the best previous work [Lin et al.2009] on multiclass discourse relation classification, and also outperforms more recent work on binary classification. The novel entity-augmented distributional representation improves accuracy over the “upward” compositional model, showing the importance of representing the meaning of coreferent entity mentions.

2 Entity augmented distributional semantics

We now formally define our approach to entity-augmented distributional semantics, using the notation shown in Table 1. For clarity of exposition, we focus on discourse relations between pairs of sentences. The extension to non-sentence arguments is discussed in Section 5.

Notation        Explanation
ℓ(i), r(i)      left and right children of node i
ρ(i), s(i)      parent and sibling of node i
A(m, n)         set of aligned entities between arguments m and n
𝒴               set of discourse relations
y*              ground truth relation
ψ(y)            decision function
u_i             upward vector at node i
d_i             downward vector at node i
A_y             classification parameter associated with upward vectors
B_y             classification parameter associated with downward vectors
U               composition operator in the upward composition procedure
V               composition operator in the downward composition procedure
L(θ)            objective function
Table 1: Table of notation

2.1 Upward pass: argument semantics

Distributional representations for discourse arguments are computed in a feed-forward “upward” pass: each non-terminal in the binarized syntactic parse tree has a K-dimensional distributional representation that is computed from the distributional representations of its children, bottoming out in pre-trained representations of individual words.

We follow the Recursive Neural Network (RNN) model of Socher et al. (2011). For a given parent node i, we denote the left child as ℓ(i) and the right child as r(i); we compose their representations to obtain

u_i = tanh(U [u_{ℓ(i)}; u_{r(i)}]),   (1)

where tanh(·) is the element-wise hyperbolic tangent function [Pascanu et al.2012], and U is the upward composition matrix. We apply this compositional procedure from the bottom up, ultimately obtaining the argument-level representation u_0. The base case is found at the leaves of the tree, which are set equal to pre-trained word vector representations. For example, in the second sentence of Figure 1, we combine the word representations of was and hungry to obtain an upward vector for was hungry, and then combine it with the word representation of she to obtain the representation of the entire argument. Note that the upward pass is feedforward, meaning that there are no cycles and all nodes can be computed in linear time.
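For concreteness, the upward pass can be sketched in a few lines of numpy. This is an illustrative re-implementation rather than the system used in the experiments reported here: trees are encoded as nested Python tuples, nodes are addressed by their path from the root, and random vectors stand in for the pre-trained word representations.

```python
import numpy as np

K = 50  # latent dimensionality of the distributional representations

def upward_pass(tree, word_vecs, U):
    """Compute upward vectors for every node of a binarized parse tree.

    `tree` is either a word (str) or a pair (left, right).  Nodes are
    addressed by their path from the root: () is the root, (1, 0) is the
    left child of the right child.  Returns a dict {path: upward vector}.
    """
    up = {}

    def rec(node, path):
        if isinstance(node, str):
            up[path] = word_vecs[node]              # leaf: pre-trained word vector
        else:
            left, right = node
            rec(left, path + (0,))
            rec(right, path + (1,))
            # Eq. (1): u_i = tanh(U [u_left ; u_right])
            up[path] = np.tanh(U @ np.concatenate([up[path + (0,)],
                                                   up[path + (1,)]]))
        return up[path]

    rec(tree, ())
    return up

# Toy usage on the second argument of example (1): "she was hungry"
rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=K) for w in ["she", "was", "hungry"]}
U = rng.normal(scale=0.01, size=(K, 2 * K))
tree = ("she", ("was", "hungry"))                  # binarized parse as a nested tuple
up = upward_pass(tree, word_vecs, U)               # up[()] is the argument vector u_0
```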

2.2 Downward pass: entity semantics

As seen in the contrast between examples (1) and (2), a model that uses a single vector representation for each discourse argument would find little to distinguish between she was hungry and he was hungry. It would therefore almost certainly fail to identify the correct discourse relation for at least one of these cases, which requires tracking the roles played by the entities that are coreferent in each pair of sentences. To address this issue, we augment the representation of each argument with additional vectors, representing the semantics of the role played by each coreferent entity in each argument. For example, in the first argument of (1), Tina got the burger, and in the second, she was hungry. Rather than represent this information in a logical form — which would require robust parsing to a logical representation — we represent it through additional distributional vectors.

The role of a constituent i can be viewed as a combination of information from two neighboring nodes in the parse tree: its parent ρ(i) and its sibling s(i). We can therefore make a downward pass, computing the downward vector d_i from the downward vector of the parent, d_{ρ(i)}, and the upward vector of the sibling, u_{s(i)}:

d_i = tanh(V [d_{ρ(i)}; u_{s(i)}]),   (2)

where V is the downward composition matrix. The base case of this recursive procedure occurs at the root of the parse tree, whose downward vector is set equal to its upward representation, d_0 = u_0. This procedure is illustrated in Figure 1b: the downward vector for Tina is computed from the downward vector of its parent and the upward vector of its sibling in the binarized tree.

The up-down compositional algorithm is designed to maintain the feedforward nature of the neural network, so that we can efficiently compute all nodes without iterating. Each downward vector d_i influences only the downward vectors of the descendants of node i, so the downward pass is feedforward. The upward pass is also feedforward: each upward vector u_i influences only the upward vectors of the ancestors of node i. Since the upward and downward passes are each feedforward, and the downward nodes do not influence any upward nodes, the combined up-down network is also feedforward. This ensures that we can efficiently compute all u_i and d_i in time that is linear in the length of the input.
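Continuing the sketch above, the downward pass can reuse the upward vectors directly; here V stands in for the downward composition matrix, and the same illustrative caveats apply.

```python
def downward_pass(tree, up, V):
    """Compute downward vectors for every node, given the upward vectors.

    Base case: the root's downward vector equals its upward vector.  Every
    other node combines its parent's downward vector with its sibling's
    upward vector, as in Eq. (2).
    """
    down = {(): up[()]}                            # d_0 = u_0

    def rec(node, path):
        if isinstance(node, str):                  # leaves have no children to visit
            return
        for child_idx, sibling_idx in ((0, 1), (1, 0)):
            child_path = path + (child_idx,)
            down[child_path] = np.tanh(
                V @ np.concatenate([down[path], up[path + (sibling_idx,)]]))
            rec(node[child_idx], child_path)

    rec(tree, ())
    return down

# Continuing the upward-pass example: downward vectors for "she was hungry"
V = rng.normal(scale=0.01, size=(K, 2 * K))
down = downward_pass(tree, up, V)                  # e.g. down[(0,)] is the vector for "she"
```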

Connection to the inside-outside algorithm

In the inside-outside algorithm for computing marginal probabilities in a probabilistic context-free grammar [Lari and Young1990], the inside scores are constructed in a bottom-up fashion, like our upward vectors; the outside score for node i is constructed from a product of the outside score of the parent ρ(i) and the inside score of the sibling s(i), like our downward vectors. The standard inside-outside algorithm sums over all possible parse trees, but since the parse tree is observed in our case, a closer analogy would be to the constrained version of the inside-outside algorithm for latent variable grammars [Petrov et al.2006]. Cohen et al. (2014) describe a tensor formulation of the constrained inside-outside algorithm; similarly, we could compute the downward vectors by a tensor contraction of the parent and sibling vectors [Smolensky1990, Socher et al.2013a]. However, this would involve O(K³) parameters, rather than the O(K²) parameters in our matrix-vector composition.

3 Predicting discourse relations

To predict the discourse relation between an argument pair (m, n), the decision function is a sum of bilinear products,

ψ(y) = (u_0^(m))ᵀ A_y u_0^(n) + Σ_{(i,j) ∈ A(m,n)} (d_i^(m))ᵀ B_y d_j^(n) + b_y,   (3)

where A_y and B_y are the classification parameters for relation y ∈ 𝒴, and the scalar b_y is the bias term for relation y. A(m, n) is the set of coreferent entity mentions shared by the argument pair (m, n). The decision value of relation y is therefore based on the upward vectors at the two roots, u_0^(m) and u_0^(n), as well as on the downward vectors for each pair of aligned entity mentions. For cases where there are no coreferent entity mentions between the two sentences, A(m, n) = ∅, and the classification model considers only the upward vectors at the roots.

To avoid overfitting, we apply a low-dimensional approximation to each A_y,

A_y ≈ a_{y,1} a_{y,2}ᵀ + diag(a_{y,3}).   (4)

The same approximation is also applied to each B_y, reducing the number of classification parameters per relation from 2K² to 6K.
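As an illustrative sketch of the scoring step (reusing the toy vectors from the composition sketches in Section 2, with made-up parameter values), the decision value of Equation 3 can be computed with classification matrices built from the rank-one-plus-diagonal approximation of Equation 4:

```python
def relation_score(u0_m, u0_n, aligned_pairs, A_y, B_y, b_y):
    """Decision value psi(y) for a single relation y (Eq. 3): bilinear terms
    over the two root upward vectors and over each pair of downward vectors
    for aligned entity mentions, plus a bias."""
    score = u0_m @ A_y @ u0_n + b_y
    for d_m, d_n in aligned_pairs:
        score += d_m @ B_y @ d_n
    return score

def low_rank(a1, a2, a3):
    """Rank-one-plus-diagonal approximation of a K x K classification
    matrix (Eq. 4), parameterized by 3K values instead of K^2."""
    return np.outer(a1, a2) + np.diag(a3)

# Toy usage, pretending both arguments are the same sentence with one aligned mention
A_y = low_rank(rng.normal(size=K), rng.normal(size=K), rng.normal(size=K))
B_y = low_rank(rng.normal(size=K), rng.normal(size=K), rng.normal(size=K))
aligned = [(down[(0,)], down[(0,)])]               # downward vectors of the aligned mentions
psi = relation_score(up[()], up[()], aligned, A_y, B_y, b_y=0.0)
```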

Surface features

Prior work has identified a number of useful surface-level features [Lin et al.2009], and the classification model can easily be extended to include them. Defining φ(m, n) as the vector of surface features extracted from the argument pair (m, n), the corresponding decision function is modified as

ψ(y) = (u_0^(m))ᵀ A_y u_0^(n) + Σ_{(i,j) ∈ A(m,n)} (d_i^(m))ᵀ B_y d_j^(n) + β_yᵀ φ(m, n) + b_y,   (5)

where β_y is the classification weight on surface features for relation y. We describe these features in Section 5.

4 Large-margin learning framework

There are two sets of parameters to be learned: the classification parameters {A_y, B_y, β_y, b_y} for each relation y, and the composition parameters U and V. We use pre-trained word representations, and do not update them. While prior work shows that it can be advantageous to retrain word representations for discourse analysis [Ji and Eisenstein2014], our preliminary experiments found that updating the word representations led to serious overfitting in this model.

Following Socher et al. (2011), we define a large margin objective, and use backpropagation to learn all parameters of the network jointly [Goller and Kuchler1996]. Learning is performed using stochastic gradient descent [Bottou1998], so we present the learning problem for a single argument pair (m, n) with the gold discourse relation y*. The objective function for this training example is a regularized hinge loss,

L(θ) = Σ_{y' ≠ y*} max(0, 1 − ψ(y*) + ψ(y')) + λ‖θ‖²,   (6)

where θ is the set of learning parameters. The regularization term indicates that the squared values of all parameters are penalized by λ; this corresponds to penalizing the squared Frobenius norm for the matrix parameters, and the squared Euclidean norm for the vector parameters.
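A minimal sketch of this per-instance objective, assuming the decision values ψ(y) have already been computed and reusing the toy parameters from the earlier sketches; the λ value below is a placeholder rather than a tuned setting.

```python
def hinge_objective(scores, y_star, params, lam):
    """Regularized hinge loss for one training pair (Eq. 6): a margin term
    for every incorrect relation plus an L2 penalty on all parameters."""
    margin_loss = sum(max(0.0, 1.0 - scores[y_star] + s)
                      for y, s in scores.items() if y != y_star)
    reg = lam * sum(float(np.sum(p ** 2)) for p in params)
    return margin_loss + reg

# Toy usage with made-up decision values and the parameters defined above
scores = {"Comparison": 0.2, "Contingency": 1.3, "Expansion": 0.7, "Temporal": -0.4}
loss = hinge_objective(scores, y_star="Contingency", params=[U, V, A_y, B_y], lam=1e-4)
```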

4.1 Learning the classification parameters

In Equation 6, L(θ) reduces to the regularization term if, for every y' ≠ y*, the margin condition ψ(y*) − ψ(y') ≥ 1 holds. Otherwise, a loss is incurred by every y' such that y' ≠ y* and ψ(y*) − ψ(y') < 1. The gradient for the classification parameters therefore depends on the margin value between the ground truth label and all other labels. Specifically, taking one component of the approximation of A_y, the vector a_{y,1}, as an example, the derivative of the objective with respect to a_{y,1} for y ≠ y* is

∂L/∂a_{y,1} = δ(ψ(y*) − ψ(y) < 1) (a_{y,2}ᵀ u_0^(n)) u_0^(m) + 2λ a_{y,1},   (7)

where δ(·) is the delta function. The derivative for the gold relation y* is

∂L/∂a_{y*,1} = − Σ_{y' ≠ y*} δ(ψ(y*) − ψ(y') < 1) (a_{y*,2}ᵀ u_0^(n)) u_0^(m) + 2λ a_{y*,1}.   (8)

During learning, the updating rule for a_{y,1} is

a_{y,1} ← a_{y,1} − η ∂L/∂a_{y,1},   (9)

where η is the learning rate.

Similarly, we can obtain the gradient information and updating rules for the remaining classification parameters: a_{y,2}, a_{y,3}, the corresponding components of B_y, and β_y and b_y.

4.2 Learning the composition parameters

There are two composition matrices, U and V, corresponding to the upward and downward composition procedures respectively. Taking the upward composition parameter U as an example, the derivative of L with respect to U is

∂L/∂U = Σ_{y' ≠ y*} δ(ψ(y*) − ψ(y') < 1) (∂ψ(y')/∂U − ∂ψ(y*)/∂U) + 2λ U.   (10)

As with the classification parameters, the derivative depends on the margin between ψ(y*) and each ψ(y').

For every y, we have the unified derivative form,

∂ψ(y)/∂U = (∂ψ(y)/∂u_0^(m))ᵀ ∂u_0^(m)/∂U + (∂ψ(y)/∂u_0^(n))ᵀ ∂u_0^(n)/∂U
          + Σ_{(i,j) ∈ A(m,n)} [ (∂ψ(y)/∂d_i^(m))ᵀ ∂d_i^(m)/∂U + (∂ψ(y)/∂d_j^(n))ᵀ ∂d_j^(n)/∂U ].   (11)

The gradient of ψ(y) with respect to U thus also depends on the gradient of ψ(y) with respect to every downward vector, as shown in the last two terms of Equation 11. This is because the computation of each downward vector includes the upward vector of the sibling node, u_{s(i)}, as shown in Equation 2. For an example, see the construction of the downward vectors for Tina and she in Figure 1b.

The partial derivatives of the decision function in Equation 11 are computed as

∂ψ(y)/∂u_0^(m) = A_y u_0^(n),    ∂ψ(y)/∂u_0^(n) = A_yᵀ u_0^(m),
∂ψ(y)/∂d_i^(m) = B_y d_j^(n),    ∂ψ(y)/∂d_j^(n) = B_yᵀ d_i^(m).   (12)

The partial derivatives of the upward and downward vectors with respect to the upward compositional operator U are computed as

(13)

and

(14)

where T(i) denotes the set of all nodes in the upward composition model that help to generate the upward vector u_i. For example, in Figure 1a, one such set includes the intermediate upward nodes together with the word representations for Tina, the, and burger. The corresponding set for a downward vector includes all the upward nodes involved in the downward composition that generates it. For example, in Figure 1b, the set for the downward vector of she includes the upward node spanning was hungry and the word representations for was and hungry.

The derivative of the objective with respect to the downward compositional operator V is computed in a similar fashion, but it depends only on the downward vectors d_i.

5 Implementation

Our implementation will be made available online after review. Training on the PDTB takes roughly three hours to converge (on an Intel Xeon 2.20GHz CPU, without parallel computing). Convergence is faster if the surface feature weights are trained separately first. We now describe some additional details of our implementation.

Learning

During learning, we used AdaGrad [Duchi et al.2011] to tune the learning rate in each iteration. To avoid the exploding gradient problem [Bengio et al.1994], we used the norm clipping trick proposed by Pascanu et al. (2012), with a fixed threshold on the gradient norm.
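These two tricks can be sketched as follows; the shapes, learning rate, and threshold below are placeholders rather than the settings used in our experiments.

```python
import numpy as np

def clip_by_norm(grads, threshold):
    """Rescale the whole gradient if its global L2 norm exceeds `threshold`
    (the clipping trick of Pascanu et al., 2012)."""
    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads.values()))
    if total > threshold:
        grads = {name: g * (threshold / total) for name, g in grads.items()}
    return grads

def adagrad_step(params, grads, cache, eta=0.01, eps=1e-8):
    """One AdaGrad update: each parameter's step is scaled by the inverse
    square root of its accumulated squared gradients."""
    for name, g in grads.items():
        cache[name] = cache.get(name, np.zeros_like(g)) + g ** 2
        params[name] = params[name] - eta * g / (np.sqrt(cache[name]) + eps)
    return params, cache

# Toy usage with placeholder shapes and settings
params = {"U": np.zeros((4, 8)), "V": np.zeros((4, 8))}
grads = {"U": np.ones((4, 8)), "V": np.ones((4, 8))}
grads = clip_by_norm(grads, threshold=1.0)
params, cache = adagrad_step(params, grads, cache={})
```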

Parameters

Our model includes three tunable hyperparameters: the latent dimension K of the distributional representations, the regularization penalty λ, and the initial learning rate η. All are selected on a development set formed by randomly holding out 20% of the training data, searching over a small grid of candidate values for the latent dimensionality, for the per-instance regularization penalty, and for the learning rate. We assign separate regularizers and learning rates to the upward composition model, the downward composition model, the feature model, and the classification model with composition vectors.

Initialization

All the classification parameters are initialized to zero. For the composition parameters, we follow the practical recommendations of Bengio (2012) and initialize U and V with small uniform random values.

Word representations

We trained a word2vec model [Mikolov et al.2013] on the PDTB corpus, standardizing the induced representations to zero-mean, unit-variance [LeCun et al.2012]. Experiments with pre-trained GloVe word vector representations [Pennington et al.2014] gave broadly similar results.
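The standardization step can be sketched in a few lines of numpy (the word2vec training itself is performed with an external tool and is not shown); the embedding matrix below is a random stand-in for the induced representations.

```python
import numpy as np

def standardize(embeddings):
    """Standardize each embedding dimension to zero mean and unit variance
    across the vocabulary, as recommended by LeCun et al. (2012).
    `embeddings` is a (vocabulary_size, K) matrix of word vectors."""
    mu = embeddings.mean(axis=0, keepdims=True)
    sigma = embeddings.std(axis=0, keepdims=True) + 1e-8   # guard against zero variance
    return (embeddings - mu) / sigma

# Toy usage with a random stand-in for the induced word2vec matrix
vectors = np.random.default_rng(0).normal(loc=2.0, scale=3.0, size=(10000, 50))
vectors = standardize(vectors)
```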

Syntactic structure

Our model requires the syntactic structure of each argument to be a binary tree. We run the Stanford parser [Klein and Manning2003] to obtain constituent parse trees of each sentence in the PDTB, and binarize all resulting parse trees. Argument spans in the Penn Discourse Treebank need not be sentences or syntactic constituents: they can include multiple sentences, non-constituent spans, and even discontinuous spans [Prasad et al.2008]. In all cases, we identify the syntactic subtrees within the argument span, and construct a right-branching superstructure that unifies them into a single tree, as sketched below.
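A minimal sketch of this unification step, using the nested-tuple tree encoding from the earlier sketches and assuming each subtree has already been binarized:

```python
def right_branching(subtrees):
    """Unify a list of binarized subtrees into one binary tree by building
    a right-branching superstructure over them."""
    if not subtrees:
        raise ValueError("argument span contains no subtrees")
    tree = subtrees[-1]
    for sub in reversed(subtrees[:-1]):   # fold from the right: (t1, (t2, (t3, ...)))
        tree = (sub, tree)
    return tree

# Example: an argument spanning two sentences, each already a binary tree
arg = right_branching([("bob", ("gave", ("tina", ("the", "burger")))),
                       ("she", ("was", "hungry"))])
```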

Dataset Annotation Training (%) Test (%)
1. PDTB Automatic 27.4 29.1
2. PDTBOnto Automatic 26.2 32.3
3. PDTBOnto Gold 40.9 49.3
Table 2: Proportion of relations with coreferent entities, according to automatic coreference resolution and gold coreference annotation.
Coreference

To extract entities from the PDTB, we ran the Berkeley coreference system [Durrett and Klein2013] on each document. For each argument pair, we simply ignore the non-coreferential entity mentions. Line 1 in Table 2 shows the proportion of the instances with shared entities in the PDTB training and test data. We also consider the intersection of the PDTB with the OntoNotes corpus [Pradhan et al.2007], which contains gold coreference annotations. The intersection PDTBOnto contains 597 documents; the statistics for automatic and gold coreference are shown in lines 2 and 3 of Table 2.
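One way to assemble the aligned-entity sets from the output of a coreference system is sketched below; representing mentions as token-offset spans is an assumption made for illustration, not the output format of the Berkeley system.

```python
def aligned_entity_pairs(chains, arg1_span, arg2_span):
    """Return the set A(m, n) of (mention_in_arg1, mention_in_arg2) pairs.

    `chains` is a list of coreference chains, each a list of (start, end)
    token offsets; `arg1_span` and `arg2_span` are (start, end) offsets of
    the two discourse arguments.  Chains with mentions in only one argument
    contribute nothing, mirroring how non-coreferential mentions are ignored.
    """
    def inside(mention, span):
        return span[0] <= mention[0] and mention[1] <= span[1]

    pairs = []
    for chain in chains:
        in1 = [m for m in chain if inside(m, arg1_span)]
        in2 = [m for m in chain if inside(m, arg2_span)]
        pairs.extend((m1, m2) for m1 in in1 for m2 in in2)
    return pairs

# Example: one chain linking "Tina" (tokens 2-3) and "She" (tokens 6-7)
pairs = aligned_entity_pairs([[(2, 3), (6, 7)]], arg1_span=(0, 6), arg2_span=(6, 9))
```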

Additional features

We supplement our classification model with additional surface features proposed by Lin et al. (2009). These fall into four categories: lexical features, constituent parse features, dependency parse features, and contextual features. Following this prior work, we use mutual information to select features in the first three categories, obtaining 500 lexical features, 100 constituent features, and 100 dependency features.
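As a rough stand-in for this selection step, the sketch below uses scikit-learn's mutual information scorer; the matrices are hypothetical placeholders for the PDTB feature templates, and the original system performs its own mutual information computation.

```python
import numpy as np
from functools import partial
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Hypothetical stand-ins: a binary matrix of candidate lexical (word-pair)
# features and level-2 relation labels; in practice these come from the
# PDTB training set and the feature templates of Lin et al. (2009).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 2000))
y = rng.integers(0, 11, size=1000)

# Keep the 500 highest-scoring lexical features by mutual information.
selector = SelectKBest(partial(mutual_info_classif, discrete_features=True), k=500)
X_selected = selector.fit_transform(X, y)
```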

6 Experiments

We evaluate our approach on the Penn Discourse Treebank (PDTB) [Prasad et al.2008], which provides a discourse-level annotation over the Wall Street Journal corpus. In the PDTB, each discourse relation is annotated between two argument spans. Identifying the argument spans of discourse relations is a challenging task [Lin et al.2014], which we do not attempt here; instead, we use gold argument spans, as in most of the prior work on this task. PDTB relations may be explicit, meaning that they are signaled by discourse connectives (e.g., because); alternatively, they may be implicit, meaning that the connective is absent. Pitler et al. (2008) show that most connectives are unambiguous, so we focus on the more challenging problem of classifying implicit discourse relations.

The PDTB provides a three-level hierarchy of discourse relations. The first level consists of four major relation classes: Temporal, Contingency, Comparison and Expansion. For each class, a second level of types is defined to provide finer semantic distinctions; there are sixteen such relation types. A third level of subtypes is defined for only some types, specifying the semantic contribution of each argument.

There are two main approaches to evaluating implicit discourse relation classification. Multiclass classification requires identifying the discourse relation from all possible choices. This task was explored by Lin et al. (2009), who focus on second-level discourse relations. More recent work has emphasized binary classification, where the goal is to build and evaluate separate “one-versus-all” classifiers for each discourse relation [Pitler et al.2009, Park and Cardie2012, Biran and McKeown2013]. We primarily focus on multiclass classification, because it is more relevant for the ultimate goal of building a PDTB parser; however, to compare with recent prior work, we also evaluate on binary relation classification.

Model                               +Entity semantics   +Surface features   K    Accuracy (%)
Baseline models
1. Most common class                                                              26.03
2. Additive word representations    No                  No                  50   28.73
Prior work
3. [Lin et al.2009]                                     Yes                       40.2
Our work
4. Surface feature model                                Yes                       39.69
5. disco2                           No                  No                  50   36.98
6. disco2                           Yes                 No                  50   37.63
7. disco2                           No                  Yes                 50   42.53
8. disco2                           Yes                 Yes                 50   43.56
Significantly better than [Lin et al.2009].
Significantly better than line 4 (sign test).
Table 3: Experimental results on multiclass classification of level-2 discourse relations. The results of Lin et al. (2009) are shown in line 3; the results for our reimplementation of this system are shown in line 4.

6.1 Multiclass classification

Our main evaluation involves predicting the correct discourse relation for each argument pair, from among the second-level relation types. Following Lin et al. (2009), we exclude five relation types that are especially rare: Condition, Pragmatic Condition, Pragmatic Contrast, Pragmatic Concession and Expression. In addition, about 2% of the implicit relations in the PDTB are annotated with more than one type. During training, each argument pair that is annotated with two relation types is considered as two training instances, each with one relation type. During testing, if the classifier assigns either of the two types, it is considered to be correct.
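This treatment of doubly annotated instances can be sketched directly; the data layout below is illustrative only.

```python
def expand_training(instances):
    """Each (argument_pair, relation_types) item becomes one training
    instance per annotated type (about 2% of implicit relations carry two)."""
    return [(pair, t) for pair, types in instances for t in types]

def accuracy(predictions, gold):
    """A prediction counts as correct if it matches any annotated type."""
    correct = sum(1 for pred, types in zip(predictions, gold) if pred in types)
    return correct / len(gold)

# Toy usage with hypothetical argument pairs and relation types
train = expand_training([(("arg1", "arg2"), ["Cause"]),
                         (("arg3", "arg4"), ["Cause", "Conjunction"])])
acc = accuracy(predictions=["Cause", "Conjunction"],
               gold=[["Cause"], ["Cause", "Conjunction"]])
```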

6.1.1 Baseline and competitive systems


Most common class

The most common class is Cause, accounting for 26.03% of the implicit discourse relations in the PDTB test set.

Additive word representations

Blacoe and Lapata (2012) show that simply adding word vectors can perform surprisingly well at assessing the meaning of short phrases. In this baseline, we represent each argument as a sum of its word representations, and estimate a bilinear prediction matrix.

Lin et al. (2009)

To our knowledge, the best published accuracy on multiclass classification of second-level implicit discourse relations is from Lin et al. (2009), who apply feature selection to obtain a set of lexical and syntactic features over the arguments.

Surface feature model

We re-implement the system of Lin et al. (2009), enabling a more precise comparison. The major difference is that we apply our online learning framework, rather than a batch classification algorithm.

Compositional

Finally, we report results for the method described in this paper. Since it is a distributional compositional approach to discourse relations, we name it disco2.

6.1.2 Results

Table 3 presents results for multiclass identification of second-level PDTB relations. As shown in lines 7 and 8, disco2 outperforms both baseline systems and the prior state of the art (line 3). The strongest performance is obtained by including the entity distributional semantics, with a 3.4% improvement over the accuracy reported by Lin et al. (2009). The improvement over our reimplementation of this work is even greater, which shows how the distributional representation provides additional value over the surface features. Because we have reimplemented this system, we can observe individual predictions, and can therefore use the more sensitive sign test for statistical significance. This test shows that even without entity semantics, disco2 significantly outperforms the surface feature model.

The latent dimension K is chosen on a development set (see Section 5). Test set performance for each setting of K is shown in Figure 3, with accuracies in a narrow range between 41.9% and 43.6%.

Figure 3: The performance of disco2 (full model) over different latent dimensions K.

6.1.3 Coreference

The contribution of entity semantics is shown in Table 3 by the accuracy differences between lines 5 and 6, and between lines 7 and 8. On the subset of relations in which the arguments share at least one coreferent entity, the difference is substantially larger: the accuracy of disco2 is 44.9% with entity semantics, and 42.2% without. Considering that only 29.1% of the relations in the PDTB test set include shared entities, it therefore seems likely that a more sensitive coreference system could yield further improvements for the entity-semantics model. Indeed, gold coreference annotation on the intersection between the PDTB and the OntoNotes corpus shows that 40-50% of discourse relations involve coreferent entities (Table 2). Evaluating our model on just this intersection, we find that the inclusion of entity semantics yields an improvement in accuracy from 37.1% to 38.8%.

6.2 Binary classification

Much of the recent work in PDTB relation detection has focused on binary classification, building and evaluating separate one-versus-all classifiers for each relation type [Pitler et al.2009, Park and Cardie2012, Biran and McKeown2013]. This work has focused on recognition of the four first-level relations, grouping EntRel with the Expansion relation. We follow this evaluation approach as closely as possible, using sections 2-20 of the PDTB as a training set, sections 0-1 as a development set for parameter tuning, and sections 21-22 for testing.

6.2.1 Classification method

We apply disco2 with the downward composition procedure and the same surface features listed in Section 5; this corresponds to the system reported in line 8 of Table 3. However, instead of employing a multiclass classifier over all four relations, we train four binary classifiers, one for each first-level discourse relation. We optimize the hyperparameters separately for each classifier (see Section 5 for details), by performing a grid search to optimize the F-measure on the development data. Following Pitler et al. (2009), we obtain a balanced training set by resampling training instances in each class until the numbers of positive and negative instances are equal.
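The balancing step can be sketched as follows, under the assumption that the smaller class is resampled with replacement until the two classes are of equal size:

```python
import random

def balance(positives, negatives, seed=0):
    """Resample the smaller class with replacement until the one-versus-all
    training set has equal numbers of positive and negative instances."""
    rng = random.Random(seed)
    small, large = sorted([positives, negatives], key=len)
    resampled = list(small) + [rng.choice(small) for _ in range(len(large) - len(small))]
    return (resampled, large) if small is positives else (large, resampled)

# Toy usage with hypothetical instance identifiers
balanced_pos, balanced_neg = balance(["p1", "p2"], ["n1", "n2", "n3", "n4", "n5"])
assert len(balanced_pos) == len(balanced_neg)
```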

6.2.2 Competitive systems

We compare our model with the published results from several competitive systems. Since we are comparing with previously published results, we focus on systems which use the predominant training/test split, with sections 2-20 for training and 21-22 for testing. This means we cannot compare with recent work from Li and Nenkova (2014), who use sections 20-24 for testing.


Pitler et al. (2009)

present a classification model using linguistically-informed features, such as polarity tags and Levin verb classes.

Zhou et al. (2010)

predict discourse connective words, and then use these predicted connectives as features in a downstream model to predict relations.

Park and Cardie (2012)

show that the performance on each relation can be improved by selecting a locally-optimal feature set.

Biran and McKeown (2013)

reweight word pair features using distributional statistics from the Gigaword corpus, obtaining denser aggregated score features.

6.2.3 Experimental results

                                Comparison       Contingency      Expansion        Temporal
                                F1      Acc      F1      Acc      F1      Acc      F1      Acc
Competitive systems
1. [Pitler et al.2009]          21.96   56.59    47.13   67.30    76.42   63.62    16.76   63.49
2. [Zhou et al.2010]            31.79   58.22    47.16   48.96    70.11   54.54    20.30   55.48
3. [Park and Cardie2012]        31.32   74.66    49.82   72.09    79.22   69.14    26.57   79.32
4. [Biran and McKeown2013]      25.40   63.36    46.94   68.09    75.87   62.84    20.23   68.35
Our work
5. disco2                       35.84   68.45    51.39   74.08    79.91   69.47    26.91   86.41
Table 4: Evaluation on first-level discourse relation identification. The results of the competitive systems are reprinted.

Table 4 presents the performance of the disco2 model and the published results of competitive systems. Our model achieves the best results on most metrics, achieving F-measure improvements of 4.52% on Comparison, 1.57% on Contingency, 0.69% on Expansion, and 0.34% on Temporal. These results are attained without performing per-relation feature selection, as in prior work.

7 Related Work

This paper draws mainly on previous work in discourse relation detection and compositional distributional semantics.

7.1 Discourse relations

Many models of discourse structure focus on relations between spans of text [Knott1996], including rhetorical structure theory (RST; Mann and Thompson, 1988), lexical tree-adjoining grammar for discourse (D-LTAG; Webber, 2004), and even centering theory [Grosz et al.1995], which posits relations such as continuation and smooth shift between adjacent spans. Consequently, the automatic identification of discourse relations has long been considered a key component of discourse parsing [Marcu1999].

We work within the D-LTAG framework, as annotated in the Penn Discourse Treebank (PDTB; Prasad et al., 2008), with the task of identifying implicit discourse relations. The seminal work on this task is from Pitler et al. (2009) and Lin et al. (2009). Pitler et al. (2009) focus on lexical features, including linguistically motivated word groupings such as Levin verb classes and polarity tags. Lin et al. (2009) identify four different feature categories, based on the raw text, the context, and syntactic parse trees; the same feature sets are used in later work on end-to-end discourse parsing [Lin et al.2014], which also includes components for identifying argument spans. Subsequent research has explored feature selection [Park and Cardie2012, Lin et al.2014], as well as combating feature sparsity by aggregating features [Biran and McKeown2013]. Our model includes surface features that are based on a reimplementation of the work of Lin et al. (2009), because they also undertake the task of multiclass relation classification; however, the techniques introduced in more recent research may be applicable and complementary to the distributional representation that constitutes the central contribution of this paper, and applying them could further improve performance.

Our contribution of entity-augmented distributional semantics is motivated by the intuition that entities play a central role in discourse structure. Centering theory draws heavily on referring expressions to entities over the discourse [Grosz et al.1995, Barzilay and Lapata2008]; similar ideas have been extended to rhetorical structure theory [Corston-Oliver1998, Cristea et al.1998]. In the specific case of identifying implicit PDTB relations, Louis et al. (2010b) explore a number of entity-based features, including grammatical role, syntactic realization, and information status. Despite the solid linguistic foundation for these features, they are shown to contribute little in comparison with more traditional word-pair features. This suggests that syntax and information status may not be enough, and that it is crucial to capture the semantics of each entity’s role in the discourse. Our approach does this by propagating distributional semantics from throughout the sentence into the entity span, using our up-down compositional procedure.

7.2 Compositional distributional semantics

Distributional semantics begins with the hypothesis that words and phrases that tend to appear in the same contexts have the same meaning [Firth1957]. The current renaissance of interest in distributional semantics can be attributed in part to the application of discriminative techniques, which emphasize predictive models [Bengio et al.2006, Baroni et al.2014b], rather than context-counting and matrix factorization [Landauer et al.1998, Turney et al.2010]. In addition, recent work has made practical the idea of propagating distributional information through linguistic structures [Smolensky1990, Collobert et al.2011]. In such models, the distributional representations and compositional operators can be fine-tuned by backpropagating supervision from task-specific labels, enabling accurate and fast models for a wide range of language technologies [Socher et al.2011, Socher et al.2013b, Chen and Manning2014].

The application of distributional semantics to discourse includes the use of latent semantic analysis for text segmentation [Choi et al.2001] and coherence assessment [Foltz et al.1998], as well as paraphrase detection by the factorization of matrices of distributional counts [Kauchak and Barzilay2006, Mihalcea et al.2006]. These approaches essentially compute a distributional representation in advance, and then use it alongside other features. In contrast, our approach follows more recent work in which the distributional representation is driven by supervision from discourse annotations. For example, Ji and Eisenstein (2014) show that RST parsing can be performed by learning task-specific word representations, which perform considerably better than generic word2vec representations [Mikolov et al.2013]. Li et al. (2014) propose a recursive neural network approach to RST parsing, which is similar to the upward pass in our model. However, prior work has not applied these ideas to the classification of implicit relations in the PDTB, and does not consider the role of entities. As we argue in the introduction, a single vector representation is insufficiently expressive, because it obliterates the entity chains that help to tie discourse together.

More generally, our entity-augmented distributional representation can be viewed in the context of recent literature on combining distributional and formal semantics: by representing entities, we are taking a small step away from purely distributional representations, and towards more traditional logical representations of meaning. In this sense, our approach is “bottom-up”, as we try to add a small amount of logical formalism to distributional representations; other approaches are “top-down”, softening purely logical representations by using distributional clustering [Poon and Domingos2009, Lewis and Steedman2013] or Bayesian non-parametrics [Titov and Klementiev2011] to obtain types for entities and relations. Still more ambitious would be to implement logical semantics within a distributional compositional framework [Clark et al.2011, Grefenstette2013]. At present, these combinations of logical and distributional semantics have been explored only at the sentence level. In generalizing such approaches to multi-sentence discourse, we argue that it will not be sufficient to compute distributional representations of sentences: a multitude of other elements, such as entities, will also have to be represented.

8 Conclusion

Discourse relations are determined by the meaning of their arguments, and progress on discourse parsing therefore requires computing representations of the argument semantics. We present a compositional method for inducing distributional representations not only of discourse arguments, but also of the entities that thread through the discourse. In this approach, semantic composition is applied up the syntactic parse tree to induce the argument-level representation, and then down the parse tree to induce representations of entity spans. Discourse arguments can then be compared in terms of their overall distributional representation, as well as by the representations of coreferent entity mentions. This enables the compositional operators to be learned by backpropagation from discourse annotations. This approach outperforms previous work on classification of implicit discourse relations in the Penn Discourse Treebank. Future work may consider joint models of discourse structure and coreference, as well as representations for other discourse elements, such as event coreference and shallow semantics.

References

  • [Baroni et al.2014a] Marco Baroni, Raffaella Bernardi, and Roberto Zamparelli. 2014a. Frege in space: A program for compositional distributional semantics. Linguistic Issues in Language Technologies.
  • [Baroni et al.2014b] Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014b. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1.
  • [Barzilay and Lapata2008] Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34.
  • [Bengio et al.1994] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166.
  • [Bengio et al.2006] Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Fréderic Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, pages 137–186. Springer.
  • [Bengio2012] Yoshua Bengio. 2012. Practical recommendations for gradient-based training of deep architectures. In Neural Networks: Tricks of the Trade, pages 437–478. Springer.
  • [Biran and McKeown2013] Or Biran and Kathleen McKeown. 2013. Aggregated word pair features for implicit discourse relation disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 69–73.
  • [Blacoe and Lapata2012] William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 546–556.
  • [Bottou1998] Léon Bottou. 1998. Online Algorithms and Stochastic Approximations. In David Saad, editor, Online Learning and Neural Networks. Cambridge University Press, Cambridge, UK.
  • [Chen and Manning2014] Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP).
  • [Choi et al.2001] Freddy YY Choi, Peter Wiemer-Hastings, and Johanna Moore. 2001. Latent semantic analysis for text segmentation. In Proceedings of EMNLP.
  • [Clark et al.2011] Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. 2011. Mathematical foundations for a compositional distributed model of meaning. Linguistic Analysis, 36(1-4):345–384.
  • [Cohen et al.2014] Shay B Cohen, Karl Stratos, Michael Collins, Dean P Foster, and Lyle Ungar. 2014. Spectral learning of latent-variable PCFGs: Algorithms and sample complexity. Journal of Machine Learning Research, 15:2399–2449.
  • [Collobert et al.2011] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493–2537.
  • [Corston-Oliver1998] Simon Corston-Oliver. 1998. Beyond string matching and cue phrases: Improving efficiency and coverage in discourse analysis. In The AAAI Spring Symposium on Intelligent Text Summarization, pages 9–15.
  • [Cristea et al.1998] Dan Cristea, Nancy Ide, and Laurent Romary. 1998. Veins theory: A model of global discourse cohesion and coherence. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 281–285.
  • [Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159.
  • [Durrett and Klein2013] Greg Durrett and Dan Klein. 2013. Easy Victories and Uphill Battles in Coreference Resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • [Firth1957] J. R. Firth. 1957. Papers in Linguistics 1934-1951. Oxford University Press.
  • [Foltz et al.1998] Peter W Foltz, Walter Kintsch, and Thomas K Landauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse processes, 25(2-3):285–307.
  • [Goller and Kuchler1996] Christoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In Neural Networks, IEEE International Conference on, pages 347–352. IEEE.
  • [Grefenstette2013] Edward Grefenstette. 2013. Towards a Formal Distributional Semantics: Simulating Logical Calculi with Tensors, April.
  • [Grosz et al.1995] Barbara J Grosz, Scott Weinstein, and Aravind K Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational linguistics, 21(2):203–225.
  • [Ji and Eisenstein2014] Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the Association for Computational Linguistics (ACL), Baltimore, MD.
  • [Kauchak and Barzilay2006] David Kauchak and Regina Barzilay. 2006. Paraphrasing for automatic evaluation. In Proceedings of NAACL, pages 455–462. Association for Computational Linguistics.
  • [Klein and Manning2003] Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 423–430. Association for Computational Linguistics.
  • [Knott1996] Alistair Knott. 1996. A data-driven methodology for motivating a set of coherence relations. Ph.D. thesis, The University of Edinburgh.
  • [Landauer et al.1998] Thomas Landauer, Peter W. Foltz, and Darrel Laham. 1998. Introduction to Latent Semantic Analysis. Discourse Processes, 25:259–284.
  • [Lari and Young1990] Karim Lari and Steve J Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer speech & language, 4(1):35–56.
  • [LeCun et al.2012] Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. 2012. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–48. Springer.
  • [Lewis and Steedman2013] Mike Lewis and Mark Steedman. 2013. Combined Distributional and Logical Semantics. TACL, 1:179–192.
  • [Li and Nenkova2014] Junyi Jessy Li and Ani Nenkova. 2014. Reducing sparsity improves the recognition of implicit discourse relations. In Proceedings of SIGDIAL, pages 199–207.
  • [Li et al.2014] Jiwei Li, Rumeng Li, and Eduard Hovy. 2014. Recursive deep models for discourse parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2061–2069.
  • [Lin et al.2009] Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 343–351. Association for Computational Linguistics.
  • [Lin et al.2011] Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically Evaluating Text Coherence Using Discourse Relations. In Proceedings of ACL, pages 997–1006, Portland, Oregon, USA, June. Association for Computational Linguistics.
  • [Lin et al.2014] Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-styled end-to-end discourse parser. Natural Language Engineering, pages 1–34.
  • [Louis et al.2010a] Annie Louis, Aravind Joshi, and Ani Nenkova. 2010a. Discourse indicators for content selection in summarization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 147–156. Association for Computational Linguistics.
  • [Louis et al.2010b] Annie Louis, Aravind Joshi, Rashmi Prasad, and Ani Nenkova. 2010b. Using entity features to classify implicit discourse relations. In Proceedings of the SIGDIAL, pages 59–62, Tokyo, Japan, September. Association for Computational Linguistics.
  • [Mann and Thompson1988] William Mann and Sandra Thompson. 1988. Rhetorical Structure Theory: Toward a Functional Theory of Text Organization. Text, 8(3):243–281.
  • [Mann1984] William Mann. 1984. Discourse structures for text generation. In Proceedings of the 10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics, pages 367–375. Association for Computational Linguistics.
  • [Marcu1999] Daniel Marcu. 1999. A decision-based approach to rhetorical parsing. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 365–372. Association for Computational Linguistics.
  • [Mihalcea et al.2006] Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In AAAI.
  • [Mikolov et al.2013] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic Regularities in Continuous Space Word Representations. In NAACL-HLT, pages 746–751, Atlanta, Georgia, June. Association for Computational Linguistics.
  • [Mooney2014] Raymond J. Mooney. 2014. Semantic parsing: Past, present, and future. Presentation slides from the ACL Workshop on Semantic Parsing.
  • [Park and Cardie2012] Joonsuk Park and Claire Cardie. 2012. Improving Implicit Discourse Relation Recognition Through Feature Set Optimization. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 108–112, Seoul, South Korea, July. Association for Computational Linguistics.
  • [Pascanu et al.2012] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063.
  • [Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP.
  • [Petrov et al.2006] Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the Association for Computational Linguistics (ACL).
  • [Pitler et al.2008] Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind Joshi. 2008. Easily Identifiable Discourse Relations. In Coling, pages 87–90, Manchester, UK, August. Coling 2008 Organizing Committee.
  • [Pitler et al.2009] Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic Sense Prediction for Implicit Discourse Relations in Text. In ACL-IJCNLP.
  • [Poon and Domingos2009] Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1–10. Association for Computational Linguistics.
  • [Pradhan et al.2007] Sameer S Pradhan, Eduard Hovy, Mitch Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2007. Ontonotes: A unified relational semantic representation. International Journal of Semantic Computing, 1(04):405–419.
  • [Prasad et al.2008] Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In LREC.
  • [Smolensky1990] Paul Smolensky. 1990. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial intelligence, 46(1):159–216.
  • [Socher et al.2011] Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning, pages 129–136.
  • [Socher et al.2013a] Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013a. Reasoning With Neural Tensor Networks for Knowledge Base Completion. In Advances in Neural Information Processing Systems (NIPS).
  • [Socher et al.2013b] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
  • [Somasundaran et al.2009] Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 170–179. Association for Computational Linguistics.
  • [Titov and Klementiev2011] Ivan Titov and Alexandre Klementiev. 2011. A bayesian model for unsupervised semantic parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1445–1455. Association for Computational Linguistics.
  • [Turney et al.2010] Peter D Turney, Patrick Pantel, et al. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37(1):141–188.
  • [Van der Maaten and Hinton2008] Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(85):2579–2605.
  • [Webber et al.1999] Bonnie Webber, Alistair Knott, Matthew Stone, and Aravind Joshi. 1999. Discourse relations: A structural and presuppositional account using lexicalised tag. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 41–48. Association for Computational Linguistics.
  • [Webber2004] Bonnie Webber. 2004. D-LTAG: extending lexicalized TAG to discourse. Cognitive Science, 28(5):751–779, September.
  • [Yoshida et al.2014] Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based Discourse Parser for Single-Document Summarization. In EMNLP.
  • [Zhou et al.2010] Zhi-Min Zhou, Yu Xu, Zheng-Yu Niu, Man Lan, Jian Su, and Chew Lim Tan. 2010. Predicting discourse connectives for implicit discourse relation recognition. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1507–1514. Association for Computational Linguistics.