Dynamic Entity Representations in Neural Language Models

08/02/2017 ∙ Yangfeng Ji et al. ∙ University of Washington, University of Heidelberg

Understanding a long document requires tracking how entities are introduced and evolve over time. We present a new type of language model, EntityNLM, that can explicitly model entities, dynamically update their representations, and contextually generate their mentions. Our model is generative and flexible; it can model an arbitrary number of entities in context while generating each entity mention at an arbitrary length. In addition, it can be used for several different tasks such as language modeling, coreference resolution, and entity prediction. Experimental results with all these tasks demonstrate that our model consistently outperforms strong baselines and prior work.




1 Introduction

Understanding a narrative requires keeping track of its participants over a long-term context. As a story unfolds, the information a reader associates with each character in a story increases, and expectations about what will happen next change accordingly. At present, models of natural language do not explicitly track entities; indeed, in today’s language models, entities are no more than the words used to mention them.

In this paper, we endow a generative language model with the ability to build up a dynamic representation of each entity mentioned in the text. Our language model defines a probability distribution over the whole text, with a distinct generative story for entity mentions. It explicitly groups those mentions that corefer and associates with each entity a continuous representation that is updated by every contextualized mention of the entity, and that in turn affects the text that follows.

Our method builds on recent advances in representation learning, creating local probability distributions from neural networks. It can be understood as a recurrent neural network language model, augmented with random variables for entity mentions that capture coreference, and with dynamic representations of entities. We estimate the model’s parameters from data that is annotated with entity mentions and coreference.

[John] wanted to go to [the coffee shop] in [downtown Copenhagen]. [He] was told that [it] sold [the best beans].
Figure 1: EntityNLM explicitly tracks entities in a text, including coreferring relationships between mentions like [John] and [He]. As a language model, it is designed to predict that a coreferent of [the coffee shop] is likely to follow "told that," that the referring expression will be "it," and that "sold the best beans" is likely to come next, by using entity information encoded in the dynamic distributed representation.

Because our model is generative, it can be queried in different ways. Marginalizing everything except the words, it can play the role of a language model. In §5.1, we find that it outperforms both a strong 5-gram language model and a strong recurrent neural network language model on the English test set of the CoNLL 2012 shared task on coreference evaluation (Pradhan et al., 2012). The model can also identify entity mentions and coreference relationships among them. In §5.2, we show that it can easily be used to add a performance boost to a strong coreference resolution system, by reranking a list of k-best candidate outputs. On the CoNLL 2012 shared task test set, the reranked outputs are significantly better than the original top choices from the same system. Finally, the model can perform entity cloze tasks. As presented in §5.3, it achieves state-of-the-art performance on the InScript corpus (Modi et al., 2017).

2 Model

A language model defines a distribution over sequences of word tokens; let X_t denote the random variable for the t-th word in the sequence, x_t the value of X_t, and x_t's distributed representation (embedding) be written x_t in boldface context below. Our starting point for language modeling is a recurrent neural network (Mikolov et al., 2010), which defines

p(X_t = v | X_{1:t-1}) ∝ exp(w_v^⊤ h_{t-1} + b_v),   (1)
h_t = lstm(h_{t-1}, x_t),   (2)

where w_v and b_v are parameters of the model (along with the word embeddings), lstm is the widely used recurrent function known as "long short-term memory" (Hochreiter and Schmidhuber, 1997), and h_t is an LSTM hidden state encoding the history of the sequence up to the t-th word.

Great success has been reported for this model (Zaremba et al., 2015), which posits nothing explicitly about the words appearing in the text sequence. Its generative story is simple: the value of each X_t is randomly chosen conditioned on the vector h_{t-1} encoding its history.
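As a toy concretization of this generative story, the sketch below implements one step of a simplified recurrent language model in numpy, with a plain tanh recurrence standing in for the LSTM; all sizes and weights here are arbitrary illustrative values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 4                      # toy vocabulary and hidden sizes
emb = rng.normal(size=(V, d))    # word embeddings
W_h = rng.normal(size=(d, d))    # recurrent weights (simplified stand-in for the LSTM)
W_o = rng.normal(size=(V, d))    # output projection (rows are the w_v vectors)
b = np.zeros(V)                  # output biases b_v

def step(h, word_id):
    """Advance the state with word x_t, return the distribution over x_{t+1}."""
    h = np.tanh(W_h @ h + emb[word_id])   # simplified recurrence in place of Eq. 2
    logits = W_o @ h + b
    p = np.exp(logits - logits.max())     # softmax over the vocabulary (Eq. 1)
    return h, p / p.sum()

h = np.zeros(d)
h, p = step(h, word_id=2)
```

Each step thus produces a proper distribution over the next word, conditioned only on the history vector.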

2.1 Additional random variables and representations for entities

To introduce our model, we associate with each word an additional set of random variables. At position t:

  • R_t is a binary random variable that indicates whether x_t belongs to an entity mention (R_t = 1) or not (R_t = 0). Though not explored here, this is easily generalized to a categorical variable for the type of the entity (e.g., person, organization, etc.).

  • L_t ∈ {1, ..., ℓ_max} is a categorical random variable if R_t = 1, which indicates the number of remaining words in this mention, including the current word (i.e., L_t = 1 for the last word in any mention). ℓ_max is a predefined maximum length, fixed to 25, an empirical value derived from the training corpora used in the experiments. If R_t = 0, then L_t = 1. We denote the value of L_t by ℓ_t.

  • E_t is the index of the entity referred to, if R_t = 1. Its range is {1, ..., |ℰ_{t-1}| + 1}, i.e., the indices of all previously mentioned entities plus an additional value for a new entity. Thus ℰ_t starts empty and grows monotonically with t, allowing an arbitrary number of entities to be mentioned. We denote the value of E_t by e_t. If R_t = 0, then E_t is fixed to a special value ∅.

The values of these random variables for our running example are shown in Figure 2.

In addition to using symbolic variables to encode mentions and coreference relationships, we maintain a vector representation of each entity that evolves over time. For the i-th entity, let e_{i,t} be its representation at time t. These vectors are different from word vectors (x), in that they are not parameters of the model. They are similar to history representations (h_t), in that they are derived through parameterized functions of the random variables' values, which we describe in the next subsections.

word: John  wanted  to  go  to  the  coffee  shop  in  downtown  Copenhagen  .
R:    1     0       0   0   0   1    1       1     0   1         1           0
E:    1     ∅       ∅   ∅   ∅   2    2       2     ∅   3         3           ∅
L:    1     1       1   1   1   3    2       1     1   2         1           1

word: He  was  told  that  it  sold  the  best  beans  .
R:    1   0    0     0     1   0     1    1     1      0
E:    1   ∅    ∅     ∅     2   ∅     4    4     4      ∅
L:    1   1    1     1     1   1     3    2     1      1

Figure 2: The random variable values in EntityNLM for the running example in Figure 1 (E_t = ∅ wherever R_t = 0).
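The bookkeeping invariants behind Figure 2 can be stated directly in code. The sketch below encodes the first sentence's annotation as (word, R, L, E) tuples (with E = None for non-mention tokens) and checks the reading of §2.1: non-mention tokens carry L = 1, and inside a mention L counts down while E stays constant. This is an illustration of the annotation scheme, not code from the paper.

```python
# Annotation of the first sentence of Figure 2 as (word, R, L, E) tuples;
# E is None where R = 0.
tokens = [
    ("John", 1, 1, 1), ("wanted", 0, 1, None), ("to", 0, 1, None),
    ("go", 0, 1, None), ("to", 0, 1, None), ("the", 1, 3, 2),
    ("coffee", 1, 2, 2), ("shop", 1, 1, 2), ("in", 0, 1, None),
    ("downtown", 1, 2, 3), ("Copenhagen", 1, 1, 3), (".", 0, 1, None),
]

def check(tokens):
    """Verify the invariants of Section 2.1 on an annotated token sequence."""
    for i, (_, r, l, e) in enumerate(tokens):
        if r == 0:
            assert l == 1 and e is None       # non-mention tokens
        elif l > 1:                           # mention continues on the next token:
            _, nr, nl, ne = tokens[i + 1]
            assert nr == 1 and nl == l - 1 and ne == e
    return True
```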

2.2 Generative story

The generative story for the word (and other variables) at timestep t is as follows; forward-referenced equations are in the detailed discussion that follows.

  1. If ℓ_{t-1} = 1 (i.e., x_t is not continuing an already-started entity mention):

    • Choose r_t (Equation 3).

    • If r_t = 0, set ℓ_t = 1 and e_t = ∅; then go to step 3. Otherwise:

      • If there is no embedding for the new candidate entity with index |ℰ_{t-1}| + 1, create one following §2.4.

      • Select the entity e_t from {1, ..., |ℰ_{t-1}| + 1} (Equation 4).

      • Set e_current = e_{e_t, t-1}, the embedding of entity e_t before timestep t.

      • Select the length of the mention, ℓ_t (Equation 5).

  2. Otherwise,

    • Set r_t = 1, ℓ_t = ℓ_{t-1} − 1, and e_t = e_{t-1}.

  3. Sample x_t from the word distribution given the LSTM hidden state h_{t-1} and the current (or most recent) entity embedding e_current (Equation 6). (If r_t = 0, then e_current still represents the most recently mentioned entity.)

  4. Advance the RNN, i.e., feed it the word vector x_t to compute h_t (Equation 2).

  5. If r_t = 1, update e_{e_t, t-1} using h_t, then set e_current = e_{e_t, t}. Details of the entity updating are given in §2.4.

  6. For every other entity i ≠ e_t, set e_{i, t} = e_{i, t-1} (i.e., no changes to the other entities' representations).

Note that at any given time step t, e_current always contains the most recent vector representation of the most recently mentioned entity.
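To make the control flow of steps 1-6 concrete, here is a runnable toy version of the generative story in which stub random choices stand in for Equations 3-5; word emission, the RNN update, and the entity-embedding updates (steps 3-6) are omitted, so only the R/L/E bookkeeping is modeled. All probabilities and length bounds are illustrative assumptions.

```python
import random

random.seed(7)
ELL_MAX = 25   # predefined maximum mention length (Section 2.1)

def sample_generative_story(n_steps=20):
    """Toy run of steps 1-6; stub distributions replace Equations 3-5."""
    entities = []            # indices of entities mentioned so far
    l_prev, e_prev = 1, None
    trace = []
    for _ in range(n_steps):
        if l_prev == 1:                                   # step 1: no mention in progress
            r = 1 if random.random() < 0.3 else 0         # stub for Equation 3
            if r == 0:
                l, e = 1, None                            # r = 0: L = 1, E = empty
            else:
                new_idx = len(entities) + 1               # candidate new entity
                e = random.choice(entities + [new_idx])   # stub for Equation 4
                if e == new_idx:
                    entities.append(new_idx)
                l = random.randint(1, min(4, ELL_MAX))    # stub for Equation 5
        else:                                             # step 2: continue the mention
            r, l, e = 1, l_prev - 1, e_prev
        trace.append((r, l, e))
        l_prev, e_prev = l, e                             # carry state to the next step
    return trace, entities

trace, entities = sample_generative_story()
```

The invariant to notice: whenever ℓ_t > 1, the next step is forced to continue the same mention with ℓ_{t+1} = ℓ_t − 1 and the same entity index, and entity indices are allocated contiguously.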

A generative model with a similar hierarchical structure was used by Haghighi and Klein (2010) for coreference resolution. Our approach differs in two important ways. First, our model defines a joint distribution over all of the text, not just the entity mentions. Second, we use representation learning rather than Bayesian nonparametrics, allowing natural integration with the language model.

2.3 Probability distributions

The generative story above referenced several parametric distributions defined based on vector representations of histories and entities. These are defined as follows.

For R_t ∈ {0, 1},

p(R_t = 1 | h_{t-1}) ∝ exp(h_{t-1}^⊤ W_r r),   (3)

where r is the parameterized embedding associated with R_t = 1, which paves the way for exploring entity type representations in future work, and W_r is a parameter matrix for the bilinear score between h_{t-1} and r.

To allow the possibility of predicting a new entity, we need an entity embedding beforehand with index |ℰ_{t-1}| + 1, which is randomly sampled according to Equation 7. Then, for every e ∈ {1, ..., |ℰ_{t-1}| + 1}:

p(E_t = e | R_t = 1, h_{t-1}) ∝ exp(h_{t-1}^⊤ W_entity e_{e, t-1} + w_dist^⊤ φ_dist(e)),   (4)

where e_{e, t-1} is the embedding of entity e at time step t − 1 and W_entity is the weight matrix for predicting entities using their continuous representations. The score above is normalized over the values {1, ..., |ℰ_{t-1}| + 1}. φ_dist(e) represents a vector of distance features associated with t and the mentions of the existing entities. Hence, two information sources are used to predict the next entity: (i) contextual information h_{t-1}, and (ii) distance features from the current mention to the closest mention of each previously mentioned entity. φ_dist(e) = 0 if e is a new entity. This term could also be extended to include other surface-form features for coreference resolution (Martschat and Strube, 2015; Clark and Manning, 2016b).
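A minimal numpy sketch of the entity-selection distribution in Equation 4 follows. The dimensions, weights, and the two-dimensional distance feature function are made-up illustrative values; the new-entity candidate receives zero distance features, as in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
h = rng.normal(size=d)                            # h_{t-1}
W_ent = rng.normal(size=(d, d))                   # bilinear weights (toy values)
entities = [rng.normal(size=d) for _ in range(3)] # e_{e, t-1} for existing entities
e_new = rng.normal(size=d)                        # sampled new-entity embedding (Eq. 7)
w_dist = rng.normal(size=2)                       # weights for distance features

def phi_dist(gap):
    """Toy distance features; zeros for a new entity (gap is None)."""
    return np.zeros(2) if gap is None else np.array([gap, float(gap ** 2)])

t, last_mention = 10, [4, 9, 7]                   # last mention position of each entity
gaps = [t - m for m in last_mention] + [None]     # None marks the new-entity candidate
cands = entities + [e_new]
scores = np.array([h @ W_ent @ e + w_dist @ phi_dist(g)
                   for e, g in zip(cands, gaps)])
p = np.exp(scores - scores.max())
p /= p.sum()                                      # normalize over all candidates
```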

For the entity e_t chosen via Equation 4, the distribution over its mention length is

p(L_t = ℓ | h_{t-1}, e_{e_t, t-1}) = softmax(W_length [h_{t-1}; e_{e_t, t-1}]),   (5)

where e_{e_t, t-1} is the most recent embedding of entity e_t, not yet updated with h_t. The intuition is that e_{e_t, t-1} helps the contextual information h_{t-1} to select the remaining length of the mention of entity e_t. W_length is the weight matrix for length prediction, with ℓ_max rows.

Finally, the probability of word x as the next token is jointly modeled by h_{t-1} and the vector representation of the most recently mentioned entity, e_current:

p(X_t = x | h_{t-1}, e_current) = cfsm(h_{t-1} + W_x e_current),   (6)

where W_x is a transformation matrix that adjusts the dimensionality of e_current. cfsm is a class-factorized softmax function (Goodman, 2001; Baltescu and Blunsom, 2015). It uses a two-step prediction with predefined word classes instead of direct prediction over the whole vocabulary, reducing the time complexity from linear to logarithmic in the vocabulary size.
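The two-step class-factorized prediction can be sketched as follows: the word probability is the product of the class probability and the within-class word probability. The class partition, sizes, and weights below are toy values, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6
classes = {0: [0, 1, 2], 1: [3, 4], 2: [5, 6, 7]}  # toy classes over an 8-word vocab
W_class = rng.normal(size=(len(classes), d))       # class-prediction weights
W_word = {c: rng.normal(size=(len(ws), d)) for c, ws in classes.items()}

def softmax(z):
    z = np.exp(z - z.max())
    return z / z.sum()

def cfsm(h):
    """Two-step prediction: p(word) = p(class | h) * p(word | class, h)."""
    p_class = softmax(W_class @ h)
    p = np.zeros(8)
    for c, words in classes.items():
        p[words] = p_class[c] * softmax(W_word[c] @ h)
    return p

p = cfsm(rng.normal(size=d))
```

Because the within-class distributions each sum to one, the class probabilities guarantee that the full vocabulary distribution is properly normalized.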

2.4 Dynamic entity representations

Before predicting the entity at step t, we need an embedding for the new candidate entity with index |ℰ_{t-1}| + 1 if it does not already exist. The new embedding is generated randomly, according to a normal distribution, then projected onto the unit ball:

u ~ N(r, σ²I),   e_{|ℰ_{t-1}|+1, t-1} = u / ‖u‖₂,   (7)

where σ is a small fixed value. The time step t − 1 in e_{·, t-1} signals that the current embedding contains no information from step t, although it will be updated once we have h_t if r_t = 1. r is the parameterized embedding for R_t = 1, which is jointly optimized with the other parameters and is expected to encode some generic information about entities. All initial entity embeddings are therefore centered on the mean r, which is also used in Equation 3 to determine whether the next token belongs to an entity mention. Another choice would be to initialize entity embeddings with a zero vector, although our preliminary experiments showed this did not work as well as the random initialization in Equation 7.
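The initialization of a new entity embedding is a few lines: sample near the mean r, then project onto the unit ball. The value of σ and the dimension below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 16
r = 0.01 * rng.normal(size=d)   # stand-in for the learned embedding r
sigma = 0.01                    # illustrative value, not from the paper

def init_entity_embedding(r, sigma):
    """Sample u ~ N(r, sigma^2 I), then project onto the unit ball (Eq. 7)."""
    u = r + sigma * rng.normal(size=r.shape)
    return u / np.linalg.norm(u)

e = init_entity_embedding(r, sigma)
```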

Assume r_t = 1 and e_t = i, which means x_t is part of a mention of entity i. Then we need to update e_{i, t-1} based on the new information we have from h_t. The new embedding is a convex combination of the old embedding e_{i, t-1} and the current LSTM hidden state h_t, with the interpolation δ_t determined dynamically by a bilinear function:

δ_t = σ(h_t^⊤ W_δ e_{i, t-1}),
u = δ_t e_{i, t-1} + (1 − δ_t) h_t,   e_{i, t} = u / ‖u‖₂,   (8)

where σ(·) is the logistic sigmoid and W_δ is a parameter matrix. This updating scheme is applied at each step in which entity i is mentioned. The projection in the last step keeps the magnitude of the entity embedding fixed, avoiding numeric overflow. A similar updating scheme was used by Henaff et al. (2016) for the "memory blocks" in their recurrent entity network models. The difference is that their model updates all memory blocks at each time step, whereas our scheme only updates the selected entity at time step t.
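A sketch of the gated update: a sigmoid of a bilinear score gates the convex combination of the old embedding and the current hidden state, followed by re-projection onto the unit ball. Dimensions and weights are toy values.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 16
W_delta = rng.normal(size=(d, d))   # bilinear gate parameters (toy values)

def update_entity(e_prev, h_t):
    """delta = sigmoid(h_t^T W_delta e_prev); mix the old embedding with h_t,
    then re-project onto the unit ball (Eq. 8)."""
    delta = 1.0 / (1.0 + np.exp(-(h_t @ W_delta @ e_prev)))
    u = delta * e_prev + (1.0 - delta) * h_t
    return u / np.linalg.norm(u)

e_prev = rng.normal(size=d)
e_prev /= np.linalg.norm(e_prev)
e_new = update_entity(e_prev, rng.normal(size=d))
```

The final normalization is what keeps repeated updates from growing the embedding's magnitude.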

2.5 Training objective

The model is trained to maximize the log of the joint probability of X, R, L, and E:

ℓ(θ) = log p(X, R, L, E; θ),   (9)

where θ is the collection of all parameters in this model. Based on the formulation in §2.3, Equation 9 decomposes as the sum of the conditional log-probabilities of each random variable at each time step.

This objective requires the training data annotated as in Figure 2. We do not assume that these variables are observed at test time.

3 Implementation Details

Our model is implemented with DyNet (Neubig et al., 2017) and available at https://github.com/jiyfeng/entitynlm. We use AdaGrad (Duchi et al., 2011) and ADAM (Kingma and Ba, 2014) with its default learning rate as candidate optimizers for our model. For all parameters, we use the initialization tricks recommended by Glorot and Bengio (2010). To avoid overfitting, we also employ dropout (Srivastava et al., 2014), tuning the rate over a set of candidate values.

In addition, there are two tunable hyperparameters of EntityNLM: the size of the word embeddings and the dimension of the LSTM hidden states, for both of which we consider a range of values. We also experiment with either pretrained GloVe word embeddings (Pennington et al., 2014) or randomly initialized word embeddings (updated during training). For all experiments, the best configuration of hyperparameters and optimizer is selected based on the objective value on the development data.

4 Evaluation Tasks and Datasets

We evaluate our model in diverse use scenarios: (i) language modeling, (ii) coreference resolution, and (iii) entity prediction. The language modeling evaluation shows how the internal entity representation, when marginalized out, can improve the perplexity of language models. The coreference resolution experiment shows how our new language model can improve a competitive coreference resolution system. Finally, we employ an entity cloze task to demonstrate the model's generative performance in predicting the next entity given the previous context.

We use two datasets for the three evaluation tasks. For language modeling and coreference resolution, we use the English benchmark data from the CoNLL 2012 shared task on coreference resolution (Pradhan et al., 2012). We employ the standard training/development/test split, which includes 2,802/343/348 documents with roughly 1M/150K/150K tokens, respectively. We follow the coreference annotation in the CoNLL dataset to extract entities and ignore the singleton mentions in texts.

For entity prediction, we employ the InScript corpus created by Modi et al. (2017). It consists of 10 scenarios, including grocery shopping, taking a flight, etc. It includes 910 crowdsourced simple narrative texts in total; 18 stories were excluded due to labeling problems (Modi et al., 2017). On average, each story has 12.4 sentences, 24.9 entities, and 217.2 tokens. Each entity mention is labeled with its entity index. We use the same training/development/test split as Modi et al. (2017), with 619, 91, and 182 texts, respectively.

Data preprocessing

For the CoNLL dataset, we lowercase all tokens and remove any token that only contains a punctuation symbol unless it is in an entity mention. We also replace numbers in the documents with the special token num and low-frequency word types with unk. The vocabulary size of the CoNLL data after preprocessing is 10K. For entity mention extraction, in the CoNLL dataset, one entity mention could be embedded in another. For embedded mentions, only the enclosing entity mention is kept. We use the same preprocessed data for both language modeling and coreference resolution evaluation.

For the InScript corpus, we apply similar data preprocessing to lowercase all tokens, and we replace low-frequency word types with unk. The vocabulary size after preprocessing is 1K.

5 Experiments

In this section, we present the experimental results on the three evaluation tasks.

5.1 Language modeling

Task description.

The goal of language modeling is to compute the marginal probability

p(X) = Σ_{R, L, E} p(X, R, L, E).   (10)

However, because of the long-range dependencies in recurrent neural networks, the space of assignments to {R, L, E} grows exponentially during inference. We therefore use importance sampling to approximate the marginal distribution of X. Specifically, with N samples {(r^(i), l^(i), e^(i))} drawn from a proposal distribution q(R, L, E | X), the approximate marginal probability is

p(X) ≈ (1/N) Σ_{i=1}^{N} p(X, r^(i), l^(i), e^(i)) / q(r^(i), l^(i), e^(i) | X).   (11)
A similar idea of using importance sampling for language modeling evaluation has been used by Dyer et al. (2016).
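In log space, the importance-sampling estimate of Equation 11 can be computed stably with a max-shift. The helper below is generic (log_joint is log p(x, y) and log_q is log q(y | x) for an abstract latent y) and is not tied to the paper's implementation.

```python
import math

def log_marginal(log_joint, log_q, samples):
    """log p(x) ~= log( (1/N) * sum_i p(x, y_i) / q(y_i | x) ),
    computed in log space with a max-shift for numerical stability."""
    ws = [log_joint(y) - log_q(y) for y in samples]
    m = max(ws)
    return m + math.log(sum(math.exp(w - m) for w in ws) / len(ws))
```

As a sanity check: with a uniform proposal over a two-valued latent and one sample of each value, the estimator recovers the exact marginal p(x, 0) + p(x, 1).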

For language modeling evaluation, we train our model on the training set of the CoNLL 2012 dataset with coreference annotation. On the test data, we treat the coreference structure as latent and use importance sampling to approximate the marginal distribution of X. For each document, the model draws N samples from the proposal distribution, discussed next.

Proposal distribution.

To implement q, we use a discriminative variant of EntityNLM that conditions on the current word when predicting the entity-related variables at the same time step. Specifically, in the generative story described in §2.2, we delete step 3 (words are not generated, but conditioned upon), move step 4 before step 1, and replace h_{t-1} with h_t in the steps that predict the entity type r_t, entity e_t, and mention length ℓ_t. This model variant provides a conditional probability q(R_t, L_t, E_t | X_{1:t}) at each timestep.


We compare the language modeling performance with two competitive baselines: a 5-gram language model implemented in KenLM (Heafield et al., 2013) and an RNNLM with LSTM units implemented in DyNet (Neubig et al., 2017). For the RNNLM, we use the same hyperparameters described in §3 and grid search on the development data to find the best configuration.


The results of EntityNLM and the baselines on both development and test data are reported in Table 1. For EntityNLM, we use the objective value on the development set (with coreference annotation) to select the best model configuration and report the best number. On the test data, we calculate perplexity by marginalizing out all other random variables using Equation 11; only the log probabilities of word predictions contribute to the perplexity numbers. The difference is that coreference information is used only for training EntityNLM, not at test time. All three models reported in Table 1 share the same vocabulary, so the numbers on the test data are directly comparable. As shown in Table 1, EntityNLM outperforms both the 5-gram language model and the RNNLM on the test data. Better language modeling performance could be expected if we also used the marginalization of Equation 11 on the development data to select the best configuration; however, we prefer the same experimental setup for all experiments, rather than customizing the model for each individual task.

Model Perplexity
1. 5-gram LM 138.37
2. RNNLM 134.79
3. EntityNLM 131.64
Table 1: Language modeling evaluation on the test set of the English section of the CoNLL 2012 shared task. As mentioned in §4, the vocabulary size is 10K. EntityNLM does not require any coreference annotation on the test data.

5.2 Coreference reranking

Task description.

We show how EntityNLM, which allows efficient computation of the probability p(X, R, L, E), can be used as a reranker to improve a competitive coreference resolution system due to Martschat and Strube (2015). This is analogous to the reranking approach used in machine translation (Shen et al., 2004). The specific formulation is as follows:

(R*, L*, E*) = argmax_{(R, L, E) ∈ 𝒞(X)} log p(X, R, L, E),   (12)

where 𝒞(X) is the k-best list for a given document. In our experiments, k = 100. To the best of our knowledge, the problem of obtaining k-best outputs of a coreference resolution system has not been studied before.

Approximate k-best decoding.

We rerank the output of a system that predicts an antecedent for each mention by relying on pairwise scores for mention pairs. This is the dominant approach for coreference resolution (Martschat and Strube, 2015; Clark and Manning, 2016a). The predictions induce an antecedent tree, which represents antecedent decisions for all mentions in the document. Coreference chains are obtained by transitive closure over the antecedent decisions encoded in the tree. A mention also can have an empty mention as antecedent, which denotes that the mention is non-anaphoric.

To extend Martschat and Strube's greedy decoding approach to k-best inference, we cannot simply take the k highest-scoring trees according to the sum of edge scores, because different trees may represent the same coreference chains. Instead, we use a heuristic that creates an approximate k-best list of candidate antecedent trees. The idea is to generate trees from the original system output by considering suboptimal antecedent choices that lead to different coreference chains. For each mention pair (m, n), we compute the difference between its score and the score of the optimal antecedent choice for m. We then sort pairs in ascending order of this difference and iterate through the list. For each pair (m, n), we create a tree T_{m,n} by replacing the antecedent of m in the original system output with n. If this yields a tree that encodes coreference chains different from those of all trees already in the k-best list, we add T_{m,n} to the list. If we cannot generate the requested number of trees (particularly for a short document with a large k), we pad the list with the last item added.
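The heuristic can be sketched as follows. The inputs are hypothetical: best_antecedent maps each mention to its top-scoring antecedent (None for non-anaphoric mentions) and pair_scores maps (mention, antecedent) pairs to scores; coreference chains are recovered by transitive closure over antecedent decisions, as in the text. This is a reading of the described procedure, not the authors' code.

```python
def approximate_k_best(best_antecedent, pair_scores, k):
    """Build an approximate k-best list of antecedent trees by swapping in
    suboptimal antecedents, keeping only trees with novel coreference chains."""
    def chains(tree):
        # Transitive closure over antecedent decisions -> set of chains.
        parent = dict(tree)
        def root(m):
            while parent.get(m) is not None:
                m = parent[m]
            return m
        groups = {}
        for m in tree:
            groups.setdefault(root(m), set()).add(m)
        return frozenset(frozenset(g) for g in groups.values())

    base = dict(best_antecedent)
    k_best, seen = [base], {chains(base)}
    # Suboptimal swaps, in ascending order of score difference to the best choice.
    diffs = sorted(
        ((pair_scores[(m, best_antecedent[m])] - s, m, a)
         for (m, a), s in pair_scores.items() if a != best_antecedent[m]),
        key=lambda x: x[0],
    )
    for _, m, a in diffs:
        if len(k_best) == k:
            break
        tree = dict(base)
        tree[m] = a                      # replace m's antecedent with a
        c = chains(tree)
        if c not in seen:                # only keep novel coreference chains
            seen.add(c)
            k_best.append(tree)
    while len(k_best) < k:               # pad short documents
        k_best.append(dict(k_best[-1]))
    return k_best
```

In the toy example below, swapping the third mention's antecedent to the first mention leaves the chains unchanged, so that tree is correctly discarded as a duplicate.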

Evaluation measures.

For coreference resolution evaluation, we employ the CoNLL scorer (Pradhan et al., 2014). It computes three commonly used evaluation measures: MUC (Vilain et al., 1995), B³ (Bagga and Baldwin, 1998), and CEAF_e (Luo, 2005). We report the F1 score of each evaluation measure and their average, the CoNLL score.

Competing systems.

We employed cort (Martschat and Strube, 2015), available at https://github.com/smartschat/cort, as our baseline coreference resolution system. We compare with the original (one-best) outputs of cort's latent ranking model, the best-performing model implemented in cort. We consider two rerankers based on EntityNLM. The first reranking method only uses the log probability from EntityNLM to sort the candidate list (Equation 12). The second uses a linear combination of the log probabilities from EntityNLM and the scores from cort, where the coefficients are found via grid search on the CoNLL score on the development set.
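Reranking itself reduces to an argmax over the candidate list. In the sketch below, beta = 0 recovers Equation 12, and a nonzero beta gives the linear combination with the base system's scores; the candidate names and scores are made up for illustration.

```python
def rerank(candidates, lm_logprob, base_score, alpha=1.0, beta=0.0):
    """Pick the candidate structure maximizing a linear combination of the
    EntityNLM log probability (Eq. 12 when beta = 0) and the base system score."""
    return max(candidates,
               key=lambda c: alpha * lm_logprob[c] + beta * base_score[c])

cands = ["y1", "y2"]
lm = {"y1": -50.0, "y2": -48.0}     # hypothetical EntityNLM log probabilities
cort = {"y1": 12.0, "y2": 9.0}      # hypothetical base-system scores
```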


The reranked results on the CoNLL 2012 test set are reported in Table 2. The numbers for the baseline are higher than those reported by Martschat and Strube (2015) because the feature set of cort was subsequently extended. Lines 2 and 3 in Table 2 present the reranked results. As shown in the table, both rerankers improve the CoNLL score on the test set by more than 1% over cort, which is significant according to an approximate randomization test (https://github.com/smartschat/art).

Model                         CoNLL | MUC P  MUC R  MUC F1 | B³ P   B³ R   B³ F1 | CEAF_e P  CEAF_e R  CEAF_e F1
1. Baseline: cort's one best  62.93 | 77.15  68.67  72.66  | 66.00  54.92  59.95 | 60.07     52.76     56.18
2. Rerank: EntityNLM          64.00 | 77.90  69.45  73.44  | 66.84  56.12  61.01 | 61.73     53.90     57.55
3. Rerank: EntityNLM + cort   64.04 | 77.93  69.49  73.47  | 67.08  55.99  61.04 | 61.76     53.98     57.61
Table 2: Coreference resolution scores on the CoNLL 2012 test set. cort is the best-performing model of Martschat and Strube (2015) with greedy decoding.

Additional experiments found that increasing k from 100 to 500 had only a minor effect. The diversity of each k-best list is limited by (i) the number of entity mentions in the document, (ii) the performance of the baseline coreference resolution system, and possibly (iii) the approximate nature of our k-best inference procedure. We suspect that a stronger baseline system (such as that of Clark and Manning, 2016a) could give greater improvements, if it can be adapted to provide k-best lists. Future work might incorporate the techniques embedded in such systems into EntityNLM.

5.3 Entity prediction

[I] was about to ride [my] [bicycle] to the [park] one day when [I] noticed that the front [tire] was flat . [I] realized that [I] would have to repair [it] . [I] went into [my] [garage] to get some [tools] . The first thing [I] did was remove the xxxx
Figure 3: A short story about bicycles from the InScript corpus (Modi et al., 2017). The entity prediction task requires predicting xxxx given the preceding text, either by choosing a previously mentioned entity or by deciding that it is a "new entity". In this example, the ground-truth prediction is [tire]. For training, EntityNLM attempts to predict every entity; for testing, it predicts a maximum of 30 entities after the first three sentences, consistent with the experimental setup suggested by Modi et al. (2017).

Task description.

Based on Modi et al. (2017), we introduce a novel entity prediction task: predicting the next entity given the preceding text. For a given text as in Figure 3, this task makes a forward prediction based only on the left context. This differs from coreference resolution, where both the left and right contexts of a given entity mention are used in decoding. It also differs from language modeling, since the task only requires predicting entities. Since EntityNLM is generative, it can be applied directly to this task. To predict entities in test data, R_t is always given, and EntityNLM only needs to predict E_t when R_t = 1.

Baselines and human prediction.

We introduce two baselines for this task: (i) the always-new baseline, which always predicts "new entity"; and (ii) a linear classification model using shallow features from Modi et al. (2017), including the recency and frequency of an entity's mentions. We also compare with the model proposed by Modi et al. (2017). Their work assumes that the model has prior knowledge of all the participant types, which are specific to each scenario and fine-grained (e.g., rider in the bicycle narrative), and predicts participant types for new entities. This assumption is unrealistic for pure generative models like ours. Therefore, we remove the assumption and adapt their prediction results to our formulation by mapping all predicted entities that have not been mentioned to "new entity". We also compare to the adapted human predictions in the InScript corpus: for each entity slot, Modi et al. (2017) acquired 20 human predictions and selected the majority vote. More details about the human predictions are given by Modi et al. (2017).


Table 3 shows the prediction accuracies. EntityNLM (line 4) significantly outperforms both baselines (lines 1 and 2) and prior work (line 3) under a paired t-test. The comparison between lines 4 and 5 shows that our model is even close to human prediction performance.

Accuracy (%)
1. Baseline: always-new 31.08
2. Baseline: shallow features 45.34
3. Modi et al. (2017) 62.65
4. EntityNLM 74.23
5. Human prediction 77.35
Table 3: Entity prediction accuracy on the test set of the InScript corpus.

6 Related Work

Rich-context language models.

The originally proposed recurrent neural network language models only capture information within sentences. To extend the capacity of RNNLMs, various researchers have incorporated information beyond sentence boundaries. Previous work focuses on contextual information from previous sentences (Ji et al., 2016a) or discourse relations between adjacent sentences (Ji et al., 2016b), showing improvements to language modeling and related tasks like coherence evaluation and discourse relation prediction. In this work, EntityNLM adds explicit entity information to the language model, which is another way of adding a memory network for language modeling. Unlike the work by Tran et al. (2016), where memory blocks are used to store general contextual information for language modeling, EntityNLM assigns each memory block specifically to only one entity.

Entity-related models.

Two recent approaches to modeling entities in text are closely related to our model. The first is the "reference-aware" language models proposed by Yang et al. (2016), where the referred entities come from a predefined item list, an external database, or the context of the same document; Yang et al. (2016) present three models, one for each case. For modeling a document with entities, they use coreference links to recover entity clusters, though they only model entity mentions containing a single word (an inappropriate assumption, in our view). Their entity updating method takes the latest hidden state as the new representation of the current entity (similar to our e_current when r_t = 1); no long-term history of the entity is maintained, only the current local context. In addition, their language model evaluation assumes that entity information is provided at test time (Yang, personal communication), which makes a direct comparison with our model impossible. Our entity updating scheme is similar to the "dynamic memory" method of Henaff et al. (2016). However, our entity representations are dynamically allocated and updated only when an entity appears, while the EntNet of Henaff et al. (2016) does not model entities and their relationships explicitly: its memory blocks are pre-allocated and updated simultaneously at each timestep, so there is no dedicated memory block for each entity and no distinction between entity mentions and non-mention words. As a consequence, it is not clear how to use their model for coreference reranking or entity prediction.

Coreference resolution.

The hierarchical structure of our entity generation model is inspired by Haghighi and Klein (2010). They implemented this idea as a probabilistic graphical model with the distance-dependent Chinese restaurant process (Pitman, 1995) for entity assignment, while our model is built on a recurrent neural network architecture. The reranking method considered in our coreference resolution evaluation could also be extended with samples from additional coreference resolution systems, to produce more variety (Ng, 2005). The benefit of such a system comes, we believe, from the explicit tracking of each entity throughout the text, providing entity-specific representations. In previous work, such information has been added as features (Luo et al., 2004; Björkelund and Kuhn, 2014) or by computing distributed entity representations (Wiseman et al., 2016; Clark and Manning, 2016b). Our approach complements these previous methods.

Entity prediction.

The entity prediction task discussed in §5.3 is based on work by Modi et al. (2017). The main difference is that we do not assume that all entities belong to a previously known set of entity types specified for each narrative scenario. This task is also closely related to the “narrative cloze” task of Chambers and Jurafsky (2008) and the “story cloze test” of Mostafazadeh et al. (2016). Those studies aim to understand relationships between events, while our task focuses on predicting upcoming entity mentions.

7 Conclusion

We have presented a neural language model, EntityNLM, that defines a distribution over texts and the mentioned entities. It provides vector representations for the entities and updates them dynamically in context. The dynamic representations are further used to help generate specific entity mentions and the following text. This model outperforms strong baselines and prior work on three tasks: language modeling, coreference resolution, and entity prediction.


We thank anonymous reviewers for the helpful feedback on this work. We also thank the members of Noah’s ARK and XLab at University of Washington for their valuable comments, particularly Eunsol Choi for pointing out the InScript corpus. This research was supported in part by a University of Washington Innovation Award, Samsung GRO, NSF grant IIS-1524371, the DARPA CwC program through ARO (W911NF-15-1-0543), and gifts by Google and Facebook.

References
  • Bagga and Baldwin (1998) Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In LREC Workshop on Linguistic Coreference.
  • Baltescu and Blunsom (2015) Paul Baltescu and Phil Blunsom. 2015. Pragmatic neural language modelling in machine translation. In NAACL.
  • Björkelund and Kuhn (2014) Anders Björkelund and Jonas Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. In ACL.
  • Chambers and Jurafsky (2008) Nathanael Chambers and Daniel Jurafsky. 2008. Unsupervised learning of narrative event chains. In ACL.
  • Clark and Manning (2016a) Kevin Clark and Christopher D. Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. In EMNLP.
  • Clark and Manning (2016b) Kevin Clark and Christopher D. Manning. 2016b. Improving coreference resolution by learning entity-level distributed representations. In ACL.
  • Duchi et al. (2011) John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159.
  • Dyer et al. (2016) Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In EMNLP.
  • Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, pages 249–256.
  • Goodman (2001) Joshua Goodman. 2001. Classes for fast maximum entropy training. In ICASSP.
  • Haghighi and Klein (2010) Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In NAACL.
  • Heafield et al. (2013) Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In ACL.
  • Henaff et al. (2016) Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv:1612.03969.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
  • Ji et al. (2016a) Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2016a. Document context language models. In ICLR (workshop track).
  • Ji et al. (2016b) Yangfeng Ji, Gholamreza Haffari, and Jacob Eisenstein. 2016b. A latent variable recurrent neural network for discourse-driven language models. In NAACL-HLT.
  • Kingma and Ba (2014) Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980.
  • Luo (2005) Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In HLT-EMNLP.
  • Luo et al. (2004) Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A mention-synchronous coreference resolution algorithm based on the Bell tree. In ACL.
  • Martschat and Strube (2015) Sebastian Martschat and Michael Strube. 2015. Latent structures for coreference resolution. Transactions of the Association for Computational Linguistics, 3:405–418.
  • Mikolov et al. (2010) Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernockỳ, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH.
  • Modi et al. (2017) Ashutosh Modi, Ivan Titov, Vera Demberg, Asad Sayeed, and Manfred Pinkal. 2017. Modeling semantic expectation: Using script knowledge for referent prediction. Transactions of the Association for Computational Linguistics, 5:31–44.
  • Mostafazadeh et al. (2016) Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. In NAACL.
  • Neubig et al. (2017) Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. DyNet: The dynamic neural network toolkit. arXiv:1701.03980.
  • Ng (2005) Vincent Ng. 2005. Machine learning for coreference resolution: From local classification to global ranking. In ACL.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
  • Pitman (1995) Jim Pitman. 1995. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102(2):145–158.
  • Pradhan et al. (2014) Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In ACL.
  • Pradhan et al. (2012) Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In EMNLP-CoNLL.
  • Shen et al. (2004) Libin Shen, Anoop Sarkar, and Franz Josef Och. 2004. Discriminative reranking for machine translation. In NAACL.
  • Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958.
  • Tran et al. (2016) Ke Tran, Arianna Bisazza, and Christof Monz. 2016. Recurrent memory networks for language modeling. In NAACL-HLT.
  • Vilain et al. (1995) Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In MUC.
  • Wiseman et al. (2016) Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. In NAACL.
  • Yang et al. (2016) Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2016. Reference-aware language models. arXiv:1611.01628.
  • Zaremba et al. (2015) Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2015. Recurrent neural network regularization. In ICLR.