Structured Prediction of Sequences and Trees using Infinite Contexts

03/09/2015 · by Ehsan Shareghi et al. · Monash University and The University of Melbourne

Linguistic structures exhibit a rich array of global phenomena, however commonly used Markov models are unable to adequately describe these phenomena due to their strong locality assumptions. We propose a novel hierarchical model for structured prediction over sequences and trees which exploits global context by conditioning each generation decision on an unbounded context of prior decisions. This builds on the success of Markov models but without imposing a fixed bound in order to better represent global phenomena. To facilitate learning of this large and unbounded model, we use a hierarchical Pitman-Yor process prior which provides a recursive form of smoothing. We propose prediction algorithms based on A* and Markov Chain Monte Carlo sampling. Empirical results demonstrate the potential of our model compared to baseline finite-context Markov models on part-of-speech tagging and syntactic parsing.




1 Introduction

Markov models are widely used techniques for modelling the underlying structure of natural language, e.g., as sequences and trees. However, local Markov assumptions often fail to capture phenomena outside the local Markov context, i.e., when the data generation process exhibits long range dependencies. A prime example is language modelling, where only short range dependencies are captured by finite-order (i.e., $n$-gram) Markov models. However, it has been shown that going beyond finite order in a Markov model improves language modelling, because natural language embodies a large array of long range dependencies (Wood et al., 2009a). While infinite-order Markov models have been extensively explored for language modelling (Gasthaus and Teh, 2010; Wood et al., 2011), this has not yet been done for structured prediction.

In this paper, we propose an infinite-order Markov model for predicting latent structures, namely tag sequences and trees. We show that this expressive model can be applied to various structure prediction tasks in NLP, such as syntactic parsing and part-of-speech tagging. We propose effective algorithms to tackle significant learning and inference challenges posed by the infinite Markov model.

More specifically, we propose an unbounded-depth, hierarchical, Bayesian non-parametric model for the generation of linguistic utterances and their corresponding structure (e.g., the sequence of POS tags or syntax trees). Our model conditions each decision in a tree generating process on an unbounded context consisting of the vertical chain of its ancestors, in the same way that infinite sequence models (e.g., $\infty$-gram language models) condition on an unbounded window of linear context (Mochihashi and Sumita, 2007; Wood et al., 2009b).

Learning in this model is particularly challenging due to the large space of contexts and the corresponding data sparsity. For this reason, the predictive distributions associated with contexts are smoothed using distributions for successively smaller contexts via a hierarchical Pitman-Yor process, organised as a trie. The infinite context makes it impossible to directly apply dynamic programming for structure prediction. We present two inference algorithms based on A* and Markov Chain Monte Carlo (MCMC) for predicting the best structure for a given input utterance.

The experiments show that our generative model obtains performance similar to the state-of-the-art Stanford part-of-speech tagger (Toutanova and Manning, 2000) for English and Swedish. For Danish, our model outperforms the Stanford tagger, which is impressive given that the Stanford tagger uses many more complex features and a discriminative training objective. Our experiments on parsing show that our unbounded-context tree model adapts itself to the data to effectively capture sufficient context to outperform both a PCFG baseline as well as Markov models with finite ancestor conditioning.

2 Background and related work

The syntactic parse tree of an utterance can be generated by combining a set of rules from a grammar, such as a context free grammar (CFG). A CFG is a 4-tuple $(T, N, S, R)$, where $T$ is a set of terminal symbols, $N$ is a set of non-terminal symbols, $S \in N$ is the distinguished root non-terminal, and $R$ is a set of productions (a.k.a. rewriting rules). A PCFG assigns a probability to each rule in the grammar, where $\sum_{A \rightarrow \beta \in R} P(A \rightarrow \beta) = 1$ for every non-terminal $A$. The grammar rules are often in Chomsky Normal Form, taking either the form $A \rightarrow B\ C$ or $A \rightarrow w$, where $A, B, C$ are syntactic categories (non-terminals) and $w$ is a word (terminal).

Tag sequences can also be represented as a tree structure, without loss of generality, in which rules take the form $X \rightarrow X'\ Y$ or $X' \rightarrow w$, where $X, Y$ are POS tags, $X'$ is a twin non-terminal of $X$, and $w$ is a word. Hence tagging models can be represented by restricted (P)CFGs. This unified view of syntactic parsing and POS tagging will allow us to apply our model and inference algorithms to these problems with only minor refinements (see Figure 1).

In a PCFG, a tree is generated by starting with the root symbol and rewriting (substituting) it with a grammar rule, then continuing to rewrite frontier non-terminals with grammar rules until there are no remaining frontier non-terminals. When making the decision about the next rule to expand a frontier non-terminal, the only conditioning context used from the partially generated tree is the frontier non-terminal itself, i.e., the rewrite rule is assumed independent of the remainder of the tree given the frontier non-terminal. Our model relaxes this strong independence assumption by considering an unbounded vertical history when making the next inference decision. This takes into account a wider context when making the next parsing decision.
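As an illustrative sketch (not part of the paper), the PCFG generative process described above can be written as a recursive sampler; the toy grammar, its encoding, and the function names here are assumptions for illustration only. Note the only conditioning context used at each step is the frontier symbol itself:

```python
import random

def generate(sym, rules, rng):
    """Recursively rewrite `sym` until only terminals remain.
    rules: lhs -> list of (rhs_tuple, prob); symbols absent from
    `rules` are treated as terminals."""
    if sym not in rules:
        return [sym]
    rhss, probs = zip(*rules[sym])
    rhs = rng.choices(rhss, weights=probs, k=1)[0]   # draw one rule
    return [w for s in rhs for w in generate(s, rules, rng)]

# Toy grammar: each rewrite depends only on the frontier non-terminal.
toy = {"S":  [(("NP", "VP"), 1.0)],
       "NP": [(("dogs",), 0.5), (("cats",), 0.5)],
       "VP": [(("bark",), 1.0)]}
```

Running `generate("S", toy, random.Random(0))` yields a two-word utterance such as `["dogs", "bark"]`.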

Perhaps the most relevant work is on unbounded history language models (Mochihashi and Sumita, 2007; Wood et al., 2009a). A prime example is the Sequence Memoizer (Wood et al., 2011), which conditions the generation of the next word on an unbounded history of previously generated words. We build on these techniques to develop rich infinite-context models for structured prediction, leading to additional complexity and challenges.

For syntactic parsing, several infinite extensions of probabilistic context free grammars (PCFGs) have been proposed (Liang et al., 2007; Finkel et al., 2007). These approaches achieve infinite grammars by allowing an unbounded set of non-terminals (hence grammar rules), but still make use of a bounded history when expanding each non-terminal. An alternative method allows for infinite grammars by considering segmentation of trees into arbitrarily large tree fragments, although only a limited history is used to conjoin fragments (Cohn et al., 2010; Johnson et al., 2006). Our work achieves infinite grammars by growing the vertical history needed to make the next parsing decision, as opposed to growing the number of rules, non-terminals or states horizontally, as done in prior work.

Earlier work in syntactic parsing has also looked into growing both the history vertically and the rules horizontally, in a bounded setting. Johnson (1998) increased the history for the parsing task by parent-annotation, i.e., annotating each non-terminal in the training parse trees with its parent, and then reading off the grammar rules from the resulting trees. Klein and Manning (2003) considered vertical and horizontal markovization while using the head words' part-of-speech tags, and showed that increasing the size of the vertical contexts consistently improves parsing performance. Petrov et al. (2006), Petrov and Klein (2007) and Matsuzaki et al. (2005) treated non-terminal annotations as latent variables and estimated them from the data.

Likewise, finite-state hidden Markov models (HMMs) have been extended horizontally to have a countably infinite number of states (Beal et al., 2001). Previous work on applying Markov models to part-of-speech tagging considered either finite-order Markov models (Brants, 2000) or finite-order HMMs (Thede and Harper, 1999). We differ from these works by conditioning both the emissions and transitions on their full contexts.

3 The Model

Our model relaxes the strong local Markov assumptions of the PCFG to enable capturing phenomena outside of the local Markov context. The model conditions the generation of a rule in a tree on its unbounded vertical history, i.e., its ancestors on the path towards the root of the tree (see Figure 1). Thus the probability of a tree $t$ is

$P(t) = \prod_i P(r_i \mid h_i)$

where $r_i$ denotes the $i$th rule and $h_i$ its history, and $P(r_i \mid h_i)$ is the probability of the next inference decision (i.e., grammar rule) conditioned on the context $h_i$. In other words, a tree can be represented as a sequence of context-rule events $(h_i, r_i)$.
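Concretely, this factorisation can be computed by walking the tree root-down, threading the chain of ancestor rules into every decision. The following is a hedged sketch; the nested-tuple tree encoding and the `rule_prob` interface are illustrative assumptions, not the paper's implementation:

```python
import math

def tree_logprob(tree, rule_prob, history=()):
    """Log-probability of a tree as a product of context-rule events.
    tree: nested tuples (label, child, ...); leaves are plain strings.
    rule_prob(rule, history) -> conditional probability of `rule` given
    the chain of ancestor rules (most recent last)."""
    label, *children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    rule = (label, rhs)
    lp = math.log(rule_prob(rule, history))
    for c in children:
        if not isinstance(c, str):                 # recurse into subtrees
            lp += tree_logprob(c, rule_prob, history + (rule,))
    return lp
```

With a constant `rule_prob` of 0.5, a three-rule tree scores $3 \log 0.5$; a bounded-order model would simply truncate `history` before the lookup.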

Figure 1: Examples of infinite-order conditioning and smoothing mechanism. The bold symbols (NN, ADV, fine) are the part of the structure being generated, and the boxes correspond to the conditioning context. (a) Syntactic Parsing, and (b) Infinite-order HMM for POS tagging.

When learning such a model from data, a vector of predictive probabilities for the next rule $P(r \mid u)$ given each possible vertical context $u \in \mathcal{U}$ must be learned, where $\mathcal{U}$, depending on the problem, can denote the set of chains of non-terminals or chains of rules. As the context size increases, the number of events observed for such long contexts in the training data drastically decreases, which makes parameter estimation challenging, particularly when generalising to unseen contexts. Given our unbounded-depth model, we need suitable smoothing techniques to estimate conditional rule probabilities for large (and possibly infinite depth) contexts. We achieve smoothing by placing a hierarchical Bayesian prior over the set of probability distributions $\{P(\cdot \mid u)\}$. We smooth $P(\cdot \mid u)$ with a distribution conditioned on a shorter context, $P(\cdot \mid \sigma(u))$, where $\sigma(u)$ is the suffix of $u$ containing all but the earliest event. This ties parameters of longer histories to their shorter suffixes in a hierarchical manner, and leads to sharing statistical strength to overcome sparsity issues. Figure 1 shows our infinite-order Markov model and the smoothing mechanism described here.
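The backoff structure walks from the full context down to the empty one by repeatedly dropping the earliest event. As a small sketch (the tuple encoding of contexts is an assumption):

```python
def suffix_chain(context):
    """Yield `context` and all its backoff suffixes, longest first,
    dropping the earliest event at each step, ending at the empty
    context (the root of the smoothing trie)."""
    while True:
        yield context
        if not context:
            return
        context = context[1:]   # sigma(u): drop the earliest event
```

For example, `list(suffix_chain(("VP", "NN")))` gives `[("VP", "NN"), ("NN",), ()]`, i.e., the chain of distributions a long context is smoothed through.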

Figure 2: Part of the smoothing mechanism corresponding to Figure 1(a). Each node represents a distribution labeled with a context, and the directed edges indicate the direction of smoothing. The path in bold corresponds to the smoothing applied for one example rule.

More specifically, we assume that a distribution $G_u$ conditioned on the full history $u$ is related to a distribution conditioned on the most recent history through the Pitman-Yor process (Wood et al., 2011):

$G_u \sim \mathrm{PYP}(d_{|u|}, \theta_{|u|}, G_{\sigma(u)})$, with $G_\epsilon \sim \mathrm{PYP}(d_0, \theta_0, H)$

where $H$ denotes the base (e.g. uniform) distribution, and $\epsilon$ denotes the empty context. The Pitman-Yor process $\mathrm{PYP}(d, \theta, H)$ is a distribution over distributions, where $d$ is the discount parameter, $\theta$ is the concentration parameter, and $H$ is the base distribution. Note that $G_u$ depends on $G_{\sigma(u)}$, which itself depends on $G_{\sigma(\sigma(u))}$, etc. This leads to a hierarchical Pitman-Yor process prior in which the context-dependent distributions are hidden. The formulation of the hierarchical PYP over different length contexts is illustrated in Figure 2.

Figure 3 demonstrates the properties of the PYP and how its behaviour depends on the discount $d$ and concentration $\theta$ parameters. Note that the PYP allows a good fit to the data distribution compared to the Dirichlet Process ($d = 0$; as used in prior work), which cannot adequately represent the long tail of events.

Figure 3: Plot of rule frequency vs. rank, illustrated for (a) syntactic parsing and (b) POS tagging. Besides the data distribution, we also show samples from three PYP distributions with different hyperparameter values.

4 Learning

Given a training tree-bank, i.e., a collection of utterances and their trees, we are interested in the posterior distribution over the distributions $\{G_u\}$. We make use of the approach developed in Wood et al. (2011) for learning such suffix-based graphical models when learning infinite-depth language models. It makes use of the Chinese Restaurant Process (CRP) representation of the Pitman-Yor process in order to marginalize out the distributions $G_u$ (Teh, 2006) and learn the predictive probabilities $P(r \mid u)$.

Under the CRP representation each context $u$ corresponds to a restaurant. As a new event $(u, r)$ is observed in the training data, a customer enters the restaurant, i.e., the trie node, corresponding to $u$. Whenever a customer enters a restaurant, it must be decided whether to seat him at an existing table serving the dish $r$, or to seat him at a new table and send a proxy customer to the parent node $\sigma(u)$ in the trie to order $r$ (i.e., based on $P(r \mid \sigma(u))$). Fixing a seating arrangement and PYP parameters for all restaurants (i.e., the collection of concentration and discount parameters), the predictive probability of a rule based on our infinite-context rule model is:

$P(r \mid u) = \dfrac{c^u_{r\cdot} - d\, t^u_r}{\theta + c^u_{\cdot\cdot}} + \dfrac{\theta + d\, t^u_\cdot}{\theta + c^u_{\cdot\cdot}}\; P(r \mid \sigma(u))$

where $d$ and $\theta$ are the discount and concentration parameters, $c^u_{rk}$ is the number of customers at table $k$ served the dish $r$ in the restaurant $u$ (accordingly $c^u_{r\cdot}$ is the number of customers served the dish $r$, and $c^u_{\cdot\cdot}$ is the total number of customers), and $t^u_r$ is the number of tables serving dish $r$ in the restaurant $u$ (accordingly $t^u_\cdot$ is the total number of tables).
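The predictive recursion above, which interpolates each restaurant with its parent in the suffix trie down to a base distribution, can be sketched as follows. The bookkeeping encoding (a dict mapping each context to per-dish customer and table counts) is an assumption for illustration:

```python
def pyp_predictive(rule, context, restaurants, d, theta, base):
    """Predictive probability of `rule` in `context` under a hierarchical
    PYP in its Chinese-restaurant representation.
    restaurants: context -> {rule: (customers, tables)}; `base` is the
    distribution at the root of the hierarchy."""
    if context is None:                    # above the root: base measure
        return base(rule)
    parent = context[1:] if context else None    # sigma(u)
    tables = restaurants.get(context, {})
    c_tot = sum(c for c, t in tables.values())   # customers in restaurant
    t_tot = sum(t for c, t in tables.values())   # tables in restaurant
    p_parent = pyp_predictive(rule, parent, restaurants, d, theta, base)
    if c_tot == 0:                               # empty restaurant: back off
        return p_parent
    c_r, t_r = tables.get(rule, (0, 0))
    return ((c_r - d * t_r) + (theta + d * t_tot) * p_parent) / (theta + c_tot)
```

A quick sanity check: with any fixed seating, the probabilities sum to one over the dish vocabulary, since the per-restaurant terms telescope into the parent's (normalised) distribution.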

The seating arrangements (the state of all restaurants, including their tables and the customers sitting at each table) are hidden, so they need to be marginalized out:

$P(r \mid u, \mathcal{D}) = \int P(r \mid u, S)\, P(S \mid \mathcal{D})\, \mathrm{d}S$

where $\mathcal{D}$ is the training tree-bank and $S$ ranges over seating arrangements. We approximate this integral by the so-called “minimal assumption seating arrangement” and the MAP parameter setting which maximizes the corresponding data posterior. Under the minimal assumption, a new table is created only when there is no table serving the desired dish in a restaurant. That is, a proxy customer is created and sent to the parent node in the trie for each unique dish type (sequence of events).

This approximation has been shown to recover interpolated Kneser-Ney smoothing when applied to the hierarchical Pitman-Yor process language model (Teh, 2006).

The parameters $d$ and $\theta$ are learned by maximising the posterior, given the seating arrangement corresponding to the minimal assumption. We put Beta and Gamma prior distributions over the discount and concentration parameters, respectively. The posterior is the prior multiplied by the following likelihood term over the seating arrangement:

$P(S \mid d, \theta) = \prod_u \dfrac{[\theta + d]^{d}_{t^u_\cdot - 1}}{[\theta + 1]^{1}_{c^u_{\cdot\cdot} - 1}} \prod_{r,k} [1 - d]^{1}_{c^u_{rk} - 1}$

where $[a]^b_c$ denotes the generalised factorial function, $[a]^b_c = \prod_{i=0}^{c-1}(a + ib)$ with $[a]^b_0 = 1$. We maximize the posterior subject to the constraints $0 \le d < 1$ and $\theta > 0$ using the L-BFGS-B optimisation method (Zhu et al., 1997), which results in optimised discount and concentration values for each context size.
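As a hedged sketch of the likelihood computation for a single restaurant, following the standard Pitman-Yor Chinese-restaurant joint (Teh, 2006); the per-restaurant decomposition and function names are illustrative assumptions:

```python
import math

def log_rising(a, n, step=1.0):
    """Log generalised factorial: log of a (a+step) ... (a+(n-1)step),
    with the empty product (n == 0) equal to 1."""
    return sum(math.log(a + i * step) for i in range(n))

def restaurant_loglik(d, theta, table_sizes):
    """Log-probability of one restaurant's seating arrangement under
    PYP(d, theta); table_sizes lists the customer count of each table."""
    t, c = len(table_sizes), sum(table_sizes)
    lp = log_rising(theta + d, t - 1, d)          # opening tables 2..t
    lp -= log_rising(theta + 1, c - 1, 1.0)       # normaliser over customers
    lp += sum(log_rising(1 - d, ck - 1, 1.0)      # customers joining tables
              for ck in table_sizes)
    return lp
```

For example, two customers at one table get probability $(1-d)/(\theta+1)$, and two customers at two tables $(\theta+d)/(\theta+1)$, matching the sequential CRP construction; summing this over all restaurants gives the likelihood that L-BFGS-B optimises.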

5 Prediction

In this section, we propose algorithms for the challenging problem of predicting the highest scoring tree. The key ideas are to compactly represent the space of all possible trees for a given utterance, and then search for the best tree in this space in a top-down manner. By traversing the hyper-graph top-down, the search algorithms have access to the full history of grammar rules.

At test time, we need to predict the tree structure of a given utterance by maximizing the tree score:

$\hat{t} = \arg\max_{t}\ \prod_i P(r_i \mid h_i)$

The unbounded context allowed by our model makes it infeasible to apply dynamic programming, e.g. CYK (Cocke and Schwartz, 1970), for finding the highest scoring tree. CYK is a bottom-up algorithm which would require storing in a dynamic programming table the score of each sub-span of the utterance conditioned on all possible contexts. Even truncating the context size to bound this term may be insufficient to make CYK feasible for prediction, due to the resulting computational complexity.

The space of all possible trees for a given utterance can be compactly represented as a hyper-graph (Klein and Manning, 2001). Each hyper-graph node is labelled with a non-terminal and a sub-span of the utterance. There exists a hyper-edge from the nodes $(B, i, k)$ and $(C, k, j)$ to the node $(A, i, j)$ if the rule $A \rightarrow B\ C$ belongs to the grammar (Figure 4). Starting from the top node, which pairs the root non-terminal $S$ with the full utterance span, our prediction algorithms search for the highest scoring tree sub-graph that covers all of the utterance terminals in the hyper-graph. Our top-down prediction algorithms have access to the full history needed by our model when deciding about the next hyper-edge to be added to the partial tree.
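The hyper-graph construction can be sketched CYK-style, enumerating nodes bottom-up from the terminal spans; the rule and node encodings below are assumptions for illustration:

```python
def build_hypergraph(words, rules):
    """rules: list of (lhs, rhs) with rhs a 1-tuple terminal or a 2-tuple
    of non-terminals (CNF). Returns edges: node -> list of tail tuples,
    where a node (label, i, j) covers words[i:j]."""
    n = len(words)
    edges = {}
    for i, w in enumerate(words):                     # terminal hyperedges
        for lhs, rhs in rules:
            if rhs == (w,):
                edges.setdefault((lhs, i, i + 1), []).append(((w, i, i + 1),))
    for width in range(2, n + 1):                     # binary hyperedges
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):                 # split point
                for lhs, rhs in rules:
                    if len(rhs) == 2 and (rhs[0], i, k) in edges \
                            and (rhs[1], k, j) in edges:
                        edges.setdefault((lhs, i, j), []).append(
                            ((rhs[0], i, k), (rhs[1], k, j)))
    return edges
```

Every tree for the utterance is a sub-graph of this structure rooted at the top node, which is what the top-down search below traverses.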

Figure 4: Hyper-graph representation of the search space. The gray areas are examples of two partial hypotheses in A* priority queue.

5.1 A* Search

This algorithm incrementally expands frontier nodes of the best partial tree until a complete tree is constructed. In the expansion step, all possible rules for expanding all frontier non-terminals are considered, and the resulting partial trees are inserted into a priority queue (see Figure 4), sorted based on the following score:

$\mathrm{score}(t') = P(t)\; P(r \mid h)\; \mathrm{heuristic}(t')$

where $t'$ is the partial tree after expanding a frontier non-terminal, $P(t)$ is the probability of the current partial tree, $P(r \mid h)$ is the probability of expanding a non-terminal via a rule $r$ in the full context $h$, and $\mathrm{heuristic}(t')$ is the heuristic function (i.e., the estimate of the score for the best tree completing $t'$). We use various heuristic functions when expanding a node in the hypergraph via a hyperedge with tails $(B, i, k)$ and $(C, k, j)$:

  • Full Frontier: which estimates the completion cost by

    $\prod_{(A, i, j) \in \mathcal{F}} P_{\mathrm{inside}}(A, i, j)$

    where $\mathcal{F}$ is the set of frontier nodes of the partial tree, and the estimates come from a simplified grammar admitting dynamic programming. Here we choose the PCFG used as the base measure in the root of the PYP hierarchy. Accordingly the terms $P_{\mathrm{inside}}(A, i, j)$ can be computed cheaply using the PCFG inside probabilities.

  • Local Frontier: which only takes into account the completion of the frontier nodes introduced by the selected hyperedge:

    $P_{\mathrm{inside}}(B, i, k)\; P_{\mathrm{inside}}(C, k, j)$

    This heuristic focuses on the completion cost of the sub-span covered by the selected rule.

The above heuristic functions are not admissible, hence the A* algorithm is not guaranteed to find the optimal tree. However, the PCFG provides reasonable estimates of the completion costs, and accordingly, with a sufficiently wide beam, search error is likely to be low.
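As a minimal, self-contained sketch of best-first top-down expansion, the following uses a zero (hence trivially admissible) heuristic instead of the PCFG-based estimates above, and expands sentential forms rather than hypergraph nodes; the grammar encoding and all names are illustrative assumptions:

```python
import heapq, itertools, math

def best_parse(words, rules, start="S"):
    """Best-first top-down search for the most probable derivation.
    rules: lhs -> list of (rhs_tuple, prob); CNF-style grammar so every
    symbol eventually yields at least one terminal."""
    words = tuple(words)
    tick = itertools.count()                      # heap tie-breaker
    heap = [(0.0, next(tick), (start,), ())]      # (-logprob, _, form, deriv)
    while heap:
        cost, _, form, deriv = heapq.heappop(heap)
        i = next((k for k, s in enumerate(form) if s in rules), None)
        if i is None:                             # all terminals: goal test
            if form == words:
                return list(deriv), math.exp(-cost)
            continue
        if form[:i] != words[:i]:                 # terminal prefix mismatch
            continue
        for rhs, p in rules[form[i]]:             # expand leftmost nonterminal
            new_form = form[:i] + rhs + form[i + 1:]
            if len(new_form) <= len(words):       # CNF: each symbol >= 1 word
                heapq.heappush(heap, (cost - math.log(p), next(tick),
                                      new_form, deriv + ((form[i], rhs),)))
    return None, 0.0
```

Because the heuristic is zero, the first complete derivation popped is exactly the highest-probability parse; swapping in the Full or Local Frontier estimates only changes the queue ordering.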

5.2 MCMC Sampling

We make use of the Metropolis-Hastings (MH) algorithm, a Markov chain Monte Carlo (MCMC) method, to obtain a sequence of random trees. We then combine these trees to construct the predicted tree.

In the MH algorithm, we use a PCFG as our proposal distribution and draw samples from it. Each sampled tree is then accepted or rejected using the following acceptance rate:

$\alpha(t \rightarrow t') = \min\!\left(1,\ \dfrac{P(t')\, Q(t)}{P(t)\, Q(t')}\right)$

where $t'$ is the sampled tree, $t$ is the current tree, $P(t')$ is the probability of the proposed tree under our model, and $Q(t')$ is its probability under the proposal PCFG. Under some conditions, i.e., detailed balance and ergodicity, it is guaranteed that the stationary distribution of the underlying Markov chain (defined by the MH sampling) is the distribution our model induces over the space of trees. For each utterance, we sample a fresh tree for the whole utterance from a PCFG using the approach of Johnson et al. (2007), which works by first computing the inside lattice under the proposal model (which can be computed once and reused), followed by top-down sampling to recover a tree. Finally, the proposed tree is scored using the MH test, according to which the tree is randomly accepted as the next sample, or else rejected, in which case the previous sample is retained.
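A single MH accept/reject step with an independence proposal can be sketched as follows (a generic sketch working in log space; the interface is an assumption, not the paper's code):

```python
import math, random

def mh_step(current, proposed, log_p, log_q, rng):
    """One Metropolis-Hastings test with an independence proposal:
    accept t' with probability min(1, p(t')q(t) / (p(t)q(t'))).
    log_p: log model probability; log_q: log proposal probability."""
    log_alpha = (log_p(proposed) + log_q(current)
                 - log_p(current) - log_q(proposed))
    if rng.random() < math.exp(min(0.0, log_alpha)):
        return proposed, True     # move to the proposed tree
    return current, False         # keep the previous sample
```

When the proposal is uniform over the two trees, a proposed tree with higher model probability is always accepted, while downhill moves are accepted with probability $P(t')/P(t)$.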

Once the sampling is finished, we need to choose a tree based on statistics of the sampled collection of trees. One approach is to select the most frequently sampled tree; however, this does not work effectively in such large search spaces because of high sampling variance. Note that local Gibbs samplers might be able to address this problem, at least partly, by resampling subtrees instead of full trees (as done here). Local changes would allow more rapid mixing from trees with a mix of high and low scoring subtrees to trees with uniformly high scoring sub-structures. We leave local sampling for future work, noting that the obvious local operation of resampling complete sub-trees or local tree fragments would compromise detailed balance, and thus not constitute a valid MCMC sampler (Levenberg et al., 2012).

To address this problem, we use a Minimum Bayes Risk (MBR) decoding method to predict the best tree (Goodman, 1996), as follows: for each nonterminal-span pair, we record its count in the collection of sampled trees. Then, using the Viterbi algorithm, we select the tree from the hypergraph for which the sum of the counts of its induced nonterminal-span pairs is maximized. Roughly speaking, this allows the decoder to make local corrections that result in higher accuracy compared to the best sampled trees.
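The span-counting half of this procedure can be sketched directly; for brevity, the sketch below reranks the sampled trees by their span counts rather than running Viterbi over the full hypergraph, which is a simplification of the method described above (tree encoding is again the assumed nested-tuple form):

```python
from collections import Counter

def spans(tree, i=0):
    """List (label, i, j) for every constituent of a nested-tuple tree;
    string leaves each consume one word position."""
    label, *children = tree
    out, j = [], i
    for c in children:
        if isinstance(c, str):
            j += 1
        else:
            sub = spans(c, j)
            out.extend(sub)
            j = sub[0][2]                 # child root's end position
    out.insert(0, (label, i, j))
    return out

def mbr_rerank(samples):
    """Pick the sampled tree maximising the sum of (label, span) counts
    over the whole sample collection."""
    counts = Counter(s for t in samples for s in spans(t))
    return max(samples, key=lambda t: sum(counts[s] for s in spans(t)))
```

A tree built from frequently sampled constituents can outscore the single most frequent whole tree, which is the intuition behind the full Viterbi-based MBR decoder.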

6 Experiments

In order to evaluate the proposed model and prediction algorithms, we performed two sets of experiments on tasks with different structural complexity. The statistics of the tasks and datasets are provided in Table 1.

Task Train Test Len Rules
parse 33180 2416 24 31920
pos EN 38219 5462 24 29499
pos DN 3638 1000 20 5269
pos SW 10653 389 18 9739
Table 1: Statistics for PTB syntactic parsing and part-of-speech tagging, showing the number of training and test sentences, average sentence length in words, and number of grammar rules.

6.1 Syntactic Parsing

For syntactic parsing, we use the Penn treebank (PTB) dataset (Marcus et al., 1993). We used the standard data splits for training and testing (train sec 2-21; validation sec 22; test sec 23). We followed the preprocessing steps of Petrov et al. (2006) by right-binarizing the trees and replacing rare words in the training sample with generic unknown word markers representing the tokens' lexical features and position. The results reported in Table 2 are produced by EVALB.

The results in Table 2 demonstrate the superiority of our model compared to the baseline PCFG. We note that the A* parser becomes less effective (even with a large beam size) for this task, which we attribute to the large search space arising from the large grammar and long sentences. Our best results are achieved by MCMC, demonstrating the effectiveness of MCMC in large search spaces.

An interesting observation is how our results compare with those achieved by bounded vertical and horizontal Markovization, as reported in Klein and Manning (2003). Our binarization corresponds to one of their simpler settings for horizontal markovization, and note also that we ignore the head information used in their models. Despite this, we still manage to equal their results obtained using a vertical context of size 3. Their best result was achieved with a larger horizontal context (and tags for head words). We believe that our model would outperform theirs if we considered greater horizontal markovization and incorporated head word information. To facilitate a fair comparison with vertical markovization, we experimented with limiting the size of the vertical contexts to 2, 3 or 4 within our model. Using MCMC parsing, we found that performance consistently improved as the size of the context was increased, while remaining below the F-measure of our unbounded-context model, which adapts itself to the data to effectively capture the right context.

Overall our approach significantly outperforms the baseline PCFG, although note that these results are well below the current state-of-the-art in parsing, which typically makes use of discriminative training with much richer features. We speculate that future enhancements could close the gap between our results and those of modern parsers, while offering the potential benefits of our generative model, which allows further incorporation of different types of contexts (e.g., head words and $n$-gram lexical context).

Syntactic Parser F1 ACC F1 ACC
A* (Local Frontier) 75.33 16.12 76.21 16.85
A* (Full Frontier) 72.27 13.14 72.34 13.57
MCMC 76.74 18.23 78.21 18.99
PCFG CYK 58.91 4.11 60.25 4.42
Table 2: Syntactic parsing results for the Penn treebank, showing labelled F-measure (F1) and exact bracketing match (ACC).

6.2 Part-of-Speech Tagging

The part-of-speech (POS) corpora have been extracted from the PTB (sections 0-18 for training and 22-24 for testing) for English, and from the NAACL-HLT 2012 Shared Task on Grammar Induction (Gelling et al., 2012) for Danish and Swedish. We convert the sequence of part-of-speech tags for each sentence into a tree structure analogous to a Hidden Markov Model (HMM). For each POS tag we introduce a twin (e.g., ADJ' for ADJ) in order to encode HMM-like transition and emission probabilities in the grammar. As shown in Figure 5, this representation guarantees that all the rules in the structures are either of the form $X \rightarrow X'\ Y$ (transition) or $X' \rightarrow w$ (emission).

Figure 5: The analogy between HMM (i) and our representation (ii) for the part-of-speech tags of the sentence “that’s fine now.”
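The conversion from a tagged sentence to HMM-like grammar rules can be sketched as follows; the exact rendering of Figure 5 (in particular the handling of the final tag) is an assumption here:

```python
def tags_to_rules(tags, words):
    """Encode a tagged sentence as HMM-like grammar rules: transition
    rules X -> X' Y and emission rules X' -> w, with a twin
    non-terminal X' introduced per tag."""
    rules = []
    for i, (tag, word) in enumerate(zip(tags, words)):
        twin = tag + "'"
        if i + 1 < len(tags):
            rules.append((tag, (twin, tags[i + 1])))   # transition
        else:
            rules.append((tag, (twin,)))               # sentence-final tag
        rules.append((twin, (word,)))                  # emission
    return rules
```

For instance, the tag sequence DT NN over "that fine" yields the rules DT → DT' NN, DT' → that, NN → NN', NN' → fine, so a parser over this restricted grammar behaves like an HMM tagger.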

The tagging results are reported in Table 3, including a comparison with the baseline PCFG (equivalent to an HMM) and the state-of-the-art Stanford POS Tagger (Toutanova and Manning, 2000), which we trained and tested on these datasets. As illustrated in Table 3, our model consistently improves on the PCFG baseline. While for Danish we outperform the state-of-the-art tagger, for English and Swedish we are a little behind the Stanford Tagger. This is a promising result, since our model is based only on the rules and their contexts, as opposed to the Stanford Tagger, which uses complex hand-designed features and a complex form of discriminative training.

Note the strong performance of MCMC sampling, which consistently outperforms A* search on the three tagging tasks.

English (TL SL) Danish (TL SL) Swedish (TL SL)
A*(Local Frontier) 95.50 54.11 89.85 35.10 87.04 32.13
A*(Full Frontier) 95.27 53.88 88.57 32.6 85.62 28.53
MCMC 96.04 54.25 95.55 72.93 89.97 34.45
PCFG CYK 94.69 47.22 89.04 31.7 89.76 33.93
Stanford Tagger 97.24 56.34 93.66 51.30 91.28 37.02
Table 3: Part-of-speech tagging results, where TL stands for token-level accuracy and SL for sentence-level accuracy. MCMC results are the average of 10 runs.

7 Conclusion and Future Work

We have proposed a novel hierarchical model over linguistic trees which exploits global context by conditioning the generation of a rule in a tree on an unbounded tree context consisting of the vertical chain of its ancestors. To facilitate learning of such a large and unbounded model, the predictive distributions associated with tree contexts are smoothed in a recursive manner using a hierarchical Pitman-Yor process. We have shown how to predict the parse tree of a given utterance under our model using various search algorithms, e.g. A* search and Markov chain Monte Carlo sampling.

This consistently improved over baseline methods in two tasks, and produced state-of-the-art results for Danish part-of-speech tagging.

In future, we would like to consider sampling the seating arrangements and model hyperparameters, and seek to incorporate several different notions of context besides the chain of ancestors.


  • Beal et al. (2001) M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden markov model. In Advances in Neural Information Processing Systems, Vancouver, British Columbia, Canada, pages 577–584, 2001.
  • Brants (2000) T. Brants. TnT – a statistical part-of-speech tagger. In Proceedings of the Sixth Conference on Applied Natural Language Processing, pages 224–231, 2000.
  • Cocke and Schwartz (1970) J. Cocke and J. T. Schwartz. Programming languages and their compilers : preliminary notes. Technical report, 1970.
  • Cohn et al. (2010) T. Cohn, P. Blunsom, and S. Goldwater. Inducing tree-substitution grammars. The Journal of Machine Learning Research, 11:3053–3096, 2010.
  • Finkel et al. (2007) J. Finkel, T. Grenager, and C. Manning. The infinite tree. In Proceedings of the 45th annual meeting of Association for Computational Linguistics, pages 272–279, 2007.
  • Gasthaus and Teh (2010) J. Gasthaus and Y. W. Teh. Improvements to the sequence memoizer. In Advances in Neural Information Processing Systems, pages 685–693, 2010.
  • Gelling et al. (2012) D. Gelling, T. Cohn, P. Blunsom, and J. Graca. Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, chapter The PASCAL Challenge on Grammar Induction, pages 64–80. Association for Computational Linguistics, 2012.
  • Goodman (1996) J. Goodman. Parsing algorithms and metrics. In Proceedings of the 34th Annual Meeting on Association for Computational Linguistics, ACL ’96, pages 177–183, Stroudsburg, PA, USA, 1996. Association for Computational Linguistics.
  • Johnson (1998) M. Johnson. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632, Dec. 1998. ISSN 0891-2017.
  • Johnson et al. (2006) M. Johnson, T. L. Griffiths, and S. Goldwater. Adaptor grammars: A framework for specifying compositional nonparametric bayesian models. In Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006, pages 641–648, 2006.
  • Johnson et al. (2007) M. Johnson, T. L. Griffiths, and S. Goldwater. Bayesian inference for pcfgs via markov chain monte carlo. In HLT-NAACL, pages 139–146, 2007.
  • Klein and Manning (2001) D. Klein and C. D. Manning. Parsing and hypergraphs. In Proceedings of the Seventh International Workshop on Parsing Technologies (IWPT-2001), 17-19 October 2001, Beijing, China, 2001.
  • Klein and Manning (2003) D. Klein and C. D. Manning. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 423–430. Association for Computational Linguistics, 2003.
  • Levenberg et al. (2012) A. Levenberg, C. Dyer, and P. Blunsom. A bayesian model for learning scfgs with discontiguous rules. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 223–232. Association for Computational Linguistics, 2012.
  • Liang et al. (2007) P. Liang, S. Petrov, M. Jordan, and D. Klein. The infinite PCFG using hierarchical Dirichlet processes. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 688–697, 2007.
  • Marcus et al. (1993) M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330, 1993.
  • Matsuzaki et al. (2005) T. Matsuzaki, Y. Miyao, and J. Tsujii. Probabilistic cfg with latent annotations. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 75–82, Stroudsburg, PA, USA, 2005. Association for Computational Linguistics. doi: 10.3115/1219840.1219850.
  • Mochihashi and Sumita (2007) D. Mochihashi and E. Sumita. The infinite Markov model. In Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, 2007.
  • Petrov and Klein (2007) S. Petrov and D. Klein. Learning and inference for hierarchically split PCFGs. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, Vancouver, British Columbia, Canada, 2007.
  • Petrov et al. (2006) S. Petrov, L. Barrett, R. Thibaux, and D. Klein. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 433–440. Association for Computational Linguistics, 2006.
  • Teh (2006) Y. W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 985–992. Association for Computational Linguistics, 2006.
  • Thede and Harper (1999) S. M. Thede and M. P. Harper. A second-order hidden markov model for part-of-speech tagging. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, ACL ’99, pages 175–182, Stroudsburg, PA, USA, 1999. Association for Computational Linguistics. ISBN 1-55860-609-3.
  • Toutanova and Manning (2000) K. Toutanova and C. D. Manning. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large corpora, pages 63–70. Association for Computational Linguistics, 2000.
  • Wood et al. (2009a) F. Wood, C. Archambeau, J. Gasthaus, L. James, and Y. W. Teh. A stochastic memoizer for sequence data. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, page 142, 2009a.
  • Wood et al. (2009b) F. Wood, C. Archambeau, J. Gasthaus, L. James, and Y. W. Teh. A stochastic memoizer for sequence data. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1129–1136. ACM, 2009b.
  • Wood et al. (2011) F. Wood, J. Gasthaus, C. Archambeau, L. James, and Y. W. Teh. The sequence memoizer. Communications of the ACM, 54(2):91–98, 2011.
  • Zhu et al. (1997) C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal. Algorithm 778: L-bfgs-b: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software (TOMS), 23(4):550–560, 1997.