1 Introduction
Scripts were developed as a means of representing stereotypical event sequences and interactions in narratives. The benefits of scripts for encoding common sense knowledge, filling in gaps in a story, resolving ambiguous references, and answering comprehension questions were amply demonstrated in the early work on natural language understanding [Schank and Abelson1977]. The earliest attempts to learn scripts were based on explanation-based learning, which can be characterized as example-guided deduction from first principles [DeJong1981, DeJong and Mooney1986]. While this approach succeeds in generalizing from a small number of examples, it requires a strong domain theory, which limits its applicability.
More recently, new graph-based algorithms for inducing script-like structures from text have emerged. "Narrative Chains" is a narrative model similar to scripts [Chambers and Jurafsky2008]. Each narrative chain is a directed graph indicating the most frequent temporal relationships between the events in the chain. Narrative chains are learned by a novel application of pointwise mutual information and temporal relation learning. Another graph learning approach employs Multiple Sequence Alignment in conjunction with a semantic similarity function to cluster sequences of event descriptions into a directed graph [Regneri, Koller, and Pinkal2010]. More recently still, graphical models have been proposed for representing script-like knowledge, but these lack the temporal component that is central to this paper and to the early script work. These models instead focus on learning bags of related events [Chambers2013, Kit Cheung, Poon, and Vanderwende2013].
While the above approaches demonstrate the learnability of script-like knowledge, they do not offer a probabilistic framework for reasoning robustly under uncertainty while taking into account the temporal order of events. In this paper we present the first formal representation of scripts as Hidden Markov Models (HMMs), which support robust inference and effective learning algorithms. The states of the HMM correspond to event types in scripts, such as entering a restaurant or opening a door. Observations correspond to natural language sentences that describe the event instances that occur in the story, e.g., "John went to Starbucks. He came back after ten minutes." The standard inference algorithms, such as the Forward-Backward algorithm, are able to answer questions about the hidden states given the observed sentences, for example, "What did John do in Starbucks?"
There are two complications that need to be dealt with to adapt HMMs to model narrative scripts. First, neither the set of states, i.e., event types, nor the set of observations is pre-specified; both are to be learned from data. We assume that the set of possible observations and the set of event types are bounded but unknown. We employ the clustering algorithm proposed in [Regneri, Koller, and Pinkal2010] to reduce the natural language sentences, i.e., event descriptions, to a small set of observations and states based on their WordNet similarity.
The second complication of narrative texts is that many events may be omitted, either in the narration or by the event extraction process. More importantly, there is no indication of a time lapse or a gap in the story, so the standard Forward-Backward algorithm does not apply. To account for this, we allow the states to skip generating observations with some probability. This kind of HMM, with insertions and gaps, has been considered previously in speech processing
[Bahl, Jelinek, and Mercer1983] and in computational biology [Krogh et al.1994]. We refine these models by allowing state-dependent missingness, without introducing additional "insert states" or "delete states" as in [Krogh et al.1994]. In this paper, we restrict our attention to the so-called "Left-to-Right HMMs," which have an acyclic graphical structure with possible self-loops, as they support more efficient inference algorithms than general HMMs and suffice to model most natural scripts.

We consider the problem of learning the structure and parameters of scripts in the form of HMMs from sequences of natural language sentences. Our solution to script learning is a novel bottom-up method for structure learning, called SEM-HMM, which is inspired by Bayesian Model Merging (BMM) [Stolcke and Omohundro1994] and Structural Expectation Maximization (SEM) [Friedman1998]. It starts with a fully enumerated HMM representation of the event sequences and incrementally merges states and deletes edges to improve the posterior probability of the structure and the parameters given the data. We compare our approach to several informed baselines on many natural datasets and demonstrate its superior performance. We believe our work represents the first formalization of scripts that supports probabilistic inference, and paves the way for robust understanding of natural language texts.
2 Problem Setup
Consider an activity such as answering the doorbell. An example HMM representation of this activity is illustrated in Figure 1. Each box represents a state, and the text within is a set of possible event descriptions (i.e., observations). Each event description is also marked with its conditional probability. Each edge represents a transition from one state to another and is annotated with its conditional probability.
In this paper, we consider a special class of HMMs with the following properties. First, we allow some observations to be missing. This is a natural phenomenon in text, where not all events are mentioned or extracted. We call these null observations and represent them with a special symbol λ. Second, we assume that the states of the HMM can be ordered such that all transitions take place only in that order. These are called Left-to-Right HMMs in the literature [Rabiner1990, Bahl, Jelinek, and Mercer1983]. Self-transitions of states are permitted and represent "spurious" observations or events with multiple time-step durations. While our work can be generalized to arbitrary HMMs, we find that Left-to-Right HMMs suffice to model the scripts in our corpora.
Formally, an HMM is a 4-tuple ⟨S, A, O, B⟩, where S is a set of states, A(s′|s) is the probability of transition from s to s′, O is a set of possible non-null observations, and B(o|s) is the probability of observing o when in state s (B can be straightforwardly generalized to depend on both of the states in a state transition), where o ∈ O ∪ {λ}, q0 ∈ S is the initial state, and qF ∈ S is the terminal state. An HMM is Left-to-Right if the states of the HMM can be ordered from q0 through qF such that A(s′|s) is non-zero only if s precedes or equals s′ in that order. We assume that our target HMM is Left-to-Right. We index its states according to a topological ordering of the transition graph. An HMM is a generative model of a distribution over sequences of observations. For convenience, w.l.o.g., we assume that each time it is "run" to generate a sample, the HMM starts in the same initial state q0 and goes through a sequence of transitions according to A until it reaches the same final state qF, while emitting an observation in O ∪ {λ} in each state according to B. The initial state and the final state respectively emit distinguished begin and end observation symbols in O, which are emitted by no other state. The concatenation of observations in successive states constitutes a sample of the distribution represented by the HMM. Because the null observations λ are removed from the generated observations, the length of the output string may be smaller than the number of state transitions. It could also be larger than the number of distinct state transitions, since we allow observations to be generated on the self-transitions. Thus spurious and missing observations model insertions and deletions in the outputs of HMMs without introducing special states as in profile HMMs [Krogh et al.1994].
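To make the generative process concrete, the following sketch samples observation sequences from a small Left-to-Right HMM with null observations. The activity, state names, and probabilities are hypothetical illustrations, not a model learned by the method in this paper; the null observation λ is represented by Python's None.

```python
import random

LAMBDA = None  # the null observation: emitted internally, dropped from output

# Hypothetical Left-to-Right HMM: states are topologically ordered, and
# every transition goes to a state with an equal or higher index.
TRANS = {
    "q0":   [("hear", 1.0)],
    "hear": [("walk", 0.7), ("open", 0.3)],   # skipping "walk" is allowed
    "walk": [("open", 1.0)],
    "open": [("qF", 1.0)],
}
EMIT = {
    "q0":   [("<begin>", 1.0)],
    "hear": [("hear doorbell", 0.8), (LAMBDA, 0.2)],  # 20% chance unmentioned
    "walk": [("walk to door", 0.9), (LAMBDA, 0.1)],
    "open": [("open door", 1.0)],
    "qF":   [("<end>", 1.0)],
}

def draw(pairs):
    r, acc = random.random(), 0.0
    for x, p in pairs:
        acc += p
        if r < acc:
            return x
    return pairs[-1][0]

def sample(max_steps=20):
    """Run the HMM from q0 to qF; null observations are removed, so the
    output can be shorter than the number of state transitions."""
    state, out = "q0", []
    for _ in range(max_steps):
        o = draw(EMIT[state])
        if o is not LAMBDA:
            out.append(o)
        if state == "qF":
            break
        state = draw(TRANS[state])
    return out

print(sample())
```

Every sample begins and ends with the distinguished begin and end symbols, while the middle events may be silently dropped.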
In this paper we address the following problem. Given a set of narrative texts, each of which describes a stereotypical event sequence drawn from a fixed but unknown distribution, learn the structure and parameters of a Left-to-Right HMM that best captures the distribution of the event sequences. We evaluate the algorithm on natural datasets by how well the learned HMM can predict observations removed from the test sequences.
3 HMM-Script Learning
At the top level, the algorithm takes as input a set of documents, where each document is a sequence of natural language sentences describing the same stereotypical activity. The output of the algorithm is a Left-to-Right HMM that represents that activity.
Our approach has four main components, which are described in the next four subsections: Event Extraction, Parameter Estimation, Structure Learning, and Structure Scoring. The event extraction step clusters the input sentences into event types and replaces the sentences with the corresponding cluster labels. After extraction, the event sequences are iteratively merged with the current HMM in small batches, starting with an empty HMM. Structure Learning then merges pairs of states (nodes) and removes state transitions (edges) by greedy hill climbing, guided by the improvement in the approximate posterior probability of the HMM. Once the hill climbing converges to a local optimum, the maximum likelihood HMM parameters are re-estimated using the EM procedure based on all the data seen so far. Then the next batch of sequences is processed. We now describe these steps in more detail.

3.1 Event Extraction
Given a set of sequences of sentences, the event extraction algorithm clusters them into events and arranges them into a tree-structured HMM. For this step, we assume that each sentence has a simple structure consisting of a single verb and an object. We make the further simplifying assumption that the sequences of sentences in all documents describe the events in temporal order. Although this assumption is often violated in natural documents, we ignore this problem here to focus on script learning. Some previous work specifically addresses the problem of inferring the temporal order of events from texts, e.g., see [Raghavan, Fosler-Lussier, and Lai2012].
Given the above assumptions, following [Regneri, Koller, and Pinkal2010], we apply a simple agglomerative clustering algorithm that uses a semantic similarity function over sentence pairs given by Sim(s_1, s_2) = w_v · sim(v_1, v_2) + w_o · sim(o_1, o_2), where v_i is the verb and o_i is the object in sentence s_i. Here sim is the path similarity metric from WordNet [Miller1995]. It is applied to the first verb (preferring verbs that are not stop words) and to the objects from each pair of sentences. The constants w_v and w_o are tuning parameters that adjust the relative importance of each component. Like [Regneri, Koller, and Pinkal2010], we found that a high weight on the verb similarity was important for finding meaningful clusters of events. The most frequent verb in each cluster is extracted to name the event type that corresponds to that cluster.
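The clustering step can be sketched as follows. To keep the sketch self-contained, the WordNet path similarity is replaced by a small hypothetical lookup table, the weights and merge threshold are illustrative values rather than tuned ones, and the greedy average-linkage scheme is one simple choice of agglomerative strategy.

```python
# Hypothetical similarity scores standing in for WordNet path similarity.
SIM = {frozenset(["hear", "listen"]): 0.9,
       frozenset(["walk", "go"]): 0.8}
W_VERB, W_OBJ = 0.8, 0.2   # verb similarity weighted higher, as in the text
THRESHOLD = 0.5            # illustrative minimum similarity for a merge

def word_sim(a, b):
    return 1.0 if a == b else SIM.get(frozenset([a, b]), 0.0)

def sent_sim(s1, s2):
    # Each "sentence" is reduced to a (verb, object) pair.
    return W_VERB * word_sim(s1[0], s2[0]) + W_OBJ * word_sim(s1[1], s2[1])

def cluster(sentences):
    """Greedy agglomerative clustering with average linkage."""
    clusters = [[s] for s in sentences]
    while True:
        best, pair = THRESHOLD, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                score = (sum(sent_sim(a, b) for a in clusters[i] for b in clusters[j])
                         / (len(clusters[i]) * len(clusters[j])))
                if score >= best:
                    best, pair = score, (i, j)
        if pair is None:
            return clusters
        i, j = pair
        clusters[i] += clusters.pop(j)

events = cluster([("hear", "doorbell"), ("listen", "doorbell"),
                  ("walk", "door"), ("go", "door"), ("open", "door")])
```

In the real pipeline, word_sim would call WordNet's path similarity on the extracted verbs and objects rather than a lookup table.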
The initial configuration of the HMM is a Prefix Tree Acceptor, which is constructed by starting with a single event sequence and then adding sequences by branching the tree at the first place the new sequence differs from it [Dupont, Miclet, and Vidal1994, Seymore, McCallum, and Rosenfeld1999]. By repeating this process, an HMM that fully enumerates the data is constructed.
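The Prefix Tree Acceptor construction can be sketched with nested dictionaries, branching at the first point where a new sequence differs from the tree; the event labels are hypothetical.

```python
def build_prefix_tree(sequences):
    """Build a prefix-tree acceptor: sequences share a path until they
    first differ, then branch.  Nodes are dicts keyed by event label."""
    root = {}
    for seq in sequences:
        node = root
        for event in seq:
            node = node.setdefault(event, {})
        node["<end>"] = {}   # mark sequence termination
    return root

tree = build_prefix_tree([
    ["hear", "walk", "open"],
    ["hear", "open"],
])
```

The two sequences share the "hear" node and branch afterward, yielding an HMM structure that fully enumerates the data.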
3.2 Parameter Estimation with EM
In this section we describe our parameter estimation methods. While parameter estimation in this kind of HMM was treated earlier in the literature [Rabiner1990, Bahl, Jelinek, and Mercer1983], we provide a more principled approach for estimating the state-dependent probability of missing observations from data without introducing special insert and delete states [Krogh et al.1994]. We assume that the structure of the Left-to-Right HMM is fixed based on the preceding structure learning step, which is described in Section 3.3.
The main difficulty in HMM parameter estimation is that the states of the HMM are not observed. The Expectation-Maximization (EM) procedure (also called the Baum-Welch algorithm for HMMs) alternates between estimating the hidden states in the event sequences by running the Forward-Backward algorithm (the Expectation step) and finding the maximum likelihood estimates (the Maximization step) of the transition and observation parameters of the HMM [Baum et al.1970]. Unfortunately, because of the λ transitions, the state transitions of our HMM are not necessarily aligned with the observations. Hence we explicitly maintain two indices, the time index t and the observation index j. We define α_{s,j}(t) to be the joint probability that the HMM is in state s at time t and has made the observations o_1, …, o_j. This is computed by the forward pass of the algorithm using the following recursion. Equations 1 and 2 represent the base case of the recursion, while Equation 3 represents the case for null observations. Note that the observation index of the recursive call is not advanced for a null observation, unlike in the second half of Equation 3, where it is advanced for a normal observation. We exploit the fact that the HMM is Left-to-Right and only consider transitions to s from states s′ whose indices are at most that of s. The time index t is incremented starting from 0, and the observation index j varies from 0 through n.
α_{q0,0}(0) = 1    (1)
α_{s,j}(0) = 0  if s ≠ q0 or j ≠ 0    (2)
α_{s,j}(t) = Σ_{s′ ≤ s} α_{s′,j}(t−1) A(s|s′) B(λ|s) + Σ_{s′ ≤ s} α_{s′,j−1}(t−1) A(s|s′) B(o_j|s)    (3)
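A minimal sketch of this forward recursion on a hypothetical toy HMM follows; the dictionary-based representation, state names, and probabilities are illustrative. Table keys pair a state with the number of observations emitted so far, so null emissions advance the time index but not the observation index.

```python
from collections import defaultdict

# A tiny hypothetical Left-to-Right HMM.  A[s][s2] is the transition
# probability; B[s][o] the observation probability, with B[s][None]
# the probability of the null observation.
STATES = ["q0", "s1", "s2", "qF"]
A = {"q0": {"s1": 1.0},
     "s1": {"s2": 0.6, "qF": 0.4},
     "s2": {"qF": 1.0},
     "qF": {}}
B = {"q0": {"<begin>": 1.0},
     "s1": {"a": 0.7, None: 0.3},
     "s2": {"b": 0.9, None: 0.1},
     "qF": {"<end>": 1.0}}

def forward(obs, t_max):
    """alpha[t][(s, j)]: probability of being in state s at time t having
    emitted the first j observations; null emissions advance t but not j."""
    alpha = [defaultdict(float) for _ in range(t_max + 1)]
    alpha[0][("q0", 1)] = B["q0"].get(obs[0], 0.0)  # q0 emits the begin symbol
    for t in range(1, t_max + 1):
        for (s, j), a in alpha[t - 1].items():
            for s2, p in A[s].items():
                # null observation: observation index j is not advanced
                alpha[t][(s2, j)] += a * p * B[s2].get(None, 0.0)
                # normal observation: j is advanced
                if j < len(obs):
                    alpha[t][(s2, j + 1)] += a * p * B[s2].get(obs[j], 0.0)
    return alpha

obs = ["<begin>", "a", "<end>"]
alpha = forward(obs, t_max=4)
# The sequence probability marginalizes over time and state at j = len(obs).
prob = sum(a for layer in alpha for (s, j), a in layer.items() if j == len(obs))
```

For this toy model the sequence can be generated either by skipping s2 entirely or by passing through s2 with a null emission, and the marginalization sums both paths.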
The backward part of the standard Forward-Backward algorithm starts from the last time step T and reasons backwards. Unfortunately, in our setting we do not know T, the true number of state transitions, as some of the observations are missing. Hence, we define β_{s,j}(t) as the conditional probability of observing o_{j+1}, …, o_n in the remaining t steps given that the current state is s. This allows us to increment t starting from 0 as the recursion proceeds, rather than decrementing it from T.
β_{qF,n}(0) = 1    (4)
β_{s,j}(0) = 0  if s ≠ qF or j ≠ n    (5)
β_{s,j}(t) = Σ_{s′ ≥ s} A(s′|s) B(λ|s′) β_{s′,j}(t−1) + Σ_{s′ ≥ s} A(s′|s) B(o_{j+1}|s′) β_{s′,j+1}(t−1)    (6)
Equation 7 calculates the probability of the observation sequence P(o_{1:n}), which is computed by marginalizing α over time t and state s and setting the second index to the length of the observation sequence n. The quantity P(o_{1:n}) serves as the normalizing factor for the following three equations.
P(o_{1:n}) = Σ_t Σ_{s∈S} α_{s,n}(t)    (7)
γ_{s,j}(t) = (1 / P(o_{1:n})) Σ_T α_{s,j}(t) β_{s,j}(T−t)    (8)
ξ_{s,s′,λ}(t) = (1 / P(o_{1:n})) Σ_j Σ_T α_{s,j}(t) A(s′|s) B(λ|s′) β_{s′,j}(T−t−1)    (9)
ξ_{s,s′,o}(t) = (1 / P(o_{1:n})) Σ_j Σ_T α_{s,j}(t) A(s′|s) B(o_{j+1}|s′) 𝟙(o_{j+1} = o) β_{s′,j+1}(T−t−1)    (10)
Equation 8 computes γ_{s,j}(t), the joint distribution of the state s and the observation index j at time t, by convolution, i.e., by multiplying the α and β values that correspond to the same time step and the same state and marginalizing out the length of the state sequence T. Convolution is necessary, as the length of the state sequence is a random variable equal to the sum of the corresponding time indices of α and β. Equation 9 computes the joint probability of a state transition associated with a null observation by multiplying the state transition probability by the null observation probability and the appropriate α and β values, and then marginalizing out the observation index j. Again we need to compute a convolution with respect to T to take into account the variation over the total number of state transitions. Equation 10 calculates the same probability for a non-null observation o. This equation is similar to Equation 9 with two differences. First, we ensure that the observation o_{j+1} is consistent with o by multiplying by the indicator function 𝟙(o_{j+1} = o), which is 1 if o_{j+1} = o and 0 otherwise. Second, we advance the observation index in the β function.
Since the equations above are applied to each individual observation sequence, α, β, γ, and ξ all carry an implicit index d, which denotes the observation sequence and has been omitted so far. We make it explicit below and calculate the expected counts of state visits, state transitions, and state-transition-observation triples.
C(s) = Σ_d Σ_t Σ_j γ^d_{s,j}(t)    (11)
C(s,s′) = Σ_d Σ_t Σ_{o ∈ O ∪ {λ}} ξ^d_{s,s′,o}(t)    (12)
C(s,s′,o) = Σ_d Σ_t ξ^d_{s,s′,o}(t)    (13)
Equation 11 counts the total expected number of visits C(s) of each state in the data. Equation 12 estimates the expected number of transitions C(s, s′) between each state pair. Finally, Equation 13 computes the expected number C(s, s′, o) of state transitions paired with each observation, including null observations. This concludes the E-step of the EM procedure.
The M-step of the EM procedure consists of Maximum A Posteriori (MAP) estimation of the transition and observation distributions assuming an uninformative Dirichlet prior. This amounts to adding a pseudo-count of 1 to each of the possible next states and observation symbols. The observation distributions of the initial and final states q0 and qF are fixed to be the Kronecker delta distributions at their true values.
Â(s′|s) = (C(s,s′) + 1) / Σ_{s″ ∈ S} (C(s,s″) + 1)    (14)
B̂(o|s) = (Σ_{s′} C(s′,s,o) + 1) / Σ_{o′ ∈ O ∪ {λ}} (Σ_{s′} C(s′,s,o′) + 1)    (15)
The E-step and the M-step are repeated until convergence of the parameter estimates.
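The M-step update can be sketched as follows, with hypothetical expected counts standing in for the E-step output; the add-one smoothing corresponds to the uninformative Dirichlet prior described above.

```python
def map_estimates(trans_counts, obs_counts, states, observations):
    """MAP estimates under a uniform Dirichlet prior: add a pseudo-count
    of 1 to every possible next state and observation symbol."""
    A_hat, B_hat = {}, {}
    for s in states:
        nxt = trans_counts.get(s, {})
        total = sum(nxt.values()) + len(states)
        A_hat[s] = {s2: (nxt.get(s2, 0) + 1) / total for s2 in states}
        emis = obs_counts.get(s, {})
        total_o = sum(emis.values()) + len(observations)
        B_hat[s] = {o: (emis.get(o, 0) + 1) / total_o for o in observations}
    return A_hat, B_hat

# Hypothetical expected counts from the E-step:
A_hat, B_hat = map_estimates(
    {"s1": {"s2": 3.0, "qF": 1.0}},
    {"s1": {"a": 4.0}},
    states=["s1", "s2", "qF"],
    observations=["a", "b", None],   # None stands for the null observation
)
```

Each estimated distribution sums to one by construction, and unseen outcomes retain non-zero probability thanks to the pseudo-counts.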
3.3 Structure Learning
We now describe our structure learning algorithm, SEM-HMM. Our algorithm is inspired by Bayesian Model Merging (BMM) [Stolcke and Omohundro1994] and Structural EM (SEM) [Friedman1998] and adapts them to learning HMMs with missing observations. SEM-HMM performs a greedy hill climbing search through the space of acyclic HMM structures. It iteratively proposes changes to the structure, either by merging states or by deleting edges, evaluates each change, and makes the one with the best score. An exact implementation of this method is expensive, because each time a structure change is considered, the MAP parameters of the structure given the data must be re-estimated. One of the key insights of both SEM and BMM is that this expensive re-estimation can be avoided in factored models by incrementally computing the changes to the various expected counts using only local information. While this calculation is only approximate, it is highly efficient.
During the structure search, the algorithm considers every possible structure change, i.e., merging of pairs of states and deletion of state transitions, checks that the change does not create cycles, evaluates it according to the scoring function, and selects the best-scoring structure. This is repeated until the structure can no longer be improved (see Algorithm 1).
The Merge States operator creates a new state from the union of a state pair's transition and observation distributions. It must assign transition and observation distributions to the new merged state. To be exact, we would need to redo the parameter estimation for the changed structure. To compute the impact of several proposed changes efficiently, we instead assume that all probabilistic state transitions and trajectories for the observed sequences remain the same as before, except in the changed parts of the structure. We call this the "locality of change" assumption; it allows us to simply add the corresponding expected counts of the states being merged.
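Under the locality-of-change assumption, merging two states reduces to summing their expected counts and redirecting counts that point to either state; a sketch with hypothetical counts:

```python
def merge_states(trans_counts, s1, s2, new):
    """Merge s1 and s2 into `new`: their expected counts are added, and
    counts pointing to either state are redirected to the merged state."""
    merged = {}
    for s, nxt in trans_counts.items():
        key = new if s in (s1, s2) else s
        row = merged.setdefault(key, {})
        for s_next, c in nxt.items():
            tgt = new if s_next in (s1, s2) else s_next
            row[tgt] = row.get(tgt, 0.0) + c
    return merged

counts = {"a": {"x": 2.0, "y": 1.0}, "x": {"b": 3.0}, "y": {"b": 1.0}}
merged = merge_states(counts, "x", "y", "xy")
```

The merged state "xy" inherits all incoming and outgoing evidence of "x" and "y", after which the MAP estimates for the affected distributions can be recomputed from the new counts.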
The second kind of structure change we consider is edge deletion, which consists of removing a transition between two states and redistributing its evidence along the other paths between the same states. Again making the locality of change assumption, we only recompute the parameters of the transition and observation distributions that occur on the paths between the two states. We re-estimate the parameters after deleting an edge (u, v) by effectively redistributing the expected transitions from u to v, C(u, v), among the other paths between u and v based on the parameters of the current model.
This is done efficiently using a procedure similar to the Forward-Backward algorithm under the null observation sequence. Algorithm 2 takes the current model, an edge (u, v), and the expected count of the number of transitions from u to v, C(u, v), as inputs. It updates the counts of the other transitions to compensate for removing the edge between u and v. It initializes the α value of u and the β value of v to 1 and the rest of the α and β values to 0. It makes two passes through the HMM, the first in the topological order of the nodes in the graph and the second in the reverse topological order. In the first, "forward" pass from u to v, it calculates the α value of each node w, which represents the probability that a sequence that passes through u also passes through w while emitting no observation. In the second, "backward" pass, it computes the β value of a node w, which represents the probability that a sequence that passes through w emits no observation and later passes through v. The product of α(w) and β(w) gives the probability that w is passed through when going from u to v with no observation emitted. Multiplying it by the expected number of transitions C(u, v) gives the expected number of additional counts, which are added to the counts along those paths to compensate for the deleted transition (u, v). After the redistribution of the evidence, all the transition and observation probabilities are re-estimated for the nodes and edges affected by the edge deletion.
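One plausible reading of this redistribution procedure is sketched below. The example graph is hypothetical, and the final normalization is our own addition so that exactly the deleted edge's expected count is redistributed; the paper's exact update may differ.

```python
def redistribute(states, A, p_null, u, v, count, visit_counts):
    """Spread the expected count of a deleted edge (u, v) over the
    intermediate nodes, in proportion to the probability that a path
    from u to v passes through them while emitting nothing.
    states: topological order; A[s][s2]: transition probabilities;
    p_null[s]: probability that state s emits the null observation."""
    def emit_nothing(s):
        return 1.0 if s == v else p_null.get(s, 0.0)
    fwd = {s: 0.0 for s in states}; fwd[u] = 1.0
    for s in states:                        # forward, topological order
        for s2, p in A.get(s, {}).items():
            if (s, s2) != (u, v):
                fwd[s2] += fwd[s] * p * emit_nothing(s2)
    bwd = {s: 0.0 for s in states}; bwd[v] = 1.0
    for s in reversed(states):              # backward, reverse order
        for s2, p in A.get(s, {}).items():
            if (s, s2) != (u, v):
                bwd[s] += p * emit_nothing(s2) * bwd[s2]
    # normalize so that exactly `count` transitions are redistributed
    z = sum(fwd[w] * bwd[w] for w in states if w not in (u, v))
    for w in states:
        if w not in (u, v) and z > 0:
            visit_counts[w] = visit_counts.get(w, 0.0) + count * fwd[w] * bwd[w] / z

vc = {}
redistribute(["u", "w", "v"],
             {"u": {"w": 0.5, "v": 0.5}, "w": {"v": 1.0}},
             {"w": 1.0}, "u", "v", 2.0, vc)
```

In this toy case the only alternative route from u to v runs through w, so all of the deleted edge's expected count lands on w.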
In principle, one could continue making incremental structural changes and parameter updates and never run EM again. This is exactly what is done in Bayesian Model Merging (BMM) [Stolcke and Omohundro1994]. However, a series of structural changes followed by approximate incremental parameter updates could lead to bad local optima. Hence, after merging each batch of sequences into the HMM, we rerun EM for parameter estimation on all sequences seen thus far.
3.4 Structure Scoring
We now describe how we score the structures produced by our algorithm in order to select the best one. We employ a Bayesian scoring function, which is the posterior probability of the model given the data, denoted P(M|D). The score is decomposed via Bayes rule, i.e., P(M|D) ∝ P(D|M)P(M), and the denominator P(D) is omitted since it is invariant with regard to the model.
Since each observation sequence is independent of the others, the data likelihood P(D|M) is calculated using the Forward-Backward algorithm and Equation 7 in Section 3.2. Because the initial model fully enumerates the data, any merge can only reduce the data likelihood. Hence, the model prior must be designed to encourage generalization via state merges and edge deletions (described in Section 3.3). We employed a prior with three components: the first two components are syntactic and penalize the number of states and the number of non-zero transitions, respectively. The third component penalizes the number of frequently observed semantic constraint violations. In particular, the prior probability of the model decreases with a weighted combination of these three counts; the weight parameters adjust the importance of each component in the prior.
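A log-domain sketch of such a three-component prior follows; the particular weights and the linear penalty form are illustrative assumptions, not the paper's tuned values.

```python
def log_prior(model, w_states=1.0, w_trans=0.5, w_viol=2.0):
    """Hypothetical three-component prior: log P(M) decreases linearly in
    the number of states, non-zero transitions, and constraint violations.
    The weights are illustrative tuning parameters."""
    return -(w_states * model["n_states"]
             + w_trans * model["n_transitions"]
             + w_viol * model["n_violations"])

def log_score(log_likelihood, model):
    """log P(M|D) up to a constant: log P(D|M) + log P(M)."""
    return log_likelihood + log_prior(model)
```

Because merges lower the likelihood but also shrink the structure, the search accepts a merge only when the prior's reward for a smaller model outweighs the likelihood loss.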
The semantic constraints are learned from the event sequences for use in the model prior. The constraints take the simple form "event b never follows event a." They are learned by generating all possible such rules using pairwise permutations of event types and evaluating them on the training data. In particular, the number of times each rule is violated is counted, and a hypothesis test is performed to determine whether the violation rate is lower than a predetermined error rate. Those rules that pass the hypothesis test at a predetermined significance threshold are included. When evaluating a model, these constraints are considered violated if the model could generate a sequence of observations that violates the constraint.
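The rule-learning step can be sketched as follows; for simplicity the sketch checks only adjacent events and replaces the hypothesis test with a plain violation-rate threshold, both of which are our simplifications.

```python
def learn_constraints(sequences, event_types, max_error=0.05):
    """Learn rules of the form (a, b) meaning "b never follows a": keep a
    rule if its observed violation rate over all opportunities is below
    max_error (a stand-in for a proper hypothesis test)."""
    rules = []
    for a in event_types:
        for b in event_types:
            opportunities = violations = 0
            for seq in sequences:
                for i, e in enumerate(seq[:-1]):
                    if e == a:
                        opportunities += 1
                        if seq[i + 1] == b:
                            violations += 1
            if opportunities and violations / opportunities <= max_error:
                rules.append((a, b))
    return rules

rules = learn_constraints([["hear", "walk", "open"], ["hear", "open"]],
                          ["hear", "walk", "open"])
```

On this toy data, "hear never follows walk" survives (it is never violated), while "walk never follows hear" is rejected because one of the two narratives violates it.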
In addition to incrementally computing the transition and observation counts C(s, s′) and C(s, s′, o), the likelihood P(D|M) can be incrementally updated with structure changes as well. Note that when the state transitions are observed, the likelihood can be expressed as a product over parameters, with each transition and observation probability raised to the power of its observed count. Since the state transitions are not actually observed, we approximate this expression by replacing the observed counts with expected counts. Further, the locality of change assumption allows us to easily calculate the effect of changed expected counts and parameters on the likelihood by dividing out the old products and multiplying in the new ones. We call this version of our algorithm SEM-HMM-Approx.
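The incremental likelihood update can be sketched in the log domain; the parameter keys and counts below are hypothetical, and expected counts are assumed to stand in for observed ones as described above.

```python
import math

def loglik_from_counts(params, counts):
    """log P(D|M) as sum_k counts[k] * log(params[k]), treating expected
    counts as if the state transitions were observed."""
    return sum(c * math.log(params[k]) for k, c in counts.items() if c > 0)

def incremental_update(loglik, old_params, old_counts, new_params, new_counts):
    """Locality of change: only terms whose counts or parameters changed
    are divided out of (and multiplied back into) the likelihood."""
    changed = set(new_params) | set(new_counts)
    loglik -= sum(old_counts.get(k, 0) * math.log(old_params[k])
                  for k in changed if old_counts.get(k, 0) > 0)
    loglik += sum(new_counts.get(k, 0) * math.log(new_params[k])
                  for k in changed if new_counts.get(k, 0) > 0)
    return loglik

old_params = {"e1": 0.5, "e2": 0.5}   # hypothetical transition parameters
old_counts = {"e1": 2, "e2": 2}       # hypothetical expected counts
ll = loglik_from_counts(old_params, old_counts)
ll2 = incremental_update(ll, old_params, old_counts,
                         {"e1": 0.75}, {"e1": 3})   # only changed entries
```

The incremental result matches a full recomputation over the updated counts and parameters, while touching only the changed terms.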
4 Experiments and Results
We now present our experimental results for SEM-HMM and SEM-HMM-Approx. The evaluation task is to predict missing events from an observed sequence of events. For comparison, four baselines were also evaluated. The "Frequency" baseline predicts the most frequent event in the training set that is not found in the observed test sequence. The "Conditional" baseline predicts the next event based on what most frequently follows the prior event. A third baseline, referred to as "BMM," is a version of our algorithm that does not use EM for parameter estimation and instead only incrementally updates the parameters starting from the raw document counts. Further, it learns a standard HMM, that is, one without λ transitions. This is very similar to the Bayesian Model Merging approach for HMMs [Stolcke and Omohundro1994]. The fourth baseline is the same, but uses our EM algorithm for parameter estimation, still without λ transitions. It is referred to as "BMM + EM."
Table 1: Average prediction accuracy across the 84 OMICS domains.

Method            Batch Size 2   Batch Size 5   Batch Size 10
SEM-HMM           42.2%          45.1%          46.0%
SEM-HMM Approx.   43.3%          43.5%          44.3%
BMM + EM          41.1%          41.2%          42.1%
BMM               41.0%          39.5%          39.1%
Conditional       36.2%
Frequency         27.3%
The Open Mind Indoor Common Sense (OMICS) corpus was developed by the Honda Research Institute and is based upon the Open Mind Common Sense project [Gupta and Kochenderfer2004]. It describes 175 common household tasks, each with 14 to 122 narratives describing, in short sentences, the necessary steps to complete it. Each narrative consists of temporally ordered, simple sentences from a single author that describe a plan to accomplish a task. Examples from the "Answer the Doorbell" task can be found in Table 2. The OMICS corpus has 9044 individual narratives, and its short and relatively consistent language lends itself to relatively easy event extraction.
Table 2: Two example narratives from the OMICS "Answer the Doorbell" task.

Example 1             Example 2
Hear the doorbell.    Listen for the doorbell.
Walk to the door.     Go towards the door.
Open the door.        Open the door.
Allow the people in.  Greet the visitor.
Close the door.       See what the visitor wants.
                      Say goodbye to the visitor.
                      Close the door.
The 84 domains with at least 50 narratives and 3 event types were used for evaluation. For each domain, forty percent of the narratives were withheld for testing, each with one randomly chosen event omitted. The model was evaluated on the proportion of correctly predicted events given the remaining sequence. On average, each domain has 21.7 event types with a standard deviation of 4.6. Further, the average narrative length across domains is 3.8 with a standard deviation of 1.7. This implies that only a fraction of the event types are present in any given narrative. There is a high degree of omission of events and many different ways of accomplishing each task. Hence, the prediction task is reasonably difficult, as evidenced by the performance of the simple baselines. Neither the frequency of events nor simple temporal structure is enough to accurately fill in the gaps, which indicates that more sophisticated modeling such as SEM-HMM is needed.
The average accuracy across the 84 domains for each method is shown in Table 1. On average, our method significantly outperformed all the baselines; the improvement in accuracy across OMICS tasks between SEM-HMM and each baseline is statistically significant at the .01 level, using one-sided paired t-tests, for the two larger batch sizes. For the smallest batch size the improvement was not statistically greater than zero. We see that the results improve with batch size for SEM-HMM and BMM+EM, but decrease with batch size for BMM without EM. Both of the methods that use EM depend on having sufficient statistics to be robust and hence need a larger batch size to be accurate. For BMM, however, a smaller batch size means it reconciles a couple of documents with the current model in each iteration, which ultimately helps guide the structure search. The accuracy of "SEM-HMM Approx." is close to that of the exact version at each batch size, while taking only half the time on average.

5 Conclusions
In this paper, we have given the first formal treatment of scripts as HMMs with missing observations. We adapted the HMM inference and parameter estimation procedures to scripts and developed a new structure learning algorithm, SEM-HMM, based on the EM procedure. It improves upon BMM by allowing for λ transitions and by incorporating maximum likelihood parameter estimation via EM. We showed that our algorithm is effective in learning scripts from documents and performs better than other baselines on sequence prediction tasks. Thanks to the modeling of missing observations, the graphical structure of the learned scripts is usually sparse and intuitive. Future work includes learning from more natural text such as newspaper articles, enriching the representations to include objects and relations, and integrating HMM inference into text understanding.
Acknowledgments
We would like to thank Nate Chambers, Frank Ferraro, and Ben Van Durme for their helpful comments, criticism, and feedback. We would also like to thank the SCALE 2013 workshop. This work was supported by DARPA and the AFRL under contract No. FA87501320033. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA, the AFRL, or the US government.
References
[Bahl, Jelinek, and Mercer1983] Bahl, L. R.; Jelinek, F.; and Mercer, R. L. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 5(2):179–190.

[Baum et al.1970] Baum, L. E.; Petrie, T.; Soules, G.; and Weiss, N. 1970. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics 41(1):164–171.
[Chambers and Jurafsky2008] Chambers, N., and Jurafsky, D. 2008. Unsupervised learning of narrative event chains. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL), 789–797.

[Chambers2013] Chambers, N. 2013. Event schema induction with a probabilistic entity-driven model. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1797–1807.
[DeJong and Mooney1986] DeJong, G., and Mooney, R. 1986. Explanation-based learning: An alternative view. Machine Learning 1(2):145–176.

[DeJong1981] DeJong, G. 1981. Generalizations based on explanations. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence (IJCAI), 67–69.
[Dupont, Miclet, and Vidal1994] Dupont, P.; Miclet, L.; and Vidal, E. 1994. What is the search space of the regular inference? In Grammatical Inference and Applications. Springer. 25–37.
[Friedman1998] Friedman, N. 1998. The Bayesian structural EM algorithm. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI), 129–138. Morgan Kaufmann Publishers Inc.
 [Gupta and Kochenderfer2004] Gupta, R., and Kochenderfer, M. J. 2004. Common sense data acquisition for indoor mobile robots. In AAAI, 605–610.
 [Kit Cheung, Poon, and Vanderwende2013] Kit Cheung, J. C.; Poon, H.; and Vanderwende, L. 2013. Probabilistic frame induction. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL:HLT), 837–846.
[Krogh et al.1994] Krogh, A.; Brown, M.; Mian, I. S.; Sjolander, K.; and Haussler, D. 1994. Hidden Markov models in computational biology. Journal of Molecular Biology 1501–1531.
[Miller1995] Miller, G. A. 1995. WordNet: A lexical database for English. Communications of the ACM 38(11):39–41.
[Rabiner1990] Rabiner, L. R. 1990. A tutorial on hidden Markov models and selected applications in speech recognition. In Readings in Speech Recognition, 267–296.
[Raghavan, FoslerLussier, and Lai2012] Raghavan, P.; Fosler-Lussier, E.; and Lai, A. M. 2012. Learning to temporally order medical events in clinical text. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL), 70–74.
 [Regneri, Koller, and Pinkal2010] Regneri, M.; Koller, A.; and Pinkal, M. 2010. Learning script knowledge with web experiments. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, 979–988. Association for Computational Linguistics.
 [Schank and Abelson1977] Schank, R., and Abelson, R. 1977. Scripts, plans, goals and understanding: An inquiry into human knowledge structures. Lawrence Erlbaum Publishers.
 [Seymore, McCallum, and Rosenfeld1999] Seymore, K.; McCallum, A.; and Rosenfeld, R. 1999. Learning hidden Markov model structure for information extraction. In AAAI Workshop on Machine Learning for Information Extraction, 37–42.
 [Stolcke and Omohundro1994] Stolcke, A., and Omohundro, S. M. 1994. Bestfirst model merging for hidden Markov model induction. arXiv preprint cmplg/9405017.