Subword segmentation has become a standard preprocessing step in many neural approaches to natural language processing (NLP) tasks, e.g. Neural Machine Translation (NMT) and Automatic Speech Recognition (ASR). Word-level modeling suffers from sparse statistics, issues with Out-of-Vocabulary (OOV) words, and heavy computational cost due to a large vocabulary. Word-level modeling is particularly unsuitable for morphologically rich languages, although subwords are commonly used for other languages as well; subword segmentation is best suited to languages with agglutinative morphology.
While rule-based morphological segmentation systems can achieve high quality, the large amount of human effort needed makes the approach problematic, particularly for low-resource languages. The systems are language dependent, necessitating use of multiple tools in multilingual setups. As a fast, cheap and effective alternative, data-driven segmentation can be learned in a completely unsupervised manner from raw corpora.
Unsupervised morphological segmentation saw much research interest until the early 2010s; for a survey of the methods, see hammarstrom2011unsupervised. Semi-supervised segmentation, using even small amounts of annotated training data, was found to improve accuracy significantly when evaluated against a linguistic segmentation; see ruokolainen2016comparative for a survey. While this line of research has been continued in supervised and more grammatically oriented tasks, more recent work on unsupervised segmentation is less focused on approximating a linguistically motivated segmentation. Instead, the aim has been to tune subword segmentations for particular applications. For example, the Byte Pair Encoding segmentation algorithm, based on a simple substitution dictionary and first proposed for NMT by sennrich2015neural, has become a standard in the field. Especially in the case of multilingual models, training a single language-independent subword segmentation method is preferable to linguistic segmentation.
In this study, we compare three existing and one novel subword segmentation method, all sharing the use of a unigram language model in a generative modeling framework. The previously published methods are Morfessor Baseline , Greedy Unigram Likelihood , and SentencePiece . The new Morfessor variant proposed in this work is called Morfessor EM+Prune.
The contributions of this article[1] are
[1] This work is licensed under a Creative Commons Attribution–NoDerivatives 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by-nd/4.0/
a better training algorithm for Morfessor Baseline, with reduction of search error during training, and improved segmentation quality for English, Finnish and Turkish;
comparing four similar segmentation methods, including a close look at the SentencePiece reference implementation, highlighting details omitted from the original article;
and showing that the proposed Morfessor EM+Prune with particular hyper-parameters yields SentencePiece.
Table 1: Comparison of the four segmentation methods.

|                       | Morfessor BL | Greedy Unigram | SentencePiece | Morfessor EM+Prune |
|-----------------------|--------------|----------------|---------------|--------------------|
| Model                 | Unigram LM   | Unigram LM     | Unigram LM    | Unigram LM         |
| Training algorithm    | Local search | EM+Prune       | EM+Prune      | EM+Prune           |
| Initialization        | Words        | Seed lexicon   | Seed lexicon  | Seed lexicon       |
| EM variant            | –            | Lateen-EM once | EM            | EM / Lateen-EM     |
| Cost change threshold | ✓            | ✓              | –             | ✓                  |
| Target lexicon size   | Approximate  | ✓              | ✓             | ✓                  |
1.1 Morphological Segmentation with Unigram Language Models
Morphological surface segmentation is the task of splitting words into morphs, the surface forms of meaning-bearing sub-word units, morphemes. The concatenation of the morphs is the word, e.g. "reopening" → "re" + "open" + "ing".
Probabilistic generative methods for morphological segmentation model the probability of generating a sequence of morphs (a word, sentence or corpus), as opposed to discriminative methods that model the conditional probability of the segmentation boundaries given the unsegmented data.
This study focuses on segmentation methods applying a unigram language model. In the unigram language model, the morphs in a word are assumed to occur independently of each other. Alternatively stated, it is a zero-order (memoryless) Markov model, generalized so that one observation can cover multiple characters. The probability of a sequence of morphs thus decomposes into the product of the probabilities of the morphs of which it consists.
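This decomposition can be illustrated with a minimal sketch. The morph probabilities below are hypothetical toy values, not estimates from any corpus:

```python
import math

# Toy morph lexicon with probabilities summing to 1 (hypothetical values).
log_probs = {m: math.log(p) for m, p in
             {"re": 0.2, "open": 0.3, "ing": 0.3, "reopening": 0.2}.items()}

def unigram_log_prob(morphs, log_probs):
    """Log-probability of a morph sequence under a unigram LM: morphs are
    assumed independent, so the sequence probability is the product of the
    per-morph probabilities (a sum in log space)."""
    return sum(log_probs[m] for m in morphs)

segmented = unigram_log_prob(["re", "open", "ing"], log_probs)
unsegmented = unigram_log_prob(["reopening"], log_probs)
```

With these toy values, leaving the word unsegmented gets a higher probability than the three-morph analysis, showing how the probabilities determine the preferred segmentation.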
The Expectation Maximization (EM) algorithm is an iterative algorithm for finding Maximum Likelihood (ML) or Maximum a Posteriori (MAP) estimates for parameters in models with latent variables. The EM algorithm consists of two steps. In the E-step, the expected value of the complete data likelihood including the latent variable is taken, and in the M-step, the parameters are updated to maximize the expected value of the E-step.
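With observed data X, latent variables Z, and parameters θ, the two steps can be written in their standard generic form:

```latex
Q(\theta \mid \theta^{(t)})
  = \mathbb{E}_{Z \mid X,\, \theta^{(t)}}\!\left[\log p(X, Z \mid \theta)\right]
  \qquad \text{(E-step)}

\theta^{(t+1)} = \operatorname*{arg\,max}_{\theta}\; Q(\theta \mid \theta^{(t)})
  \qquad \text{(M-step)}
```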
When applied to a (hidden) Markov model, EM is called the forward-backward algorithm. Using instead the related Viterbi algorithm is sometimes referred to as hard-EM.[2] spitkovsky2011lateen present lateen-EM, a hybrid variant in which EM and Viterbi optimization are alternated.
[2] An analogy can be drawn to clustering using k-means, which yields a hard assignment of data points to clusters, versus using EM for clustering with a Gaussian Mixture Model (GMM), where the assignment is soft.
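To illustrate the hard assignment, the single best (Viterbi) segmentation of a word under a unigram model can be computed with a short dynamic program. This is a generic sketch with hypothetical morph probabilities, not the implementation of any of the compared tools:

```python
import math

def viterbi_segment(word, log_probs):
    """Most likely segmentation of `word` under a unigram model, via
    dynamic programming; hard-EM uses only this single best analysis."""
    best = [0.0] + [-math.inf] * len(word)   # best[i]: score of word[:i]
    back = [0] * (len(word) + 1)             # backpointer to split position
    for end in range(1, len(word) + 1):
        for start in range(end):
            morph = word[start:end]
            if morph in log_probs:
                score = best[start] + log_probs[morph]
                if score > best[end]:
                    best[end], back[end] = score, start
    morphs, i = [], len(word)
    while i > 0:                             # recover morphs right-to-left
        morphs.append(word[back[i]:i])
        i = back[i]
    return morphs[::-1]

lp = {m: math.log(p) for m, p in
      {"re": 0.2, "open": 0.3, "ing": 0.3, "reopening": 0.01,
       "r": 0.01, "e": 0.01}.items()}
analysis = viterbi_segment("reopening", lp)
```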
virpioja2012learning discusses the challenges of applying EM to the learning of generative morphology. Jointly optimizing both the morph lexicon and the parameters for the morphs is intractable. If, like in Morfessor Baseline, the cost function is discontinuous when morphs are added to or removed from the lexicon, there is no closed form solution to the M-step. With ML estimates for morph probabilities, EM can neither add nor remove morphs from the lexicon, because it can neither change a zero probability to nonzero nor vice versa.
One solution to this challenge is to apply local search. Starting from the current best estimate for the parameters, small search steps are tried to explore near-lying parameter configurations. The choice that yields the lowest cost is selected as the new parameters. Greedy local search often gets stuck in local minima. Even if there are parameters yielding a better cost, the search may not find them, causing search error. The error remaining at the parameters with globally optimal cost is the model error.
Another solution is to combine EM with pruning (EM+Prune). The methods based on pruning begin with a seed lexicon, which is then iteratively pruned until a stopping condition is reached. Subwords cannot be added to the lexicon after initialization. As a consequence, proper initialization is important, and the methods should not prune too aggressively without reestimating parameters, as pruning decisions cannot be backtracked. For this reason, EM+Prune methods proceed iteratively, only pruning subwords up to a predefined iteration pruning quota, e.g. removing at most 20% of the remaining lexicon at a time.
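The generic EM+Prune loop can be sketched as follows. The `reestimate` and `removal_cost` callbacks are hypothetical stand-ins for the method-specific E/M steps and pruning criterion, not any tool's actual API:

```python
def em_prune(lexicon, reestimate, removal_cost, target_size,
             quota=0.20, em_iters=3):
    """Generic EM+Prune loop sketch: alternate parameter re-estimation
    with bounded pruning, since pruned subwords can never be re-added."""
    lexicon = set(lexicon)
    while len(lexicon) > target_size:
        for _ in range(em_iters):
            reestimate(lexicon)              # E- and M-steps over the corpus
        # Single-character subwords are never candidates, keeping the
        # model able to represent an open vocabulary.
        candidates = sorted((s for s in lexicon if len(s) > 1),
                            key=removal_cost)
        # Prune at most the per-iteration quota, so that parameters are
        # re-estimated before further (irreversible) pruning decisions.
        n_prune = min(max(1, int(quota * len(lexicon))),
                      len(lexicon) - target_size)
        for sub in candidates[:n_prune]:
            lexicon.discard(sub)
    return lexicon

# Toy usage: no-op E/M steps, cost = subword length (shortest pruned first).
lex = em_prune({"a", "ab", "abc", "abcd", "abcde"},
               reestimate=lambda lx: None,
               removal_cost=len,
               target_size=3)
```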
2 Related Work
In this section we review three previously published segmentation methods that apply a unigram language model. Table 1 summarizes the differences between these methods.
2.1 Morfessor Baseline
In Morfessor Baseline, a point estimate for the model parameters is found using MAP estimation with a Minimum Description Length (MDL) inspired prior that favors lexicons containing fewer, shorter morphs. The MAP estimate yields a two-part cost function, consisting of a prior (the lexicon cost) and a likelihood (the corpus cost). The model can be tuned using the hyper-parameter α, which is a weight applied to the likelihood cost:

L(θ, D) = −log p(θ) − α log p(D | θ)
The parameter α controls the overall amount of segmentation, with higher values increasing the weight of each emitted morph in the corpus (leading to less segmentation), and lower values giving a relatively larger weight to a small lexicon (more segmentation).
The prior can be further divided into two parts: the prior for the morph form properties and the usage properties. The form properties encode the string representation of the morphs, while the usage properties encode their frequencies. Morfessor Baseline applies a non-informative prior for the distribution of the morph frequencies. It is derived using combinatorics from the number of ways that the total token count ν can be divided among the μ lexicon items:

p(frequencies) = 1 / C(ν − 1, μ − 1)
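Assuming the stars-and-bars form given above, the cost contribution of this prior can be computed stably in log space; `log_freq_prior` is an illustrative helper, not Morfessor's code:

```python
import math

def log_freq_prior(total_tokens, lexicon_size):
    """Negative log of the non-informative frequency-distribution prior
    1 / C(total_tokens - 1, lexicon_size - 1): the number of ways the
    total token count can be split among the lexicon items (stars and
    bars). Uses lgamma for numerical stability with large counts."""
    n, k = total_tokens - 1, lexicon_size - 1
    log_binom = (math.lgamma(n + 1) - math.lgamma(k + 1)
                 - math.lgamma(n - k + 1))
    return log_binom  # cost in nats, added to the lexicon cost

cost = log_freq_prior(total_tokens=10, lexicon_size=3)  # log C(9, 2)
```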
Morfessor Baseline is initialized with a seed lexicon of whole words. The Morfessor Baseline training algorithm is a greedy local search. During training, in addition to storing the model parameters, the current best segmentation for the corpus is stored in a graph structure. The segmentation is iteratively refined, by looping over all the words in the corpus in a random order and resegmenting them. The resegmentation is applied by recursive binary splitting, leading to changes in other words that share intermediary units with the word currently being resegmented. The search converges to a local optimum, and is known to be sensitive to the initialization.
In the Morfessor 2.0 implementation, the likelihood weight hyper-parameter α is set either with a grid search using the best evaluation score on a held-out development set, or by applying an approximate automatic tuning procedure based on a heuristic guess of the direction in which the parameter should be adjusted.
2.2 Greedy Unigram Likelihood
varjokallio2013learning presents a subword segmentation method, particularly designed for use in ASR. It applies greedy pruning based on unigram likelihood. The seed lexicon is constructed by enumerating all substrings from a list of common words, up to a specified maximum length. Pruning proceeds in two phases, which the authors call initialization and pruning.
In the first phase, a character-level language model is trained. The initial probabilities of the subwords are computed using the language model. The probabilities are refined by EM, followed by hard-EM. During the hard-EM, frequency based pruning of subwords begins.
In the second phase, hard-EM is used for parameter estimation. At the end of each iteration, the least frequent subwords are selected as candidates for pruning. For each candidate subword, the change in likelihood when removing the subword is estimated by resegmenting all words in which the subword occurs. After each pruned subword, the parameters of the model are updated. Pruning ends when the goal lexicon size is reached or the change in likelihood no longer exceeds a given threshold.
2.3 SentencePiece

Although the original article implies the use of maximum likelihood estimation, the SentencePiece reference implementation[3] uses the implicit Dirichlet Process prior called Bayesian EM. In the M-step, the count normalization is modified to

θ_i = exp(ψ(c_i)) / exp(ψ(Σ_j c_j)),

where ψ is the digamma function.
[3] https://github.com/google/sentencepiece
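A minimal numeric sketch of this modified normalization, with the digamma function implemented directly to avoid non-stdlib dependencies (the counts are hypothetical):

```python
import math

def digamma(x):
    """Digamma function ψ(x) via upward recurrence and an asymptotic
    series; accurate enough for the count values used here."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return (r + math.log(x) - 0.5 / x
            - f * (1.0 / 12 - f * (1.0 / 120 - f / 252)))

def bayesian_m_step(expected_counts):
    """Bayesian-EM count normalization: replaces c_i / sum(c) with
    exp(ψ(c_i)) / exp(ψ(sum(c))). The resulting values sum to less than
    one, implicitly penalizing subwords with small expected counts."""
    total = sum(expected_counts.values())
    return {m: math.exp(digamma(c) - digamma(total))
            for m, c in expected_counts.items()}

p = bayesian_m_step({"a": 5.0, "b": 1.0})
```

Compared to the plain ML estimates 5/6 and 1/6, both values shrink, but the low-count subword is penalized relatively more, which encourages a sparse lexicon.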
The seed lexicon consists simply of the most frequent substrings, e.g. the one million most frequent. SentencePiece uses an EM+Prune training algorithm. Each iteration consists of two sub-iterations of EM, after which the lexicon is pruned. Pruning is based on Viterbi counts (EM+Viterbi-prune). First, subwords that do not occur in the Viterbi segmentation are pre-pruned. The cost function is the estimated change in likelihood when the subword is removed, estimated under the assumption that all probability mass of the removed subword goes to its Viterbi segmentation. Subwords are sorted according to the cost, and a fixed proportion of the remaining subwords is pruned each iteration. Single-character subwords are never pruned. A predetermined lexicon size is used as the stopping condition.
3 Morfessor EM+Prune
Morfessor EM+Prune[4] uses the unigram language model and priors similar to Morfessor Baseline, but combines them with EM+Prune training.
[4] Software available at https://github.com/Waino/morfessor-emprune
3.1 Prior

The prior must be slightly modified for use with the EM+Prune algorithm. The prior for the frequency distribution is derived using combinatorics. When using real-valued expected counts, there is an infinite number of assignments of counts to parameters. Despite not being theoretically motivated, it can still be desirable to compute an approximation of the Baseline frequency distribution prior, in order to use EM+Prune as an improved search for finding more optimal parameters under the original cost. To do this, the real-valued token count is rounded to the nearest integer.[5] Alternatively, the prior for the frequency distribution can be omitted, or a new prior with suitable properties could be formulated. We do not propose a completely new prior in this work, instead opting to remain as close as possible to Morfessor Baseline.
[5] An alternative would be to replace the factorial with the gamma function. This added precision serves no practical purpose, particularly as we already use Stirling's approximation of the factorial.
In Morfessor EM+Prune, morphs are explicitly stored in the lexicon, and morphs are removed from the lexicon only during pruning. This differs from Morfessor Baseline, in which a morph is implicitly considered to be stored in the lexicon if it has non-zero count.
The prior for the morph form properties does not need to be modified. During the EM parameter estimation, the prior for the morph form properties is omitted as the morph lexicon remains constant. During pruning, the standard form prior is applicable.
Additionally, we apply the Bayesian EM implicit Dirichlet Process prior. We experiment with four variations of the prior:
the full EM+Prune prior,
omitting the Bayesian EM (noexp),
omitting the approximate frequency distribution prior (nofreqdistr),
and omitting the prior entirely (noprior).
3.2 Seed Lexicon
The seed lexicon consists of the one million most frequent substrings, with two restrictions on which substrings to include: pre-pruning of redundant subwords, and forcesplit. Truncating to the chosen size is performed after pre-pruning, which means that pre-pruning can make space for substrings that would otherwise have been below the threshold.
Pre-pruning of redundant subwords is based on occurrence counts. If a string w occurs c times, then any substring of w will occur at least c times. Therefore, if a substring of w has a count of exactly c, we know that it is not needed in any other context except as a part of w. Such unproductive substrings are likely to be poor candidate subwords, and can be removed to make space in the seed lexicon for more useful substrings. This pre-pruning is not a neutral optimization, but does affect segmentation results. We check all initial and final substrings for redundancy, but do not pre-prune internal substrings.
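The redundancy rule can be sketched as follows; `prepruned_seed` is an illustrative helper under the assumptions just stated, not the actual implementation:

```python
from collections import Counter

def prepruned_seed(word_counts, max_len, lexicon_size):
    """Seed lexicon of the most frequent substrings, with pre-pruning of
    redundant subwords: if a prefix or suffix of word w occurs exactly as
    often as w itself, it occurs in no other context than as part of w,
    so it is dropped. Internal substrings are not checked."""
    counts = Counter()
    for w, c in word_counts.items():
        for i in range(len(w)):
            for j in range(i + 1, min(len(w), i + max_len) + 1):
                counts[w[i:j]] += c
    redundant = set()
    for w, c in word_counts.items():
        for k in range(1, len(w)):
            for sub in (w[:k], w[k:]):       # initial and final substrings
                if counts[sub] == c:         # occurs only as part of w
                    redundant.add(sub)
    ranked = [s for s, _ in counts.most_common() if s not in redundant]
    return ranked[:lexicon_size]             # truncate after pre-pruning

seed = prepruned_seed({"walking": 5, "walked": 3},
                      max_len=7, lexicon_size=100)
```

In this toy corpus "walk" survives (it occurs in two contexts, 8 times in total), while "ing" is dropped as it never occurs outside "walking".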
To achieve forced splitting before or after certain characters, e.g. hyphens, apostrophes and colons, substrings which include a forced split point can be removed from the seed lexicon. As EM+Prune is unable to introduce new subwords, this pre-pruning is sufficient to guarantee the forced splits. While Morfessor 2.0 only implements force splitting certain characters to single-character morphs, i.e. force splitting on both sides, we implement more fine-grained force splitting separately before and after the specified character.
3.3 Training Algorithm
We experiment with three variants of the EM+Prune iteration structure: EM, lateen-EM, and EM+Viterbi-prune.
EM+Viterbi-prune is an intermediary mode between EM and lateen-EM in the context of pruning. The pruning decisions are made based on counts from a single iteration of Viterbi training, but these Viterbi counts are not otherwise used to update the parameters. In effect, this allows for the more aggressive pruning using the Viterbi counts, while retaining the uncertainty of the soft parameters.
Each iteration begins with 3 sub-iterations of EM. In the pruning phase of each iteration, the subwords in the current lexicon are sorted in ascending order according to the estimated change in the cost function if the subword is removed from the lexicon. Subwords consisting of a single character are always kept, to retain the ability to represent an open vocabulary without OOV issues. The list is then pruned according to one of three available pruning criteria:[6]
[6] MDL with or without automatic tuning is not compatible with omitting the prior.
(α-weighted) MDL pruning,
MDL with automatic tuning of α for a target lexicon size,
lexicon size with the prior omitted or a pretuned α.
In (α-weighted) Minimum Description Length (MDL) pruning, subwords are pruned until the estimated cost starts rising, or until the pruning quota for the iteration is reached, whichever comes first.
A subword lexicon of a predetermined size can be used as the pruning criterion in two different ways. If the desired α is known in advance, or if the prior is omitted, subwords are pruned until the desired lexicon size is reached, or until the pruning quota for the iteration is reached, whichever comes first.
To reach a subword lexicon of a predetermined size while using the Morfessor prior, the new automatic tuning procedure can be applied. For each subword, the estimated changes in prior and likelihood are computed separately. These allow computing the value of α that would cause the removal of the subword to be cost neutral, i.e. the value that would cause MDL pruning to terminate at that subword. For subwords with the same sign for both the change in prior and the change in likelihood, no such threshold can be computed: if the removal decreases both costs, the subword will always be removed, and if it increases both costs, it will always be kept. Sorting the list of subwords according to the estimated threshold, including the always-kept subwords, allows automatically tuning α so that a subword lexicon of exactly the desired size is retained after MDL pruning. The automatic tuning is repeated before the pruning phase of each iteration, as retraining the parameters affects the estimates.
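The tuning idea can be sketched as below, under the sign convention cost = prior + α · likelihood, so that removal is cost-neutral at α = −Δprior/Δlik and a subword is pruned when α lies below its threshold. This is an illustration of the procedure, not the actual implementation:

```python
import math

def autotune_alpha(deltas, n_keep):
    """Pick the likelihood weight alpha so that MDL pruning retains
    exactly `n_keep` subwords. `deltas` maps subword -> (d_prior, d_lik),
    the estimated cost changes on removing that subword."""
    thresholds = []
    for sub, (d_prior, d_lik) in deltas.items():
        if d_prior >= 0 and d_lik >= 0:
            thresholds.append((math.inf, sub))   # removal always hurts: kept
        elif d_prior <= 0 and d_lik <= 0:
            thresholds.append((-math.inf, sub))  # removal always helps: removed
        else:
            thresholds.append((-d_prior / d_lik, sub))
    thresholds.sort()                            # ascending threshold
    alpha = thresholds[n_keep - 1][0]            # smallest alpha keeping n_keep
    kept = {sub for t, sub in thresholds if t <= alpha}
    return alpha, kept

# Toy deltas: removal shrinks the prior cost but raises the likelihood cost.
alpha, kept = autotune_alpha(
    {"ab": (-2.0, 1.0), "cd": (-1.0, 1.0), "ef": (-4.0, 1.0)}, n_keep=2)
```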
3.4 Sampling of Segmentations
Morfessor EM+Prune can be used in subword regularization, a denoising-based regularization method for neural NLP systems. Alternative segmentations can be sampled from the full data distribution using the forward-filtering backward-sampling algorithm, or approximately but more efficiently from an n-best list.
3.5 SentencePiece as a Special Case of Morfessor EM+Prune
Table 1 contains a comparison of all four methods discussed in this work. To recover SentencePiece, Morfessor EM+Prune should be run with the following settings: the prior should be omitted entirely, leaving only the likelihood.
As the tuning parameter α is no longer needed when the prior is omitted, the pruning criterion can be set to a predetermined lexicon size, without automatic tuning of α. Morfessor by default uses type-based training; to use frequency information, count dampening should be turned off. The seed lexicon should be constructed without forced splitting. The EM+Viterbi-prune training scheme should be used, with Bayesian EM turned on.
4 Experimental Setup
English, Finnish and Turkish data are from the Morpho Challenge 2010 data set [12, 13]. The training sets contain ca. 878k, 2.9M and 617k word types, respectively. As test sets we use the union of the 10 official test set samples. For North Sámi, we use a list of ca. 691k word types extracted from the Den samiske tekstbanken corpus (Sametinget, 2004) and the 796 word type test set from version 2 of the data set collected by [8, 7].
In most experiments we use a grid search with a development set to find a suitable value for α. The exceptions are experiments using the autotuning or lexicon size criteria, and experiments using SentencePiece. We use type-based training (dampening counts to 1) with all Morfessor methods.
For English, we force splits before and after hyphens, and before apostrophes, e.g. “women’s-rights” is force split into “women ’s - rights”. For Finnish, we force splits before and after hyphens, and after colons. For North Sámi, we force splits before and after colons. For Turkish, the Morpho Challenge data is preprocessed in a way that makes force splitting ineffectual.
4.1 Evaluation

The ability of the training algorithm to find parameters minimizing the Morfessor cost is evaluated by using the trained model to segment the training data, and loading the resulting segmentation as if it were a Morfessor Baseline model. We observe both the unweighted prior and likelihood, and their α-weighted sum.
The closeness to linguistic segmentation is evaluated by comparison with annotated morph boundaries using boundary precision, boundary recall, and boundary F-score. The boundary F-score equals the harmonic mean of precision (the percentage of correctly assigned boundaries with respect to all assigned boundaries) and recall (the percentage of correctly assigned boundaries with respect to the reference boundaries). Precision and recall are calculated using macro-averages over the word types in the test set. In the case that a word has more than one annotated segmentation, we take the one that gives the highest score.
4.2 Error Analysis
We perform an error analysis, with the purpose of gaining more insight into the ability of the methods to model particular aspects of morphology. We follow the procedure used by ruokolainen2016comparative. It is based on a categorization of morphs into the categories prefix, stem, and suffix. The category labels are derived from the original morphological analysis labels in the English and Finnish gold standards, and directly correspond to the annotation scheme used in the North Sámi test set.
We first divide errors into two kinds, over-segmentation and under-segmentation. Over-segmentation occurs when a boundary is incorrectly assigned within a morph segment. In under-segmentation, a correct morph boundary is omitted from the generated segmentation. We further divide the errors by the morph category in which the over-segmentation occurs, and the two morph categories surrounding the omitted boundary in under-segmentation.
5 Results

Figure 1 compares the cost components of the Morfessor model across different parameters. For mid-range settings, the lowest costs are obtained with the EM+Prune algorithm, but for larger lexicons the Baseline algorithm copes better. As expected, using forced splits at certain characters increases the costs, and the increase is larger than the difference between the training algorithms. As the Turkish preprocessing causes the results to be unaffected by the forced splits, we only report results without them.
Tables 2 to 5 show the Morfessor cost of the segmented training data for particular α values. Again, the proposed Morfessor EM+Prune reaches a lower Morfessor cost than Morfessor Baseline. Using lateen-EM has only a minimal effect on the costs, decreasing the total cost slightly for English and increasing it for the other languages. Turkish results include the "keep-redundant" setting, discussed below in more detail.
Figure 2 shows the precision–recall curves for the primary systems, for all four languages. While increasing the Morfessor cost, forced splitting improves BPR. Tables 6 to 9 show test set Boundary Precision, Recall and F-score (BPR) results at the optimal tuning point (selected using a development set) for each model, for English, Finnish, Turkish and North Sámi, respectively.[7] The default Morfessor EM+Prune configuration ("soft" EM, full prior, forcesplit) significantly outperforms Morfessor Baseline w.r.t. the F-score for all languages except North Sámi, for which there is no significant difference between the methods.
[7] Note that SentencePiece is not designed to aim for a linguistic morpheme segmentation, nor does it attempt to minimize the Morfessor cost. Therefore, SentencePiece is included in the evaluations for context, not as a baseline method.
Morfessor EM+Prune is less responsive to tuning than Morfessor Baseline. This is visible in the shorter lines in Figures 1 and 2, although the tuning parameter α takes values from the same range. In particular, EM+Prune cannot easily be tuned to produce very large lexicons.
Pre-pruning of redundant substrings gives mixed results. For Turkish, both Morfessor cost and BPR are degraded by the pre-pruning, but for the other three languages the pre-pruning is beneficial or neutral. When tuning to very high α values (less segmentation), pre-pruning of redundant substrings improves the sensitivity to tuning. The same effect may also be achievable by using a larger seed lexicon. We perform most of our experiments with pre-pruning turned on.
To see the effect of pre-pruning on the seed lexicon, we count the number of subwords that are used in the gold standard segmentations, but not included in seed lexicons of various sizes. Taking Finnish as an example, we see 203 subword types missing from a 1 million substring seed lexicon without pre-pruning. Turning on pre-pruning decreases the number of missing types to 120. To reach the same number without using pre-pruning, a much larger seed lexicon of 1.7M substrings must be used.
Omitting the frequency distribution prior appears to have little effect on Morfessor cost and BPR. Turning off Bayesian EM (noexp) results in a less compact lexicon and thus a higher prior cost, but improves BPR for two languages: English and Turkish.
Table 10 contains the error analysis for English, Finnish and North Sámi. For English and North Sámi, EM+Prune results in less under-segmentation but worse over-segmentation. For Finnish these results are reversed. However, the suffixes are often better modeled, as shown by lower under-segmentation on SUF-SUF (all languages) and STM-SUF (English and North Sámi).
6 Conclusions

We propose Morfessor EM+Prune, a new training algorithm for Morfessor Baseline. EM+Prune reduces search error during training, resulting in models with lower Morfessor costs. Lower costs also lead to improved accuracy when the segmentation output is compared to a linguistic morphological segmentation.
We compare Morfessor EM+Prune to three previously published segmentation methods applying unigram language models. We find that using the Morfessor prior is beneficial when the reference is linguistic morphological segmentation.
In this work we focused on model cost and linguistic segmentation. In future work the performance of Morfessor EM+Prune in applications will be evaluated. Also, a new frequency distribution prior, which is theoretically better motivated or has desirable properties, could be formulated.
7 Acknowledgements

This study has been supported by the MeMAD project, funded by the European Union's Horizon 2020 research and innovation programme (grant agreement No 780069), and the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113). Computer resources within the Aalto University School of Science "Science-IT" project were used.
8 Bibliographical References
- (2019) Massively multilingual neural machine translation in the wild: findings and challenges. CoRR abs/1907.05019.
- (2017) CoNLL-SIGMORPHON 2017 shared task: universal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, Vancouver, pp. 1–30.
- (2002) Unsupervised discovery of morphemes. In ACL-02 Workshop on Morphological and Phonological Learning, MPL '02, Vol. 6, Philadelphia, Pennsylvania, USA, pp. 21–30.
- (2007) Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing 4 (1).
- (1977) Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological) 39 (1), pp. 1–38.
- (1994) A new algorithm for data compression. C Users Journal 12 (2), pp. 23–38.
- Low-resource active learning of morphological segmentation. Northern European Journal of Language Technology.
- (2015) Low-resource active learning of North Sámi morphological segmentation. In Proceedings of the 1st International Workshop in Computational Linguistics for Uralic Languages, pp. 20–33.
- (2010) Semi-supervised learning of concatenative morphology. In Proceedings of the 11th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology, Uppsala, Sweden, pp. 78–86.
- (2018) SentencePiece: a simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Brussels, Belgium, pp. 66–71.
- (2018) Subword regularization: improving neural network translation models with multiple subword candidates. arXiv:1804.10959.
- (2010) Morpho Challenge 2005–2010: evaluations and results. In Proceedings of the 11th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology, J. Heinz, L. Cahill, and R. Wicentowski (Eds.), Uppsala, Sweden, pp. 87–95.
- (2010) Overview and results of Morpho Challenge 2010. In Proceedings of the Morpho Challenge 2010 Workshop, Espoo, Finland, pp. 7–24. Technical Report TKK-ICS-R37.
- (2007) Structured Bayesian nonparametric models with variational inference (tutorial). In Association for Computational Linguistics (ACL).
- (1989) Stochastic complexity in statistical inquiry. Vol. 15, World Scientific Series in Computer Science, Singapore.
- (2002) Bayesian methods for hidden Markov models: recursive computing in the 21st century. Journal of the American Statistical Association 97 (457), pp. 337–351.
- (2015) Neural machine translation of rare words with subword units. In ACL16. arXiv:1508.07909.
- (2017) Improved subword modeling for WFST-based speech recognition. In INTERSPEECH 2017 – 18th Annual Conference of the International Speech Communication Association.
- (2013) Learning a subword vocabulary based on unigram likelihood. In Proc. 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Olomouc, Czech Republic, pp. 7–12.
- (2013) Morfessor 2.0: Python implementation and extensions for Morfessor Baseline. Technical Report 25/2013 in Aalto University publication series SCIENCE + TECHNOLOGY, Department of Signal Processing and Acoustics, Aalto University, Helsinki, Finland.
- (2011) Empirical comparison of evaluation methods for unsupervised learning of morphology. Traitement Automatique des Langues 52 (2), pp. 45–90.
- (1967) Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory 13 (2), pp. 260–269.
9 Language Resource References
Grönroos, Stig-Arne and Hiovain, Katri and Smit, Peter and Rauhala, Ilona Erika and Jokinen, Päivi Kristiina and Kurimo, Mikko and Virpioja, Sami. (2015). North Sámi active learning morphological segmentation annotations. Aalto University, 2.0.
Kudo, Taku and Richardson, John. (2018). SentencePiece. Taku Kudo.
Kurimo, Mikko and Virpioja, Sami and Turunen, Ville T. (2010). Morpho Challenge 2010 dataset. Aalto University.
Sametinget. (2004). Den samiske tekstbanken. UiT Norgga árktalaš universitehta.
Virpioja, Sami and Smit, Peter and Grönroos, Stig-Arne and Kurimo, Mikko. (2013). Morfessor 2.0. Aalto University, 2.0.6.