Morphological Segmentation Inside-Out

11/12/2019 · by Ryan Cotterell, et al. · Johns Hopkins University

Morphological segmentation has traditionally been modeled with non-hierarchical models, which yield flat segmentations as output. In many cases, however, proper morphological analysis requires hierarchical structure – especially in the case of derivational morphology. In this work, we introduce a discriminative, joint model of morphological segmentation along with the orthographic changes that occur during word formation. To the best of our knowledge, this is the first attempt to approach discriminative segmentation with a context-free model. Additionally, we release an annotated treebank of 7454 English words with constituency parses, encouraging future research in this area.


1 Introduction

In NLP, supervised morphological segmentation has typically been viewed as either a sequence-labeling or a segmentation task segstudy. In contrast, we consider a hierarchical approach, employing a context-free grammar (CFG). CFGs provide a richer model of morphology: They capture (i) the intuition that words themselves have internal constituents, which belong to different categories, as well as (ii) the order in which affixes are attached. Moreover, many morphological processes, e.g., compounding and reduplication, are best modeled as hierarchical; thus, context-free models are expressively more appropriate.

The purpose of morphological segmentation is to decompose words into smaller units, known as morphemes, which are typically taken to be the smallest meaning-bearing units in language. This work concerns itself with modeling hierarchical structure over these morphemes. Note that a simple flat morphological segmentation can also be straightforwardly derived from the CFG parse tree. Segmentations have found use in a diverse set of NLP applications, e.g., automatic speech recognition afify2006use, keyword spotting narasimhanmorphological, machine translation clifton2011combining and parsing seeker2015graph. In contrast to prior work, we focus on canonical segmentation, i.e., we seek to jointly model orthographic changes and segmentation. For instance, the canonical segmentation of untestably is un+test+able+ly, where we map ably to able+ly, restoring the letters le.
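As a toy illustration of this mapping (hand-written rules for this single example, not the learned transduction model described later in the paper), the sketch below restores the deleted letters and then splits at hand-specified morpheme boundaries; the function name and rule are purely illustrative.

```python
# Toy sketch of canonical segmentation for one example (hand-written rules,
# not the paper's learned model).
def canonical_segment(surface: str) -> str:
    # Restore the orthographic change: 'ably' arose from 'able' + 'ly' with
    # deletion of 'le'; undo that deletion.
    underlying = surface.replace("ably", "ablely")
    # Split the restored form at hand-specified morpheme boundaries.
    boundaries = {"untestablely": ["un", "test", "able", "ly"]}
    return "+".join(boundaries.get(underlying, [underlying]))

print(canonical_segment("untestably"))  # un+test+able+ly
```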

We make two contributions: (i) We introduce a joint model for canonical segmentation with a CFG backbone. We experimentally show that this model outperforms a semi-Markov model on flat segmentation. (ii) We release the first morphology treebank, consisting of 7454 English word types, each annotated with a full constituency parse.

(a) [Word [Word [Prefix un] [Word [Word test] [Suffix able]]] [Suffix ly]]

(b) [Word [Word [Word [Prefix un] [Word test]] [Suffix able]] [Suffix ly]]

(c) [Word [Word [Prefix un] [Word lock]] [Suffix able]]

(d) [Word [Prefix un] [Word [Word lock] [Suffix able]]]

Figure 1: Canonical segmentation parse trees for untestably and unlockable, shown as labeled bracketings. For both words, the scope of un is ambiguous. Arguably, (a) is the only correct parse tree for untestably; the reading associated with (b) is hard to get. On the other hand, unlockable is truly ambiguous between “able to be unlocked” (c) and “unable to be locked” (d).

2 The Case For Hierarchical Structure

Why should we analyze morphology hierarchically? It is true that we can model much of morphology with finite-state machinery beesley2003finite, but there are, nevertheless, many cases where hierarchical structure appears requisite. For instance, the flat segmentation of the word untestably, un+test+able+ly, is missing important information about how the word was derived. The correct parse [[un[[test]able]]ly], on the other hand, does tell us that this is the order in which the complex form was derived:

test → testable → untestable → untestably.

This gives us insight into the structure of the lexicon—we expect that the segment testable exists as an independent word, but ably does not.

Moreover, a flat segmentation is often semantically ambiguous. There are two potentially valid readings of untestably depending on how the negative prefix un scopes. The correct tree (see Figure 1) yields the reading “in the manner of not able to be tested.” A second—likely infelicitous reading—where the segment untest forms a constituent yields the reading “in a manner of being able to untest.” Recovering the hierarchical structure allows us to select the correct reading; note there are even cases of true ambiguity; e.g., unlockable has two readings: “unable to be locked” and “able to be unlocked.”

We also note that theoretical linguists often implicitly assume a context-free treatment of word formation, e.g., by employing brackets to indicate different levels of affixation. Others have explicitly modeled word-internal structure with grammars selkirk0; marvin2002topics.

3 Parsing the Lexicon

A novel component of this work is the development of a discriminative parser finkel2008efficient; hall2014less for morphology. The goal is to define a probability distribution over all trees that could arise from the input word, after reversal of orthographic and phonological processes. We employ the simple grammar shown in Table 1. Despite its simplicity, it models the order in which morphemes are attached.

More formally, our goal is to map a surface form $w$ (e.g., untestably) into its underlying canonical form $u$ (e.g., untestablely) and then into a parse tree $t$ over its morphemes. We assume $w, u \in \Sigma^*$ for some discrete alphabet $\Sigma$. (For efficiency, we additionally bound the length of $u$ in terms of $|w|$.) Note that a parse tree over the string $u$ implicitly defines a flat segmentation given our grammar—one can simply extract the characters spanned by all preterminals in the resulting tree. Before describing the joint model in detail, we first consider its pieces individually.
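Reading a flat segmentation off a parse tree amounts to collecting the preterminal spans left to right. A minimal sketch, assuming a simple nested-tuple tree encoding (a hypothetical format, not the paper's data structures):

```python
# Sketch: recover the flat segmentation from a parse tree by reading off the
# characters spanned by the preterminals, left to right.
# Assumed tree format: (label, child, ...) with leaf morphemes as strings.
def flat_segmentation(tree) -> list[str]:
    if isinstance(tree, str):          # a leaf spans its own characters
        return [tree]
    _label, *children = tree
    segments = []
    for child in children:
        segments.extend(flat_segmentation(child))
    return segments

tree = ("Word",
        ("Word", ("Prefix", "un"),
                 ("Word", ("Word", "test"), ("Suffix", "able"))),
        ("Suffix", "ly"))
print("+".join(flat_segmentation(tree)))   # un+test+able+ly
```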

3.1 Restoring Orthographic Changes

To extract a canonical segmentation naradowsky2009improving; cotterell2016canonical, we restore orthographic changes that occur during word formation. To this end, we define the score function

$$\mathrm{score}_\eta(u, a, w) = \exp\left(\eta^\top f(u, a, w)\right) \qquad (1)$$

where $a$ is a monotonic alignment between the strings $u$ and $w$. The goal is for $\mathrm{score}_\eta$ to assign higher values to better matched pairs, e.g., (untestably, untestablely). We refer to dreyer2008latent for a thorough exposition.

For ease of computation, we can encode this function as a weighted finite-state machine (WFST) mohri2002weighted. This requires, however, that the feature function factor over the topology of the finite-state encoding. Since our model conditions on the word $w$, the feature function can extract features from any part of this string. Features on the output string $u$, however, are more restricted. In this work, we employ a bigram model over output characters. This implies that each state remembers exactly one character, the previous one. See cotterell-peng-eisner-2014 for details. We can compute the score for two strings $u$ and $w$ using a weighted generalization of the Levenshtein algorithm. Computing the partition function requires a different dynamic program whose runtime additionally grows with $|\Sigma|^2$. Note that since $|\Sigma| = 26$ (lower-case English letters), it takes roughly $26^2 \approx 700$ times longer to compute the partition function than to score a pair of strings.
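A minimal sketch of the pairwise scoring step, assuming illustrative (not learned) edit weights and a single best monotonic alignment rather than the full feature-based WFST; the weights and function name are hypothetical.

```python
import math

# Sketch of scoring a (surface, canonical) pair with a weighted generalization
# of Levenshtein: maximize the summed edit weights over monotonic alignments.
COPY, SUB, INS, DEL = 1.0, -2.0, -1.5, -1.5   # illustrative, not learned

def edit_score(w: str, u: str) -> float:
    """Log-score of the best monotonic alignment between w and u."""
    n, m = len(w), len(u)
    neg = -math.inf
    dp = [[neg] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == neg:
                continue
            if i < n and j < m:                       # copy or substitute
                step = COPY if w[i] == u[j] else SUB
                dp[i + 1][j + 1] = max(dp[i + 1][j + 1], dp[i][j] + step)
            if j < m:                                 # insert a canonical char
                dp[i][j + 1] = max(dp[i][j + 1], dp[i][j] + INS)
            if i < n:                                 # delete a surface char
                dp[i + 1][j] = max(dp[i + 1][j], dp[i][j] + DEL)
    return dp[n][m]

print(edit_score("untestably", "untestablely"))   # good pair: high score
print(edit_score("untestably", "xxxx"))           # bad pair: low score
```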

Our model includes several simple feature templates, including features that fire on individual edit actions as well as conjunctions of edit actions and characters in the surrounding context. See cotterell2016canonical for details.

Root   → Word
Word   → Prefix Word
Word   → Word Suffix
Word   → Σ+
Prefix → Σ+
Suffix → Σ+
Table 1: The context-free grammar used in this work to model word formation. The productions closely resemble those of johnson2006adaptor’s Adaptor Grammar.
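Written down as data for a toy parser (a sketch of one possible encoding, not the paper's implementation), the grammar of Table 1 looks like this:

```python
# The grammar of Table 1 as plain data. The nonterminal rules are binary
# (plus one unary root rule); the "emission" rules Word/Prefix/Suffix -> Sigma+
# are handled by letting any substring be a candidate segment for those labels.
UNARY_RULES = [("Root", "Word")]
BINARY_RULES = [
    ("Word", ("Prefix", "Word")),
    ("Word", ("Word", "Suffix")),
]
SEGMENT_LABELS = {"Word", "Prefix", "Suffix"}   # each may rewrite to Sigma+
```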

3.2 Morphological Analysis as Parsing

Next, we need to score an underlying canonical form (e.g., untestablely) together with a parse tree (e.g., [[un[[test]able]]ly]). Thus, we define the parser score with the following function

$$\mathrm{score}_\omega(t, u) = \exp\Big(\sum_{\pi \in \Pi(t)} \omega^\top \phi(\pi, u)\Big) \qquad (2)$$

where $\Pi(t)$ is the set of anchored productions in the tree $t$. An anchored production $\pi$ is a grammar rule in Chomsky normal form attached to a span, e.g., $A_{i,k} \rightarrow B_{i,j}\, C_{j,k}$. Each $\pi$ is then assigned a weight by the linear function $\omega^\top \phi(\pi, u)$, where the function $\phi$ extracts relevant features from the anchored production as well as the corresponding span of the underlying form $u$. This model is typically referred to as a weighted CFG (WCFG) smith2007weighted or a CRF parser.

For $\phi$, we define three span features: (i) indicator features on the span’s segment, (ii) an indicator feature that fires if the segment appears in an external corpus (we use the Wikipedia dump from 2016-05-01) and (iii) the conjunction of the segment with the label (e.g., Prefix) of the subtree root. Following hall2014less, we employ an indicator feature for each production as well as production backoff features.
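A small sketch of these span features for one anchored production; the word list standing in for the corpus-attestation feature and the feature-string format are illustrative assumptions, not the paper's feature code.

```python
# Sketch of span features for an anchored production (a rule applied to the
# span (i, k) of the canonical form u). EXTERNAL_WORDS is an illustrative
# stand-in for corpus attestation.
EXTERNAL_WORDS = {"test", "testable", "lock", "lockable", "unlockable"}

def span_features(rule: str, label: str, u: str, i: int, k: int) -> list[str]:
    segment = u[i:k]
    feats = [
        f"segment={segment}",                   # (i) indicator on the segment
        f"label={label}^segment={segment}",     # (iii) label-segment conjunction
        f"rule={rule}",                         # production indicator
        f"rule_backoff={rule.split('->')[0]}",  # production backoff
    ]
    if segment in EXTERNAL_WORDS:               # (ii) attested in external corpus
        feats.append("in_corpus")
    return feats

print(span_features("Word->Word Suffix", "Word", "unlockable", 2, 10))
```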

4 A Joint Model

Our complete model is a joint CRF koller2009probabilistic where each of the above scores is a factor. (We have adjusted the definition of the model from the original paper to directly introduce the alignment $a$ between the strings $u$ and $w$.) We define the following probability distribution over trees, canonical forms and their alignments to the original word

$$p_{\theta}(t, a, u \mid w) = \frac{1}{Z_{\theta}(w)}\, \mathrm{score}_\omega(t, u)\, \mathrm{score}_\eta(u, a, w) \qquad (3)$$

where $\theta = (\omega, \eta)$ is the parameter vector and the normalizing partition function is

$$Z_{\theta}(w) = \sum_{u' \in \Sigma^*} \sum_{a' \in A(u', w)} \sum_{t' \in \mathcal{T}(u')} \mathrm{score}_\omega(t', u')\, \mathrm{score}_\eta(u', a', w) \qquad (4)$$

where $\mathcal{T}(u')$ is the set of all parse trees for the string $u'$. This involves a sum over all possible underlying orthographic forms and all parse trees for those forms.

The joint approach has the advantage that it allows both factors to work together to influence the choice of the underlying form $u$. This is useful as the parser now has access to which words are attested in the language; this helps guide the relatively weak transduction model. On the downside, the partition function now involves a sum over both all strings in $\Sigma^*$ and all possible parses of each string! Inference in this joint model is intractable, so we resort to approximate methods.
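To make the factorization in eqs. (3)–(4) concrete, here is a toy sketch over a small, explicitly enumerated candidate set; the two scoring callables are placeholders for the learned WFST and WCFG factors, and in the real model the candidate space (all strings and all their parses) cannot be enumerated.

```python
import math

# Toy illustration of eqs. (3)-(4): over an enumerated list of
# (tree, alignment, canonical form) candidates, the joint model is an ordinary
# log-linear distribution whose unnormalized score is the product (sum in log
# space) of the transduction factor and the parser factor.
def joint_distribution(w, candidates, log_score_eta, log_score_omega):
    log_scores = [log_score_omega(t, u) + log_score_eta(u, a, w)
                  for (t, a, u) in candidates]
    log_Z = math.log(sum(math.exp(s) for s in log_scores))  # eq. (4), restricted
    return [math.exp(s - log_Z) for s in log_scores]         # eq. (3)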

Finally, we define the marginal distribution

$$p_{\theta}(t, u \mid w) = \sum_{a \in A(u, w)} p_{\theta}(t, a, u \mid w) \qquad (5)$$

where $A(u, w)$ is the set of all monotonic alignments between $u$ and $w$. This will be our model of morphological segmentation since we are not interested in the latent alignments $a$.

        | Segmentation                               | Tree
        | Morph. F1     Edit          Acc.           | Const. F1
Flat    | 78.89 (0.9)   0.72 (0.04)   72.88 (1.21)   | N/A
Hier    | 85.55 (0.6)   0.55 (0.03)   73.19 (1.09)   | 79.01 (0.5)
Table 2: Results for the 10 splits of the treebank. Segmentation quality is measured by morpheme F1, edit distance and accuracy; tree quality by constituent F1.

4.1 Learning and Inference

We use stochastic gradient descent to optimize the log-probability of the training data; this requires the computation of the gradient of the log partition function $\log Z_{\theta}(w)$, which is intractable. We may view this gradient as an expectation:

$$\nabla_{\theta} \log Z_{\theta}(w) = \mathbb{E}_{(t, a, u) \sim p_{\theta}(\cdot \mid w)}\Big[\nabla_{\theta} \log\big(\mathrm{score}_\omega(t, u)\, \mathrm{score}_\eta(u, a, w)\big)\Big] \qquad (6)$$

For any given $t$, $a$, and $u$, the gradients of the two log-scores may each be computed in linear time. However, the sum over all underlying forms and trees in eq. 6 is still intractable, so we resort to the importance-sampling estimator derived by cotterell2016canonical. Roughly speaking, we approximate the hard-to-sample-from distribution $p_{\theta}$ by taking samples from an easy-to-sample-from proposal distribution $q$. Specifically, we employ a pipeline model for $q$: we sample a canonical form from the WFST factor and then a tree from the WCFG factor, consecutively. We then reweight the samples using the unnormalized score from $p_{\theta}$. Importance sampling has found many uses in NLP, ranging from language modeling bengio2003quick and neural MT JeanCMB15 to parsing dyer2016recurrent. Due to a lack of space, we omit the derivation of the importance-sampled approximate gradient.
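The general shape of such a self-normalized importance-sampling estimator is sketched below; every callable is a placeholder for the corresponding model component (proposal sampler, proposal density, unnormalized joint score, per-sample gradient), so this is an assumption-laden sketch rather than the paper's estimator.

```python
import math

# Sketch of a self-normalized importance-sampling estimate of the expectation
# in eq. (6): sample (t, a, u) from the pipeline proposal q, weight by the
# unnormalized joint score over q's density, normalize, and average gradients.
def is_gradient_estimate(w, sample_q, log_q, log_p_unnorm, grad_fn, n_samples=100):
    samples = [sample_q(w) for _ in range(n_samples)]        # (t, a, u) triples
    log_wts = [log_p_unnorm(t, a, u, w) - log_q(t, a, u, w)
               for (t, a, u) in samples]
    m = max(log_wts)
    wts = [math.exp(lw - m) for lw in log_wts]               # stabilized weights
    total = sum(wts)
    dim = len(grad_fn(*samples[0], w))
    estimate = [0.0] * dim
    for (t, a, u), wt in zip(samples, wts):
        for i, g in enumerate(grad_fn(t, a, u, w)):
            estimate[i] += (wt / total) * g
    return estimate
```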

4.2 Decoding

We also decode by importance sampling. Given $w$, we sample canonical forms $u$ and then run the CKY algorithm to get the highest-scoring tree.
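A sketch of the CKY step on one sampled canonical form, using the grammar of Table 1; the lexicon-based span score is an illustrative stand-in for the learned WCFG weights, not the paper's scoring function.

```python
import math

# CKY over the Table 1 grammar for one canonical form u. Every span may be a
# leaf Word/Prefix/Suffix (X -> Sigma+); larger Words are built with the two
# binary rules. Backpointers let the best tree be read off the chart.
LEXICON = {"un": -0.5, "test": 0.0, "able": -0.5, "ly": -0.5,
           "testable": 1.0, "untestable": 1.0, "untestablely": 1.0}

def span_score(seg: str) -> float:
    return LEXICON.get(seg, -5.0)      # toy stand-in for learned span features

BINARY = [("Word", "Prefix", "Word"), ("Word", "Word", "Suffix")]

def cky(u: str):
    n = len(u)
    chart = {(i, k): {} for i in range(n) for k in range(i + 1, n + 1)}
    for i in range(n):                 # leaf rules: X -> Sigma+ for every span
        for k in range(i + 1, n + 1):
            for label in ("Word", "Prefix", "Suffix"):
                chart[(i, k)][label] = (span_score(u[i:k]), u[i:k])
    for width in range(2, n + 1):      # binary rules, bottom-up
        for i in range(n - width + 1):
            k = i + width
            for j in range(i + 1, k):
                for parent, left, right in BINARY:
                    s = (chart[(i, j)][left][0] + chart[(j, k)][right][0]
                         + span_score(u[i:k]))
                    if s > chart[(i, k)].get(parent, (-math.inf, None))[0]:
                        chart[(i, k)][parent] = (s, (left, j, right))
    return chart[(0, n)]["Word"]

print(cky("untestablely"))   # best score and backpointer for the root Word
```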

5 Related Work

We believe our attempt to train discriminative grammars for morphology is novel. Nevertheless, other researchers have described parsers for morphology. Most of this work is unsupervised: johnson2007bayesian applied a Bayesian PCFG to unsupervised morphological segmentation. Similarly, Adaptor Grammars johnson2006adaptor, a non-parametric Bayesian generalization of PCFGs, have been applied to the unsupervised version of the task botha2013adaptor; sirts2013minimally. Relatedly, schmid2005disambiguation performed unsupervised disambiguation of a German morphological analyzer schmid2004smor using a PCFG, using the inside-outside algorithm baker1979trainable. Also, discriminative parsing approaches have been applied to the related problem of Chinese word segmentation zhang2014character.

6 Morphological Treebank

Supervised morphological segmentation has historically been treated as a segmentation problem, devoid of hierarchical structure. A core reason behind this is that—to the best of our knowledge—there are no hierarchically annotated corpora for the task. To remedy this, we provide tree annotations for a subset of the English portion of CELEX baayen1993celex. We reannotated 7454 English types with a full constituency parse. (In many cases, we corrected the flat segmentation as well.) The resource will be freely available for future research.

6.1 Annotation Guidelines

The annotation of the morphology treebank was guided by three core principles. The first principle concerns productivity: we exclusively annotate productive morphology. In the context of morphology, productivity refers to the degree to which native speakers actively employ an affix to create new words aronoff1976word. We believe that for NLP applications, we should focus on productive affixation. Indeed, this sets our corpus apart from many existing morphologically annotated corpora such as CELEX. For example, CELEX contains warmth ↦ warm+th, but th is not a productive suffix and cannot be used to create new words. Thus, we do not want to analyze hearth ↦ hear+th or, in general, allow wug ↦ wug+th. Second, we annotate for semantic coherence. When there are several candidate parses, we choose the one that is most compatible with the compositional semantics of the derived form.

Interestingly, multiple trees can be considered valid depending on the linguistic tier of interest. Consider the word unhappier. From a semantic perspective, we have the parse [[un [happy]] er] which gives us the correct meaning “not happy to a greater degree.” However, since the suffix er only attaches to mono- and bisyllabic words, we get [un[[happy] er]] from a phonological perspective. In the linguistics literature, this problem is known as the bracketing paradox pesetsky1985morphology; embick2015morpheme. We annotate exclusively at the syntactic-semantic tier.

Third, in the context of derivational morphology, we force spans to be words themselves. Since derivational morphology—by definition—forms new words from existing words lieber2014oxford, it follows that each span rooted with Word or Root in the correct parse corresponds to a word in the lexicon. For example, consider unlickable. The correct parse, under our scheme, is [un [[lick] able]]. Each of the spans (lick, lickable and unlickable) exists as a word. By contrast, the parse [[un [lick]] able] contains the span unlick, which is not a word in the lexicon. The spans in the segmented form may involve orthographic changes, e.g., [un [[achieve] able]], where achieveable is not a word, but achievable (after deleting e) is.
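This word-span principle lends itself to a simple automatic check. The sketch below verifies that every Word-rooted span of a candidate parse is attested in a lexicon (the lexicon here is illustrative, and the toy check ignores orthographic changes such as achieveable vs. achievable).

```python
# Check that every Word-rooted span of a parse is itself a word of a lexicon.
# Tree format assumed: (label, child, ...) with leaf morphemes as strings.
LEXICON = {"lick", "lickable", "unlickable", "achieve", "achievable"}

def word_spans_attested(tree, lexicon=LEXICON) -> bool:
    def leaves(t):
        return [t] if isinstance(t, str) else sum((leaves(c) for c in t[1:]), [])
    if isinstance(tree, str):
        return True
    label, *children = tree
    span = "".join(leaves(tree))
    ok = (label != "Word") or (span in lexicon)
    return ok and all(word_spans_attested(c, lexicon) for c in children)

good = ("Word", ("Prefix", "un"), ("Word", ("Word", "lick"), ("Suffix", "able")))
bad  = ("Word", ("Word", ("Prefix", "un"), ("Word", "lick")), ("Suffix", "able"))
print(word_spans_attested(good), word_spans_attested(bad))   # True False
```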

7 Experiments

We run a simple experiment to show the empirical utility of parsing words—we compare a WCFG-based canonical segmenter with the semi-Markov segmenter introduced in cotterell2016canonical. We divide the corpus into 10 distinct train/dev/test splits with 5454 words for train and 1000 each for dev and test. We report three evaluation metrics: full-form accuracy, morpheme F1 van1999memory and average edit distance to the gold segmentation, with boundaries marked by a distinguished symbol. For the WCFG model, we also report constituent F1—typical for sentential constituency parsing—as a baseline for future systems. This measures how well we predict the whole tree (not just a segmentation). For all models, we use L2 regularization and run 100 epochs of AdaGrad duchi2011adaptive with early stopping. We tune the regularization coefficient by grid search.
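For concreteness, here is one common way to compute two of the segmentation metrics (morpheme F1 as precision/recall over morpheme multisets, and Levenshtein distance over boundary-marked strings); this is a sketch of the general metrics, not the paper's exact evaluation script.

```python
from collections import Counter

# Morpheme F1 over the multisets of predicted vs. gold morphemes.
def morpheme_f1(pred: list[str], gold: list[str]) -> float:
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r)

# Plain Levenshtein distance between two strings.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

pred, gold = ["un", "test", "ably"], ["un", "test", "able", "ly"]
print(morpheme_f1(pred, gold))                        # 0.571...
print(edit_distance("+".join(pred), "+".join(gold)))  # boundary-marked strings
```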

7.1 Results and Discussion

Table 2 shows the results. The hierarchical WCFG model outperforms the flat semi-Markov model on all metrics on the segmentation task. This shows that modeling structure among the morphemes, indeed, does help segmentation. The largest improvements are found under the morpheme F1 metric (6.66 points). In contrast, accuracy improves by only 0.31 points. Edit distance lies in between, with an improvement of 0.17 characters. Accuracy, in general, is an all-or-nothing metric since it requires getting every canonical segment correct. Morpheme F1, on the other hand, gives us partial credit. Thus, what this shows us is that the WCFG gets a lot more of the morphemes in the held-out set correct, even if it only gets a few more complete forms correct. We provide additional results evaluating the entire tree with constituent F1 as a future baseline.

8 Conclusion

We presented a discriminative CFG-based model for canonical morphological segmentation and showed empirical improvements on its ability to segment words under three metrics. We argue that our hierarchical approach to modeling morphemes is more often appropriate than the traditional flat segmentation. Additionally, we have annotated 7454 words with a morphological constituency parse. The corpus is available online at http://ryancotterell.github.io/data/morphological-treebank to allow for exact comparison and to spark future research.

Acknowledgements

The first author was supported by a DAAD Long-Term Research Grant and an NDSEG fellowship. The third author was supported by DFG (SCHU 2246/10-1).
