Sequence Modeling via Segmentations

02/24/2017
by Chong Wang et al.

Segmental structure is a common pattern in many types of sequences such as phrases in human languages. In this paper, we present a probabilistic model for sequences via their segmentations. The probability of a segmented sequence is calculated as the product of the probabilities of all its segments, where each segment is modeled using existing tools such as recurrent neural networks. Since the segmentation of a sequence is usually unknown in advance, we sum over all valid segmentations to obtain the final probability for the sequence. An efficient dynamic programming algorithm is developed for forward and backward computations without resorting to any approximation. We demonstrate our approach on text segmentation and speech recognition tasks. In addition to quantitative results, we also show that our approach can discover meaningful segments in their respective application contexts.

1 Introduction

Segmental structure is a common pattern in many types of sequences, such as phrases in human languages and letter combinations in phonotactic rules. For instance:

  • Phrase structure. “Machine learning is part of artificial intelligence” is segmented as

    [Machine learning] [is] [part of] [artificial intelligence].

  • Phonotactic rules. “thought” is segmented as [th][ou][ght].

The words or letters in brackets “[ ]” are usually considered meaningful segments of the original sequences. In this paper, we hope to incorporate this type of segmental structure information into sequence modeling.

Mathematically, we are interested in constructing a conditional probability distribution p(y | x), where the output y is a sequence and the input x may or may not be a sequence. Suppose we have a segmented sequence. Then the probability of this sequence is calculated as the product of the probabilities of its segments, each of which is modeled using existing tools such as recurrent neural networks (RNNs), long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), or gated recurrent units (GRU) (Chung et al., 2014). When the segmentation for a sequence is unknown, we sum over the probabilities from all valid segmentations. In the case that the input is also a sequence, we further need to sum over all feasible alignments between the input and the output segmentations. This sounds complicated. Fortunately, we show that both forward and backward computations can be tackled with a dynamic programming algorithm without resorting to any approximations.

This paper is organized as follows. In Section 2, we describe our mathematical model which constructs the probability distribution of a sequence via its segments, and discuss related work. In Section 3, we present an efficient dynamic programming algorithm for forward and backward computations, and a beam search algorithm for decoding the output. Section 4 includes two case studies to demonstrate the usefulness of our approach through both quantitative and qualitative results. We conclude this paper and discuss future work in Section 5.

2 Sequence modeling via segmentations

In this section, we present our formulation of sequence modeling via segmentations. In our model, the output is always a sequence, while the input may or may not be a sequence. We first consider the non-sequence input case, and then move to the sequence input case. We then show how to carry over information across segments when needed. Related work is also discussed here.

2.1 Case I: Mapping from non-sequence to sequence

Figure 1: For Section 2.1. Given the output and one of its segmentations into two segments, the input x controls the initial states of both segments' RNNs. Note that the end-of-segment symbol $ is omitted here.

Assume the input x is a fixed-length vector. Let the output sequence be y_{1:T}. We are interested in modeling the probability p(y_{1:T} | x) via the segmentations of y_{1:T}. Denote by S_y the set containing all valid segmentations of y_{1:T}. Then for any segmentation a_{1:τ_a} ∈ S_y, we have π(a_{1:τ_a}) = y_{1:T}, where π is the concatenation operator and τ_a is the number of segments in this segmentation. For example, let T = 3; then one possible segmentation could be a_{1:2} = {{y_1, $}, {y_2, y_3, $}}, where $ denotes the end of a segment. Note that the symbol $ is ignored by the concatenation operator π. Empty segments, those containing only $, are not permitted in this setting. Note that while the number of distinct segments for a length-T sequence is T(T+1)/2, the number of distinct segmentations, that is, |S_y|, is exponentially large (2^{T-1}).

Since the segmentation is unknown in advance, the probability of the sequence is defined as the sum of the probabilities from all the segmentations in S_y,

p(y_{1:T} | x) = Σ_{a_{1:τ_a} ∈ S_y} p(a_{1:τ_a} | x) = Σ_{a_{1:τ_a} ∈ S_y} Π_{j=1}^{τ_a} p(a_j | x, π(a_{1:j-1})),   (1)

where p(a_{1:τ_a} | x) is the probability of segmentation a_{1:τ_a} given input x, and p(a_j | x, π(a_{1:j-1})) is the probability of segment a_j given input x and the concatenation π(a_{1:j-1}) of all previous segments. Figure 1 illustrates a possible relationship between x and y_{1:T} given one particular segmentation. We choose to model the segment probability using recurrent neural networks (RNNs), such as LSTM or GRU, with a softmax probability function. The input x and the concatenation π(a_{1:j-1}) determine the initial state for this RNN. (All segments' RNNs share the same network parameters.) However, since |S_y| is exponentially large, Eq. 1 cannot be directly computed. We defer the computational details to Section 3.
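To make Eq. 1 concrete, the following minimal Python sketch enumerates the 2^{T-1} segmentations of a short output sequence and sums the products of segment probabilities. Here segment_prob is a hypothetical stand-in for the RNN-plus-softmax segment model, and the brute-force enumeration is only meant to illustrate why the dynamic programming of Section 3 is needed.

from itertools import combinations

def segmentations(y):
    """Yield all ways to cut y = [y_1, ..., y_T] into contiguous non-empty segments."""
    T = len(y)
    for k in range(T):                      # choose k cut points among the T-1 gaps
        for cuts in combinations(range(1, T), k):
            bounds = [0] + list(cuts) + [T]
            yield [tuple(y[b:e]) for b, e in zip(bounds[:-1], bounds[1:])]

def sequence_prob(y, x, segment_prob):
    """Eq. 1: p(y | x) = sum over segmentations of the product of segment probabilities.

    segment_prob(seg, x, prefix) is a placeholder for the RNN + softmax segment model;
    prefix is the concatenation of all previously emitted segments."""
    total = 0.0
    for segs in segmentations(y):
        p, prefix = 1.0, ()
        for seg in segs:
            p *= segment_prob(seg, x, prefix)
            prefix += seg
        total += p
    return total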

2.2 Case II: Mapping from sequence to sequence

Now we assume the input is also a sequence x_{1:T'} and the output remains y_{1:T}. We make a monotonic alignment assumption: each input element x_t emits one segment a_t, and these segments are then concatenated as π(a_{1:T'}) to obtain y_{1:T}. Different from the case when the input is not a sequence, we allow empty segments in the emission, i.e., a_t = {$} for some t, such that any segmentation of y_{1:T} always consists of exactly T' segments, possibly with some empty ones. In other words, all valid segmentations for the output are in the set S_y = {a_{1:T'} : π(a_{1:T'}) = y_{1:T}}. Since an input element can choose to emit an empty segment, we name this particular method "Sleep-WAke Networks" (SWAN). See Figure 2 for an example of an emitted segmentation of y_{1:T}.

Again, as in Eq. 1, the probability of the sequence is defined as the sum of the probabilities of all the segmentations in S_y,

p(y_{1:T} | x_{1:T'}) = Σ_{a_{1:T'} ∈ S_y} Π_{t=1}^{T'} p(a_t | x_t, π(a_{1:t-1})),   (2)

where p(a_t | x_t, π(a_{1:t-1})) is the probability of segment a_t given input element x_t and the concatenation π(a_{1:t-1}) of all previous segments. In other words, input element x_t emits segment a_t. Again this segment probability can be modeled using an RNN with a softmax probability function, with x_t and π(a_{1:t-1}) providing the information for the initial state. The number of possible segments for a_t is O(T^2), since a_t can be any contiguous subsequence of y_{1:T} or the empty segment. Similar to Eq. 1, a direct computation of Eq. 2 is not feasible since |S_y| is exponentially large. We address the computational details in Section 3.
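Analogously, a small sketch of the set S_y under the monotonic alignment assumption of Eq. 2: every alignment assigns exactly T' contiguous, possibly empty, segments to the T' input elements. As before, segment_prob is a hypothetical placeholder for the RNN-based segment model.

from itertools import combinations_with_replacement

def swan_segmentations(y, T_in):
    """Yield all ways to split y into exactly T_in contiguous, possibly empty segments."""
    T = len(y)
    # choose T_in - 1 (possibly repeated) boundaries between 0 and T
    for cuts in combinations_with_replacement(range(T + 1), T_in - 1):
        bounds = (0,) + cuts + (T,)
        yield [tuple(y[b:e]) for b, e in zip(bounds[:-1], bounds[1:])]

def swan_prob(y, x_seq, segment_prob):
    """Eq. 2: each x_t emits segment a_t (empty = 'sleep'); sum over all alignments."""
    total = 0.0
    for segs in swan_segmentations(y, len(x_seq)):
        p, prefix = 1.0, ()
        for x_t, seg in zip(x_seq, segs):
            p *= segment_prob(seg, x_t, prefix)
            prefix += seg
        total += p
    return total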

Figure 2: For Section 2.2. SWAN emits one particular segmentation of y_{1:T}: some input elements wake and emit non-empty segments while the remaining ones sleep and emit only $. SWAN needs to consider all valid segmentations like this for y_{1:T}.

2.3 Carrying over information across segments

Figure 3: For Section 2.3. SWAN carries over information across segments using a separate RNN. Here each segment is emitted by its corresponding input element.

Note that we do not assume that the segments in a segmentation are conditionally independent. Take Eq. 2 as an example: the probability of a segment is defined as p(a_t | x_t, π(a_{1:t-1})), which also depends on the concatenation π(a_{1:t-1}) of all previous segments. We take an approach inspired by the sequence transducer (Graves, 2012) and use a separate RNN to model π(a_{1:t-1}). The hidden state of this RNN and the input x_t are used to form the initial state of the RNN for segment a_t. (We simply add them together in our speech recognition experiment.) This allows all previously emitted outputs to affect the current segment a_t. Figure 3 illustrates this idea. The significance of this approach is that it still permits the exact dynamic programming algorithm that we will describe in Section 3.
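A minimal PyTorch-style sketch of this connection, assuming GRU cells, a shared embedding, and the simple additive combination of the carried-over state with the input element mentioned above; module names and sizes are illustrative rather than taken from the paper's code.

import torch
import torch.nn as nn

class SegmentEmitter(nn.Module):
    """A separate RNN over previously emitted outputs provides context for each segment RNN."""
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.context_rnn = nn.GRUCell(hidden_size, hidden_size)  # runs over pi(a_{1:t-1})
        self.segment_rnn = nn.GRUCell(hidden_size, hidden_size)  # runs within one segment
        self.out = nn.Linear(hidden_size, vocab_size + 1)        # +1 for end-of-segment $

    def init_segment_state(self, x_t, context_state):
        # Initial state of the segment RNN: input element plus carried-over context,
        # simply added together as described above.
        return x_t + context_state

    def segment_step(self, h, prev_token):
        # prev_token: previous symbol in the current segment
        # (a start-of-segment token is one simple choice for the first step).
        h = self.segment_rnn(self.embed(prev_token), h)
        return h, torch.log_softmax(self.out(h), dim=-1)

    def context_step(self, context_state, emitted_token):
        # Called once per emitted output symbol (excluding $) to update the carried context.
        return self.context_rnn(self.embed(emitted_token), context_state)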

2.4 Related work

Our approach, especially SWAN, is inspired by connectionist temporal classification (CTC) (Graves et al., 2006) and the sequence transducer (Graves, 2012). CTC defines a distribution over output sequences that are no longer than the input sequence. To properly map the input to the output, CTC marginalizes out all possible alignments using dynamic programming. Since CTC does not model the interdependencies within the output sequence, the sequence transducer introduces a separate RNN as a prediction network to bring in output-output dependencies, where the prediction network works like a language model.

SWAN can be regarded as a generalization of CTC that allows segmented outputs. Neither CTC nor the sequence transducer takes into account the segmental structure of output sequences. Instead, our method constructs a probability distribution over output sequences by marginalizing over all valid segmentations. This introduces additional nontrivial computational challenges beyond CTC and the sequence transducer. When the input is also a sequence, our method further marginalizes over the alignments between the input and the output segmentations. Since outputs are modeled with segmental structure, our method can be applied to scenarios where the input is not a sequence or where the input is shorter than the output, while CTC cannot. When we need to carry information across segments, we borrow the idea of the sequence transducer and use a separate RNN. Although it has been suspected that using a separate RNN could result in a loosely coupled model (Graves, 2013; Jaitly et al., 2016) that might hinder performance, we do not find this to be an issue in our approach. This is perhaps due to our use of the output segmentation: the hidden states of the separate RNN are not directly used for prediction but serve as the initial states of the RNNs for the segments, which strengthens their dependence on each other.

SWAN itself is most similar to the recent work on the neural transducer (Jaitly et al., 2016), although we start with a different motivation. The motivation of the neural transducer is to allow incremental predictions as the input arrives in a streaming fashion, for example in speech recognition. From the modeling perspective, it also assumes that the output is decomposed into several segments and that the alignments are unknown in advance. However, its assumption that hidden states are carried over across segments prohibits exact marginalization over all valid segmentations and alignments. It therefore resorts to finding an approximate "best" alignment with a dynamic-programming-like algorithm during training, or requires a separate GMM-HMM model to generate alignments in advance to achieve better results; not carrying information across segments results in sub-optimal performance, as shown in Jaitly et al. (2016). In contrast, our method of connecting the segments, described in Section 2.3, preserves the advantage of exact marginalization over all possible segmentations and alignments while still allowing the previously emitted outputs to affect the states of subsequent segments. This allows us to obtain comparably good performance without using an additional alignment tool.

Another closely related work is the online segment to segment neural transduction (Yu et al., 2016). This work treats the alignments between the input and output sequences as latent variables and seeks to marginalize them out. From this perspective, SWAN is similar to theirs. However, our work explicitly takes into account output segmentations, extending the scope of its application to the case when the input is not a sequence. Our work is also related to semi-Markov conditional random fields (Sarawagi & Cohen, 2004), segmental recurrent neural networks (Kong et al., 2015) and segmental hidden dynamic model (Deng & Jaitly, 2015), where the segmentation is applied to the input sequence instead of the output sequence.

3 Forward, backward and decoding

In this section, we first present the details of forward and backward computations using dynamic programming. We then describe the beam search decoding algorithm. With these algorithms, our approach becomes a standalone loss function that can be used in many applications. Here we focus on developing the algorithm for the case when the input is a sequence. When the input is not a sequence, the corresponding algorithms can be similarly derived.

3.1 Forward and backward propagations

Forward.

Consider calculating the result of Eq. 2. We first define the forward and backward probabilities

α_t(j) := p(y_{1:j} | x_{1:t}),   β_t(j) := p(y_{j+1:T} | x_{t+1:T'}),

where the forward probability α_t(j) represents the probability that input x_{1:t} emits output y_{1:j} and the backward probability β_t(j) represents the probability that input x_{t+1:T'} emits output y_{j+1:T}. (The forward and backward probabilities are terms from dynamic programming and are not to be confused with forward and backward propagation in general machine learning.) Using α_t(j) and β_t(j), we can verify the following, for any t,

p(y_{1:T} | x_{1:T'}) = Σ_{j=0}^{T} α_t(j) β_t(j),   (3)

where the summation of j from 0 to T enumerates all possible two-way partitions of the output y_{1:T}. A special case is that p(y_{1:T} | x_{1:T'}) = α_{T'}(T). Furthermore, we have the following dynamic programming recursions according to the properties of the segmentations,

α_t(j) = Σ_{j'=0}^{j} α_{t-1}(j') p(y_{j'+1:j} | x_t),   (4)
β_t(j) = Σ_{j'=j}^{T} p(y_{j+1:j'} | x_{t+1}) β_{t+1}(j'),   (5)

where p(y_{j'+1:j} | x_t) is the probability of the segment y_{j'+1:j} emitted by x_t, and p(y_{j+1:j'} | x_{t+1}) is similarly defined. When j' = j, the notation y_{j+1:j} indicates an empty segment whose previous output is y_{1:j}. For simplicity, we omit the notation for those previous outputs, since it does not affect the dynamic programming algorithm. As discussed before, p(y_{j'+1:j} | x_t) is modeled using an RNN with a softmax probability function. Given the initial conditions α_0(0) = 1 and β_{T'}(T) = 1, we can efficiently compute the probability of the entire output, p(y_{1:T} | x_{1:T'}).
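As a minimal numpy sketch of the recursions in Eqs. 3-5, assuming the segment probabilities p(y_{j0+1:j1} | x_t) have already been collected into a lookup table (the table layout is an illustrative choice):

import numpy as np

def swan_forward_backward(seg_prob, T_in, T_out):
    """Dynamic programming for the recursions above.

    seg_prob[t][(j0, j1)] = p(y_{j0+1:j1} | x_t) for 1 <= t <= T_in and 0 <= j0 <= j1 <= T_out,
    where j0 == j1 denotes the empty segment. Returns (alpha, beta, p_y_given_x)."""
    alpha = np.zeros((T_in + 1, T_out + 1))
    beta = np.zeros((T_in + 1, T_out + 1))
    alpha[0, 0] = 1.0            # alpha_0(0) = 1
    beta[T_in, T_out] = 1.0      # beta_{T'}(T) = 1
    for t in range(1, T_in + 1):                      # Eq. 4
        for j in range(T_out + 1):
            alpha[t, j] = sum(alpha[t - 1, j0] * seg_prob[t][(j0, j)]
                              for j0 in range(j + 1))
    for t in range(T_in - 1, -1, -1):                 # Eq. 5
        for j in range(T_out + 1):
            beta[t, j] = sum(seg_prob[t + 1][(j, j1)] * beta[t + 1, j1]
                             for j1 in range(j, T_out + 1))
    # Eq. 3: np.dot(alpha[t], beta[t]) equals p(y | x) for every t; at t = T_in this is alpha[T_in, T_out].
    return alpha, beta, alpha[T_in, T_out]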

Backward.

We only show how to compute the gradient w.r.t. x_t, since the others can be similarly derived. Given the representation of p(y_{1:T} | x_{1:T'}) in Eq. 3 and the dynamic programming recursion in Eq. 4, we have

∂p(y_{1:T} | x_{1:T'}) / ∂x_t = Σ_{0 ≤ j' ≤ j ≤ T} w_{j',j} ∂p(y_{j'+1:j} | x_t) / ∂x_t,   (6)

where w_{j',j} is defined as

w_{j',j} = α_{t-1}(j') β_t(j).   (7)

Thus, the gradient w.r.t. x_t is a weighted linear combination of the contributions from related segments.

More efficient computation for segment probabilities.

The forward and backward algorithms above assume that all segment probabilities p(y_{j'+1:j} | x_t), as well as their gradients, for 0 ≤ j' ≤ j ≤ T and 1 ≤ t ≤ T', are already computed. There are O(T'T^2) such segments, and if we consider each recurrent step as a unit of computation, the resulting computational complexity is O(T'T^3). Simply enumerating everything, although parallelizable over different segments, is still expensive.

We employ two additional strategies to allow more efficient computation. The first is to limit the maximum segment length to L, which reduces the computational complexity to O(T'TL^2). The second is to exploit the structure of the segments to further reduce the complexity to O(T'TL). This is an important improvement, without which we find that training would be extremely slow.

The key observation for the second strategy is that the computation for the longest segment can be reused for the shorter ones. First consider forward propagation with t and j' fixed. Suppose we want to compute p(y_{j'+1:j'+l} | x_t) for every 0 ≤ l ≤ L, which covers L + 1 segments with lengths ranging from 0 to L. In order to compute the probability of the longest segment, y_{j'+1:j'+L}, we need the softmax distributions at the recurrent states along this segment; this process also gives us the probability distributions needed for the shorter segments with l < L. For backward propagation, we observe from Eq. 6 that each segment has its own weight w_{j',j'+l} on its contribution to the gradient, for 0 ≤ l ≤ L. Thus all we need is to assign the proper weights to the corresponding gradient entries of the longest segment in order to integrate the contributions from the shorter ones. Figure 4 illustrates the forward and backward procedures.
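A sketch of the second strategy for one fixed pair (t, j'): a single pass of the segment RNN over the longest allowed segment yields, at each step, a softmax over the next symbol and the end-of-segment symbol, and prefix products then give the probabilities of all segments starting at j'. Here rnn_step and END are hypothetical placeholders for the segment RNN step and the index of $.

import numpy as np

def segment_probs_one_pass(y, j0, max_len, h0, rnn_step, END):
    """Probabilities of the segments y[j0:j0+l] for l = 0..max_len, sharing one RNN pass.

    rnn_step(h, prev_symbol) -> (new_h, probs) stands in for the segment RNN plus softmax;
    probs is a distribution over the vocabulary extended with END ('$')."""
    max_len = min(max_len, len(y) - j0)
    probs_of_len = np.zeros(max_len + 1)
    h, prev, prefix = h0, None, 1.0     # prefix = probability of the symbols emitted so far
    for l in range(max_len + 1):
        h, probs = rnn_step(h, prev)
        probs_of_len[l] = prefix * probs[END]       # end the segment after l symbols
        if l < max_len:
            prefix *= probs[y[j0 + l]]              # continue with the next output symbol
            prev = y[j0 + l]
    return probs_of_len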

Figure 4: Illustration of the efficient computation of segment probabilities with one pass over the longest segment, where V is the vocabulary size and $ is the symbol for the end of a segment. In this example, the maximum segment length is L = 3 and the segments start at j' = 0, giving four possible segments (the empty segment and the segments of lengths 1, 2, and 3) for the given input element. (a) Forward pass. Shaded small circles indicate the softmax probabilities needed to compute the probabilities of all four segments. (b) Backward pass. The weights are defined in Eq. 7 for the four segments mentioned above. Shaded small circles are annotated with their gradient values while unshaded ones have zero gradients. For example, the softmax entry for the first output symbol receives gradient contributions from the three non-empty segments in which it appears.

3.2 Beam search decoding

Although it is possible to compute the output sequence probability using dynamic programming during training, we cannot do the same during decoding since the output is unknown. We thus resort to beam search. The beam search for SWAN is more complex than the simple left-to-right beam search used in standard sequence-to-sequence models (Sutskever et al., 2014). In fact, for each input element x_t, we run a simple left-to-right beam search over the next segment. In addition, different segmentations might imply the same output sequence, and we need to incorporate this information into beam search as well. To achieve this, each time after we process an input element x_t, we merge partial candidates with different segmentations into one candidate if they indicate the same partial sequence. This is reasonable because the emission of the next input element only depends on the concatenation of all previous segments, as discussed in Section 2.3. Algorithm 1 shows the details of the beam search decoding algorithm, and a short sketch of the merging step follows it.

  Input: input x_{1:T'}, beam size B, maximum segment length L, and candidate set Y = {∅} with p(∅) = 1.
  for t = 1 to T' do
     // A left-to-right beam search given x_t.
     Set local beam size B' = B, active set Y_act = Y, and finished set Y_new = ∅.
     for l = 0 to L do
         for y ∈ Y_act do
            Compute the distribution of the next output symbol for the current segment, p(· | x_t, y).
         end for
         if l = L then
            // Reaching the maximum segment length.
            for y ∈ Y_act do
               p(y) ← p(y) · p($ | x_t, y)
            end for
            Choose the B' candidates with the highest probabilities from Y_act and move them into Y_new.
         else
            Choose a set C containing the B' candidates with the highest probabilities out of all pairs (y, w), where y ∈ Y_act and w is a symbol from the vocabulary including $.
            for (y, w) ∈ C do
               p(y ⊕ w) ← p(y) · p(w | x_t, y).
               if w = $ then
                   Move y from Y_act into Y_new with probability p(y ⊕ $).
                   B' ← B' − 1.
               else
                   Keep y ⊕ w in Y_act.
               end if
            end for
         end if
         if B' = 0 then
            break
         end if
     end for
     Update Y ← Y_new.
     // Merge duplicate candidates in Y.
     while there exist y, y' ∈ Y representing the same output sequence do
         p(y) ← p(y) + p(y').
         Remove y' from Y.
     end while
  end for
  Return: the output y ∈ Y with the highest probability.
Algorithm 1 SWAN beam search decoding
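The merging step at the end of each input element can be sketched as follows: candidates whose segments concatenate to the same partial output are collapsed into one candidate whose probabilities are summed, mirroring the sum over segmentations in Eq. 2. Candidate bookkeeping is simplified here to (partial_output, probability) pairs.

from collections import defaultdict

def merge_candidates(candidates):
    """Merge beam candidates that represent the same partial output sequence.

    Each candidate is a (partial_output_tuple, probability) pair; different segmentations
    of the same partial output have their probabilities summed."""
    merged = defaultdict(float)
    for partial_output, prob in candidates:
        merged[partial_output] += prob
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)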

4 Experiments

In this section, we apply our method to two applications, one unsupervised and the other supervised. These include 1) content-based text segmentation, where the input to our distribution is a vector (constructed using a variational autoencoder for text) and 2) speech recognition, where the input to our distribution is a sequence (of acoustic features).

4.1 Content-based text segmentation

This text segmentation task corresponds to an application of a simplified version of the non-sequence-input model in Section 2.1, where we drop the dependence on previous segments, π(a_{1:j-1}), in Eq. 1.

Model description.

In this task, we would like to automatically discover segmentations for textual content. To this end, we build a simple model inspired by latent Dirichlet allocation (LDA) (Blei et al., 2003) and neural variational inference for texts (Miao et al., 2016).

LDA assumes that the words are exchangeable within a document ("bag of words", BoW). We generalize this assumption to the segments within each segmentation, a "bag of segments". In other words, if we had a pre-segmented document, all segments would be exchangeable. However, since we do not have a pre-segmented document, we assume exchangeability among the segments of any valid segmentation. In addition, we choose to drop the dependence on previous segments, π(a_{1:j-1}), in Eq. 1 in our sequence distribution so that we do not carry over information across segments; otherwise, the segments would not be exchangeable. This is designed to be comparable with the exchangeability assumption in LDA, although we can certainly use the carry-over technique on other occasions.

Similar to LDA, for a document with words w_{1:n}, we assume that a topic-proportion-like vector θ controls the distribution of the words. In more detail, θ enters our sequence distribution only through the linear map Wθ, which serves as the input vector. Then the log likelihood of the words is bounded as

log p(w_{1:n}) = log ∫ p(w_{1:n} | θ) p(θ) dθ ≥ E_{q(θ)}[log p(w_{1:n} | θ)] − KL(q(θ) || p(θ)),

where the last inequality follows the variational inference principle (Jordan, 1999) with variational distribution q(θ). Here p(w_{1:n} | θ) is modeled as Eq. 1 with "Wθ" as the input vector, W being another weight matrix. Note again that π(a_{1:j-1}) is not used in p(w_{1:n} | θ).

For the variational distribution q(θ), we use the variational autoencoder approach and model it as an inference network (Kingma & Welling, 2013; Rezende et al., 2014). We use a form similar to Miao et al. (2016), where the inference network is a feed-forward neural network whose input is the BoW representation of the document w_{1:n}.
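A minimal sketch of such an inference network, assuming a diagonal Gaussian q(θ | w) parameterized by a two-layer ReLU feed-forward network on the bag-of-words counts (the Gaussian form and layer sizes follow common practice for this setup and are assumptions here):

import torch
import torch.nn as nn

class InferenceNetwork(nn.Module):
    """q(theta | w): feed-forward encoder from bag-of-words counts to a Gaussian over theta."""
    def __init__(self, vocab_size, hidden_size, topic_size):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(vocab_size, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, hidden_size), nn.ReLU(),
        )
        self.to_mean = nn.Linear(hidden_size, topic_size)
        self.to_logvar = nn.Linear(hidden_size, topic_size)

    def forward(self, bow):
        h = self.encoder(bow)                      # bow: (batch, vocab_size) count vector
        mean, logvar = self.to_mean(h), self.to_logvar(h)
        # Reparameterization trick (Kingma & Welling, 2013): theta = mean + sigma * eps.
        theta = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
        return theta, mean, logvar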

Predictive likelihood comparison with LDA.

We use two datasets: AP (Associated Press) from Blei et al. (2003) and CiteULike (http://www.citeulike.org) scientific article abstracts from Wang & Blei (2011). Stop words are removed and the vocabulary is chosen by tf-idf for both datasets. Punctuation and stop words are considered to be known segment boundaries for this experiment. For LDA, we use the variational EM implementation taken from the authors' website (http://www.cs.columbia.edu/~blei/lda-c/).

We vary the number of topics over 100, 150, 200, 250, and 300, and we use a development set for early stopping with up to 100 epochs. For our model, the inference network is a 2-layer feed-forward neural network with ReLU nonlinearity. A two-layer GRU is used to model the segments in the distribution p(w_{1:n} | θ). We vary the hidden unit size (as well as the word embedding size) over 100, 150, and 200, and the maximum segment length L over 1, 2, and 3. We use the Adam algorithm (Kingma & Ba, 2014) for optimization.

We use the evaluation setup from Hoffman et al. (2013) for comparing the two models in terms of predictive log likelihood on a held-out set. We randomly choose a subset of the documents for training and leave the rest for testing. For each document in the test set, we use the first portion of its words for estimating θ and the remaining words for evaluating the likelihood. We use the mean of θ from the variational distribution for LDA, or the output of the inference network for our model; for our model, the input vector is then Wθ, where θ is taken to be the mean of q(θ). Table 1 shows the empirical results. When the maximum segment length L = 1, our model is better on AP but worse on CiteULike than LDA. When L increases from 1 to 2 and 3, our model gives monotonically higher predictive likelihood on both datasets, demonstrating that bringing in segmentation information leads to a better model.

#LDA Topics AP CiteULike
100 -9.25 -7.86
150 -9.23 -7.85
200 -9.22 -7.83
250 -9.23 -7.82
300 -9.22 -7.82
#Hidden L AP CiteULike
100 1 -8.42 -8.12
100 2 -8.31 -7.68
100 3 -8.29 -7.61
150 1 -8.38 -8.12
150 2 -8.30 -7.67
150 3 -8.28 -7.60
200 1 -8.41 -8.13
200 2 -8.32 -7.67
200 3 -8.30 -7.61
Table 1: Predictive log likelihood comparison. Higher values indicate better results. L is the maximum segment length. The top table shows LDA results and the bottom one shows ours.

Example of text segmentations.

In order to improve the readability of the example segmentation, we choose to keep the stop words in the vocabulary, different from the setting in the quantitative comparison with LDA. Thus, stop words are not treated as boundaries for the segments. Figure 5 shows an example text. The segmentation is obtained by finding the path with the highest probability in the dynamic programming (this is done by replacing the "sum" operation with a "max" operation in Eq. 4). As we can see, many reasonable segments are found using this automatic procedure.
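A sketch of this max-path computation for the non-sequence-input case used here, i.e., the Viterbi-style analogue of Eq. 4 with sums replaced by maxes and back-pointers kept; the log segment scores are assumed to be precomputed.

import numpy as np

def best_segmentation(T_out, max_len, seg_logprob):
    """Most probable segmentation of y_{1:T_out} into segments of length <= max_len.

    seg_logprob[(j0, j1)] = log p(y_{j0+1:j1} | x) for 0 <= j0 < j1 <= T_out, j1 - j0 <= max_len.
    Returns the best segmentation as a list of (j0, j1) spans."""
    best = np.full(T_out + 1, -np.inf)
    back = np.zeros(T_out + 1, dtype=int)
    best[0] = 0.0
    for j in range(1, T_out + 1):
        for j0 in range(max(0, j - max_len), j):
            score = best[j0] + seg_logprob[(j0, j)]
            if score > best[j]:
                best[j], back[j] = score, j0
    spans, j = [], T_out
    while j > 0:                     # follow back-pointers to recover the spans
        spans.append((back[j], j))
        j = back[j]
    return list(reversed(spans))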

Figure 5: Example text with automatic segmentation, obtained using the path with highest probability. Words in the same brackets “[ ]” belong to the same segment. “UNK” indicates a word not in the vocabulary. The maximum segment length .

4.2 Speech recognition

We also apply our model to speech recognition, and present results on both phoneme-level and character-level experiments. This corresponds to an application of SWAN described in Section 2.2.

Dataset.

We evaluate SWAN on the TIMIT corpus following the setup in Deng et al. (2006). The audio data is encoded using a Fourier-transform-based filter bank with 40 coefficients (plus energy) distributed on a mel scale, together with their first and second temporal derivatives. Each input vector is therefore of size 123. The data is normalized so that every element of the input vectors has zero mean and unit variance over the training set. All 61 phoneme labels are used during training and decoding, and are then mapped to 39 classes for scoring in the standard way (Lee & Hon, 1989).

Phoneme-level results.

Our SWAN model consists of a bidirectional GRU encoder with 300 hidden units and two unidirectional GRUs with 600 hidden units, one for the segments and the other for connecting the segments in SWAN. We limit the maximum segment length as described in Section 3.1. To reduce the temporal input size for SWAN, we add a strided temporal convolutional layer at the end of the encoder. For optimization, we largely follow the strategy in Zhang et al. (2017): we use Adam (Kingma & Ba, 2014) first and then switch to stochastic gradient descent for fine-tuning. We use dropout across the layers except for the input and output layers, and beam search for decoding. Table 3 shows the results compared with some previous approaches. SWAN achieves competitive results without using a separate alignment tool.
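For illustration, a PyTorch-style sketch of the encoder side of this setup: a bidirectional GRU over the 123-dimensional filter-bank frames followed by a strided temporal convolution that shortens the sequence SWAN has to process. The layer count, stride, kernel width, and output size below are placeholders, since the exact values are not recoverable here.

import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Bidirectional GRU encoder + strided temporal convolution to reduce the input length."""
    def __init__(self, feat_size=123, hidden_size=300, out_size=600,
                 conv_stride=2, conv_width=2):          # stride/width/out_size are placeholders
        super().__init__()
        self.birnn = nn.GRU(feat_size, hidden_size, num_layers=1,
                            bidirectional=True, batch_first=True)
        self.conv = nn.Conv1d(2 * hidden_size, out_size,
                              kernel_size=conv_width, stride=conv_stride)

    def forward(self, frames):
        # frames: (batch, T_frames, 123) filter-bank features with deltas
        h, _ = self.birnn(frames)                        # (batch, T_frames, 2 * hidden)
        h = self.conv(h.transpose(1, 2)).transpose(1, 2) # shorter sequence for SWAN
        return h                                         # one vector x_t per SWAN input step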

We also examine the properties of SWAN's outputs. We first estimate the average segment length of the output. (The average segment length is defined as the length of the output, excluding the end-of-segment symbol $, divided by the number of segments, not counting the ones containing only $.) We find that the average segment length is usually small in the settings with good performance. Even when we increase the maximum segment length, we still do not see a significant increase in the average segment length. We suspect that the phoneme labels are relatively independent summarizations of the acoustic features and that it is not easy to find good phoneme-level segments. The most common segment patterns we observe are 'sil ?', where 'sil' is the silence phoneme label and '?' denotes some other phoneme label (Lee & Hon, 1989). Regarding running time, SWAN is about 5 times slower than CTC. (Note that the CTC baseline is written in CUDA C, while SWAN is written in Torch.)
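The definition of the average segment length above translates directly into a tiny helper (assuming '$' marks segment ends in SWAN's raw output):

def average_segment_length(output_symbols, end="$"):
    """Output length excluding '$', divided by the number of non-empty segments."""
    n_symbols = sum(1 for s in output_symbols if s != end)
    n_segments = 0
    prev_was_symbol = False
    for s in output_symbols:
        if s == end:
            if prev_was_symbol:
                n_segments += 1       # skip segments that contain only '$'
            prev_was_symbol = False
        else:
            prev_was_symbol = True
    return n_symbols / max(n_segments, 1)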


ground truth one thing he thought nobody knows about it yet
best path onethinghethoughtnobodyknowsaboutityet
max decoding onethanhethoughotnobodynoseaboutatyet
ground truth jeff thought you argued in favor of a centrifuge purchase
best path jeffthoughtyouarguedinfavorofacentrifugepurchase
max decoding jaffthorodyaregivingfaverofersentfugeperches
ground truth he trembled lest his piece should fail
best path hetrembledlesthispieceshouldfail
max decoding hetremblenesthispeasudefail
Table 2: Examples of character-level outputs with their segmentations, where “” represents the segment boundary, “” represents the space symbol in SWAN’s outputs, the “best path” represents the most probable segmentation given the ground truth, and the “max decoding” represents the beam search decoding result with beam size 1.
Model PER (%)
 BiLSTM-5L-250H (Graves et al., 2013) 18.4
TRANS-3L-250H (Graves et al., 2013) 18.3
Attention RNN (Chorowski et al., 2015) 17.6
Neural Transducer (Jaitly et al., 2016) 18.2
CNN-10L-maxout (Zhang et al., 2017) 18.2
SWAN (this paper) 18.1
Table 3: TIMIT phoneme recognition results. “PER” is the phoneme error rate on the core test set.

Character-level results.

In addition to the phoneme-level recognition experiments, we also evaluate our model on the task of directly outputting characters, as in Amodei et al. (2016). We use the original word-level transcriptions from the TIMIT corpus, convert them into lower case, and separate them into character-level sequences (the vocabulary includes 'a' to 'z', the apostrophe, and the space symbol). We find that using a strided temporal convolutional layer at the end of the encoder and a larger maximum segment length yields good results. In general, we find that starting with a larger maximum segment length is useful. We believe that a larger maximum segment length allows more exploration of different segmentations and thus helps optimization, since we marginalize over all possible segmentations. We obtain a character error rate (CER) for SWAN that is competitive with our CTC baseline. (As far as we know, there is no public CER result of CTC on TIMIT, so we empirically find the best one as our baseline; we use Baidu's CTC implementation: https://github.com/baidu-research/warp-ctc.)

We examine the properties of SWAN for this character-level recognition task. Different from the observation in the phoneme-level task, we find that the average segment length in the settings with good performance is longer than in the phoneme-level setting. This is expected, since the variability of acoustic features for a character is much higher than that for a phone, and a longer segment of characters helps reduce that variability. Table 2 shows some example decoding outputs. As we can see, although not perfect, these segments often correspond to important phonotactic rules of the English language, and we expect them to improve when more labeled speech data is available. In Figure 6, we show an example of mapping the character-level alignment back to the speech signals, together with the ground-truth phonemes. We can observe that the character-level sequence roughly corresponds to the phoneme sequence in terms of phonotactic rules.

Finally, from the examples in Table 2, we find that the space symbol is often assigned to a segment together with its preceding character(s) or as an independent segment. We suspect this is because the space symbol itself is more like a separator of segments than a label with actual acoustic meanings. So in future work, we plan to treat the space symbol between words as a known segmentation boundary that all valid segmentations should comply with, which will lead to a smaller set of possible segments. We believe this will not only make it easier to find appropriate segments, but also significantly reduce the computational complexity.

Figure 6: Spectrogram of a test example of the output sequence, “please take this”. Here “” represents the boundary and represents the space symbol in SWAN’s result. The “phonemes” sequence is the ground truth phoneme labels. (The full list of phoneme labels and their explanations can be found in Lee & Hon (1989).) The “best path” sequence is from SWAN. Note that the time boundary is not precise due to the convolutional layer.

5 Conclusion and Future work

In this paper, we present a new probability distribution for sequence modeling and demonstrate its usefulness on two different tasks. Due to its generality, it can be used as a loss function in many sequence modeling tasks. We plan to investigate the following directions in future work. The first is to validate our approach on large-scale speech datasets. The second is machine translation, where segmentations can be regarded as "phrases." We believe this approach has the potential to bring together the merits of traditional phrase-based translation (Koehn et al., 2003) and recent neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2014). For example, we can restrict the number of valid segmentations with a known phrase set. Finally, applications in other domains, including DNA sequence segmentation (Braun & Muller, 1998), might benefit from our approach as well.

References