Neural Machine Translation and Sequence-to-sequence Models: A Tutorial

03/05/2017 · Graham Neubig

This tutorial introduces a new and powerful set of techniques variously called "neural machine translation" or "neural sequence-to-sequence models". These techniques have been used in a number of tasks regarding the handling of human language, and can be a powerful tool in the toolbox of anyone who wants to model sequential data of some sort. The tutorial assumes that the reader knows the basics of math and programming, but does not assume any particular experience with neural networks or natural language processing. It attempts to explain the intuition behind the various methods covered, then delves into them with enough mathematical detail to understand them concretely, and culminates with a suggestion for an implementation exercise, where readers can test that they understood the content in practice.


1 Introduction

This tutorial introduces a new and powerful set of techniques variously called “neural machine translation” or “neural sequence-to-sequence models”. These techniques have been used in a number of tasks regarding the handling of human language, and can be a powerful tool in the toolbox of anyone who wants to model sequential data of some sort. The tutorial assumes that the reader knows the basics of math and programming, but does not assume any particular experience with neural networks or natural language processing. It attempts to explain the intuition behind the various methods covered, then delves into them with enough mathematical detail to understand them concretely, and culminates with a suggestion for an implementation exercise, where readers can test that they understood the content in practice.

1.1 Background

Before getting into the details, it might be worth describing each of the terms that appear in the title “Neural Machine Translation and Sequence-to-sequence Models”. Machine translation is the technology used to translate between human languages. Think of the universal translation device that shows up in sci-fi movies to allow you to communicate effortlessly with those who speak a different language, or any of the plethora of online translation web sites that you can use to assimilate content that is not in your native language. This ability to remove language barriers, needless to say, has the potential to be very useful, and thus machine translation technology has been researched since shortly after the advent of digital computing.

We call the language input to the machine translation system the source language, and call the output language the target language. Thus, machine translation can be described as the task of converting a sequence of words in the source language into a sequence of words in the target language. The goal of the machine translation practitioner is to come up with an effective model that allows us to perform this conversion accurately over a broad variety of languages and content.

Figure 1: An example of sequence-to-sequence modeling tasks.

The second part of the title, sequence-to-sequence models, refers to the broader class of models that include all models that map one sequence to another. This, of course, includes machine translation, but it also covers a broad spectrum of other methods used to handle other tasks as shown in Figure  1. In fact, if we think of a computer program as something that takes in a sequence of input bits, then outputs a sequence of output bits, we could say that every single program is a sequence-to-sequence model expressing some behavior (although of course in many cases this is not the most natural or intuitive way to express things).

The motivation for using machine translation as a representative of this larger class of sequence-to-sequence models is many-fold:

  1. Machine translation is a widely-recognized and useful instance of sequence-to-sequence models, and allows us to use many intuitive examples demonstrating the difficulties encountered when trying to tackle these problems.

  2. Machine translation is often one of the main driving tasks behind the development of new models, and thus these models tend to be tailored to MT first, then applied to other tasks.

  3. However, there are also cases where MT has learned from other tasks, and introducing these tasks helps explain the techniques used in MT as well.

1.2 Structure of this Tutorial

This tutorial first starts out with a general mathematical definition of statistical techniques for machine translation in Section  2. The rest of this tutorial will sequentially describe techniques of increasing complexity, leading up to attentional models, which represent the current state of the art in the field.

First, Sections 3-6 focus on language models, which calculate the probability of a target sequence of interest. These models are not capable of performing translation or sequence transduction, but will provide useful preliminaries to understand sequence-to-sequence models.

  • Section  3 describes n-gram language models, simple models that calculate the probability of words based on their counts in a set of data. It also describes how we evaluate how well these models are doing using measures such as perplexity.

  • Section  4 describes log-linear language models, models that instead calculate the probability of the next word based on features of the context. It describes how we can learn the parameters of the models through stochastic gradient descent – calculating derivatives and gradually updating the parameters to increase the likelihood of the observed data.

  • Section  5 introduces the concept of neural networks, which allow us to combine together multiple pieces of information more easily than log-linear models, resulting in increased modeling accuracy. It gives an example of feed-forward neural language models, which calculate the probability of the next word based on a few previous words using neural networks.

  • Section  6 introduces recurrent neural networks, a variety of neural networks that have mechanisms to allow them to remember information over multiple time steps. These lead to recurrent neural network language models, which allow for the handling of long-term dependencies that are useful when modeling language or other sequential data.

Finally, Sections 7 and 8 describe actual sequence-to-sequence models capable of performing machine translation or other tasks.

  • Section  7 describes encoder-decoder models, which use a recurrent neural network to encode the input sequence into a vector of numbers, and another network to decode this vector of numbers into an output sentence. It also describes search algorithms to generate output sequences based on this model.

  • Section  8 describes attention, a method that allows the model to focus on different parts of the input sentence while generating translations. This allows for a more efficient and intuitive method of representing sentences, and is often more effective than its simpler encoder-decoder counterpart.

2 Statistical MT Preliminaries

First, before talking about any specific models, this chapter describes the overall framework of statistical machine translation (SMT) [16] more formally.

First, we define our task of machine translation as translating a source sentence F = f_1, …, f_{|F|} into a target sentence E = e_1, …, e_{|E|}. Thus, any type of translation system can be defined as a function

\hat{E} = \text{mt}(F)    (1)

which returns a translation hypothesis Ê given a source sentence F as input.

Statistical machine translation systems are systems that perform translation by creating a probabilistic model for the probability of E given F, P(E | F; θ), and finding the target sentence that maximizes this probability:

\hat{E} = \underset{E}{\text{argmax}}\; P(E \mid F; \theta)    (2)

where θ are the parameters of the model specifying the probability distribution. The parameters θ are learned from data consisting of aligned sentences in the source and target languages, which are called parallel corpora in technical terminology. Within this framework, there are three major problems that we need to handle appropriately in order to create a good translation system:

Modeling:

First, we need to decide what our model will look like. What parameters will it have, and how will the parameters specify a probability distribution?

Learning:

Next, we need a method to learn appropriate values for parameters from training data.

Search:

Finally, we need to solve the problem of finding the most probable sentence (solving the “argmax” in Equation  2). This process of searching for the best hypothesis is often called decoding. (The term is based on the famous quote from Warren Weaver, likening the process of machine translation to decoding an encoded cipher.)

The remainder of the material here will focus on solving these problems.

3 n-gram Language Models

While the final goal of a statistical machine translation system is to create a model of the target sentence E given the source sentence F, P(E | F), in this chapter we will take a step back, and attempt to create a language model of only the target sentence E, P(E). Basically, this model allows us to do two things that are of practical use.

Assess naturalness:

Given a sentence E, this can tell us, does this look like an actual, natural sentence in the target language? If we can learn a model to tell us this, we can use it to assess the fluency of sentences generated by an automated system to improve its results. It could also be used to evaluate sentences generated by a human for purposes of grammar checking or error correction.

Generate text:

Language models can also be used to randomly generate text by sampling a sentence E′ from the target distribution: E′ ~ P(E). (The symbol ~ means “is sampled from”.) Randomly generating samples from a language model can be interesting in itself – we can see what the model “thinks” is a natural-looking sentence – but it will be more practically useful in the context of the neural translation models described in the following chapters.

In the following sections, we’ll cover a few methods used to calculate this probability P(E).

3.1 Word-by-word Computation of Probabilities

As mentioned above, we are interested in calculating the probability of a sentence E, P(E). Formally, this can be expressed as

P(E) = P(|E| = T, e_1^{T})    (3)

the joint probability that the length of the sentence is T (|E| = T), that the identity of the first word in the sentence is e_1, the identity of the second word in the sentence is e_2, up until the last word in the sentence being e_T. Unfortunately, directly creating a model of this probability distribution is not straightforward (although it is possible, as shown by whole-sentence language models in [88]), as the length of the sequence T is not determined in advance, and there are a large number of possible combinations of words. (Question: if |V| is the size of the target vocabulary, how many possible sentences are there for a sentence of length T?)

Figure 2: An example of decomposing language model probabilities word-by-word.

As a way to make things easier, it is common to re-write the probability of the full sentence as the product of single-word probabilities. This takes advantage of the fact that a joint probability – for example P(e_1, e_2, e_3) – can be calculated by multiplying together conditional probabilities for each of its elements. In the example, this means that P(e_1, e_2, e_3) = P(e_1) P(e_2 | e_1) P(e_3 | e_1, e_2).

Figure  2 shows an example of this incremental calculation of probabilities for the sentence “she went home”. Here, in addition to the actual words in the sentence, we have introduced an implicit sentence end (“⟨/s⟩”) symbol, which we will indicate when we have terminated the sentence. Stepping through the equation in order, this means we first calculate the probability of “she” coming at the beginning of the sentence, then the probability of “went” coming next in a sentence starting with “she”, the probability of “home” coming after the sentence prefix “she went”, and then finally the sentence end symbol “⟨/s⟩” after “she went home”. More generally, we can express this as the following equation:

P(E) = \prod_{t=1}^{T+1} P(e_t \mid e_1^{t-1})    (4)

where e_{T+1} = ⟨/s⟩. So coming back to the sentence end symbol ⟨/s⟩, the reason why we introduce this symbol is because it allows us to know when the sentence ends. In other words, by examining the position of the ⟨/s⟩ symbol, we can determine the |E| = T term in our original LM joint probability in Equation  3. In this example, when we have ⟨/s⟩ as the 4th word in the sentence, we know we’re done and our final sentence length is 3.

Once we have the formulation in Equation  4, the problem of language modeling now becomes a problem of calculating the probability of the next word, P(e_t | e_1^{t-1}), given the previous words e_1^{t-1}. This is much more manageable than calculating the probability for the whole sentence, as we now have a fixed set of items that we are looking to calculate probabilities for. The next couple of sections will show a few ways to do so.
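To make the decomposition in Equation 4 concrete, here is a minimal Python sketch (not from the original text) that multiplies next-word probabilities together; next_word_prob is a placeholder for any model of P(e_t | e_1^{t-1}):

def sentence_probability(words, next_word_prob):
    # Multiply together P(e_t | e_1^{t-1}) for each word, appending the
    # sentence end symbol "</s>" as in Equation 4.
    prob = 1.0
    context = []
    for w in words + ["</s>"]:
        prob *= next_word_prob(w, tuple(context))
        context.append(w)
    return prob

For example, sentence_probability(["she", "went", "home"], next_word_prob) steps through exactly the four probabilities described above.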

3.2 Count-based n-gram Language Models

Figure 3: An example of calculating probabilities using maximum likelihood estimation.

The first way to calculate probabilities is simple: prepare a set of training data from which we can count word strings, count up the number of times we have seen a particular string of words, and divide it by the number of times we have seen the context. This simple method can be expressed by the equation below, with an example shown in Figure  3:

P(e_t \mid e_1^{t-1}) = \frac{c_{\text{prefix}}(e_1^{t})}{c_{\text{prefix}}(e_1^{t-1})}    (5)

Here c_{\text{prefix}}(·) is the count of the number of times this particular word string appeared at the beginning of a sentence in the training data. This approach is called maximum likelihood estimation (MLE, details later in this chapter), and is both simple and guaranteed to create a model that assigns a high probability to the sentences in training data.

However, let’s say we want to use this model to assign a probability to a new sentence that we’ve never seen before. For example, say we want to calculate the probability of the sentence “i am from utah .” based on the training data in the example. This sentence is extremely similar to the sentences we’ve seen before, but unfortunately because the string “i am from utah” has not been observed in our training data, c_{\text{prefix}}(“i am from utah”) becomes zero, and thus the probability of the whole sentence as calculated by Equation  5 also becomes zero. In fact, this language model will assign a probability of zero to every sentence that it hasn’t seen before in the training corpus, which is not very useful, as the model loses the ability to tell us whether a new sentence a system generates is natural or not, or to generate new outputs.

To solve this problem, we take two measures. First, instead of calculating probabilities from the beginning of the sentence, we set a fixed window of previous words upon which we will base our probability calculations, approximating the true probability. If we limit our context to n−1 previous words, this would amount to:

P(e_t \mid e_1^{t-1}) \approx P(e_t \mid e_{t-n+1}^{t-1})    (6)

Models that make this assumption are called n-gram models. Specifically, models where n = 1 are called unigram models, n = 2 bigram models, n = 3 trigram models, and n = 4, 5, and beyond four-gram, five-gram, etc.

The parameters θ of n-gram models consist of probabilities of the next word given the n−1 previous words:

\theta_{e_{t-n+1}^{t}} = P(e_t \mid e_{t-n+1}^{t-1})    (7)

and in order to train an n-gram model, we have to learn these parameters from data. (Question: how many parameters does an n-gram model with a particular n have?) In the simplest form, these parameters can be calculated using maximum likelihood estimation as follows:

P_{\text{ML}}(e_t \mid e_{t-n+1}^{t-1}) = \theta_{e_{t-n+1}^{t}} = \frac{c(e_{t-n+1}^{t})}{c(e_{t-n+1}^{t-1})}    (8)

where c(·) is the count of the word string anywhere in the corpus. Sometimes these equations will reference positions before the beginning of the sentence. In this case, we assume that the corresponding words are equal to ⟨s⟩, a special sentence start symbol.
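As a concrete illustration of Equation 8, the following is a small Python sketch (not from the original text) of maximum likelihood estimation for a bigram model:

from collections import defaultdict

def train_bigram_mle(sentences):
    # Count each bigram and its one-word context, then divide (Equation 8).
    bigram_counts = defaultdict(int)
    context_counts = defaultdict(int)
    for sent in sentences:
        words = ["<s>"] + sent + ["</s>"]
        for prev, cur in zip(words, words[1:]):
            bigram_counts[(prev, cur)] += 1
            context_counts[prev] += 1
    return {bg: cnt / context_counts[bg[0]] for bg, cnt in bigram_counts.items()}

Here sentences is assumed to be a list of tokenized sentences (lists of words), and the returned dictionary maps each observed bigram to its maximum likelihood probability.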

If we go back to our previous example and set n = 2, we can see that while the string “i am from utah .” has never appeared in the training corpus, “i am”, “am from”, “from utah”, “utah .”, and “. ⟨/s⟩” are all somewhere in the training corpus, and thus we can patch together probabilities for them and calculate a non-zero probability for the whole sentence. (Question: what is this probability?)

However, we still have a problem: what if we encounter a two-word string that has never appeared in the training corpus? In this case, we’ll still get a zero probability for that particular two-word string, resulting in our full sentence probability also becoming zero. n-gram models fix this problem by smoothing probabilities, combining the maximum likelihood estimates for various values of n. In the simple case of smoothing unigram and bigram probabilities, we can think of a model that combines together the probabilities as follows:

P(e_t \mid e_{t-1}) = (1-\alpha)\, P_{\text{ML}}(e_t \mid e_{t-1}) + \alpha\, P_{\text{ML}}(e_t)    (9)

where α is a variable specifying how much probability mass we hold out for the unigram distribution. As long as we set α > 0, regardless of the context all the words in our vocabulary will be assigned some probability. This method is called interpolation, and is one of the standard ways to make probabilistic models more robust to low-frequency phenomena.
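A minimal Python sketch (not from the original text) of the interpolation in Equation 9, assuming unigram and bigram are probability dictionaries like the one built by the earlier bigram sketch:

def interpolated_prob(prev, cur, unigram, bigram, alpha=0.1):
    # (1 - alpha) * P_ML(cur | prev) + alpha * P_ML(cur), as in Equation 9.
    p_bigram = bigram.get((prev, cur), 0.0)
    p_unigram = unigram.get(cur, 0.0)
    return (1 - alpha) * p_bigram + alpha * p_unigram

As long as alpha is greater than zero and the unigram distribution covers the whole vocabulary, every in-vocabulary word receives a non-zero probability regardless of the context.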

If we want to use even more context – n = 3, 4, 5, or more – we can recursively define our interpolated probabilities as follows:

P(e_t \mid e_{t-m+1}^{t-1}) = (1-\alpha_m)\, P_{\text{ML}}(e_t \mid e_{t-m+1}^{t-1}) + \alpha_m\, P(e_t \mid e_{t-m+2}^{t-1})    (10)

The first term on the right side of the equation is the maximum likelihood estimate for the model of order m, and the second term is the interpolated probability for all orders up to m−1.

There are also more sophisticated methods for smoothing, which are beyond the scope of this section, but summarized very nicely in [19].

Context-dependent smoothing coefficients:

Instead of having a fixed α, we condition the interpolation coefficient on the context: α_{e_{t-m+1}^{t-1}}. This allows the model to give more weight to higher-order n-grams when there are a sufficient number of training examples for the parameters to be estimated accurately, and fall back to lower-order n-grams when there are fewer training examples. These context-dependent smoothing coefficients can be chosen using heuristics [118] or learned from data [77].

Back-off:

In Equation  9, we interpolated together two probability distributions over the full vocabulary V. In the alternative formulation of back-off, the lower-order distribution is only used to calculate probabilities for words that were given a probability of zero in the higher-order distribution. Back-off is more expressive but also more complicated than interpolation, and the two have been reported to give similar results [41].

Modified distributions:

It is also possible to use a different distribution than P_{\text{ML}}. This can be done by subtracting a constant value from the counts before calculating probabilities, a method called discounting. It is also possible to modify the counts of lower-order distributions to reflect the fact that they are used mainly as a fall-back for when the higher-order distributions lack sufficient coverage.

Currently, Modified Kneser-Ney smoothing (MKN; [19]) is generally considered one of the standard and effective methods for smoothing n-gram language models. MKN uses context-dependent smoothing coefficients, discounting, and modification of lower-order distributions to ensure accurate probability estimates.

3.3 Evaluation of Language Models

Once we have a language model, we will want to test whether it is working properly. The way we test language models is, like many other machine learning models, by preparing three sets of data:

Training data

is used to train the parameters of the model.

Development data

is used to make choices between alternate models, or to tune the hyper-parameters of the model. Hyper-parameters in the model above could include the maximum length n of n-grams in the n-gram model or the type of smoothing method.

Test data

is used to measure our final accuracy and report results.

For language models, we basically want to know whether the model is an accurate model of language, and there are a number of ways we can define this. The most straight-forward way of defining accuracy is the likelihood of the model with respect to the development or test data. The likelihood of the parameters θ with respect to this data is equal to the probability that the model assigns to the data. For example, if we have a test dataset E_test, this is:

P(\mathcal{E}_{\text{test}}; \theta)    (11)

We often assume that this data consists of several independent sentences or documents E, giving us

P(\mathcal{E}_{\text{test}}; \theta) = \prod_{E \in \mathcal{E}_{\text{test}}} P(E; \theta)    (12)

Another measure that is commonly used is log likelihood

\log P(\mathcal{E}_{\text{test}}; \theta) = \sum_{E \in \mathcal{E}_{\text{test}}} \log P(E; \theta)    (13)

The log likelihood is used for a couple reasons. The first is because the probability of any particular sentence according to the language model can be a very small number, and the product of these small numbers can become a very small number that will cause numerical precision problems on standard computing hardware. The second is because sometimes it is more convenient mathematically to deal in log space. For example, when taking the derivative in gradient-based methods to optimize parameters (used in the next section), it is more convenient to deal with the sum in Equation  13 than the product in Equation  11.

It is also common to divide the log likelihood by the number of words in the corpus

\frac{\log P(\mathcal{E}_{\text{test}}; \theta)}{\text{length}(\mathcal{E}_{\text{test}})}, \quad \text{where } \text{length}(\mathcal{E}_{\text{test}}) = \sum_{E \in \mathcal{E}_{\text{test}}} |E|    (14)

This makes it easier to compare and contrast results across corpora of different lengths.

The final common measure of language model accuracy is perplexity, which is defined as the exponential of the average negative log likelihood per word

\text{ppl}(\mathcal{E}_{\text{test}}; \theta) = e^{-\log P(\mathcal{E}_{\text{test}}; \theta)\, /\, \text{length}(\mathcal{E}_{\text{test}})}    (15)

An intuitive explanation of the perplexity is “how confused is the model about its decision?” More accurately, it expresses the value “if we randomly picked words from the probability distribution calculated by the language model at each time step, on average how many words would we have to pick to get the correct one?” One reason why it is common to see perplexities in research papers is because the numbers calculated by perplexity are bigger, making the differences in models more easily perceptible by the human eye. (And, some cynics will say, making it easier for your research papers to get accepted.)
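A small Python sketch (not from the original text) of per-word log likelihood and perplexity as in Equations 14 and 15, reusing the sentence_probability helper sketched earlier:

import math

def perplexity(test_sentences, next_word_prob):
    total_log_prob = 0.0
    total_words = 0
    for sent in test_sentences:
        total_log_prob += math.log(sentence_probability(sent, next_word_prob))
        total_words += len(sent) + 1  # count the </s> symbol as a prediction
    # Exponential of the average negative log likelihood per word (Equation 15)
    return math.exp(-total_log_prob / total_words)

Whether the end-of-sentence symbol is counted as a word varies between implementations; here it is counted, matching the decomposition in Equation 4.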

3.4 Handling Unknown Words

Finally, one important point to keep in mind is that some of the words in the test set will not appear even once in the training set. These words are called unknown words, and need to be handled in some way. Common ways to do this in language models include:

Assume closed vocabulary:

Sometimes we can assume that there will be no new words in the test set. For example, if we are calculating a language model over ASCII characters, it is reasonable to assume that all characters have been observed in the training set. Similarly, in some speech recognition systems, it is common to simply assign a probability of zero to words that don’t appear in the training data, which means that these words will not be able to be recognized.

Interpolate with an unknown words distribution:

As mentioned in Equation  10, we can interpolate between distributions of higher and lower order. In the case of unknown words, we can think of this as a distribution of order “0”, and define the 1-gram probability as the interpolation between the unigram distribution and unknown word distribution

P(e_t) = (1-\alpha_1)\, P_{\text{ML}}(e_t) + \alpha_1\, P_{\text{unk}}(e_t)    (16)

Here, P_{\text{unk}} needs to be a distribution that assigns a probability to all words, not just ones in our vocabulary V derived from the training corpus. This could be done by, for example, training a language model over characters that “spells out” unknown words in the case they don’t exist in our vocabulary. Alternatively, as a simpler approximation that is nonetheless fairer than ignoring unknown words, we can guess the total number of words |V_all| in the language we are modeling, where |V_all| > |V|, and define P_{\text{unk}} as a uniform distribution over this vocabulary: P_{\text{unk}}(e_t) = 1/|V_all|.

Add an ⟨unk⟩ word:

As a final method to handle unknown words we can remove some of the words in the training data from our vocabulary, and replace them with a special ⟨unk⟩ symbol representing unknown words. One common way to do so is to remove singletons, or words that only appear once in the training corpus. By doing this, we explicitly predict in which contexts we will be seeing an unknown word, instead of implicitly predicting it through interpolation like mentioned above. Even if we predict the ⟨unk⟩ symbol, we will still need to estimate the probability of the actual word, so any time we predict ⟨unk⟩ at position t, we further multiply in the probability of P_{\text{unk}}(e_t).
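A toy Python sketch (not from the original text) of the singleton-replacement strategy just described:

from collections import Counter

def replace_singletons(sentences):
    # Replace words seen only once in the training data with "<unk>" so the
    # model learns in which contexts unknown words tend to appear.
    counts = Counter(w for sent in sentences for w in sent)
    return [[w if counts[w] > 1 else "<unk>" for w in sent] for sent in sentences]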

3.5 Further Reading

To read in more detail about n-gram language models, [41] gives a very nice introduction and comprehensive summary about a number of methods to overcome various shortcomings of vanilla n-grams like the ones mentioned above.

There are also a number of extensions to n-gram models that may be of interest to the reader.

Large-scale language modeling:

Language models are an integral part of many commercial applications, and in these applications it is common to build language models using massive amounts of data harvested from the web or other sources. To handle this data, there is research on efficient data structures [48, 82], distributed parameter servers [14], and lossy compression algorithms [104].

Language model adaptation:

In many situations, we want to build a language model for a specific speaker or domain. Adaptation techniques make it possible to create large general-purpose models, then adapt these models to more closely match the target use case [6].

Longer-distance count-based language models:

As mentioned above, n-gram models limit their context to n−1 words, but in reality there are dependencies in language that can reach much farther back into the sentence, or even span across whole documents. The recurrent neural network language models that we will introduce in Section  6 are one way to handle this problem, but there are also non-neural approaches such as cache language models [61], topic models [13], and skip-gram models [41].

Syntax-based language models:

There are also models that take into account the syntax of the target sentence. For example, it is possible to condition probabilities not on words that occur directly next to each other in the sentence, but those that are “close” syntactically [96].

3.6 Exercise

The exercise that we will be doing in class will be constructing an n-gram LM with linear interpolation between various levels of n-grams. We will write code to:

  • Read in and save the training and testing corpora.

  • Learn the parameters on the training corpus by counting up the number of times each n-gram has been seen, and calculating maximum likelihood estimates according to Equation  8.

  • Calculate the probabilities of the test corpus using linear interpolation according to Equation  9 or Equation  10.

To handle unknown words, you can use the uniform distribution method described in Section  3.4, assuming that there are 10,000,000 words in the English vocabulary. As a sanity check, it may be useful to report the number of unknown words, and which portion of the per-word log likelihood was incurred by the main model, and which portion was incurred by the unknown word probability P_{\text{unk}}.

In order to do so, you will first need data, and to make it easier to start out you can use some pre-processed data from the German-English translation task of the IWSLT evaluation campaign (http://iwslt.org) here: http://phontron.com/data/iwslt-en-de-preprocessed.tar.gz.

Potential improvements to the model include reading [19] and implementing a better smoothing method, implementing a better method for handling unknown words, or implementing one of the more advanced methods in Section  3.5.

4 Log-linear Language Models

This chapter will discuss another set of language models: log-linear language models [87, 20], which take a very different approach than the count-based n-grams described above. (It should be noted that the cited papers call these maximum entropy language models. This is because the models in this chapter can be motivated in two ways: log-linear models that calculate un-normalized log-probability scores for each word and normalize them to probabilities, and maximum-entropy models that spread their probability mass as evenly as possible given the constraint that they must model the training data. While the maximum-entropy interpretation is quite interesting theoretically and interested readers can reference [11] to learn more, the explanation as log-linear models is simpler conceptually, and thus we will use this description in this chapter.)

4.1 Model Formulation

Like n-gram language models, log-linear language models still calculate the probability of a particular word e_t given a particular context e_{t-n+1}^{t-1}. However, their method for doing so is quite different from count-based language models, based on the following procedure.

Calculating features: Log-linear language models revolve around the concept of features. In short, features are basically “something about the context that will be useful in predicting the next word”. More formally, we define a feature function φ(e_{t-n+1}^{t-1}) that takes a context as input, and outputs a real-valued feature vector x that describes the context using N different features. (Alternative formulations that define feature functions that also take the current word as input are also possible, but in this book, to simplify the transition into neural language models described in Section  5, we consider features over only the context.)

Figure 4: An example of feature values for a particular context.

For example, from our bi-gram models from the previous chapter, we know that “the identity of the previous word” is something that is useful in predicting the next word. If we want to express the identity of the previous word as a real-valued vector, we can assume that each word in our vocabulary V is associated with a word ID j, where 1 ≤ j ≤ |V|. Then, we define our feature function φ(e_{t-1}) to return a feature vector of length |V|, where if e_{t-1} = j, then the jth element is equal to one and the remaining elements in the vector are equal to zero. This type of vector is often called a one-hot vector, an example of which is shown in Figure  4(a). For later use, we will also define a function onehot(i) which returns a vector where only the ith element is one and the rest are zero (assume the length of the vector is the appropriate length given the context).

Of course, we are not limited to only considering one previous word. We could also calculate one-hot vectors for both e_{t-1} and e_{t-2}, then concatenate them together, which would allow us to create a model that considers the values of the two previous words. In fact, there are many other types of feature functions that we can think of (more in Section  4.4), and the ability to flexibly define these features is one of the advantages of log-linear language models over standard n-gram models.
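The following is a small NumPy sketch (not from the original text) of a feature function built from the concatenation of two one-hot vectors, assuming word_to_id maps each word in the vocabulary to an integer ID:

import numpy as np

def onehot(index, size):
    # A vector where only the index-th element is one and the rest are zero
    vec = np.zeros(size)
    vec[index] = 1.0
    return vec

def feature_function(prev_word, prev2_word, word_to_id):
    # Concatenating one-hot vectors for the two previous words gives a
    # feature vector of length 2|V|.
    vocab_size = len(word_to_id)
    return np.concatenate([onehot(word_to_id[prev_word], vocab_size),
                           onehot(word_to_id[prev2_word], vocab_size)])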

Calculating scores: Once we have our feature vector, we now want to use these features to predict probabilities over our output vocabulary V. In order to do so, we calculate a score vector s of length |V| that corresponds to the likelihood of each word: words with higher scores in the vector will also have higher probabilities. We do so using the model parameters θ, which specifically come in two varieties: a bias vector b, which tells us how likely each word in the vocabulary is overall, and a weight matrix W, which describes the relationship between feature values and scores. Thus, the final equation for calculating our scores for a particular context is:

s = W x + b    (17)
Figure 5: An example of the weights for a log linear model in a certain context.

One thing to note here is that our feature vectors are often sparse: in the special case of one-hot vectors, or other sparse vectors, most of the elements are zero. Because of this we can also think about Equation  17 in a different way that is numerically equivalent, but can make computation more efficient. Specifically, instead of multiplying the large feature vector by the large weight matrix, we can add together the columns of the weight matrix for all active (non-zero) features as follows:

s = \sum_{j:\, x_j \neq 0} x_j\, W_{\cdot,j} + b    (18)

where W_{·,j} is the jth column of W. This allows us to think of calculating scores as “look up the vector for the features active for this instance, and add them together”, instead of writing them as matrix math. An example calculation in this paradigm where we have two feature functions (one for the directly preceding word, and one for the word before that) is shown in Figure  5.

Calculating probabilities: It should be noted here that scores are arbitrary real numbers, not probabilities: they can be negative or greater than one, and there is no restriction that they add to one. Because of this, we run these scores through a function that performs the following transformation:

p_j = \frac{\exp(s_j)}{\sum_{\tilde{j}} \exp(s_{\tilde{j}})}    (19)

By taking the exponent and dividing by the sum of the values over the entire vocabulary, these scores can be turned into probabilities that are between 0 and 1 and sum to 1.

This function is called the softmax function, and often expressed in vector form as follows:

p = \text{softmax}(s)    (20)

Through applying this to the scores calculated in the previous section, we now have a way to go from features to language model probabilities.
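A short NumPy sketch (not from the original text) of the score and softmax calculations in Equations 17-20, assuming W has shape (|V|, N) and b has length |V|:

import numpy as np

def softmax(s):
    s = s - np.max(s)        # subtract the max for numerical stability
    exp_s = np.exp(s)
    return exp_s / exp_s.sum()

def word_probabilities(x, W, b):
    s = W @ x + b            # scores (Equation 17)
    return softmax(s)        # probabilities (Equations 19-20)

Subtracting the maximum score before exponentiating does not change the result but avoids numerical overflow for large scores.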

4.2 Learning Model Parameters

Now, the only remaining missing link is how to acquire the parameters θ, consisting of the weight matrix W and bias b. Basically, the way we do so is by attempting to find parameters that fit the training corpus well.

To do so, we use standard machine learning methods for optimizing parameters. First, we define a loss function – a function expressing how poorly we’re doing on the training data. In most cases, we assume that this loss is equal to the negative log likelihood:

\ell(\mathcal{E}_{\text{train}}; \theta) = -\log P(\mathcal{E}_{\text{train}}; \theta)    (21)

We assume we can also define the loss on a per-word level:

\ell(e_t \mid e_{t-n+1}^{t-1}; \theta) = -\log P(e_t \mid e_{t-n+1}^{t-1}; \theta)    (22)

Next, we optimize the parameters to reduce this loss. While there are many methods for doing so, in recent years one of the go-to methods is stochastic gradient descent (SGD). SGD is an iterative process where we randomly pick a single word e_t (or mini-batch, discussed in Section  5) and take a step to improve the likelihood with respect to e_t. In order to do so, we first calculate the derivative of the loss with respect to each of the parameters in the full parameter set θ:

g = \frac{\partial\, \ell(e_t \mid e_{t-n+1}^{t-1}; \theta)}{\partial \theta}    (23)

We can then use this information to take a step in the direction that will reduce the loss according to the objective function

\theta \leftarrow \theta - \eta\, g    (24)

where η is our learning rate, specifying the amount with which we update the parameters every time we perform an update. By doing so, we can find parameters for our model that reduce the loss, or increase the likelihood, on the training data.
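The update in Equation 24 is only a couple of lines of Python; this sketch (not from the original text) assumes the parameters and their gradients are stored in dictionaries of NumPy arrays:

def sgd_update(params, grads, learning_rate=0.1):
    # theta <- theta - eta * g for every parameter (Equation 24)
    for name in params:
        params[name] -= learning_rate * grads[name]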

This vanilla variety of SGD is quite simple and still a very competitive method for optimization in large-scale systems. However, there are also a few things to consider to ensure that training remains stable:

Adjusting the learning rate:

SGD also requires us to carefully choose η: if η is too big, training can become unstable and diverge, and if η is too small, training may become incredibly slow or fall into bad local optima. One way to handle this problem is learning rate decay: starting with a higher learning rate, then gradually reducing the learning rate near the end of training. Other more sophisticated methods are listed below.

Early stopping:

It is common to use a held-out development set, measure our log likelihood on this set, and save the model that has achieved the best log likelihood on this held-out set. This is useful because if the model starts to over-fit to the training set, losing its generalization capability, we can rewind to this saved model. As another method to prevent over-fitting and smooth convergence of training, it is common to measure log likelihood on a held-out development set, and when the log likelihood stops improving or starts getting worse, reduce the learning rate.

Shuffling training order:

One of the features of SGD is that it processes training data one at a time. This is nice because it is simple and can be efficient, but it also causes problems if there is some bias in the order in which we see the data. For example, if our data is a corpus of news text where news articles come first, then sports, then entertainment, there is a chance that near the end of training our model will see hundreds or thousands of entertainment examples in a row, resulting in the parameters moving to a space that favors these more recently seen training examples. To prevent this problem, it is common (and highly recommended) to randomly shuffle the order with which the training data is presented to the learning algorithm on every pass through the data.

There are also a number of other update rules that have been proposed to improve gradient descent and make it more stable or efficient. Some representative methods are listed below:

SGD with momentum [90]:

Instead of taking a single step in the direction of the current gradient, SGD with momentum keeps an exponentially decaying average of past gradients. This reduces the propensity of simple SGD to “jitter” around, making optimization move more smoothly across the parameter space.

AdaGrad [30]:

AdaGrad focuses on the fact that some parameters are updated much more frequently than others. For example, in the model above, columns of the weight matrix W corresponding to infrequent context words will only be updated a few times for every pass through the corpus, while the bias b will be updated on every training example. Based on this, AdaGrad dynamically adjusts the learning rate for each parameter individually, with frequently updated (and presumably more stable) parameters such as b getting smaller updates, and infrequently updated parameters such as W getting larger updates.

Adam [60]:

Adam is another method that computes learning rates for each parameter. It does so by keeping track of exponentially decaying averages of the mean and variance of past gradients, incorporating ideas similar to both momentum and AdaGrad. Adam is now one of the more popular methods for optimization, as it greatly speeds up convergence on a wide variety of datasets, facilitating fast experimental cycles. However, it is also known to be prone to over-fitting, and thus, if high performance is paramount, it should be used with some caution and compared to more standard SGD methods.
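As a reference point, the following is a NumPy sketch (not from the original text) of the Adam update for a single parameter array, using the commonly cited default hyper-parameters; m and v are running averages of the gradient and its square, and t is the update count starting from 1:

import numpy as np

def adam_update(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # decaying average of the mean
    v = b2 * v + (1 - b2) * grad ** 2     # decaying average of the variance
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    param -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v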

[89] provides a good overview of these various methods with equations and notes a few other concerns when performing stochastic optimization.

4.3 Derivatives for Log-linear Models

Now, the final piece in the puzzle is the calculation of derivatives of the loss function with respect to the parameters. To do so, first we step through the full loss function in one pass as below:

x = \phi(e_{t-n+1}^{t-1})    (25)
s = \sum_{j:\, x_j \neq 0} x_j\, W_{\cdot,j} + b    (26)
p = \text{softmax}(s)    (27)
\ell = -\log p_{e_t}    (28)

And thus, using the chain rule to calculate

\frac{d\ell}{db} = \frac{d\ell}{dp} \frac{dp}{ds} \frac{ds}{db}    (29)
\frac{d\ell}{dW_{\cdot,j}} = \frac{d\ell}{dp} \frac{dp}{ds} \frac{ds}{dW_{\cdot,j}}    (30)

we find that the derivative of the loss function for the bias b and each column W_{·,j} of the weight matrix is:

\frac{d\ell}{db} = p - \text{onehot}(e_t)    (31)
\frac{d\ell}{dW_{\cdot,j}} = x_j \left( p - \text{onehot}(e_t) \right)    (32)

Confirming these equations is left as a (highly recommended) exercise to the reader. Hint: when performing this derivation, it is easier to work with the log probability log p than working with p directly.
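A compact NumPy sketch (not from the original text) of the gradients in Equations 31 and 32 for a single training word, reusing the word_probabilities helper from the earlier sketch:

import numpy as np

def loglinear_gradients(x, W, b, true_word_id):
    p = word_probabilities(x, W, b)
    p[true_word_id] -= 1.0        # p - onehot(e_t)
    grad_b = p
    grad_W = np.outer(p, x)       # column j is x_j * (p - onehot(e_t))
    return grad_W, grad_b

These gradients can be plugged directly into the SGD update sketched in Section 4.2.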

4.4 Other Features for Language Modeling

One reason why log-linear models are nice is because they allow us to flexibly design features that we think might be useful for predicting the next word. For example, these could include:

Context word features:

As shown in the example above, we can use the identity of e_{t-1} or the identity of e_{t-2}.

Context class:

Context words can be grouped into classes of similar words (using a method such as Brown clustering [15]), and instead of looking up a one-hot vector with a separate entry for every word, we could look up a one-hot vector with an entry for each class [18]. Thus, words from the same class could share statistical strength, allowing models to generalize better.

Context suffix features:

Maybe we want a feature that fires every time the previous word ends with “…ing” or other common suffixes. This would allow us to learn more generalized patterns about words that tend to follow progressive verbs, etc.

Bag-of-words features:

Instead of just using the past n−1 words, we could use all previous words in the sentence. This would amount to calculating the one-hot vectors for every previous word in the sentence, and then instead of concatenating them simply summing them together. This would lose all information about what word is in what position, but could capture information about what words tend to co-occur within a sentence or document.

It is also possible to combine together multiple features (for example, e_{t-1} is a particular word and e_{t-2} is another particular word). This is one way to create a more expressive feature set, but also has the downside of greatly increasing the size of the feature space. We discuss these features in more detail in Section  5.1.

4.5 Further Reading

The language model in this section was basically a featurized version of an n-gram language model. There are quite a few other varieties of linear featurized models including:

Whole-sentence language models:

These models, instead of predicting words one-by-one, predict the probability over the whole sentence then normalize [88]. This can be conducive to introducing certain features, such as a probability distribution over lengths of sentences, or features such as “whether this sentence contains a verb”.

Discriminative language models:

In the case that we want to use a language model to determine whether the output of a system is good or not, sometimes it is useful to train directly on this system output, and try to re-rank the outputs to achieve higher accuracy [86]. Even if we don’t have real negative examples, it can be possible to “hallucinate” negative examples that are still useful for training [80].

4.6 Exercise

In the exercise for this chapter, we will construct a log-linear language model and evaluate its performance. I highly suggest that you try to use the NumPy library to hold and perform calculations over feature vectors, as this will make things much easier. If you have never used NumPy before, you can take a look at this tutorial to get started: https://docs.scipy.org/doc/numpy-dev/user/quickstart.html.

Writing the program will entail:

  • Writing a function to read in the training and test corpora, and converting the words into numerical IDs.

  • Writing the feature function φ(·), which takes in a string and returns which features are active (for example, as a baseline these can be features with the identity of the previous two words).

  • Writing code to calculate the loss function.

  • Writing code to calculate gradients and perform stochastic gradient descent updates.

  • Writing (or re-using from the previous exercise) code to evaluate the language models.

Similarly to the n-gram language models, we will measure the per-word log likelihood and perplexity on our text corpus, and compare it to n-gram language models. Handling unknown words will similarly require that you use the uniform distribution with 10,000,000 words in the English vocabulary.

Potential improvements to the model include designing better feature functions, adjusting the learning rate and measuring the results, and researching and implementing other types of optimizers such as AdaGrad or Adam.

5 Neural Networks and Feed-forward Language Models

In this chapter, we describe language models based on neural networks, a way to learn more sophisticated functions to improve the accuracy of our probability estimates with less feature engineering.

5.1 Potential and Problems with Combination Features

Figure 6: An example of the effect that combining multiple words can have on the probability of the next word.

Before moving into the technical detail of neural networks, first let’s take a look at a motivating example in Figure  6. From the example, we can see that “farmers” as e_{t-2} is compatible with “hay” as e_t (in the context “farmers grow hay”), and “eat” as e_{t-1} is also compatible (in the context “cows eat hay”). If we are using a log-linear model with one set of features dependent on e_{t-1}, and another set of features dependent on e_{t-2}, neither set of features can rule out the unnatural phrase “farmers eat hay.”

One way we can fix this problem is by creating another set of features where we learn one vector for each pair of words e_{t-2}, e_{t-1}. If this is the case, our vector for the context “farmers eat” could assign a low score to “hay”, resolving this problem. However, adding these combination features has one major disadvantage: it greatly expands the parameters: instead of parameters for each pair e_{t-1}, e_t, we need parameters for each triplet e_{t-2}, e_{t-1}, e_t. These numbers greatly increase the amount of memory used by the model, and if there are not enough training examples, the parameters may not be learned properly.

Because of both the importance of and difficulty in learning using these combination features, a number of methods have been proposed to handle these features, such as kernelized support vector machines [28] and neural networks [91, 39]. Specifically in this section, we will cover neural networks, which are both flexible and relatively easy to train on large data, desiderata for sequence-to-sequence models.

5.2 A Brief Overview of Neural Networks

To understand neural networks in more detail, let’s take a very simple example of a function that we cannot learn with a simple linear classifier like the ones we used in the last chapter: a function that takes an input x = [x_1, x_2] and outputs y = 1 if both x_1 and x_2 are equal and y = −1 otherwise. This function is shown in Figure  7.

Figure 7: A function that cannot be solved by a linear transformation.

A first attempt at solving this function might define a linear model (like the log-linear models from the previous chapter) that solves this problem using the following form:

y = W x + b    (33)

However, this class of functions is not powerful enough to represent the function at hand. (Question: prove this by trying to solve the system of equations.)

Thus, we turn to a slightly more complicated class of functions taking the following form:

h = \text{step}(W_{xh}\, x + b_h)
y = w_{hy}\, h + b_y    (34)

Computation is split into two stages: calculation of the hidden layer, which takes in input x and outputs a vector of hidden variables h, and calculation of the output layer, which takes in h and calculates the final result y. Both layers consist of an affine transform (a fancy name for a multiplication followed by an addition) using weights and biases, followed by a step(·) function, which calculates the following:

\text{step}(x) = \begin{cases} 1 & \text{if } x > 0 \\ -1 & \text{otherwise} \end{cases}    (35)

This function is one example of a class of neural networks called multi-layer perceptrons (MLPs). In general, MLPs consist of one or more hidden layers that consist of an affine transform followed by a non-linear function (such as the step function used here), culminating in an output layer that calculates some variety of output.

Figure 8: A simple neural network that represents the nonlinear function of Figure  7.

Figure  8 demonstrates why this type of network does a better job of representing the non-linear function of Figure  7. In short, we can see that the first hidden layer transforms the input x into a hidden vector h in a different space that is more conducive for modeling our final function. Specifically in this case, we can see that h is now in a space where we can define a linear function (using w_{hy} and b_y) that correctly calculates the desired output y.
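As an illustration, the following NumPy sketch (not from the original text) evaluates Equations 34-35 with one hand-picked set of weights that happens to realize the function in Figure 7; the particular values are illustrative assumptions, not necessarily those shown in Figure 8:

import numpy as np

def step(v):
    return np.where(v > 0, 1.0, -1.0)

W_xh = np.array([[1.0, 1.0], [-1.0, -1.0]])
b_h = np.array([-1.0, -1.0])
w_hy = np.array([1.0, 1.0])
b_y = 1.0

def mlp(x):
    h = step(W_xh @ x + b_h)   # hidden layer
    return w_hy @ h + b_y      # output layer

for x in ([1, 1], [1, -1], [-1, 1], [-1, -1]):
    print(x, mlp(np.array(x, dtype=float)))   # prints 1 when x1 == x2, -1 otherwise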

As mentioned above, MLPs are one specific variety of neural network. More generally, neural networks can be thought of as a chain of functions (such as the affine transforms and step functions used above, but also including many, many others) that takes some input and calculates some desired output. The power of neural networks lies in the fact that chaining together a variety of simpler functions makes it possible to represent more complicated functions in an easily trainable, parameter-efficient way. In fact, the simple single-layer MLP described above is a universal function approximator [51], which means that it can approximate any function to arbitrary accuracy if its hidden vector is large enough.

We will see more about training in Section  5.3 and give some more examples of how these can be more parameter efficient in the discussion of neural network language models in Section  5.5.

5.3 Training Neural Networks

Now that we have a model in Equation  34, we would like to train its parameters W_{xh}, b_h, w_{hy}, and b_y. To do so, remembering our gradient-based training methods from the last chapter, we need to define the loss function, calculate the derivative of the loss with respect to the parameters, then take a step in the direction that will reduce the loss. For our loss function, let’s use the squared-error loss, a commonly used loss function for regression problems which measures the difference between the calculated value y and correct value y^* as follows

\ell = (y^* - y)^2    (36)

Next, we need to calculate derivatives. Here, we run into one problem: the step function is not very derivative-friendly, with its derivative being:

\frac{d\, \text{step}(x)}{dx} = \begin{cases} \text{undefined} & \text{if } x = 0 \\ 0 & \text{otherwise} \end{cases}    (37)

Because of this, it is more common to use other non-linear functions, such as the hyperbolic tangent (tanh) function. The tanh function, as shown in Figure  9, looks very much like a softened version of the step function that has a continuous gradient everywhere, making it more conducive to training with gradient-based methods. There are a number of other alternatives as well, the most popular of which being the rectified linear unit (ReLU)

\text{ReLU}(x) = \begin{cases} x & \text{if } x > 0 \\ 0 & \text{otherwise} \end{cases}    (38)

shown in the left of Figure  9. In short, ReLUs solve the problem that the tanh function gets “saturated” and has very small gradients when the absolute value of the input x is very large (x is a large negative or positive number). Empirical results have often shown it to be an effective alternative to tanh, including for the language modeling task described in this chapter [110].

Figure 9: Types of non-linearities.

So let’s say we swap a tanh non-linearity in place of the step function in our network. We can now proceed to calculate derivatives like we did in Section  4.3. First, we perform the full calculation of the loss function:

h = \tanh(W_{xh}\, x + b_h)
y = w_{hy}\, h + b_y
\ell = (y^* - y)^2    (39)

Then, again using the chain rule, we calculate the derivatives of each set of parameters:

\frac{d\ell}{db_y} = -2(y^* - y)
\frac{d\ell}{dw_{hy}} = -2(y^* - y)\, h
\frac{d\ell}{db_h} = -2(y^* - y)\, w_{hy} \odot (1 - h^2)
\frac{d\ell}{dW_{xh}} = -2(y^* - y)\, \left( w_{hy} \odot (1 - h^2) \right) x^\top    (40)

We could go through all of the derivations above by hand and precisely calculate the gradients of all parameters in the model. Interested readers are free to do so, but even for a simple model like the one above, it is quite a lot of work and error prone. For more complicated models, like the ones introduced in the following chapters, this is even more the case.

Figure 10: Computation graphs for the function itself, and the loss function.

Fortunately, when we actually implement neural networks on a computer, there is a very useful tool that saves us a large portion of this pain: automatic differentiation (autodiff) [116, 44]. To understand automatic differentiation, it is useful to think of our computation in Equation  39 as a data structure called a computation graph, two examples of which are shown in Figure  10. In these graphs, each node represents either an input to the network or the result of one computational operation, such as a multiplication, addition, tanh, or squared error. The first graph in the figure calculates the function of interest itself and would be used when we want to make predictions using our model, and the second graph calculates the loss function and would be used in training.

Automatic differentiation is a two-step dynamic programming algorithm that operates over the second graph and performs:

  • Forward calculation, which traverses the nodes in the graph in topological order, calculating the actual result of the computation as in Equation  39.

  • Back propagation, which traverses the nodes in reverse topological order, calculating the gradients as in Equation  40.

The nice thing about this formulation is that while the overall function calculated by the graph can be relatively complicated, as long as it can be created by combining multiple simple nodes for which we are able to calculate the function and its derivative, we are able to use automatic differentiation to calculate its derivatives using this dynamic program without doing the derivation by hand.

Thus, to implement a general purpose training algorithm for neural networks, it is necessary to implement these two dynamic programs, as well as the atomic forward function and backward derivative computations for each type of node that we would need to use. While this is not trivial in itself, there are now a plethora of toolkits that either perform general-purpose auto-differentiation [7, 50], or auto-differentiation specifically tailored for machine learning and neural networks [1, 12, 26, 105, 78]. These implement the data structures, nodes, back-propagation, and parameter optimization algorithms needed to train neural networks in an efficient and reliable way, allowing practitioners to get started with designing their models. In the following sections, we will take this approach, taking a look at how to create our models of interest in a toolkit called DyNet (http://github.com/clab/dynet), which has a programming interface that makes it relatively easy to implement the sequence-to-sequence models covered here. (It is also developed by the author of these materials, so the decision might have been a wee bit biased.)

5.4 An Example Implementation

import dynet as dy
import random
# Parameters of the model and training
HIDDEN_SIZE = 20
NUM_EPOCHS = 20
# Define the model and SGD optimizer
model = dy.Model()
W_xh_p = model.add_parameters((HIDDEN_SIZE, 2))
b_h_p = model.add_parameters(HIDDEN_SIZE)
W_hy_p = model.add_parameters((1, HIDDEN_SIZE))
b_y_p = model.add_parameters(1)
trainer = dy.SimpleSGDTrainer(model)
# Define the training data, consisting of (x,y) tuples
data = [([1,1],1), ([-1,1],-1), ([1,-1],-1), ([-1,-1],1)]
# Define the function we would like to calculate
def calc_function(x):
    dy.renew_cg()
    W_xh = dy.parameter(W_xh_p)
    b_h = dy.parameter(b_h_p)
    W_hy = dy.parameter(W_hy_p)
    b_y = dy.parameter(b_y_p)
    x_val = dy.inputVector(x)
    h_val = dy.tanh(W_xh * x_val + b_h)
    y_val = W_hy * h_val + b_y
    return y_val
# Perform training
for epoch in range(NUM_EPOCHS):
    epoch_loss = 0
    random.shuffle(data)
    for x, ystar in data:
        y = calc_function(x)
        loss = dy.squared_distance(y, dy.scalarInput(ystar))
        epoch_loss += loss.value()
        loss.backward()
        trainer.update()
    print("Epoch %d: loss=%f" % (epoch, epoch_loss))
# Print results of prediction
for x, ystar in data:
    y = calc_function(x)
    print("%r -> %f" % (x, y.value()))

Figure 11: An example of training a neural network for a multi-layer perceptron using the toolkit DyNet.

Figure  11 shows an example of implementing the above neural network in DyNet, which we’ll step through line-by-line. Lines 1-2 import the necessary libraries. Lines 4-5 specify parameters of the model: the size of the hidden vector and the number of epochs (passes through the data) for which we’ll perform training. Line 7 initializes a DyNet model, which will store all the parameters we are attempting to learn. Lines 8-11 initialize parameters W_{xh}, b_h, w_{hy}, and b_y to be the appropriate size so that dimensions in the equations for Equation  39 match. Line 12 initializes a “trainer”, which will update the parameters in the model according to an update strategy (here we use simple stochastic gradient descent, but trainers for AdaGrad, Adam, and other strategies also exist). Line 14 creates the training data for the function in Figure  7.

Lines 16-25 define a function that takes input x and creates a computation graph to calculate Equation  39. First, line 17 creates a new computation graph to hold the computation for this particular training example. Lines 18-21 take the parameters (stored in the model) and add them to the computation graph as DyNet variables for this particular training example. Line 22 takes a Python list representing the current input and puts it into the computation graph as a DyNet variable. Line 23 calculates the hidden vector h, Line 24 calculates the value y, and Line 25 returns it.

Lines 27-36 perform training for NUM_EPOCHS passes over the data (one pass through the training data is usually called an “epoch”). Line 28 creates a variable to keep track of the loss for this epoch for later reporting. Line 29 shuffles the data, as recommended in Section  4.2. Lines 30-35 perform stochastic gradient descent, looping over each of the training examples. Line 31 creates a computation for the function itself, and Line 32 adds computation for the loss function. Line 33 runs the forward calculation to calculate the loss and adds it to the loss for this epoch. Line 34 runs back propagation, and Line 35 updates the model parameters. At the end of the epoch, we print the loss for the epoch in Line 36 to make sure that the loss is going down and our model is converging.

Finally, at the end of training in Lines 38-40, we print the output results. In an actual scenario, this would be done on a separate set of test data.

5.5 Neural-network Language Models

Now that we have the basics down, it is time to apply neural networks to language modeling [76, 9]. A feed-forward neural network language model is very much like the log-linear language model that we mentioned in the previous section, simply with the addition of one or more non-linear layers before the output.

First, let's recall the tri-gram log-linear language model. In this case, assume we have two sets of features, one expressing the identity of the previous word $e_{t-1}$ and one expressing the identity of the word before it, $e_{t-2}$; the equation for the log-linear model looks like this:

$s = W^{(1)}_{\cdot,e_{t-1}} + W^{(2)}_{\cdot,e_{t-2}} + b$
$p = \mathrm{softmax}(s)$ (41)

where we add the appropriate columns from the weight matrices $W^{(1)}$ and $W^{(2)}$ to the bias $b$ to get the score, then take the softmax to turn it into a probability.
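To make the column-picking concrete, here is a minimal DyNet sketch of the scoring step of this log-linear model. The parameter names (W1_p, W2_p, b_p) and the vocabulary size are hypothetical illustrations, not taken from the tutorial's code; each lookup parameter simply stores one score column per possible context word.

import dynet as dy

VOCAB_SIZE = 10000  # hypothetical vocabulary size
model = dy.Model()
# One |V|-dimensional score column per possible context word
W1_p = model.add_lookup_parameters((VOCAB_SIZE, VOCAB_SIZE))  # columns indexed by e_{t-1}
W2_p = model.add_lookup_parameters((VOCAB_SIZE, VOCAB_SIZE))  # columns indexed by e_{t-2}
b_p = model.add_parameters(VOCAB_SIZE)

def calc_probs(e_t1, e_t2):
    dy.renew_cg()
    # Picking the appropriate column of each weight matrix is just a lookup
    s = dy.lookup(W1_p, e_t1) + dy.lookup(W2_p, e_t2) + dy.parameter(b_p)
    return dy.softmax(s)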

Figure 12: A computation graph for a tri-gram feed-forward neural language model.

Compared to this, a tri-gram neural network model with a single hidden layer is structured as shown in Figure 12 and described in the equations below:

$m = \mathrm{concat}(M_{\cdot,e_{t-2}}, M_{\cdot,e_{t-1}})$
$h = \tanh(W_{mh} m + b_h)$
$s = W_{hs} h + b_s$
$p = \mathrm{softmax}(s)$ (42)

In the first line, we obtain a vector $m$ representing the context (in the particular case above, we are handling a tri-gram model, so the context consists of the two previous words $e_{t-2}$ and $e_{t-1}$). Here, $M$ is a matrix with $|V|$ columns, one for each word in the vocabulary, where each column is a fixed-length vector representing that word. This vector is called a word embedding or a word representation, which is a vector of real numbers corresponding to a particular word in the vocabulary. (For the purposes of the model in this chapter, these vectors can basically be viewed as one set of tunable parameters in the neural language model, but there has also been a large amount of interest in learning these vectors for use in other tasks. Some methods are outlined in Section 5.6.) The interesting thing about expressing words as vectors of real numbers is that each element of the vector could reflect a different aspect of the word. For example, there may be an element in the vector determining whether a particular word under consideration could be a noun, or another element in the vector expressing whether the word is an animal, or another element that expresses whether the word is countable or not. (In reality, it is rare that single elements in the vector have such an intuitive meaning unless we impose some sort of constraint, such as sparsity constraints [75].) Figure 13 shows an example of how to define parameters that allow you to look up a vector in DyNet.

# Define the lookup parameters at model definition time
# VOCAB_SIZE is the number of words in the vocabulary
# EMBEDDING_SIZE is the length of the word embedding vector
M_p = model.add_lookup_parameters((VOCAB_SIZE, EMBEDDING_SIZE))
# At computation-graph construction time, look up the vector for word i
# (equivalently: m = M_p[i])
m = dy.lookup(M_p, i)

Figure 13: Code for looking up word embeddings in DyNet.

The vector $m$ then results from the concatenation of the word vectors for all of the words in the context, so $m = \mathrm{concat}(M_{\cdot,e_{t-2}}, M_{\cdot,e_{t-1}})$. Once we have this $m$, we run it through a hidden layer to obtain vector $h$. By doing so, the model can learn combination features that reflect information regarding multiple words in the context. This allows the model to be expressive enough to represent the more difficult cases in Figure 6. For example, given the context "cows eat", if some elements of the corresponding word vector identify the word as a "large farm animal" (e.g. "cow", "horse", "goat"), while other elements correspond to "eat" and all of its relatives ("consume", "chew", "ingest"), then we could potentially learn a unit in the hidden layer that is active when we are in a context that represents "things farm animals eat".

Next, we calculate the score vector for each word: $s \in \mathbb{R}^{|V|}$. This is done by performing an affine transform of the hidden vector $h$ with a weight matrix $W_{hs}$ and adding a bias vector $b_s$. Finally, we get a probability estimate $p$ by running the calculated scores through a softmax function, like we did in the log-linear language models. For training, if we know the next word $e_t$, we can also calculate the loss function as follows, similarly to the log-linear model:

$\ell = -\log p_{e_t}$ (43)

DyNet has a convenience function that, given a score vector $s$ and the index of the true word $e_t$, will calculate the negative log likelihood loss:

loss = dy.pickneglogsoftmax(s, e_t)
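Putting Equation 42 and Equation 43 together, a loss calculation for the tri-gram feed-forward language model might look something like the following sketch. The sizes and parameter names (M_p, W_mh_p, W_hs_p, and so on) are assumptions for illustration, not the tutorial's reference implementation of the exercise below.

import dynet as dy

VOCAB_SIZE = 10000   # assumed vocabulary size
EMB_SIZE = 64        # assumed embedding length
HID_SIZE = 128       # assumed hidden layer size

model = dy.Model()
M_p = model.add_lookup_parameters((VOCAB_SIZE, EMB_SIZE))      # word embeddings M
W_mh_p = model.add_parameters((HID_SIZE, 2 * EMB_SIZE))        # context -> hidden
b_h_p = model.add_parameters(HID_SIZE)
W_hs_p = model.add_parameters((VOCAB_SIZE, HID_SIZE))          # hidden -> scores
b_s_p = model.add_parameters(VOCAB_SIZE)

def calc_loss(e_t2, e_t1, e_t):
    """Negative log likelihood of word e_t given the context (e_t2, e_t1)."""
    dy.renew_cg()
    # Concatenate the embeddings of the two context words (first line of Equation 42)
    m = dy.concatenate([dy.lookup(M_p, e_t2), dy.lookup(M_p, e_t1)])
    # Hidden layer
    h = dy.tanh(dy.parameter(W_mh_p) * m + dy.parameter(b_h_p))
    # Scores and loss
    s = dy.parameter(W_hs_p) * h + dy.parameter(b_s_p)
    return dy.pickneglogsoftmax(s, e_t)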

The reason why the neural network formulation is nice becomes apparent when we compare it to the $n$-gram language models in Section 3:

Better generalization of contexts:

$n$-gram language models treat each word as its own discrete entity. By using input embeddings $M$, it is possible to group together similar words so they behave similarly in the prediction of the next word. In order to do the same thing, $n$-gram models would have to explicitly learn word classes, and using these classes effectively is not a trivial problem [15].

More generalizable combination of words into contexts:

In an $n$-gram language model, we would have to remember separate parameters for every combination of a "farm animal" word and an "eating" word to represent the context "things farm animals eat". This would be quadratic in the number of words in each class, and thus learning these parameters is difficult in the face of limited training data. Neural networks handle this problem by learning nodes in the hidden layer that can represent this quadratic combination in a feature-efficient way.

Ability to skip previous words:

$n$-gram models generally fall back sequentially from longer contexts (e.g. "the two previous words $e_{t-2}, e_{t-1}$") to shorter contexts (e.g. "the previous word $e_{t-1}$"), but this doesn't allow them to "skip" a word and reference only, for example, "the word two words ago, $e_{t-2}$". Log-linear models and neural networks can handle this skipping naturally.

5.6 Further Reading

In addition to the methods described above, there are a number of extensions to neural-network language models that are worth discussing.

Softmax approximations:

One problem with the training of log-linear or neural network language models is that for every training example, they have to calculate the large score vector $s \in \mathbb{R}^{|V|}$, then run a softmax over it to get probabilities. As the vocabulary size $|V|$ grows larger, this can become quite time-consuming. As a result, there are a number of ways to reduce training time. One example is methods that sample a subset of the vocabulary $V'$ where $|V'| \ll |V|$, then calculate the scores and approximate the loss over this smaller subset. Examples of these include methods that simply try to get the true word to have a higher score (by some margin) than others in the subsampled set [27] and more probabilistically motivated methods, such as importance sampling [10] or noise-contrastive estimation (NCE; [74]). Interestingly, for other objective functions such as linear regression and a special variety of softmax called the spherical softmax, it is possible to calculate the objective function in ways that do not scale linearly with the vocabulary size [111].

Other softmax structures:

Another interesting trick to improve training speed is to create a softmax that is structured so that its loss functions can be computed efficiently. One way to do so is the class-based softmax [40], which assigns each word $e_t$ to a class $c_t$, then divides computation into two steps: predicting the probability of the class $c_t$ given the context, then predicting the probability of the word $e_t$ given the class and the current context, $P(e_t \mid c_t, \mathrm{context})$. The advantage of this method is that we only need to calculate scores for the correct class $c_t$ out of $|C|$ classes, then for the correct word out of the words belonging to class $c_t$, of which there are $|V_{c_t}|$. Thus, our computational complexity becomes $O(|C| + |V_{c_t}|)$ instead of $O(|V|)$. (Question: what is the ideal class size to achieve the best computational efficiency?) The hierarchical softmax [73] takes this a step further by predicting words along a binary-branching tree, which results in a computational complexity of $O(\log_2 |V|)$.

Other models to learn word representations:

As mentioned in Section 5.5, we learn word embeddings $M$ as a by-product of training our language models. One very nice feature of word representations is that language models can be trained purely on raw text, but the resulting representations can capture semantic or syntactic features of the words, and thus can be used to effectively improve down-stream tasks that don't have a lot of annotated data, such as part-of-speech tagging or parsing [107]. (Manning (2015) called word embeddings the "Sriracha sauce of NLP", because you can add them to anything to make it better: http://nlp.stanford.edu/~manning/talks/NAACL2015-VSM-Compositional-Deep-Learning.pdf) Because of their usefulness, there have been an extremely large number of approaches proposed to learn different varieties of word embeddings (so many that Daumé III (2016) called word embeddings the "Sriracha sauce of NLP: it sounds like a good idea, you add too much, and now you're crying": https://twitter.com/haldaume3/status/706173575477080065), from early work based on distributional similarity and dimensionality reduction [93, 108] to more recent models based on predictive models similar to language models [107, 71], with the general current thinking being that predictive models are the more effective and flexible of the two [5]. The most well-known methods are the continuous-bag-of-words and skip-gram models implemented in the software word2vec (https://code.google.com/archive/p/word2vec/), which define simple objectives for predicting words using the immediately surrounding context or vice-versa. word2vec uses a sampling-based approach and parallelization to easily scale up to large datasets, which is perhaps the primary reason for its popularity. One thing to note is that these methods are not language models in themselves, as they do not calculate a probability of the sentence $P(E)$, but many of the parameter estimation techniques can be shared.

5.7 Exercise

In the exercise for this chapter, we will use DyNet to construct a feed-forward language model and evaluate its performance.

Writing the program will entail:

  • Writing a function to read in the data and turn it into numerical word IDs.

  • Writing a function to calculate the loss function by looking up word embeddings, then running them through a multi-layer perceptron, then predicting the result.

  • Writing code to perform training using this function.

  • Writing evaluation code that measures the perplexity on a held-out data set.

Language modeling accuracy should be measured in the same way as previous exercises and compared with the previous models.

Potential improvements to the model include tuning the various parameters of the model. How big should the hidden layer be? Should we add additional hidden layers? Which optimizer with which learning rate should we use? What happens if we implement one of the more efficient versions of the softmax explained in Section 5.6?

6 Recurrent Neural Network Language Models

The neural-network models presented in the previous chapter were essentially more powerful and generalizable versions of $n$-gram models. In this chapter, we talk about language models based on recurrent neural networks (RNNs), which have the additional ability to capture long-distance dependencies in language.

6.1 Long Distance Dependencies in Language

Figure 14: An example of long-distance dependencies in language.

Before speaking about RNNs in general, it’s a good idea to think about the various reasons a model with a limited history would not be sufficient to properly model all phenomena in language.

One example of a long-range grammatical constraint is shown in Figure 14. In this example, there is a strong constraint that the starting "he" or "she" and the final "himself" or "herself" must match in gender. Similarly, based on the subject of the sentence, the conjugation of the verb will change. These sorts of dependencies exist regardless of the number of intervening words, and models with a finite history, like the $n$-gram models mentioned in the previous chapter, will never be able to appropriately capture them. These dependencies are frequent in English, but are even more prevalent in languages such as Russian, which has a large number of forms for each word, which must match in case and gender with other words in the sentence (see https://en.wikipedia.org/wiki/Russian_grammar for an overview).

Another example where long-term dependencies exist is in selectional preferences [85]. In a nutshell, selectional preferences are basically common sense knowledge of “what will do what to what”. For example, “I ate salad with a fork” is perfectly sensible with “a fork” being a tool, and “I ate salad with my friend” also makes sense, with “my friend” being a companion. On the other hand, “I ate salad with a backpack” doesn’t make much sense because a backpack is neither a tool for eating nor a companion. These selectional preference violations lead to nonsensical sentences and can also span across an arbitrary length due to the fact that subjects, verbs, and objects can be separated by a great distance.

Finally, there are also dependencies regarding the topic or register of the sentence or document. For example, it would be strange if a document that was discussing a technical subject suddenly started going on about sports – a violation of topic consistency. It would also be unnatural for a scientific paper to suddenly use informal or profane language – a lack of consistency in register.

These and other examples describe why we need to model long-distance dependencies to create workable applications.

6.2 Recurrent Neural Networks

Figure 15: Examples of computation graphs for recurrent neural networks. (a) shows a single time step. (b) is the unrolled network. (c) is a simplified version of the unrolled network, where gray boxes indicate a function that is parameterized (in this case by $W_{xh}$, $W_{hh}$, and $b_h$).

Recurrent neural networks (RNNs; [33]) are a variety of neural network that makes it possible to model these long-distance dependencies. The idea is simply to add a connection that references the previous hidden state $h_{t-1}$ when calculating the current hidden state $h_t$, written in equations as:

$h_t = \begin{cases} \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h) & t \ge 1 \\ \mathbf{0} & \text{otherwise} \end{cases}$ (44)

As we can see, for time steps $t \ge 1$, the only difference from the hidden layer in a standard neural network is the addition of the connection $W_{hh} h_{t-1}$ from the hidden state at time step $t-1$ to that at time step $t$. This is a recursive equation, since it uses $h_{t-1}$ from the previous time step. A single time step of a recurrent neural network is shown visually in the computation graph in Figure 15(a).
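As a minimal illustration, a single time step of Equation 44 could be written in DyNet as follows, assuming the parameters W_xh_p, W_hh_p, and b_h_p have been created with model.add_parameters(); the names and sizes here are hypothetical, not the tutorial's code.

import dynet as dy

IN_SIZE, HID_SIZE = 10, 20            # hypothetical sizes
model = dy.Model()
W_xh_p = model.add_parameters((HID_SIZE, IN_SIZE))
W_hh_p = model.add_parameters((HID_SIZE, HID_SIZE))
b_h_p = model.add_parameters(HID_SIZE)

def rnn_step(x_t, h_prev):
    """One application of the recurrence in Equation 44 (for t >= 1)."""
    W_xh, W_hh, b_h = dy.parameter(W_xh_p), dy.parameter(W_hh_p), dy.parameter(b_h_p)
    return dy.tanh(W_xh * x_t + W_hh * h_prev + b_h)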

When performing this visual display of RNNs, it is also common to "unroll" the neural network in time as shown in Figure 15(b), which makes it possible to explicitly see the information flow between multiple time steps. From unrolling the network, we can see that we are still dealing with a standard computation graph in the same form as our feed-forward networks, on which we can still do forward computation and backward propagation, making it possible to learn our parameters. It also makes clear that the recurrent network has to start somewhere with an initial hidden state $h_0$. This initial state is often set to be a vector full of zeros, treated as a parameter and learned, or initialized according to some other information (more on this in Section 7).

Finally, for simplicity, it is common to abbreviate the whole recurrent neural network step with a single block "RNN", as shown in Figure 15(c). In this example, the boxes corresponding to RNN function applications are gray, to show that they are internally parameterized with $W_{xh}$, $W_{hh}$, and $b_h$. We will use this convention in the future to represent parameterized functions.

RNNs make it possible to model long-distance dependencies because they have the ability to pass information between time steps. For example, if some of the nodes in $h_t$ encode the information that "the subject of the sentence is male", it is possible to pass on this information to $h_{t+1}$, which can in turn pass it on to $h_{t+2}$ and onward to the end of the sentence. This ability to pass information across an arbitrary number of consecutive time steps is the strength of recurrent neural networks, and allows them to handle the long-distance dependencies described in Section 6.1.

Once we have the basics of RNNs, applying them to language modeling is (largely) straight-forward [72]. We simply take the feed-forward language model of Equation  42 and enhance it with a recurrent connection as follows:

$m_t = M_{\cdot,e_{t-1}}$
$h_t = \begin{cases} \tanh(W_{mh} m_t + W_{hh} h_{t-1} + b_h) & t \ge 1 \\ \mathbf{0} & \text{otherwise} \end{cases}$
$p_t = \mathrm{softmax}(W_{hs} h_t + b_s)$ (45)

One thing that should be noted is that, compared to the feed-forward language model, we are only feeding in the previous word $e_{t-1}$ instead of the two previous words. The reason for this is that (if things go well) we can expect that information about $e_{t-2}$ and all previous words is already included in $h_{t-1}$, making it unnecessary to feed in this information directly.

Also, for simplicity of notation, it is common to abbreviate the equation for $h_t$ with a function $\mathrm{RNN}(\cdot)$, following the simplified view of drawing RNNs in Figure 15(c):

$h_t = \mathrm{RNN}(m_t, h_{t-1})$ (46)
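The following sketch shows one way Equations 45 and 46 might be implemented in DyNet using its built-in dy.SimpleRNNBuilder, which packages the recurrent step of Equation 44. The sizes, parameter names, and the explicit start_id sentence-start symbol are assumptions for illustration, not the tutorial's code.

import dynet as dy

VOCAB_SIZE, EMB_SIZE, HID_SIZE = 10000, 64, 128   # hypothetical sizes
model = dy.Model()
M_p = model.add_lookup_parameters((VOCAB_SIZE, EMB_SIZE))   # word embeddings M
W_hs_p = model.add_parameters((VOCAB_SIZE, HID_SIZE))       # hidden -> scores
b_s_p = model.add_parameters(VOCAB_SIZE)
# One layer of the simple tanh recurrence of Equation 44; swapping in
# dy.LSTMBuilder or dy.GRUBuilder would change only this line.
rnn = dy.SimpleRNNBuilder(1, EMB_SIZE, HID_SIZE, model)

def calc_sentence_loss(word_ids, start_id):
    """Negative log likelihood of a sentence; word_ids ends with the
    end-of-sentence ID, and start_id is an assumed sentence-start symbol."""
    dy.renew_cg()
    W_hs, b_s = dy.parameter(W_hs_p), dy.parameter(b_s_p)
    state = rnn.initial_state()
    losses, prev_word = [], start_id
    for word in word_ids:
        state = state.add_input(dy.lookup(M_p, prev_word))   # m_t
        s_t = W_hs * state.output() + b_s                    # scores from h_t
        losses.append(dy.pickneglogsoftmax(s_t, word))
        prev_word = word
    return dy.esum(losses)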

6.3 The Vanishing Gradient and Long Short-term Memory

Figure 16: An example of the vanishing gradient problem.

However, while the RNNs in the previous section are conceptually simple, they also have problems: the vanishing gradient problem and its closely related cousin, the exploding gradient problem.

A conceptual example of the vanishing gradient problem is shown in Figure 16. In this example, we have a recurrent neural network that makes a prediction after several time steps, a model that could be used to classify documents or perform any other kind of prediction over a sequence of text. After it makes its prediction, it receives a loss that is back-propagated through all time steps of the network. However, at each time step, when we run the back-propagation algorithm, the gradient gets smaller and smaller, and by the time we get back to the beginning of the sentence, the gradient is so small that it has essentially no effect on the parameters that need to be updated. The reason this happens is that unless the derivative $\partial h_t / \partial h_{t-1}$ of the recurrent function is exactly one, it will tend to either diminish or amplify the gradient of the loss, and when this diminishment or amplification is repeated over many time steps, it has an exponential effect on the gradient.

This is particularly detrimental in the case where we receive a loss only once at the end of the sentence, like the example above. One real-life example of such a scenario is document classification, and because of this, RNNs have been less successful in this task than other methods such as convolutional neural networks, which do not suffer from the vanishing gradient problem [59, 63]. It has been shown that pre-training an RNN as a language model before attempting to perform classification can help alleviate this problem to some extent [29].

One method to solve this problem, in the case of diminishing gradients, is the use of a neural network architecture that is specifically designed to ensure that the derivative of the recurrent function is exactly one. A neural network architecture designed for this very purpose, which has enjoyed quite a bit of success and popularity in a wide variety of sequential processing tasks, is the long short-term memory (LSTM; [49]) neural network architecture. The most fundamental idea behind the LSTM is that in addition to the standard hidden state $h_t$ used by most neural networks, it also has a memory cell $c_t$, for which the gradient $\partial c_t / \partial c_{t-1}$ is exactly one. Because this gradient is exactly one, information stored in the memory cell does not suffer from vanishing gradients, and thus LSTMs can capture long-distance dependencies more effectively than standard recurrent neural networks.

Figure 17: A single time step of long short-term memory (LSTM). The information flow between the hidden state $h$ and the cell $c$ is modulated using parameterized input and output gates.

So how do LSTMs do this? To understand this, let’s take a look at the LSTM architecture in detail, as shown in Figure  17 and the following equations:

$u_t = \tanh(W_{xu} x_t + W_{hu} h_{t-1} + b_u)$ (47)
$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)$ (48)
$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$ (49)
$c_t = i_t \odot u_t + c_{t-1}$ (50)
$h_t = o_t \odot \tanh(c_t)$ (51)

Taking the equations one at a time: Equation 47 is the update $u_t$, which is basically the same as the RNN update in Equation 44; it takes in the input and the previous hidden state, performs an affine transform, and runs it through the tanh non-linearity.

Equation 48 and Equation 49 are the input gate $i_t$ and output gate $o_t$ of the LSTM respectively. The function of "gates", as indicated by their name, is to either allow information to pass through or to block it from passing. Both of these gates perform an affine transform followed by the sigmoid function, also called the logistic function. (To be more accurate, a sigmoid function is any mathematical function having an s-shaped curve, so the tanh function is also a type of sigmoid function, and the logistic function is itself a slightly broader class of functions. However, in the machine learning literature, "sigmoid" is usually used to refer to the particular variety in Equation 52.)

$\sigma(x) = \frac{1}{1 + e^{-x}}$ (52)

which squashes the input between 0 (which it approaches as $x$ becomes more negative) and 1 (which it approaches as $x$ becomes more positive). The output of the sigmoid is then used to perform a componentwise multiplication $\odot$ with the output of another function. This results in the "gating" effect: if the result of the sigmoid is close to one for a particular vector position, it will have little effect on the input (the gate is "open"), and if the result of the sigmoid is close to zero, it will block the input, setting the resulting value to zero (the gate is "closed").

Equation 50 is the most important equation in the LSTM, as it is the equation that implements the intuition that $\partial c_t / \partial c_{t-1}$ must be equal to one, which allows us to conquer the vanishing gradient problem. This equation sets $c_t$ to be equal to the update $u_t$, modulated by the input gate $i_t$, plus the cell value for the previous time step $c_{t-1}$. Because we are directly adding $c_{t-1}$ to $c_t$, if we consider only this part of Equation 50, we can easily confirm that the gradient will indeed be one. (In actuality, the gradient is also affected indirectly through the gates, which depend on the previous hidden state and thus on $c_{t-1}$, so it is not exactly one, but the effect is relatively indirect; especially for vector elements where the gates are close to zero, the effect will be minimal.)

Finally, Equation 51 calculates the next hidden state of the LSTM. This is calculated by using a tanh function to scale the cell value $c_t$ between -1 and 1, then modulating the output using the output gate value $o_t$. This will be the value actually used in any downstream calculation, such as the calculation of language model probabilities.

(53)

6.4 Other RNN Variants

Because of the importance of recurrent neural networks in a number of applications, many variants of these networks exist. One modification to the standard LSTM that is used widely (in fact so widely that most people who refer to “LSTM” are now referring to this variant) is the addition of a forget gate [38]. The equations for the LSTM with a forget gate are shown below:

$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$ (54)
$c_t = i_t \odot u_t + f_t \odot c_{t-1}$ (55)

Compared to the standard LSTM, there are two changes. First, in Equation 54, we add an additional gate, the forget gate $f_t$. Second, in Equation 55, we use this gate to modulate the passing of the previous cell $c_{t-1}$ to the current cell $c_t$. This forget gate is useful in that it allows the cell to easily clear its memory when justified: for example, let's say that the model has remembered that it has seen a particular word strongly correlated with another word, such as "he" and "himself" or "she" and "herself" in the example above. In this case, we would probably like the model to remember "he" until it is used to predict "himself", then forget that information, as it is no longer relevant. Forget gates have the advantage of allowing this sort of fine-grained information flow control, but they also come with the risk that if $f_t$ is set to zero all the time, the model will forget everything and lose its ability to handle long-distance dependencies. Thus, at the beginning of neural network training, it is common to initialize the bias of the forget gate to a somewhat large value (e.g. 1), which will make the neural net start training without using the forget gate, and only gradually start forgetting content after the net has been trained to some extent.
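As a rough sketch of the lstm_step function suggested in the exercise at the end of this chapter, the full set of equations (47-49, 54, 55, and 51) could be implemented along the following lines. The params dictionary, the parameter names, and the sizes are assumptions for illustration, not the tutorial's code.

import dynet as dy

X_DIM, H_DIM = 64, 128                 # hypothetical sizes
model = dy.Model()
# One (W_x, W_h, b) triple per part of the LSTM: update u, input gate i,
# output gate o, and forget gate f. The names are illustrative only.
params = {name: (model.add_parameters((H_DIM, X_DIM)),
                 model.add_parameters((H_DIM, H_DIM)),
                 model.add_parameters(H_DIM))
          for name in ("u", "i", "o", "f")}

def lstm_step(x_t, h_prev, c_prev):
    """One LSTM time step with a forget gate."""
    def affine(name):
        W_x, W_h, b = (dy.parameter(p) for p in params[name])
        return W_x * x_t + W_h * h_prev + b
    u_t = dy.tanh(affine("u"))                        # update (Eq. 47)
    i_t = dy.logistic(affine("i"))                    # input gate (Eq. 48)
    o_t = dy.logistic(affine("o"))                    # output gate (Eq. 49)
    f_t = dy.logistic(affine("f"))                    # forget gate (Eq. 54)
    c_t = dy.cmult(i_t, u_t) + dy.cmult(f_t, c_prev)  # memory cell (Eq. 55)
    h_t = dy.cmult(o_t, dy.tanh(c_t))                 # hidden state (Eq. 51)
    return h_t, c_t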

While the LSTM provides an effective solution to the vanishing gradient problem, it is also rather complicated (as many readers have undoubtedly been feeling). One simpler RNN variant that has nonetheless proven effective is the gated recurrent unit (GRU; [24]), expressed in the following equations:

$r_t = \sigma(W_{xr} x_t + W_{hr} h_{t-1} + b_r)$ (56)
$z_t = \sigma(W_{xz} x_t + W_{hz} h_{t-1} + b_z)$ (57)
$\tilde{h}_t = \tanh(W_{xh} x_t + W_{hh} (r_t \odot h_{t-1}) + b_h)$ (58)
$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$ (59)

The most characteristic element of the GRU is Equation 59, which interpolates between a candidate for the updated hidden state $\tilde{h}_t$ and the previous state $h_{t-1}$. This interpolation is modulated by an update gate $z_t$ (Equation 57): if the update gate is close to one, the GRU will use the new candidate hidden value, and if it is close to zero, it will use the previous value. The candidate hidden state is calculated by Equation 58, which is similar to a standard RNN update but includes an additional modulation of the hidden state input by a reset gate $r_t$ calculated in Equation 56. Compared to the LSTM, the GRU has slightly fewer parameters (it performs one less parameterized affine transform) and does not have a separate concept of a "cell". Thus, GRUs have been used by some to conserve memory or computation time.
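An analogous sketch of a gru_step implementing Equations 56-59 is shown below; again the parameter names, sizes, and the params dictionary are hypothetical. The interpolation of Equation 59 is written in the algebraically equivalent form $h_{t-1} + z_t \odot (\tilde{h}_t - h_{t-1})$.

import dynet as dy

X_DIM, H_DIM = 64, 128                 # hypothetical sizes
model = dy.Model()
params = {name: (model.add_parameters((H_DIM, X_DIM)),
                 model.add_parameters((H_DIM, H_DIM)),
                 model.add_parameters(H_DIM))
          for name in ("r", "z", "h")}

def gru_step(x_t, h_prev):
    """One GRU time step."""
    def params_for(name):
        return [dy.parameter(p) for p in params[name]]
    W_xr, W_hr, b_r = params_for("r")
    W_xz, W_hz, b_z = params_for("z")
    W_xh, W_hh, b_h = params_for("h")
    r_t = dy.logistic(W_xr * x_t + W_hr * h_prev + b_r)                 # reset gate (Eq. 56)
    z_t = dy.logistic(W_xz * x_t + W_hz * h_prev + b_z)                 # update gate (Eq. 57)
    h_tilde = dy.tanh(W_xh * x_t + W_hh * dy.cmult(r_t, h_prev) + b_h)  # candidate (Eq. 58)
    # (1 - z) * h_prev + z * h_tilde, written as h_prev + z * (h_tilde - h_prev) (Eq. 59)
    return h_prev + dy.cmult(z_t, h_tilde - h_prev)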

Figure 18: An example of (a) stacked RNNs and (b) stacked RNNs with residual connections.

One other important modification we can make to RNNs, LSTMs, GRUs, or really any other neural network layer is simple but powerful: stack multiple layers on top of each other (stacked RNNs, Figure 18(a)). For example, in a 3-layer stacked RNN, the calculation at time step $t$ would look as follows:

$h^{(1)}_t = \mathrm{RNN}^{(1)}(x_t, h^{(1)}_{t-1})$
$h^{(2)}_t = \mathrm{RNN}^{(2)}(h^{(1)}_t, h^{(2)}_{t-1})$
$h^{(3)}_t = \mathrm{RNN}^{(3)}(h^{(2)}_t, h^{(3)}_{t-1})$

where $h^{(n)}_t$ is the hidden state for the $n$th layer at time step $t$, and $\mathrm{RNN}(\cdot)$ is an abbreviation for the RNN equation in Equation 44. Similarly, we could substitute this function for $\mathrm{LSTM}(\cdot)$, $\mathrm{GRU}(\cdot)$, or any other recurrence step. The reason why stacking multiple layers on top of each other is useful is the same reason that non-linearities proved useful in the standard neural networks introduced in Section 5: they are able to progressively extract more abstract features of the current words or sentences. For example, [98] find evidence that in a two-layer stacked LSTM, the first layer tends to learn granular features of words such as part-of-speech tags, while the second layer learns more abstract features of the sentence such as voice or tense.

While stacking RNNs has potential benefits, it also has the disadvantage that it suffers from the vanishing gradient problem in the vertical direction, just as the standard RNN did in the horizontal direction. That is to say, the gradient will be back-propagated from the layer close to the output ($h^{(3)}$) to the layer close to the input ($h^{(1)}$), and the gradient may vanish in the process, causing the earlier layers of the network to be under-trained. A simple solution to this problem, analogous to what the LSTM does for vanishing gradients over time, is residual networks (Figure 18(b)) [47]. The idea behind these networks is simply to add the output of the previous layer directly to the result of the next layer as follows:

$h^{(n)}_t = \mathrm{RNN}^{(n)}(h^{(n-1)}_t, h^{(n)}_{t-1}) + h^{(n-1)}_t$

As a result, like the LSTM, there is no vanishing of gradients due to passing through the $\mathrm{RNN}(\cdot)$ function, and even very deep networks can be learned effectively.
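As an illustration of how such residual connections might be added on top of DyNet's RNN builders, here is a sketch of one time step of a stacked RNN with residual connections. The builder setup, the states list, and the assumption that all layers share the same hidden size are illustrative choices, not the tutorial's code.

import dynet as dy

EMB, HID, LAYERS = 64, 128, 3                      # hypothetical sizes
model = dy.Model()
# One single-layer builder per stacked layer (all with the same hidden size,
# so that the residual additions are well-defined)
builders = [dy.SimpleRNNBuilder(1, EMB if i == 0 else HID, HID, model)
            for i in range(LAYERS)]

def stacked_residual_step(x_t, states):
    """One time step through all layers; states holds one RNN state per layer."""
    new_states, h = [], x_t
    for layer, state in enumerate(states):
        new_state = state.add_input(h)
        out = new_state.output()
        # Residual connection: add the layer's input to its output
        # (skipped for the first layer, whose input dimension differs)
        h = out + h if layer > 0 else out
        new_states.append(new_state)
    return h, new_states

# Usage: after dy.renew_cg(), states = [b.initial_state() for b in builders]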

6.5 Online, Batch, and Minibatch Training

As the observant reader may have noticed, the previous sections have gradually introduced more and more complicated models: we started with a simple linear model, added a hidden layer, added recurrence, added the LSTM, and added more layers of LSTMs. While these more expressive models have the ability to model data with higher accuracy, they also come with costs: a greatly expanded parameter space (creating more potential for overfitting) and more complicated operations (greatly increasing the computational cost of training). This section describes an effective technique to improve the stability and computational efficiency of training these more complicated networks: minibatching.

Up until this point, we have used the stochastic gradient descent learning algorithm introduced in Section 4.2, which performs updates according to the following iterative process. This type of learning, which performs updates a single example at a time, is called online learning.

1:procedure Online
2:     for several epochs of training do
3:         for each training example in the data do
4:              Calculate gradients of the loss
5:              Update the parameters according to this gradient
6:         end for
7:     end for
8:end procedure
Algorithm 1 A fully online training algorithm

In contrast, we can also think of a batch learning algorithm, which treats the entire data set as a single unit, calculates the gradients for this unit, then only performs a parameter update after making a full pass through the data.

1:procedure Batch
2:     for several epochs of training do
3:         for each training example in the data do
4:              Calculate and accumulate gradients of the loss
5:         end for
6:         Update the parameters according to the accumulated gradient
7:     end for
8:end procedure
Algorithm 2 A batch learning algorithm

These two update strategies have trade-offs.

  • Online training algorithms usually find a relatively good solution more quickly, as they don’t need to make a full pass through the data before performing an update.

  • However, at the end of training, batch learning algorithms can be more stable, as they are not overly influenced by the most recently seen training examples.

  • Batch training algorithms are also more prone to falling into local optima; the randomness in online training algorithms often allows them to bounce out of local optima and find a better global solution.

Minibatching is a happy medium between these two strategies. Basically, minibatched training is similar to online training, but instead of processing a single training example at a time, we calculate the gradient for $n$ training examples at a time. In the extreme case of $n = 1$, this is equivalent to standard online training, and in the other extreme, where $n$ equals the size of the corpus, it is equivalent to fully batched training. In the case of training language models, it is common to choose minibatches of anywhere from a few sentences to a few dozen sentences to process at a single time. As we increase the number of training examples $n$, each parameter update becomes more informative and stable, but the amount of time to perform one update increases, so it is common to choose an $n$ that allows for a good balance between the two.

Figure 19: An example of combining multiple operations together when minibatching.

One other major advantage of minibatching is that, by using a few tricks, it is actually possible to make the simultaneous processing of $n$ training examples significantly faster than processing $n$ different examples separately. Specifically, by taking multiple training examples and grouping similar operations together to be processed simultaneously, we can realize large gains in computational efficiency due to the fact that modern hardware (particularly GPUs, but also CPUs) has very efficient vector processing instructions that can be exploited with appropriately structured inputs. As shown in Figure 19, common examples of this in neural networks include grouping together the matrix-vector multiplications from multiple examples into a single matrix-matrix multiplication, or performing an element-wise operation (such as $\tanh$) over multiple vectors at the same time as opposed to processing single vectors individually. Luckily, in DyNet, the library we are using, this is relatively easy to do, as much of the machinery for each elementary operation is handled automatically. We'll give an example of the changes that we need to make when implementing an RNN language model below.

Figure 20: An example of minibatching in an RNN language model.

The basic idea in the batched RNN language model (Figure 20) is that, instead of processing a single sentence, we process multiple sentences at the same time. So, instead of looking up a single word embedding, we look up multiple word embeddings (in DyNet, this is done by replacing the lookup function with the lookup_batch function, where we pass in an array of word IDs instead of a single word ID). We then run these batched word embeddings through the RNN and softmax as normal, resulting in two separate probability distributions over words in the first and second sentences. We then calculate the loss for each word (again in DyNet, replacing the pickneglogsoftmax function with the pickneglogsoftmax_batch function and passing an array of word IDs). We then sum together the losses and use this as the loss for the entire mini-batch.
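A minimal sketch of such a batched loss calculation might look as follows, assuming the sentences in the batch have already been padded to the same length and transposed so that word_ids_by_step[t] holds the $t$-th word ID of every sentence (see the padding sketch below). The function and parameter names are hypothetical, not the tutorial's code, and for brevity the sketch does not apply the 0/1 masks described below.

import dynet as dy

def calc_batch_loss(word_ids_by_step, M_p, W_hs_p, b_s_p, rnn):
    """Batched loss; M_p, W_hs_p, b_s_p and the builder rnn come from the
    (unbatched) language model above."""
    dy.renew_cg()
    W_hs, b_s = dy.parameter(W_hs_p), dy.parameter(b_s_p)
    state = rnn.initial_state()
    losses = []
    # At each step, feed word t of every sentence in the batch at the same time
    for prev_ids, next_ids in zip(word_ids_by_step, word_ids_by_step[1:]):
        m_t = dy.lookup_batch(M_p, prev_ids)       # batched embedding lookup
        state = state.add_input(m_t)
        s_t = W_hs * state.output() + b_s          # scores for every sentence in the batch
        losses.append(dy.pickneglogsoftmax_batch(s_t, next_ids))
    # Sum over time steps, then over the sentences in the batch
    return dy.sum_batches(dy.esum(losses))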

One sticking point, however, is that we may need to create batches with sentences of different sizes, as also shown in the figure. In this case, it is common to perform sentence padding and masking to make sure that sentences of different lengths are treated properly. Padding works by simply adding the "end-of-sentence" symbol to the shorter sentences until they are of the same length as the longest sentence in the batch. Masking works by multiplying all loss functions calculated over these padded symbols by zero, ensuring that the losses for sentence-end symbols don't get counted twice for the shorter sentences.

By taking these two measures, it becomes possible to process sentences of different lengths, but there is still a problem: if we perform lots of padding on sentences of vastly different lengths, we’ll end up wasting a lot of computation on these padded symbols. To fix this problem, it is also common to sort the sentences in the corpus by length before creating mini-batches to ensure that sentences in the same mini-batch are approximately the same size.
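A sketch of the padding and masking bookkeeping (pure Python, with a hypothetical end-of-sentence ID eos_id) might look like this:

def make_batch(sentences, eos_id):
    """Pad a list of ID sequences and build 0/1 masks for the padded positions."""
    max_len = max(len(s) for s in sentences)
    padded, masks = [], []
    for s in sentences:
        padded.append(s + [eos_id] * (max_len - len(s)))
        masks.append([1] * len(s) + [0] * (max_len - len(s)))
    # Transpose so that position t gives the t-th word of every sentence in the batch,
    # which is the layout that lookup_batch / pickneglogsoftmax_batch expect.
    words_by_step = [[sent[t] for sent in padded] for t in range(max_len)]
    masks_by_step = [[m[t] for m in masks] for t in range(max_len)]
    return words_by_step, masks_by_step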

6.6 Further Reading

Because of the prevalence of RNNs in a number of tasks both on natural language and other data, there is significant interest in extensions to them. The following lists just a few other research topics that people are handling:

What can recurrent neural networks learn?:

RNNs are surprisingly powerful tools for language, and thus many people have been interested in what exactly is going on inside them. [57] demonstrate ways to visualize the internal states of LSTM networks, and find that some nodes are in charge of keeping track of the length of sentences, whether a parenthesis has been opened, and other salient features of sentences. [65] show ways to analyze and visualize which parts of the input are contributing to particular decisions made by an RNN-based model, by back-propagating information through the network.

Other RNN architectures:

There are also quite a few other recurrent network architectures. [42] perform an interesting study in which they ablate various parts of the LSTM and attempt to find the best architecture for particular tasks. [123] take this a step further, explicitly training a model to find the best neural network architecture.

6.7 Exercise

In the exercise for this chapter, we will construct a recurrent neural network language model using LSTMs.

Writing the program will entail:

  • Writing a function such as lstm_step or gru_step that takes the input and the hidden state (and, for the LSTM, the memory cell) of the previous time step and updates them according to the appropriate equations. For reference, in DyNet, the componentwise multiply and sigmoid functions are dy.cmult and dy.logistic respectively.

  • Adding this function to the previous neural network language model and measuring the effect on the held-out set.

  • Ideally, implement mini-batch training by using the functionality implemented in DyNet, lookup_batch and pickneglogsoftmax_batch.

Language modeling accuracy should be measured in the same way as previous exercises and compared with the previous models.

Potential improvements to the model include measuring the speed/stability improvements achieved by mini-batching, and comparing the differences between recurrent architectures such as the RNN, GRU, and LSTM.

7 Neural Encoder-Decoder Models

From Section 3 to Section 6, we focused on the language modeling problem of calculating the probability $P(E)$ of a sequence of words $E$. In this section, we return to the statistical machine translation problem (mentioned in Section 2) of modeling the probability $P(E \mid F)$ of the output $E$ given the input $F$.

7.1 Encoder-decoder Models

The first model that we will cover is called an encoder-decoder model [22, 36, 53, 101]. The basic idea of the model is relatively simple: we have an RNN language model, but before starting calculation of the probabilities of the target sentence $E$, we first calculate the initial state of the language model using another RNN over the source sentence $F$. The name "encoder-decoder" comes from the idea that the first neural network running over $F$ "encodes" its information as a vector of real-valued numbers (the hidden state), then the second neural network used to predict $E$ "decodes" this information into the target sentence.

Figure 21: A computation graph of the encoder-decoder model.

If the encoder is expressed as $\mathrm{RNN}^{(f)}(\cdot)$, the decoder is expressed as $\mathrm{RNN}^{(e)}(\cdot)$, and we have a softmax that takes the decoder's hidden state at time step $t$ and turns it into a probability, then our model is expressed as follows (also shown in Figure 21):

$m^{(f)}_t = M^{(f)}_{\cdot,f_t}$
$h^{(f)}_t = \begin{cases} \mathrm{RNN}^{(f)}(m^{(f)}_t, h^{(f)}_{t-1}) & t \ge 1 \\ \mathbf{0} & \text{otherwise} \end{cases}$
$m^{(e)}_t = M^{(e)}_{\cdot,e_{t-1}}$
$h^{(e)}_t = \begin{cases} \mathrm{RNN}^{(e)}(m^{(e)}_t, h^{(e)}_{t-1}) & t \ge 1 \\ h^{(f)}_{|F|} & \text{otherwise} \end{cases}$
$p^{(e)}_t = \mathrm{softmax}(W_{hs} h^{(e)}_t + b_s)$ (60)

In the first two lines, we look up the embedding $m^{(f)}_t$ and calculate the encoder hidden state $h^{(f)}_t$ for the $t$th word in the source sequence $F$. We start with an empty vector $h^{(f)}_0 = \mathbf{0}$, and by