Log-Linear RNNs: Towards Recurrent Neural Networks with Flexible Prior Knowledge

07/08/2016 ∙ by Marc Dymetman, et al. ∙ Xerox

We introduce LL-RNNs (Log-Linear RNNs), an extension of Recurrent Neural Networks that replaces the softmax output layer by a log-linear output layer, of which the softmax is a special case. This conceptually simple move has two main advantages. First, it allows the learner to combat training data sparsity by modelling words (or more generally, output symbols) as complex combinations of attributes, without requiring that each combination be directly observed in the training data (as the softmax does). Second, it permits the inclusion of flexible prior knowledge in the form of a priori specified modular features, where the neural network component learns to dynamically control the weights of a log-linear distribution exploiting these features. We conduct experiments in the domain of language modelling of French that exploit morphological prior knowledge and show a substantial decrease in perplexity relative to a baseline RNN. We provide other motivating illustrations, and finally argue that the log-linear and the neural-network components contribute complementary strengths to the LL-RNN: the LL aspect allows the model to incorporate rich prior knowledge, while the NN aspect, according to the "representation learning" paradigm, allows the model to discover novel combinations of characteristics.


1 Introduction

Recurrent Neural Networks (Goodfellow et al., 2016, Chapter 10) have recently shown remarkable success in sequential data prediction and have been applied to such NLP tasks as Language Modelling (Mikolov et al., 2010), Machine Translation (Sutskever et al., 2014; Bahdanau et al., 2015), Parsing (Vinyals et al., 2014), Natural Language Generation (Wen et al., 2015) and Dialogue (Vinyals and Le, 2015), to name only a few. Especially popular RNN architectures in these applications have been models able to exploit long-distance correlations, such as LSTMs (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) and GRUs (Cho et al., 2014), which have led to groundbreaking performances.

RNNs (or more generally, Neural Networks), at the core, are machines that take as input a real vector and output a real vector, through a combination of linear and non-linear operations.

When working with symbolic data, some conversion between these real vectors and discrete values, for instance words in a certain vocabulary, becomes necessary. However, most RNNs have taken an oversimplified view of this mapping. In particular, for converting output vectors into distributions over symbolic values, the mapping has mostly been done through a softmax operation, which assumes that the RNN is able to compute a real value for each individual member of the vocabulary, and then converts this value into a probability through a direct exponentiation followed by a normalization.

This rather crude “softmax approach”, which implies that the output vector has the same dimensionality as the vocabulary, has had some serious consequences.

To focus on only one symptomatic defect of this approach, consider the following. When using words as symbols, even large vocabularies cannot account for all the actual words found in training or test data, and the models need to resort to a catch-all "unknown" symbol unk, which provides poor support for prediction and needs to be supplemented by diverse pre- and post-processing steps (Luong et al., 2014; Jean et al., 2015). Even for words inside the vocabulary, unless they have been witnessed many times in the training data, prediction tends to be poor, because each word is an "island", completely distinct from and without relation to other words, which needs to be predicted individually.

One partial solution to the above problem consists in changing the granularity by moving from word to character symbols (Sutskever et al., 2011; Ling et al., 2015). This has the benefit that the vocabulary becomes much smaller, and that all the characters can be observed many times in the training data. While character-based RNNs thus have some advantages over word-based ones, they also tend to produce non-words and to require longer prediction chains, so the jury is still out, with emerging hybrid architectures that attempt to capitalize on both levels (Luong and Manning, 2016).

Here, we propose a different approach, which removes the constraint that the dimensionality of the RNN output vector has to be equal to the size of the vocabulary and allows generalization across related words. However, its crucial benefit is that it introduces a principled and powerful way of incorporating prior knowledge inside the models.

The approach involves a very direct and natural extension of the softmax, by considering it as a special case of a conditional exponential family, a class of models better known as log-linear models and widely used in "pre-NN" NLP. We argue that this simple extension of the softmax allows the resulting "log-linear RNN" to compound the aptitude of log-linear models for exploiting prior knowledge and predefined features with the aptitude of RNNs for discovering complex new combinations of predictive traits.

2 Log-Linear RNNs

Figure 1: A generic RNN.

2.1 Generic RNNs

Let us first recap briefly the generic notion of RNN, abstracting away from different styles of implementation (LSTM (Hochreiter and Schmidhuber, 1997; Graves, 2012), GRU (Cho et al., 2014), attention models (Bahdanau et al., 2015), different numbers of layers, etc.).

An RNN is a generative process for predicting a sequence of symbols $x_1, x_2, \ldots, x_t, \ldots$, where the symbols $x_t$ are taken in some vocabulary $V$, and where the prediction can be conditioned by a certain observed context $C$. This generative process can be written as:

$p_\theta(x_{t+1} \mid x_1, \ldots, x_t; C),$

where $\theta$ is a real-valued parameter vector (we will sometimes write this as $p_\theta(x_{t+1} \mid x_{1:t}; C)$ to stress the difference between the "context" $C$ and the prefix $x_{1:t}$; note also that some RNNs are "non-conditional", i.e. do not exploit a context $C$). Generically, this conditional probability is computed according to:

$h_t = f_\theta(h_{t-1}, x_t, C)$ (1)
$a_{t+1} = g_\theta(h_t, C)$ (2)
$p_{t+1} = \mathrm{softmax}(a_{t+1})$ (3)
$x_{t+1} \sim p_{t+1}(\cdot)$ (4)

Here $h_{t-1}$ is the hidden state at the previous step $t-1$, $x_t$ is the output symbol produced at that step, and $f_\theta$ is a neural-network based function (e.g. an LSTM network) that computes the next hidden state $h_t$ based on $h_{t-1}$, $x_t$, and $C$. The function $g_\theta$ (we do not distinguish between the parameters for $f$ and for $g$, and write $\theta$ for both) is then typically computed through an MLP, which returns a real-valued vector $a_{t+1}$ of dimension $|V|$. This vector is then normalized into a probability distribution $p_{t+1}$ over $V$ through the softmax transformation:

$p_{t+1}(x) = \frac{1}{Z_{t+1}} \exp a_{t+1}(x),$

with the normalization factor:

$Z_{t+1} = \sum_{x' \in V} \exp a_{t+1}(x'),$

and finally the next symbol $x_{t+1}$ is sampled from this distribution. See Figure 1.

Training of such a model is typically done through back-propagation of the cross-entropy loss:

$\mathcal{L}_{t+1}(\theta) = -\log p_{t+1}(\bar{x}_{t+1}),$

where $\bar{x}_{t+1}$ is the actual symbol observed in the training set.
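To make equations (1)-(4) concrete, here is a minimal numpy sketch of one generation step of a (non-conditional) RNN; the tanh transition and the parameter names are illustrative assumptions, not the LSTM-based implementation used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 1000, 64                        # vocabulary size, hidden size

# Illustrative parameters theta = (Wx, Wh, Wo); a real model would use an LSTM or GRU.
Wx = rng.normal(0, 0.1, (H, V))        # input-to-hidden (applied to the one-hot input)
Wh = rng.normal(0, 0.1, (H, H))        # hidden-to-hidden
Wo = rng.normal(0, 0.1, (V, H))        # hidden-to-output scores a_{t+1}

def one_hot(i, n):
    v = np.zeros(n); v[i] = 1.0
    return v

def rnn_step(h_prev, x_prev):
    """One step of equations (1)-(4): transition, scores, softmax, sampling."""
    h = np.tanh(Wx @ one_hot(x_prev, V) + Wh @ h_prev)   # (1) h_t = f_theta(h_{t-1}, x_t)
    a = Wo @ h                                           # (2) a_{t+1} = g_theta(h_t)
    p = np.exp(a - a.max()); p /= p.sum()                # (3) softmax (numerically stable)
    x_next = rng.choice(V, p=p)                          # (4) sample x_{t+1}
    return h, p, x_next

h, p, x = rnn_step(np.zeros(H), x_prev=0)
loss = -np.log(p[x])                   # cross-entropy loss if x were the observed symbol
```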

2.2 Log-Linear models

Definition

Log-linear models play a considerable role in statistics and machine learning; special classes are often known under different names depending on the application domains and on various details:

exponential families (typically for unconditional versions of the models) (Nielsen and Garcia, 2009), maximum entropy models (Berger et al., 1996; Jaynes, 1957), conditional random fields (Lafferty et al., 2001), and binomial and multinomial logistic regression (Hastie et al., 2001, Chapter 4). These models have been especially popular in NLP, for example in Language Modelling (Rosenfeld, 1996), in sequence labelling (Lafferty et al., 2001), and in machine translation (Berger et al., 1996; Och and Ney, 2002), to name only a few.

Here we follow the exposition of Jebara (2013), which is useful for its broad applicability, and which defines a conditional log-linear model (which we could also call a conditional exponential family) as a model of the form (in our own notation):

$p_\theta(x \mid c) = \frac{1}{Z_\theta(c)}\, b(x \mid c)\, \exp \langle \theta, \phi(x \mid c) \rangle.$ (5)

Let us describe the notation:

  • $x$ is a variable in a set $X$, which we will take here to be discrete (i.e. countable), and sometimes finite. (The model is applicable over continuous (measurable) spaces, but to simplify the exposition we will concentrate on the discrete case, which permits the use of sums instead of integrals.) We will use the terms domain or vocabulary for this set.

  • $c$ is the conditioning variable (also called condition).

  • $\theta$ is a parameter vector in $\mathbb{R}^d$, which (for reasons that will appear later) we will call the adaptor vector. (In the NLP literature, this parameter vector is often denoted by $\lambda$.)

  • $\phi$ is a feature function $X \times C \to \mathbb{R}^d$; note that we sometimes write $\phi(x \mid c)$ or $\phi_c(x)$ instead of $\phi(x, c)$ to stress the fact that $c$ is a condition.

  • $b$ is a nonnegative function $X \times C \to \mathbb{R}_{\ge 0}$; we will call it the background function of the model. (Jebara (2013) calls it the prior of the family.)

  • $Z_\theta(c)$, called the partition function, is a normalization factor:

$Z_\theta(c) = \sum_{x \in X} b(x \mid c)\, \exp \langle \theta, \phi(x \mid c) \rangle.$

When the context is unambiguous, we will sometimes leave the condition $c$ as well as the parameter vector $\theta$ implicit, and also simply write $b(x)$ instead of $b(x \mid c)$; thus we will write:

$p(x) = \frac{1}{Z}\, b(x)\, \exp \langle \theta, \phi(x) \rangle,$ (6)

or more compactly:

$p(x) \propto b(x)\, \exp \langle \theta, \phi(x) \rangle.$ (7)

The background as a “prior”

If in equation (7) the background function $b$ is actually a normalized probability distribution over $X$ (that is, $\sum_{x \in X} b(x) = 1$) and if the parameter vector $\theta$ is null, then the distribution $p$ is identical to $b$.

Suppose that we have an initial belief that the parameter vector should be close to some $\theta_0$; then by reparametrizing equation (7) in the form:

$p(x) \propto b'(x)\, \exp \langle \theta', \phi(x) \rangle,$ (8)

with $b'(x) \propto b(x)\, \exp \langle \theta_0, \phi(x) \rangle$ and $\theta' = \theta - \theta_0$, our initial belief is represented by taking $\theta' = 0$. In other words, we can always assume that our initial belief is represented by the background probability $b$ along with a null parameter vector. Deviations from this initial belief are then represented by variations of the parameter vector away from $0$, and a simple form of regularization can be obtained by penalizing some $\ell_p$-norm of this parameter vector. (Contrary to the generality of the presentation by Jebara (2013), many presentations of log-linear models in the NLP context do not make an explicit reference to $b$, which is then implicitly taken to be uniform. However, the more statistically oriented presentations (Jordan, 20XX; Nielsen and Garcia, 2009) of the strongly related (unconditional) exponential family models do, which makes the mathematics neater and is necessary in the presence of non-finite or continuous spaces. One advantage of the explicit introduction of $b$, even for finite spaces, is that it makes it easier to speak about the prior knowledge we have about the overall process.)

Gradient of cross-entropy loss

An important property of log-linear models is that they enjoy an extremely intuitive form for the gradient of their cross-entropy loss (i.e. of their negative log-likelihood).

If $\bar{x}$ is a training instance observed under condition $c$, and if the current model is $p_\theta(\cdot \mid c)$ according to equation (5), its loss at $(\bar{x}, c)$ is defined as $\mathcal{L}(\theta) = -\log p_\theta(\bar{x} \mid c)$. Then a simple calculation shows that the gradient of this loss is given by:

$\nabla_\theta \mathcal{L}(\theta) = \mathbb{E}_{x \sim p_\theta(\cdot \mid c)}\big[\phi(x \mid c)\big] - \phi(\bar{x} \mid c).$ (9)

In other words, the gradient of the loss is the difference between the model expectation of the feature vector and its actual value at $\bar{x}$; equivalently, the gradient of the log-likelihood (also called the "Fisher score" at $(\bar{x}, c)$) is minus this difference. (More generally, if we have a training set consisting of pairs of the form $(\bar{x}_i, c_i)$, then the gradient of the log-likelihood for this training set is $\sum_i \big( \phi(\bar{x}_i \mid c_i) - \mathbb{E}_{x \sim p_\theta(\cdot \mid c_i)}[\phi(x \mid c_i)] \big)$; in other words, this gradient is the difference between the feature vectors at the true labels and the expected feature vectors under the current distribution (Jebara, 2013).)
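As a worked illustration of equations (5) and (9), the following numpy sketch (with a made-up background, feature matrix and adaptor vector) computes the log-linear distribution and checks numerically that the gradient of the loss is the expected feature vector minus the observed one.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 3                                      # |X| (finite here), feature dimension

b   = rng.random(n) + 0.1                        # background b(x) > 0 (unnormalized)
Phi = rng.integers(0, 2, (n, d)).astype(float)   # feature vectors phi(x), one row per x
theta = rng.normal(0, 1.0, d)                    # adaptor vector

def loglinear(theta):
    """Equation (5): p_theta(x) = b(x) * exp(<theta, phi(x)>) / Z_theta."""
    w = b * np.exp(Phi @ theta)
    return w / w.sum()

x_bar = 2                                        # observed training instance
p = loglinear(theta)
loss = -np.log(p[x_bar])

# Equation (9): gradient of the loss = E_p[phi(x)] - phi(x_bar).
grad = Phi.T @ p - Phi[x_bar]

# Finite-difference check of the first coordinate.
eps = 1e-6
e0 = np.zeros(d); e0[0] = eps
num = (-np.log(loglinear(theta + e0)[x_bar]) - loss) / eps
assert abs(num - grad[0]) < 1e-4
```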

2.3 Log-Linear RNNs

We can now define what we mean by a log-linear RNN. The model, illustrated in Figure 2, is similar to a standard RNN up to two differences:

The first difference is that we allow a more general form of input to the network at each time step; namely, instead of allowing only the latest symbol $x_t$ to be used as input, along with the condition $C$, we now allow an arbitrary feature vector $\psi_t = \psi(x_1, \ldots, x_t; C)$ to be used as input; this feature vector is of fixed dimensionality $d_\psi$, and we allow it to be computed in an arbitrary (but deterministic) way from the combination of the currently known prefix $x_{1:t}$ and the context $C$. This is a relatively minor change, but one that usefully expands the expressive power of the network. We will sometimes call the features $\psi$ the input features.

Figure 2: A Log-Linear RNN.

The second, major, difference is the following. We do compute $a_{t+1} = g_\theta(h_t, C)$ in the same way as previously from $h_t$; however, after this point, rather than applying a softmax to obtain a distribution over $V$, we now apply a log-linear model. While for the standard RNN we had:

$p_{t+1} = \mathrm{softmax}(a_{t+1}),$

in the LL-RNN, we define:

$p_{t+1}(x) \propto b(x \mid x_{1:t}; C)\, \exp \langle a_{t+1}, \phi(x \mid x_{1:t}; C) \rangle.$ (10)

In other words, we assume that we have a priori fixed a certain background function $b(\cdot \mid x_{1:t}; C)$, where the condition is given by the prefix $x_{1:t}$ and the context $C$, and also defined features $\phi_1, \ldots, \phi_d$ defining a feature vector $\phi$, of fixed dimensionality $d$. We will sometimes call these features the output features. Note that both the background and the features have access to the context $C$.

In Figure 2, we have indicated with LL (LogLinear) the operation (10) that combines $a_{t+1}$ with the feature vector $\phi$ and the background $b$ to produce the probability distribution $p_{t+1}$ over $V$. We note that, here, $a_{t+1}$ is a vector of size $d$, which may or may not be equal to the size of the vocabulary, by contrast to the case of the softmax of Figure 1.


Overall, the LL-RNN is then computed through the following equations:

$h_t = f_\theta(h_{t-1}, \psi_t, C)$ (11)
$a_{t+1} = g_\theta(h_t, C)$ (12)
$p_{t+1}(x) \propto b(x \mid x_{1:t}; C)\, \exp \langle a_{t+1}, \phi(x \mid x_{1:t}; C) \rangle$ (13)
$x_{t+1} \sim p_{t+1}(\cdot)$ (14)

For prediction, we now use the combined process defined by these equations, and we train this process, similarly to the RNN case, according to its cross-entropy loss relative to the actually observed symbol $\bar{x}_{t+1}$:

$\mathcal{L}_{t+1} = -\log p_{t+1}(\bar{x}_{t+1}).$ (15)

At training time, in order to use this loss for backpropagation in the RNN, we have to be able to compute its gradient relative to the previous layer, namely $\nabla_{a_{t+1}} \mathcal{L}_{t+1}$. From equation (9), we see that this gradient is given by:

$\nabla_{a_{t+1}} \mathcal{L}_{t+1} = \mathbb{E}_{x \sim p_{t+1}}\big[\phi(x \mid x_{1:t}; C)\big] - \phi(\bar{x}_{t+1} \mid x_{1:t}; C),$ (16)

with $p_{t+1}$ the distribution defined in equation (13), whose adaptor vector is $a_{t+1}$.

This equation provides a particularly intuitive formula for the gradient, namely, as the difference between the expectation of $\phi$ according to the log-linear model with adaptor vector $a_{t+1}$ and the observed value $\phi(\bar{x}_{t+1})$. However, this expectation can be difficult to compute. For a finite (and not too large) vocabulary $V$, the simplest approach is to evaluate the right-hand side of equation (13) for each $x \in V$, to normalize by the sum to obtain $p_{t+1}$, and to weight each $\phi(x)$ accordingly. For standard RNNs (which are special cases of LL-RNNs, see below), this is actually what the simpler approaches to computing the softmax gradient do, but more sophisticated approaches have been proposed, such as employing a "hierarchical softmax" (Morin and Bengio, 2005). In the general case (large or infinite $V$), the expectation term in (16) needs to be approximated, and different techniques may be employed, some specific to log-linear models (Elkan, 2008; Jebara, 2013), some more generic, such as contrastive divergence (Hinton, 2002) or Importance Sampling; a recent introduction to these generic methods is provided in (Goodfellow et al., 2016, Chapter 18), but, despite its practical importance, we will not pursue this topic further here.
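To make the LL output layer and its gradient (equations (13) and (16)) concrete, here is a minimal numpy sketch using the simple finite-vocabulary enumeration just described; the background, the output-feature matrix and the dimensions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
V_size, d = 50, 8      # vocabulary size and output-feature dimension (d may differ from |V|)

Phi = rng.integers(0, 2, (V_size, d)).astype(float)  # phi(x) for each x in V, given the current prefix/context
b   = rng.random(V_size) + 0.1                       # background b(x | prefix, C), arbitrary positive values here
a   = rng.normal(0, 1.0, d)                          # a_{t+1}: adaptor vector produced by the RNN

def ll_output(a):
    """Equation (13): p_{t+1}(x) proportional to b(x) * exp(<a, phi(x)>), enumerated over V."""
    w = b * np.exp(Phi @ a)
    return w / w.sum()

p = ll_output(a)
x_obs = 7                                            # observed next symbol
# Equation (16): gradient of the cross-entropy loss with respect to a_{t+1};
# this d-dimensional vector is what gets backpropagated into the network that produced a_{t+1}.
grad_a = Phi.T @ p - Phi[x_obs]
```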

2.4 LL-RNNs generalize RNNs

It is easy to see that LL-RNNs generalize RNNs. Consider a finite vocabulary $V$, and the $|V|$-dimensional "one-hot" representation $\mathrm{oneHot}(x)$ of $x \in V$, relative to a certain fixed ordering of the elements of $V$:

$\mathrm{oneHot}(x) = (0, \ldots, 0, 1, 0, \ldots, 0),$ with the $1$ at the position corresponding to $x$.

We assume (as we implicitly did in the discussion of standard RNNs) that $C$ is coded through some fixed vector $c$, and we then define:

$\psi_t = \psi(x_1, \ldots, x_t; C) = \mathrm{oneHot}(x_t) \oplus c,$ (17)

where $\oplus$ denotes vector concatenation; thus we "forget" about the initial portion of the prefix, and only take into account $x_t$ and $C$, encoded in a similar way as in the case of RNNs.

We then define $b(x \mid x_{1:t}; C)$ to be uniformly equal to $1$ for all $x$ ("uniform background"), and $\phi$ to be:

$\phi(x \mid x_{1:t}; C) = \mathrm{oneHot}(x).$

Neither $b$ nor $\phi$ depends on the condition, and we have:

$p_{t+1}(x) \propto \exp \langle a_{t+1}, \mathrm{oneHot}(x) \rangle = \exp a_{t+1}(x),$

in other words:

$p_{t+1} = \mathrm{softmax}(a_{t+1}).$

Thus, we are back to the definition of RNNs in equations (1-4). As for the gradient computation of equation (16), which here becomes:

$\nabla_{a_{t+1}} \mathcal{L}_{t+1} = \mathbb{E}_{x \sim p_{t+1}}\big[\mathrm{oneHot}(x)\big] - \mathrm{oneHot}(\bar{x}_{t+1}),$ (18)

it takes the simple form:

$\nabla_{a_{t+1}} \mathcal{L}_{t+1} = p_{t+1} - \mathrm{oneHot}(\bar{x}_{t+1});$ (19)

in other words this gradient is the vector of dimension $|V|$, with coordinates corresponding to the different elements $x$ of $V$, where:

$\big(\nabla_{a_{t+1}} \mathcal{L}_{t+1}\big)_x = p_{t+1}(x) - 1$ if $x = \bar{x}_{t+1}$, (20)
$\big(\nabla_{a_{t+1}} \mathcal{L}_{t+1}\big)_x = p_{t+1}(x)$ for the other $x$'s. (21)

This corresponds to the computation in the usual softmax case.
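A quick numerical check of this reduction, as a sketch under the stated assumptions (uniform background, one-hot output features, arbitrary made-up logits): the LL output (13) coincides with the softmax (3), and the gradient (19) is the predicted distribution minus the one-hot vector of the observed symbol.

```python
import numpy as np

rng = np.random.default_rng(3)
V_size = 20
a = rng.normal(0, 1.0, V_size)         # here d = |V|: the adaptor vector plays the role of the logits

Phi = np.eye(V_size)                   # phi(x) = oneHot(x)
b = np.ones(V_size)                    # uniform background

w = b * np.exp(Phi @ a)                # equation (13) with these choices
p_ll = w / w.sum()
p_softmax = np.exp(a) / np.exp(a).sum()
assert np.allclose(p_ll, p_softmax)    # the LL output reduces to the softmax

x_obs = 5
grad = Phi.T @ p_ll - Phi[x_obs]       # equation (16) specialized to one-hot features
assert np.allclose(grad, p_ll - np.eye(V_size)[x_obs])   # equations (19)-(21)
```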

3 A motivating illustration: rare words

We now come back to our starting point in the introduction: the problem of unknown or rare words. We indicate a way to handle this problem with LL-RNNs, which may also help build intuition about these models.


Let us consider some moderately-sized corpus of English sentences, tokenized at the word level, and then consider the vocabulary $V$, of size 10K, consisting of the 9999 most frequent words in this corpus plus one special symbol UNK used for tokens not among those words ("unknown words").

After replacing the unknown words in the corpus by UNK, we can train a language model for the corpus by training a standard RNN, say of the LSTM type. Note that if translated into an LL-RNN according to section 2.4, this model has 10K features (9999 features for identity with a specific frequent word, the last one for identity with the symbol UNK), along with a uniform background $b$.

This model however has some serious shortcomings, in particular:

  • Suppose that neither of the two tokens Grenoble and 37 belongs to $V$ (i.e. to the 9999 most frequent words of the corpus); then the learnt model cannot distinguish the probabilities of the two test sentences: the cost was 37 euros / the cost was Grenoble euros.

  • Suppose that several sentences of the form the cost was NN euros appear in the corpus, with NN taking (say) the values 9, 13, 21, all belonging to $V$, and that on the other hand 15 also belongs to $V$, but appears only in non-cost contexts; then the learnt model cannot give a reasonable probability to the cost was 15 euros, because it is unable to notice the similarity between 15 and the tokens 9, 13, 21.

Let us see how we can improve the situation by moving to an LL-RNN.

We start by extending $V$ to a much larger finite set of words $V'$, in particular one that includes all the words in the union of the training and test corpora (we will see later that the restriction that $V'$ is finite can be lifted), and we keep the background $b$ uniform over $V'$. Concerning the input features $\psi$, for now we keep them at their standard RNN values (namely as in (17)). Concerning the output features $\phi$, we keep the 9999 word-identity features that we had, but not the UNK-identity one; however, we do add some new features:

  • A binary feature $\phi_{num}$ that tells us whether the token can be a number;

  • A binary feature $\phi_{loc}$ that tells us whether the token can be a location, such as a city or a country;

  • A few binary features $\phi_{noun}$, $\phi_{verb}$, ..., covering the main POS's for English tokens. Note that a single word may simultaneously have several such features firing; for instance flies is both a noun and a verb. (Rather than numbering these additional features $\phi_{10000}$, $\phi_{10001}$, ..., we use the notation $\phi_{num}$, $\phi_{loc}$, ..., for obvious reasons of clarity.)

  • Some other features, covering other important classes of words.

Each of the features has a corresponding weight in $a_{t+1}$ that we index in a similar way ($a_{num}$, $a_{loc}$, ...).

Note again that we do allow the features to overlap freely, nothing preventing a word from being both a location and an adjective, for example (e.g. Nice in We visited Nice / Nice flowers were seen everywhere), and from also appearing among the 9999 most frequent words. For exposition reasons (i.e. in order to simplify the explanations below) we will suppose that a number N will always fire the feature $\phi_{num}$, but no other feature, apart from the case where it also belongs to $V$, in which case it will also fire the word-identity feature that corresponds to it, which we will denote by $\phi_{w_N}$.
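As an illustration of the output features just described, here is a toy sketch of how $\phi(x)$ could be assembled for a few words; the feature inventory, word lists and helper logic are hypothetical, chosen only to mirror the discussion, not the paper's actual feature extractor.

```python
import numpy as np

top_words = ["the", "cost", "was", "euros", "9", "13", "21", "15"]   # toy stand-in for the 9999 frequent words
extra = ["num", "loc", "noun", "verb", "adj"]                        # phi_num, phi_loc and a few POS features
feature_names = [f"w={w}" for w in top_words] + extra
index = {name: i for i, name in enumerate(feature_names)}

LOCATIONS = {"grenoble", "paris"}          # hypothetical gazetteer

def phi(token):
    """Toy output-feature vector: word-identity features plus coarse class features."""
    v = np.zeros(len(feature_names))
    if token in top_words:
        v[index[f"w={token}"]] = 1.0       # word-identity feature fires only for in-vocabulary words
    if token.isdigit():
        v[index["num"]] = 1.0              # phi_num: the token can be a number
    if token.lower() in LOCATIONS:
        v[index["loc"]] = 1.0              # phi_loc: the token can be a location
    return v

print(phi("37").nonzero())        # fires only phi_num (37 is not in the toy vocabulary)
print(phi("15").nonzero())        # fires phi_num and the word-identity feature w=15
print(phi("Grenoble").nonzero())  # fires only phi_loc
```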


Why is this model superior to the standard RNN one?

To answer this question, let us consider the encoding of N in feature space, when N is a number. There are two slightly different cases to look at:

  1. N does not belong to $V$. Then we have $\phi_{num}(N) = 1$, and $\phi_j(N) = 0$ for the other $j$'s.

  2. N belongs to $V$. Then we have $\phi_{num}(N) = \phi_{w_N}(N) = 1$, and $\phi_j(N) = 0$ for the other $j$'s.

Let us now consider the behavior of the LL-RNN during training, when at a certain point, let us say after having observed the prefix the cost was, it is now coming to the prediction of the next item, which we assume is actually a number N in the training sample.

We start by assuming that N does not belong to $V$.

Let us consider the current value of the weight vector $a_{t+1}$ calculated by the network at this point. According to equation (9), the direction of steepest increase of the log-likelihood (minus the gradient of the cross-entropy loss $\mathcal{L}$ with respect to $a_{t+1}$) is:

$-\nabla_{a_{t+1}} \mathcal{L} = \phi(N) - \mathbb{E}_{x \sim p_{t+1}}[\phi(x)],$

where $p_{t+1}$ is the probability distribution associated with the log-linear weights $a_{t+1}$.

In our case the first term is a vector that is null everywhere but on the coordinate $num$, on which it is equal to $1$. As for the second term, it can be seen as the model average of the feature vector $\phi(x)$ when $x$ is sampled according to $p_{t+1}$. One can see that this vector has all its coordinates in the interval $[0, 1]$, and in fact strictly between $0$ and $1$. (This last fact is because, for a weight vector $a_{t+1}$ with finite coordinates, $p_{t+1}(x)$ can never be $0$, and also because we are making the mild assumption that for any feature $\phi_j$, there exist $x$ and $x'$ such that $\phi_j(x) \neq \phi_j(x')$; the strict inequalities follow immediately.) As a consequence, this direction is strictly positive on the coordinate $num$ and strictly negative on all the other coordinates. In other words, the backpropagation signal sent to the neural network at this point is that it should modify its parameters in such a way as to increase the $a_{num}$ weight, and decrease all the other weights in $a_{t+1}$.

A slightly different situation occurs if we assume now that N belongs to $V$. In that case $\phi(N)$ is null everywhere but on its two coordinates $num$ and $w_N$, on which it is equal to $1$. By the same reasoning as before we see that the update direction is then strictly positive on the two corresponding coordinates, and strictly negative everywhere else. Thus, the signal sent to the network is to modify its parameters towards increasing the $a_{num}$ and $a_{w_N}$ weights, and decreasing all the others.

Overall, on each occurrence of a number in the training set, the network is then learning to increase the weights corresponding to the features firing on this number (either both $\phi_{num}$ and $\phi_{w_N}$, or only $\phi_{num}$, depending on whether N is in $V$ or not), and to decrease the weights for all the other features. This contrasts with the behavior of the previous RNN model, where a weight changed only when N belonged to $V$. This means that at the end of training, when predicting the word that follows the prefix The cost was, the LL-RNN network will have a tendency to produce a weight vector $a_{t+1}$ with an especially high weight $a_{num}$, some positive weights on the word-identity features of words that have appeared in similar contexts, and negative weights on features not firing in similar contexts. (If only numbers appeared in the context The cost was, then this would mean all "non-numeric" features, but such words as high, expensive, etc. may of course also appear, and their associated features would also receive positive increments.)
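A tiny numerical illustration of the sign pattern just described, with made-up words, toy features and an untrained adaptor vector (not the paper's setup):

```python
import numpy as np

# Toy setup: 4 candidate next words, features = [w=9, w=15, num, loc].
words = ["9", "15", "Grenoble", "expensive"]
Phi = np.array([[1, 0, 1, 0],      # "9":  word-identity + num
                [0, 1, 1, 0],      # "15": word-identity + num
                [0, 0, 0, 1],      # "Grenoble": loc only
                [0, 0, 0, 0]],     # "expensive": no toy feature fires
               float)
b = np.ones(len(words))            # uniform background
a = np.zeros(4)                    # current adaptor vector (untrained)

p = b * np.exp(Phi @ a); p /= p.sum()

# Observed next word is the out-of-vocabulary number "37": only phi_num fires.
phi_obs = np.array([0, 0, 1, 0], float)
ascent = phi_obs - Phi.T @ p       # minus the loss gradient of equation (16)
print(dict(zip(["w=9", "w=15", "num", "loc"], ascent.round(2))))
# The 'num' coordinate is strictly positive, all the others strictly negative:
# the network is pushed to increase a_num and decrease the weights of features not firing on "37".
```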

Now, to come back to our initial example, let us compare the situation for the two next-word predictions The cost was 37 and The cost was Grenoble. The LL-RNN model predicts the next word $x$ with probability:

$p_{t+1}(x) \propto b(x)\, \exp \langle a_{t+1}, \phi(x) \rangle.$

While the prediction 37 fires the feature $\phi_{num}$, the prediction Grenoble does not fire any of the features that tend to be active in the context of the prefix The cost was, and therefore $p_{t+1}(37) \gg p_{t+1}(Grenoble)$. This is in stark contrast to the behavior of the original RNN, for which both 37 and Grenoble were indistinguishable unknown words.

We note that, while the model is able to capitalize on the generic notion of number through its feature $\phi_{num}$, it is also able to learn to privilege certain specific numbers belonging to $V$ if they tend to appear more frequently in certain contexts. A log-linear model has the important advantage of being able to handle redundant features (this property of log-linear models was what permitted a fundamental advance in Statistical Machine Translation beyond the initial limited noisy-channel models, by allowing a freer combination of different assessments of translation quality, without having to bother about overlapping assessments (Berger et al., 1996; Och and Ney, 2002)), such as $\phi_{num}$ and $\phi_{w_N}$, which both fire on N. Depending on prior expectations about typical texts in the domain being handled, it may then be useful to introduce features for distinguishing between different classes of numbers, for instance "small numbers" or "year-like numbers", allowing the LL-RNN to make useful generalizations based on these features. Such features need not be binary; for example, a small-number feature could take values decreasing from 1 to 0, with the higher values reserved for the smaller numbers.


While our example focussed on the case of numbers, it is clear that our observations equally apply to other features that we mentioned, such as $\phi_{loc}$, which can serve to generalize predictions in such contexts as We are travelling to.

In principle, generally speaking, any features that can support generalization, such as features representing semantic classes (e.g. nodes in the Wordnet hierarchy), morphosyntactic classes (lemma, gender, number, etc.) or the like, can be useful.

4 Some potential applications

The extension from softmax to log-linear outputs, while formally simple, opens a significant range of potential applications other than the handling of rare words. We now briefly sketch a few directions.


A priori constrained sequences

For some applications, the sequences to be generated may have to respect certain a priori constraints. One such case is the approach to semantic parsing of Xiao et al. (2016), where starting from a natural language question an RNN decoder produces a sequential encoding of a logical form, which has to conform to a certain grammar. The model used is implicitly a simple case of LL-RNN, where (in our present terminology) the output feature vector $\phi$ remains the usual oneHot, but the background $b$ is not uniform anymore: it constrains the generated sequence to conform to the grammar.

Language model adaptation

We saw earlier that by taking $b$ to be uniform and $\phi$ to be a oneHot, an LL-RNN is just a standard RNN. The opposite extreme case is obtained by supposing that we already know the exact generative process for producing $x_{t+1}$ from the prefix and the context $C$. If we define $b$ to be identical to this true underlying process, then in order to have the best performance at test time, it is sufficient for the adaptor vector $a_{t+1}$ to be equal to the null vector, because then, according to (13), $p_{t+1}$ is equal to the underlying process. The task for the RNN of learning an $a_{t+1}$ that is null or close to null is an easy one (just take the higher-level parameter matrices to be null or close to null), and in this case the adaptor has actually nothing to adapt to.

A more interesting, intermediary, case is when $b$ is not too far from the true process. For example, $b$ could be a word-based language model (n-gram type, LSTM type, etc.) trained on some large monolingual corpus, while the current focus is on modeling a specific domain for which much less data is available. Then the RNN-based adaptor, trained on the specific domain data, would still be able to rely on $b$ for test words not seen in the specific data, but would learn to upweight the prediction of words often seen in these specific data. (For instance, focussing on the simple case of an adaptor over a oneHot $\phi$, as soon as $a_{t+1}$ is positive on a certain word, the probability of this word is increased relative to what the background indicates.)

Input features

In a standard RNN, a word is vector-encoded through a one-hot representation both when it is produced as the current output of the network and when it is used as the next input to the network. In section 3, we saw the interest of defining the "output" features $\phi$ to go beyond word-identity features (i.e. beyond the oneHot identification), but we kept the "input" features as in standard RNNs, namely we kept $\psi_t = \mathrm{oneHot}(x_t) \oplus c$. However, let us note an issue there. This usual encoding of the input means that if $x_t$ has rarely (or not at all) been seen in the training data, then the network will have few clues to distinguish this word from another rarely observed word (for example the adjective preposterous) when computing $h_t$ in equation (11). The network, in the context of the prefix the cost was, is able to give a reasonable probability to 37 thanks to $\phi_{num}$. However, when assessing the probability of euros in the context of the prefix the cost was 37, this prefix is not distinguished by the network from the prefix the cost was preposterous, which would not allow euros as the next word. A promising way to solve this problem is to take $\psi_t = \phi(x_t) \oplus c$, namely to encode the input using the same features as the output. This allows the network to "see" that 37 is a number and that preposterous is an adjective, and to compute its hidden state based on this information. We should note, however, that there is no requirement that $\psi$ be equal to $\phi$ in general; the point is that we can include in $\psi$ features which can help the network predict the next word.

Infinite domains

In the example of section 3, the vocabulary $V'$ was large, but finite. This is quite artificial, especially if we want to account for words representing numbers, or words taken in some open-ended set, such as entity names. Let us go back to equation (5) defining log-linear models, and let us ignore the context for simplicity: $p_\theta(x) = \frac{1}{Z_\theta}\, b(x)\, \exp \langle \theta, \phi(x) \rangle$, with $Z_\theta = \sum_{x \in X} b(x)\, \exp \langle \theta, \phi(x) \rangle$. When $X$ is finite, the normalization factor $Z_\theta$ is also finite, and therefore the probability is well defined; in particular, it is well defined when $b$ is uniformly equal to $1$. However, when $X$ is (countably) infinite, this is unfortunately not true anymore. For instance, with $b$ uniformly equal to $1$, and with $\theta = 0$, $Z_\theta$ is infinite and the probability is undefined. By contrast, let us assume that the background function $b$ has finite mass, i.e. $\sum_{x \in X} b(x) < \infty$. Let us also suppose that the feature vector is uniformly bounded (that is, all its coordinates satisfy $|\phi_j(x)| \le M$ for some constant $M$). Then, for any $\theta$, $Z_\theta$ is finite, and therefore $p_\theta$ is well defined.

Thus, standard RNNs, which have (implicitly) a uniform background $b$, have no way to handle infinite vocabularies, while LL-RNNs, by using a finite-mass $b$, can. One simple way to ensure that property on tokens representing numbers, for example, is to associate them with a geometric background distribution, decaying fast with their length, and a similar treatment can be applied to named entities.
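A small sketch of the finite-mass point: a background that decays geometrically with the length of a number token keeps the total mass over the infinite set of number tokens finite, provided the decay is fast enough; the decay rates below are arbitrary illustrations.

```python
# Background mass assigned to an individual number token of length L (in digits):
# b(token) = r ** L; there are 9 * 10**(L-1) such tokens of length L (no leading zero),
# so the total mass is 0.9 * sum over L of (10 * r)**L, which converges iff 10 * r < 1.

def total_mass(r, max_len=60):
    """Background mass summed over all digit strings up to max_len digits."""
    return sum(9 * 10 ** (L - 1) * r ** L for L in range(1, max_len + 1))

print(total_mass(0.05))    # 10*r = 0.5 < 1: the series converges (about 0.9)
print(total_mass(0.2))     # 10*r = 2 > 1: the truncated sum already blows up,
                           # mirroring the problem with a uniform background over an infinite set
```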

Condition-based priming

Many applications of RNNs, such as machine translation (Sutskever et al., 2014) or natural language generation (Wen et al., 2015), depend on a condition $C$ (source sentence, semantic representation, etc.). When translated into LL-RNNs, this condition is taken into account through the input feature vector $\psi_t$, see (17), but does not appear in $b$ or $\phi$.

However, there is an opportunity for exploiting the condition inside $b$ or $\phi$. To sketch a simple example, in NLG, one may be able to predefine some weak unigram language model for the realization that depends on the semantic input $C$, for example by constraining named entities that appear in the realization to have some evidence in the input. Such a language model can be usefully represented through the background process $b$, providing a form of "priming" for the combined LL-RNN, helping it to avoid irrelevant tokens.

A similar approach was recently exploited in Goyal et al. (2016), in the context of a character-based seq2seq LSTM for generating utterances from input "dialog acts". In this approach, the background $b$, formulated as a weighted finite-state automaton over characters, is used both for encouraging the system to generate character strings that correspond to possible dictionary words, and for allowing it to generate strings corresponding to such non-dictionary tokens as named entities, numbers, addresses, and the like, but only when such strings have evidence in the input dialog act.

5 Experiments: French language model using morphological features

5.1 Datasets

Our datasets are based on the annotated French corpora (http://universaldependencies.org/#fr) provided by the Universal Dependencies initiative (http://universaldependencies.org, Version 1). These corpora are tagged at the POS level as well as at the dependency level. In our experiments, we only exploit the POS annotations, and we use lowercased versions of the corpora.

Table 1 shows the number of sentences in our different datasets, and Table 2 gives overall statistics in terms of word tokens and word types.

Training Validation Test1 Test2
13,826 728 298 1596
Table 1: Number of sentences in the different datasets. The UD (Universal Dependency) Training set (14554 sents.) is the union of our Training and Validation sets. The UD Validation set is our Test2 set, while the UD Test set is our Test1 set.
Total sentences Tokens Avg. sent. length Types Token-type ratio
16448 463069 28.15 42894 10.80
Table 2: Data statistics.

5.2 Features

The corpora provide POS and morphological tags for each word token in the context of the sentence in which it appears. Table 3 shows the 52 tags that we use, which we treat as binary features. In addition, we select the $K$ most frequent word types appearing in the entire corpus, and we use additional binary features which identify whether a given word is identical to one of these most frequent words, or whether it is outside this set. In total, we then use $52 + K + 1$ binary features.

POS:ADJ Case:Abl PronType:Dem
POS:ADP Case:Acc PronType:Ind
POS:ADV Case:Nom PronType:Int
POS:AUX Case:Voc PronType:Neg
POS:CONJ Definite:Def PronType:Prs
POS:DET Definite:Ind PronType:Rel
POS:INTJ Degree:Cmp Reflex:Yes
POS:NIL Gender:Fem Tense:Fut
POS:NOUN Gender:Masc Tense:Imp
POS:NUM Gender:Neut Tense:Past
POS:PART Mood:Cnd Tense:Pres
POS:PRON Mood:Imp VerbForm:Fin
POS:PROPN Mood:Ind VerbForm:Inf
POS:PUNCT Mood:Sub VerbForm:Part
POS:SCONJ Number:Plur
POS:SYM Number:Sing
POS:VERB Person:1
POS:X Person:2
POS:_ Person:3
Table 3: POS and Morphological features.

We collect all the word types appearing in the entire corpus and we associate with each a binary vector of size 52 which is the boolean union of the binary vectors associated with all the tokens of that type. In the case of an ambiguous word, the binary vector may have ones on several POS simultaneously. (Thus, for instance, the vector currently associated with the word le has ones not only on the DET (determiner) and PRON (pronoun) features, but also on the PROPN (proper noun) feature, due to the appearance of the name Le in the corpus.) Thus, here, we basically use the corpus as a proxy for a morphological analyser of French, and we do not use the contextual information provided by the token-level tags.

5.3 Models

In these experiments, we use a finite word vocabulary $V$ consisting of the 42894 types found in the entire corpus (including the validation and test sets). We then compare our LL-RNN with a vanilla RNN, both over this vocabulary. Thus neither model has unknown words. Both models are implemented in Keras (Chollet, 2015) over a Theano (Theano Development Team, 2016) backend.

The baseline RNN uses one-hot encodings for the words in $V$, and consists of an embedding layer of dimension 256, followed by two LSTM (Hochreiter and Schmidhuber, 1997) layers of dimension 256, followed by a dense layer and finally a softmax layer, both of dimension $|V|$. The LSTM sequence length for predicting the next word is fixed at 8 words. SGD is done through rmsprop and the learning rate is fixed at 0.001.

The LL-RNN has the same architecture and parameters, but for the following differences. First, the direct embedding of the input words is replaced by an embedding of dimension 256 of the representation of the words in the space of the features (that is, the input feature vector $\psi$ has the dimensionality of the full feature set). This is followed by the same two LSTMs as before, both of dimension 256. This is now followed by a dense layer whose output dimension is the number of features (the weights over the output feature vector $\phi$, here identical to $\psi$). This layer is then transformed in a deterministic way into a probability distribution over $V$, after incorporation of a fixed background probability distribution $b$ over $V$. This background has been precomputed as the unigram probability distribution of the word types over the entire corpus. (We thus also use the test corpora to estimate these unigram probabilities. This is because the background requires some estimate of the probability of all the words it may encounter, not only in training but also in test. However, this method is only a proxy for a proper estimate of the background over all possible words, which we leave to future development. We note that, similarly, for the baseline RNN, we need to know beforehand all the words it may encounter; otherwise we would have to resort to the UNK category, which we did not want to do, in order to be able to make a direct comparison of perplexities.)
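For concreteness, here is a hedged sketch of how the LL-RNN just described could be assembled, written against a recent Keras functional API rather than the authors' original code; the feature matrix Phi, the background unigram vector and all dimensions are placeholders (reduce them for a quick test).

```python
import numpy as np
from keras import backend as K
from keras.layers import Input, LSTM, Dense, Lambda, TimeDistributed
from keras.models import Model

SEQ_LEN, N_FEAT, V_SIZE, HID = 8, 2553, 42894, 256     # placeholder dimensions

# Precomputed constants: output-feature matrix Phi (|V| x d) and unigram background over V (placeholders).
Phi_np = np.random.binomial(1, 0.001, (V_SIZE, N_FEAT)).astype("float32")
bg_np = np.full(V_SIZE, 1.0 / V_SIZE, dtype="float32")
Phi, log_bg = K.constant(Phi_np), K.constant(np.log(bg_np))

inputs = Input(shape=(SEQ_LEN, N_FEAT))                # sequence of input feature vectors psi(x_t)
h = TimeDistributed(Dense(256))(inputs)                # 256-dim "embedding" of the feature representation
h = LSTM(HID, return_sequences=True)(h)
h = LSTM(HID)(h)                                       # keep only the last hidden state
a = Dense(N_FEAT)(h)                                   # adaptor vector a_{t+1}: one weight per output feature

def ll_layer(a):
    # Equation (13): softmax over log b(x) + <a, phi(x)> for every x in V.
    return K.softmax(K.dot(a, K.transpose(Phi)) + log_bg)

probs = Lambda(ll_layer, output_shape=(V_SIZE,))(a)
model = Model(inputs, probs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
model.summary()
```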

5.4 Results

Table 4 shows the perplexity results we obtain for different configurations of the LL-RNN as compared to the baseline RNN. We use a notation such as LL-RNN (2500) to indicate the value $K = 2500$ for the number of frequent word types considered as features. For each model, we stopped the training after the validation loss (not shown) had not improved for three epochs.

Training Test1 Test2
RNN 4.17 6.07 6.15
LL-RNN (10000) 4.68 5.17 5.17
LL-RNN (5000) 4.33 5.13 5.12
LL-RNN (3000) 4.65 5.11 5.09
LL-RNN (2500) 4.74 5.08 5.07
LL-RNN (2000) 4.77 5.11 5.10
LL-RNN (1000) 4.84 5.13 5.13
LL-RNN (500) 4.93 5.20 5.18
LL-RNN (10) 5.30 5.45 5.44
Table 4: Log-perplexities per word (base $e$) of different models. The perplexity per word corresponding to a log-perplexity of 5.08 (resp. 6.07) is 161 (resp. 433).

We observe a considerable improvement of perplexity between the baseline and all the LL-RNN models, the largest one being for $K = 2500$, where the perplexity is divided by a factor of about 2.7, with some tendency of the models to degrade when $K$ becomes either very large or very small.

An initial, informal, qualitative look at the sentences generated by the RNN model on the one hand and by the best LL-RNN model on the other hand, seems to indicate a much better ability of the LL-RNN to account for agreement in gender and number at moderate distances (see Table 5), but a proper evaluation has not yet been performed.

elle TOPFORM:elle, POS:PRON, POS:PROPN, Gender:Fem, Number:Sing, Person:3, PronType:Prs
est TOPFORM:est, POS:ADJ, POS:AUX, POS:NOUN, POS:PROPN, POS:SCONJ, POS:VERB, POS:X, Gender:Fem, Gender:Masc, Mood:Ind, Number:Sing, Person:3, Tense:Pres, VerbForm:Fin
très TOPFORM:très, POS:ADV
souvent TOPFORM:souvent, POS:ADV
représentée TOPFORM:@notTop, POS:VERB, Gender:Fem, Number:Sing, Tense:Past, VerbForm:Part
en TOPFORM:en, POS:ADP, POS:ADV, POS:PRON, Person:3
réaction TOPFORM: réaction, POS:NOUN, Gender:Fem, Number:Sing
à TOPFORM:à, POS:ADP, POS:AUX, POS:NOUN, POS:VERB, Mood:Ind, Number:Sing, Person:3, Tense:Pres, VerbForm:Fin
l’ TOPFORM:l’, POS:DET, POS:PART, POS:PRON, POS:PROPN, Definite:Def, Gender:Fem, Gender:Masc, Number:Sing, Person:3, PronType:Prs
image TOPFORM:image, POS:NOUN, Gender:Fem, Number:Sing
de TOPFORM:de, POS:ADP, POS:DET, POS:PROPN, POS:X, Definite:Ind, Gender:Fem, Gender:Masc, Number:Plur, Number:Sing, PronType:Dem
la TOPFORM:la, POS:ADV, POS:DET, POS:NOUN, POS:PRON, POS:PROPN, POS:X, Definite:Def, Gender:Fem, Gender:Masc, Number:Sing, Person:3, PronType:Prs
république TOPFORM:république, POS:NOUN, POS:PROPN, Gender:Fem, Number:Sing
en TOPFORM:en, POS:ADP, POS:ADV, POS:PRON, Person:3
1999 TOPFORM:1999, POS:NUM
. TOPFORM:., POS:PUNCT
Table 5: Example of a sentence generated by LL-RNN (2500). The right column shows the non-null features for each word. Note that représentée, which is not among the 2500 most frequent words (TOPFORM:@notTop), has proper agreement (Gender:Fem, Number:Sing) with the distant pronoun elle. (As a side remark, we observe some flawed features, due to a small number of gold-annotation errors, such as the fact that à appears both with the correct POS:ADP (adposition, a generic term covering prepositions) and with some impossible POS's (AUX, NOUN, VERB). We have not attempted to filter out these (relatively rare) gold-annotation mistakes, but doing so could only improve the results.)

6 Discussion

LL-RNNs simply extend RNNs by replacing the softmax parametrization of the output with a log-linear one, but this elementary move has two major consequences.

The first consequence is that two elements of $V$, rather than being individuals without connections, can now share attributes. This is a fundamental property for linguistics, where classical approaches represent words as combinations of "linguistic features", such as POS, lemma, number, gender, case, tense, aspect, register, etc. With the standard RNN softmax approach, two words that differ on even a single dimension have to be predicted independently, which can only be done effectively in the presence of large training sets. In the LL-RNN approach, by associating different features $\phi_j$ with the different linguistic "features", the model can learn to predict a plural number based on observation of plural numbers, an accusative based on the observation of accusatives, and so on, and then predict word forms that are combinations that have never been observed in the training data. We saw an example of this phenomenon in the experiments of section 5. (Similar observations have been made in the quite different "factored" model recently proposed by García-Martínez et al. (2016).) If the linguistic features encompass semantic classes (possibly provided by Wordnet, or else by semantically-oriented embeddings), then generalizations become possible over these semantic classes also. By contrast, in the softmax case, not only are the models deficient in the presence of sparse training data for word forms, but they also require wasting capacity of the RNN parameters on mapping to the large vectors that are needed to discriminate between the many elements of $V$; with LL-based RNNs, the parametrization can in principle be smaller, because fewer features need to be specified to obtain word-level predictions.

The second consequence is that we can exploit rich prior knowledge through the input features $\psi$, the background $b$, and the output features $\phi$. We already gave some illustrations of incorporating prior knowledge in this way, but there are many other possibilities. For example, in a dialogue application that requires some answer utterances to contain numerical data that can only be obtained by access to a knowledge base, a certain binary "expert feature" could take the value 1 if and only if the candidate token is either a non-number word or the specific number obtained by some (more or less complex) process exploiting the context in conjunction with the knowledge base. In combination with a background $b$ and other features in $\phi$, which would be responsible for the linguistic quality of the answer utterance, this expert feature, when activated, would ensure that if a number is produced at this point, it is equal to the number provided by the knowledge base, but it would not try to decide at exactly which point a number should be produced (this is better left to the "language specialists": $b$ and the other features). Whether the expert feature is activated would be decided by the RNN: a large value of the corresponding coordinate of $a_{t+1}$ would activate the feature, a small (close to null) value would deactivate it. (The idea is reminiscent of the approach of Le et al. (2016), who use LSTM-based mixtures of experts for a similar purpose; the big difference is that here, instead of using a linear mixture, we use a "log-linear mixture", i.e. our features are combined multiplicatively rather than additively, with exponents given by the RNN; that is, the features are "collaborating", while in their approach the experts are "competing": their knowledge-base expert needs to decide on its own at which exact point it should produce the number, rather than relying on the linguistic specialist to do it.)

This "multiplicative" aspect of the LL-RNNs can be related to the product of experts introduced by Hinton (2002). However, in his case, the focus is on learning the individual experts, which are then combined through a direct product, not involving exponentiations, and therefore not in the log-linear class. In our case, the focus is on exploiting predefined experts (or features), but on letting a "controlling" RNN decide about their exponents.

We conclude with a remark concerning the complementarity of the log-linear component and the neural network component in the LL-RNN approach. On its own, as has been amply demonstrated in recent years, a standard softmax-based RNN is already quite powerful. On its own, a stand-alone log-linear model is also quite powerful, as older research demonstrated. Roughly, the difference between a log-linear model and an LL-RNN model is that in the first, the log-linear weights ($\theta$ in our notation) are fixed after training, while in the LL-RNN they vary dynamically under the control of the neural network component. (Note how a standard log-linear model with oneHot features over $V$ would not make sense: with $\theta$ fixed, it would always predict the same distribution for the next word. By contrast, an LL-RNN over the same features does make sense: it is a standard RNN. Standard log-linear models have to employ more interesting features.) However, the strengths of the two classes of models lie in different areas. The log-linear model is very good at exploiting prior knowledge in the form of complex features, but it has no ability to discover new combinations of features. On the other hand, the RNN is very good at discovering which combinations of characteristics of its input are predictive of the output (representation learning), but it is ill-equipped for exploiting prior knowledge. We argue that the LL-RNN approach is a way to capitalize on these complementary qualities.

Acknowledgments

We thank Salah Ait-Mokhtar, Matthias Gallé, Claire Gardent, Éric Gaussier, Raghav Goyal and Florent Perronnin for discussions at various stages of this research.

References

  • Bahdanau et al. (2015) Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. ICLR, pages 1–15.
  • Berger et al. (1996) Berger, A. L., Della Pietra, S. A., and Della Pietra, V. J. (1996). A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71.
  • Cho et al. (2014) Cho, K., van Merrienboer, B., Bahdanau, D., and Bengio, Y. (2014). On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111.
  • Chollet (2015) Chollet, F. (2015). Keras. https://github.com/fchollet/keras.
  • Elkan (2008) Elkan, C. (2008). Log-linear models and conditional random fields. Tutorial notes at CIKM, 8:1–12.
  • García-Martínez et al. (2016) García-Martínez, M., Barrault, L., and Bougares, F. (2016). Factored Neural Machine Translation. arXiv :1609.04621.
  • Gers et al. (2000) Gers, F. A., Schmidhuber, J. A., and Cummins, F. A. (2000). Learning to forget: Continual prediction with lstm. Neural Comput., 12(10):2451–2471.
  • Goodfellow et al. (2016) Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. Book in preparation for MIT Press.
  • Goyal et al. (2016) Goyal, R., Dymetman, M., and Gaussier, E. (2016). Natural Language Generation through Character-based RNNs with Finite-State Prior Knowledge. In Proc. COLING, Osaka, Japan.
  • Graves (2012) Graves, A. (2012). Supervised sequence labelling with recurrent neural networks, volume 385. Springer.
  • Hastie et al. (2001) Hastie, T., Tibshirani, R., and Friedman, J. (2001). The Elements of Statistical Learning. Springer Series in Statistics. Springer New York Inc., New York, NY, USA.
  • Hinton (2002) Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800.
  • Hochreiter and Schmidhuber (1997) Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8):1735–1780.
  • Jaynes (1957) Jaynes, E. T. (1957). Information theory and statistical mechanics. Phys. Rev., 106:620–630.
  • Jean et al. (2015) Jean, S., Cho, K., Memisevic, R., and Bengio, Y. (2015). On Using Very Large Target Vocabulary for Neural Machine Translation. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 000:1–10.
  • Jebara (2013) Jebara, T. (2013). Log-Linear Models, Logistic Regression and Conditional Random Fields. Lecture notes: www.cs.columbia.edu/~jebara/6772/notes/notes4.pdf.
  • Jordan (20XX) Jordan, M. (20XX). The exponential family: Basics. Lecture notes: http://people.eecs.berkeley.edu/~jordan/courses/260-spring10/other-readings/chapter8.pdf.
  • Lafferty et al. (2001) Lafferty, J. D., McCallum, A., and Pereira, F. C. N. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
  • Le et al. (2016) Le, P., Dymetman, M., and Renders, J.-M. (2016). LSTM-based mixture-of-experts for knowledge-aware dialogues. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 94–99, Berlin, Germany. Association for Computational Linguistics.
  • Ling et al. (2015) Ling, W., Trancoso, I., Dyer, C., and Black, A. W. (2015). Character-based Neural Machine Translation. ICLR’16, pages 1–11.
  • Luong and Manning (2016) Luong, M.-T. and Manning, C. D. (2016). Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models. arXiv:1604.00788v2.
  • Luong et al. (2014) Luong, M.-T., Sutskever, I., Le, Q. V., Vinyals, O., and Zaremba, W. (2014). Addressing the Rare Word Problem in Neural Machine Translation. arXiv:1410.8206v3.
  • Mikolov et al. (2010) Mikolov, T., Karafiat, M., Burget, L., Cernocky, J., and Khudanpur, S. (2010). Recurrent Neural Network based Language Model. Interspeech, (September):1045–1048.
  • Morin and Bengio (2005) Morin, F. and Bengio, Y. (2005). Hierarchical probabilistic neural network language model. Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, pages 246–252.
  • Nielsen and Garcia (2009) Nielsen, F. and Garcia, V. (2009). Statistical exponential families: A digest with flash cards. arXiv:0911.4863.
  • Och and Ney (2002) Och, F. J. and Ney, H. (2002). Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 295–302, Stroudsburg, PA, USA. Association for Computational Linguistics.
  • Rosenfeld (1996) Rosenfeld, R. (1996). A maximum entropy approach to adaptive statistical language modelling. Computer Speech & Language, 10(3):187–228.
  • Sutskever et al. (2011) Sutskever, I., Martens, J., and Hinton, G. (2011). Generating Text with Recurrent Neural Networks. Neural Networks, 131(1):1017–1024.
  • Sutskever et al. (2014) Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112.
  • Theano Development Team (2016) Theano Development Team (2016). Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688.
  • Vinyals et al. (2014) Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I., and Hinton, G. (2014). Grammar as a Foreign Language. arXiv: 1412.7449v3.
  • Vinyals and Le (2015) Vinyals, O. and Le, Q. (2015). A neural conversational model. arXiv:1506.05869.
  • Wen et al. (2015) Wen, T., Gasic, M., Mrksic, N., Su, P., Vandyke, D., and Young, S. J. (2015). Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1711–1721.
  • Xiao et al. (2016) Xiao, C., Dymetman, M., and Gardent, C. (2016). Sequence-based structured prediction for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1341–1350, Berlin, Germany. Association for Computational Linguistics.