1 Introduction
Recurrent Neural Networks (Goodfellow et al., 2016, Chapter 10) have recently shown remarkable success in sequential data prediction and have been applied to such NLP tasks as Language Modelling (Mikolov et al., 2010), Machine Translation (Sutskever et al., 2014; Bahdanau et al., 2015), Parsing (Vinyals et al., 2014), Natural Language Generation (Wen et al., 2015) and Dialogue (Vinyals and Le, 2015), to name only a few. Especially popular RNN architectures in these applications have been models able to exploit long-distance correlations, such as LSTMs (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) and GRUs (Cho et al., 2014), which have led to groundbreaking performances.

RNNs (or, more generally, Neural Networks) are, at the core, machines that take as input a real vector and output a real vector, through a combination of linear and nonlinear operations.
When working with symbolic data, some conversion between these real vectors and discrete values, for instance words in a certain vocabulary, becomes necessary. However, most RNNs have taken an oversimplified view of this mapping. In particular, for converting output vectors into distributions over symbolic values, the mapping has mostly been done through a softmax operation, which assumes that the RNN is able to compute a real value for each individual member of the vocabulary, and then converts this value into a probability through a direct exponentiation followed by a normalization.
This rather crude “softmax approach”, which implies that the output vector has the same dimensionality as the vocabulary, has had some serious consequences.
To focus on only one symptomatic defect of this approach, consider the following. When using words as symbols, even large vocabularies cannot account for all the actual words found either in training or in test, and the models need to resort to a catch-all "unknown" symbol UNK, which provides poor support for prediction and needs to be supplemented by diverse pre- and post-processing steps (Luong et al., 2014; Jean et al., 2015). Even for words inside the vocabulary, unless they have been witnessed many times in the training data, prediction tends to be poor, because each word is an "island", completely distinct from and without relation to other words, which needs to be predicted individually.
One partial solution to the above problem consists in changing the granularity by moving from word to character symbols (Sutskever et al., 2011; Ling et al., 2015). This has the benefit that the vocabulary becomes much smaller, and that all the characters can be observed many times in the training data. While character-based RNNs thus have some advantages over word-based ones, they also tend to produce non-words and to require longer prediction chains than words, so the jury is still out, with emerging hybrid architectures that attempt to capitalize on both levels (Luong and Manning, 2016).
Here, we propose a different approach, which removes the constraint that the dimensionality of the RNN output vector has to be equal to the size of the vocabulary and allows generalization across related words. However, its crucial benefit is that it introduces a principled and powerful way of incorporating prior knowledge inside the models.
The approach involves a very direct and natural extension of the softmax, by considering it as a special case of a conditional exponential family, a class of models better known as log-linear models and widely used in "pre-NN" NLP. We argue that this simple extension of the softmax allows the resulting "log-linear RNN" to compound the aptitude of log-linear models for exploiting prior knowledge and predefined features with the aptitude of RNNs for discovering complex new combinations of predictive traits.
2 Log-Linear RNNs
2.1 Generic RNNs
Let us first briefly recap the generic notion of RNN, abstracting away from different styles of implementation (LSTM (Hochreiter and Schmidhuber, 1997; Graves, 2012), GRU (Cho et al., 2014; Bahdanau et al., 2015), different numbers of layers, etc.).

An RNN is a generative process for predicting a sequence of symbols x_1, ..., x_T, where the symbols are taken in some vocabulary V, and where the prediction can be conditioned by a certain observed context C. This generative process can be written as:

    p_θ(x_1, ..., x_T | C),

where θ is a real-valued parameter vector (we will sometimes write this as p_θ(x_1, ..., x_T ; C) to stress the difference between the "context" C and the prefix x_1, ..., x_{t-1}; note that some RNNs are "non-conditional", i.e. do not exploit a context C). Generically, the conditional probability p_θ(x_t | x_1, ..., x_{t-1}, C) is computed according to:
    h_t = f_θ(h_{t-1}, x_{t-1}, C)    (1)
    a_t = g_θ(h_t)    (2)
    p_θ(x_t = v | x_1, ..., x_{t-1}, C) = (1/Z) e^{a_t(v)}    (3)
    x_t ~ p_θ(· | x_1, ..., x_{t-1}, C)    (4)

Here h_{t-1} is the hidden state at the previous step t-1, x_{t-1} is the output symbol produced at that step, and f_θ is a neural-network-based function (e.g. an LSTM network) that computes the next hidden state h_t based on h_{t-1}, x_{t-1}, and C. The function g_θ (we do not distinguish between the parameters for f and for g, and write θ for both) is then typically computed through an MLP, which returns a real-valued vector a_t of dimension |V|. This vector is then normalized into a probability distribution over V through the softmax transformation of equation (3), with the normalization factor:

    Z = Σ_{v' ∈ V} e^{a_t(v')},

and finally the next symbol x_t is sampled from this distribution. See Figure 1.
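Concretely, one generation step of equations (1)-(4) can be sketched in NumPy; this is only an illustrative toy, with a tanh update standing in for the LSTM cell and randomly initialized parameters (all names here are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 5, 8                          # vocabulary size |V|, hidden size

# toy parameters standing in for theta
W_h = rng.normal(scale=0.1, size=(H, H))
W_x = rng.normal(scale=0.1, size=(H, V))
W_out = rng.normal(scale=0.1, size=(V, H))

def f(h_prev, x_prev):
    """Toy state update; a real RNN would use an LSTM/GRU cell here."""
    return np.tanh(W_h @ h_prev + W_x @ x_prev)

def softmax(a):
    e = np.exp(a - a.max())          # shift for numerical stability
    return e / e.sum()               # divide by the normalization factor Z

h = np.zeros(H)
x = np.zeros(V); x[0] = 1.0          # start symbol, one-hot
for _ in range(3):
    h = f(h, x)                      # eq. (1)
    a = W_out @ h                    # eq. (2): one real value per vocabulary item
    p = softmax(a)                   # eq. (3)
    k = rng.choice(V, p=p)           # eq. (4): sample the next symbol
    x = np.zeros(V); x[k] = 1.0
```

Note that the output layer W_out is forced to have exactly |V| rows: this is the dimensionality constraint that the log-linear extension below removes.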
Training of such a model is typically done through backpropagation of the cross-entropy loss:

    ℓ_t(θ) = -log p_θ(x_t | x_1, ..., x_{t-1}, C),

where x_t is the actual symbol observed in the training set.
2.2 Log-Linear models
Definition
Log-linear models play a considerable role in statistics and machine learning; special classes are often known under different names depending on the application domain and on various details: exponential families (typically for unconditional versions of the models) (Nielsen and Garcia, 2009), maximum entropy models (Berger et al., 1996; Jaynes, 1957), conditional random fields (Lafferty et al., 2001), binomial and multinomial logistic regression (Hastie et al., 2001, Chapter 4). These models have been especially popular in NLP, for example in Language Modelling (Rosenfeld, 1996), in sequence labelling (Lafferty et al., 2001), and in machine translation (Berger et al., 1996; Och and Ney, 2002), to name only a few.

Here we follow the exposition of Jebara (2013), which is useful for its broad applicability, and which defines a conditional log-linear model (which we could also call a conditional exponential family) as a model of the form (in our own notation):
    p(x | C; λ) = (1/Z(C, λ)) b(x | C) e^{⟨λ, φ(x|C)⟩}    (5)
Let us describe the notation:

- x is a variable in a set X, which we will take here to be discrete (i.e. countable), and sometimes finite (the model is applicable over continuous (measurable) spaces, but to simplify the exposition we will concentrate on the discrete case, which permits us to use sums instead of integrals). We will use the terms domain or vocabulary for this set.

- C is the conditioning variable (also called condition).

- λ is a parameter vector in R^d, which (for reasons that will appear later) we will call the adaptor vector (in the NLP literature, this parameter vector is often denoted by a different symbol).

- φ is a feature function X → R^d; note that we sometimes write φ(x | C) or φ(x ; C) instead of φ(x, C) to stress the fact that C is a condition.

- b is a nonnegative function over X; we will call it the background function of the model (Jebara (2013) calls it the prior of the family).

- Z(C, λ), called the partition function, is a normalization factor:

    Z(C, λ) = Σ_{x ∈ X} b(x | C) e^{⟨λ, φ(x|C)⟩}.
When the context is unambiguous, we will sometimes leave the condition C as well as the parameter vector λ implicit, and also simply write φ(x) instead of φ(x | C); thus we will write:

    p(x) = (1/Z) b(x) e^{⟨λ, φ(x)⟩}    (6)

or more compactly:

    p(x) ∝ b(x) e^{⟨λ, φ(x)⟩}.    (7)
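For a finite domain, equation (6) can be computed directly. A minimal sketch (the outcomes, features, and values are invented for illustration):

```python
import numpy as np

def loglinear(b, Phi, lam):
    """p(x) ∝ b(x) · exp⟨lam, phi(x)⟩ over a finite domain.
    b: (n,) nonnegative background; Phi: (n, d) matrix whose row x is phi(x)."""
    unnorm = b * np.exp(Phi @ lam)
    return unnorm / unnorm.sum()      # divide by the partition function Z

# three outcomes, two features
b = np.array([0.5, 0.3, 0.2])         # a normalized background distribution
Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])

# with a null adaptor vector, the model reduces to the background b
p0 = loglinear(b, Phi, np.zeros(2))

# a nonzero adaptor reweights outcomes through their features
p1 = loglinear(b, Phi, np.array([1.0, -1.0]))
```

The first call illustrates the "background as prior" property discussed next: with λ = 0 and a normalized b, the model is exactly b.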
The background as a “prior”
If in equation (7) the background function b is actually a normalized probability distribution over X (that is, Σ_{x ∈ X} b(x) = 1) and if the parameter vector λ is null, then the distribution p is identical to b.
Suppose that we have an initial belief that the parameter vector should be close to some λ_0; then, by reparametrizing equation (7) in the form:

    p(x) ∝ b'(x) e^{⟨λ', φ(x)⟩}    (8)

with b'(x) = b(x) e^{⟨λ_0, φ(x)⟩} and λ' = λ - λ_0, our initial belief is represented by taking λ' = 0. In other words, we can always assume that our initial belief is represented by the background probability along with a null parameter vector. Deviations from this initial belief are then represented by variations of the parameter vector away from 0, and a simple form of regularization can be obtained by penalizing some norm of this parameter vector. (Contrary to the generality of the presentation by Jebara (2013), many presentations of log-linear models in the NLP context do not make an explicit reference to b, which is then implicitly taken to be uniform. However, the more statistically oriented presentations (Jordan, 20XX; Nielsen and Garcia, 2009) of the strongly related (unconditional) exponential family models do, which makes the mathematics neater and is necessary in presence of non-finite or continuous spaces. One advantage of the explicit introduction of b, even for finite spaces, is that it makes it easier to speak about the prior knowledge we have about the overall process.)
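The reparametrization in equation (8) can be checked numerically: folding a reference vector λ_0 into the background leaves the distribution unchanged (a sketch with invented values):

```python
import numpy as np

b = np.array([0.5, 0.3, 0.2])
Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
lam = np.array([0.7, -0.4])
lam0 = np.array([0.2, 0.1])           # initial belief about the parameter vector

def loglinear(b, Phi, lam):
    u = b * np.exp(Phi @ lam)
    return u / u.sum()

# original parametrization: background b, adaptor lam
p = loglinear(b, Phi, lam)

# reparametrized: b'(x) = b(x)·exp⟨lam0, phi(x)⟩ and lam' = lam - lam0
b_prime = b * np.exp(Phi @ lam0)
p_reparam = loglinear(b_prime, Phi, lam - lam0)
# p and p_reparam are the same distribution
```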
Gradient of crossentropy loss
An important property of log-linear models is that they enjoy an extremely intuitive form for the gradient of their log-likelihood (equivalently, of their cross-entropy loss).

If x is a training instance observed under condition C, and if the current model is p(x | C; λ) according to equation (5), its log-likelihood at x is log p(x | C; λ). Then a simple calculation shows that its gradient (also called the "Fisher score" at x) is given by:

    ∇_λ log p(x | C; λ) = φ(x | C) - E_{x' ~ p(·|C;λ)}[φ(x' | C)].    (9)

In other words, the gradient is minus the difference between the model expectation of the feature vector and its actual value at x. (More generally, if we have a training set consisting of pairs of the form (x_i, C_i), then the gradient of the log-likelihood for this training set is given by the sum of the individual gradients of equation (9).)
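Equation (9) is easy to verify against a finite-difference estimate of the gradient (again an illustrative sketch, reusing the toy model from above):

```python
import numpy as np

b = np.array([0.5, 0.3, 0.2])
Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
lam = np.array([0.7, -0.4])
x = 2                                  # index of the observed outcome

def loglinear(b, Phi, lam):
    u = b * np.exp(Phi @ lam)
    return u / u.sum()

def loglik(lam):
    return np.log(loglinear(b, Phi, lam)[x])

# Fisher score: phi(x) minus the model expectation of the feature vector
p = loglinear(b, Phi, lam)
score = Phi[x] - p @ Phi

# central finite-difference check, coordinate by coordinate
eps = 1e-6
num = np.array([(loglik(lam + eps * np.eye(2)[j]) -
                 loglik(lam - eps * np.eye(2)[j])) / (2 * eps)
                for j in range(2)])
```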
2.3 Log-Linear RNNs
We can now define what we mean by a log-linear RNN. The model, illustrated in Figure 2, is similar to a standard RNN up to two differences:
The first difference is that we allow a more general form of input to the network at each time step; namely, instead of allowing only the latest symbol x_{t-1} to be used as input, along with the condition C, we now allow an arbitrary feature vector ψ(x_1, ..., x_{t-1}, C) to be used as input; this feature vector is of fixed dimensionality D, and we allow it to be computed in an arbitrary (but deterministic) way from the combination of the currently known prefix x_1, ..., x_{t-1} and the context C. This is a relatively minor change, but one that usefully expands the expressive power of the network. We will sometimes call the features ψ the input features.
The second, major, difference is the following. We compute a_t in the same way as previously from h_t; however, after this point, rather than applying a softmax to obtain a distribution over V, we now apply a log-linear model. While for the standard RNN we had:

    p_θ(x_t = v | x_1, ..., x_{t-1}, C) ∝ e^{a_t(v)},

in the LL-RNN we define:

    p_θ(x_t = v | x_1, ..., x_{t-1}, C) ∝ b(v | x_1, ..., x_{t-1}, C) e^{⟨a_t, φ(v | x_1, ..., x_{t-1}, C)⟩}.    (10)
In other words, we assume that we have a priori fixed a certain background function b, where the condition is given by (x_1, ..., x_{t-1}, C), and also defined features φ_1, ..., φ_d, yielding a feature vector φ of fixed dimensionality d. We will sometimes call these features the output features. Note that both the background and the features have access to the context C.

In Figure 2, we have indicated with LL (Log-Linear) the operation (10) that combines a_t with the feature vector φ and the background b to produce the probability distribution over V. We note that, here, a_t is a vector of size d, which may or may not be equal to the size of the vocabulary, by contrast to the case of the softmax of Figure 1.
Overall, the LL-RNN is then computed through the following equations:

    h_t = f_θ(h_{t-1}, ψ(x_1, ..., x_{t-1}, C))    (11)
    a_t = g_θ(h_t)    (12)
    p_θ(x_t = v | x_1, ..., x_{t-1}, C) = (1/Z_t) b(v | x_1, ..., x_{t-1}, C) e^{⟨a_t, φ(v | x_1, ..., x_{t-1}, C)⟩}    (13)
    x_t ~ p_θ(· | x_1, ..., x_{t-1}, C)    (14)
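One LL-RNN prediction step, equations (11)-(14), differs from the standard step only in its output layer. A NumPy sketch, with toy dimensions, random output features, and a tanh update in place of the LSTM (all names and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
H, D_in, d, n_vocab = 8, 6, 4, 10      # hidden size, dim of psi, dim of phi, |V|

W_h = rng.normal(scale=0.1, size=(H, H))
W_psi = rng.normal(scale=0.1, size=(H, D_in))
W_out = rng.normal(scale=0.1, size=(d, H))   # a_t has dimension d, not |V|

Phi = rng.integers(0, 2, size=(n_vocab, d)).astype(float)  # output features phi(v)
b = np.full(n_vocab, 1.0 / n_vocab)                        # background over V

psi = rng.normal(size=D_in)                  # input feature vector psi(x_{1:t-1}, C)
h = np.tanh(W_h @ np.zeros(H) + W_psi @ psi) # eq. (11), toy update
a = W_out @ h                                # eq. (12): adaptor vector a_t in R^d

u = b * np.exp(Phi @ a)                      # eq. (13), unnormalized
p = u / u.sum()                              # normalize by Z_t
x_t = rng.choice(n_vocab, p=p)               # eq. (14)
```

Note that the dense output layer has d = 4 units while the vocabulary has 10 words: the network's output dimensionality is decoupled from |V|.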
For prediction, we now use the combined process (11)-(14), and we train this process, similarly to the RNN case, according to its cross-entropy loss relative to the actually observed symbol x_t:

    ℓ_t(θ) = -log p_θ(x_t | x_1, ..., x_{t-1}, C).    (15)

At training time, in order to use this loss for backpropagation in the RNN, we have to be able to compute its gradient relative to the previous layer, namely ∂ℓ_t/∂a_t. From equation (9), we see that this gradient is given by:

    ∂ℓ_t/∂a_t = E_{v ~ p_t}[φ(v | x_1, ..., x_{t-1}, C)] - φ(x_t | x_1, ..., x_{t-1}, C),    (16)

with p_t = p_θ(· | x_1, ..., x_{t-1}, C).
This equation provides a particularly intuitive formula for the gradient, namely as the difference between the expectation of φ according to the log-linear model with parameters a_t and the observed value φ(x_t). However, this expectation can be difficult to compute. For a finite (and not too large) vocabulary V, the simplest approach is to evaluate the right-hand side of equation (13) for each v, to normalize by the sum to obtain p_t, and to weight each φ(v) accordingly. For standard RNNs (which are special cases of LL-RNNs, see below), this is actually what the simpler approaches to computing the softmax gradient do, but more sophisticated approaches have been proposed, such as employing a "hierarchical softmax" (Morin and Bengio, 2005). In the general case (large or infinite V), the expectation term in (16) needs to be approximated, and different techniques may be employed, some specific to log-linear models (Elkan, 2008; Jebara, 2013), some more generic, such as contrastive divergence (Hinton, 2002) or Importance Sampling; a recent introduction to these generic methods is provided in (Goodfellow et al., 2016, Chapter 18), but, despite its practical importance, we will not pursue this topic further here.

2.4 LL-RNNs generalize RNNs
It is easy to see that LL-RNNs generalize RNNs. Consider a finite vocabulary V, and the |V|-dimensional "one-hot" representation onehot(v) of v ∈ V, relative to a certain fixed ordering of the elements of V. We assume (as we implicitly did in the discussion of standard RNNs) that C is coded through some fixed vector, and we then define:

    ψ(x_1, ..., x_{t-1}, C) = onehot(x_{t-1}) ⊕ C,    (17)

where ⊕ denotes vector concatenation; thus we "forget" about the initial portion of the prefix, and only take into account x_{t-1} and C, encoded in a similar way as in the case of RNNs.

We then define b(v | x_1, ..., x_{t-1}, C) to be uniformly 1 for all v ("uniform background"), and φ to be:

    φ(v | x_1, ..., x_{t-1}, C) = onehot(v).

Neither b nor φ depend on (x_1, ..., x_{t-1}, C), and we have:

    p_θ(x_t = v | x_1, ..., x_{t-1}, C) ∝ e^{⟨a_t, onehot(v)⟩},

in other words:

    p_θ(x_t = v | x_1, ..., x_{t-1}, C) ∝ e^{a_t(v)}.

Thus, we are back to the definition of RNNs in equations (1)-(4). As for the gradient computation of equation (16):
    ∂ℓ_t/∂a_t = E_{v ~ p_t}[onehot(v)] - onehot(x_t),    (18)

it takes the simple form:

    ∂ℓ_t/∂a_t = p_t - onehot(x_t);    (19)

in other words, this gradient is the vector of dimension |V|, with coordinates corresponding to the different elements v of V, where:

    ∂ℓ_t/∂a_t(v) = p_t(v) - 1    if v = x_t,    (20)
    ∂ℓ_t/∂a_t(v) = p_t(v)       for the other v's.    (21)

This corresponds to the computation in the usual softmax case.
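The reduction can be checked directly: with a uniform background and one-hot output features, the LL-RNN output distribution collapses to the usual softmax of a_t (sketch with an arbitrary random score vector):

```python
import numpy as np

n_vocab = 6
a = np.random.default_rng(1).normal(size=n_vocab)  # RNN output, one value per word

# LL-RNN output layer with uniform background and one-hot features
b = np.ones(n_vocab)                   # uniform background (unnormalized is fine)
Phi = np.eye(n_vocab)                  # phi(v) = onehot(v), so <a, phi(v)> = a(v)
u = b * np.exp(Phi @ a)
p_llrnn = u / u.sum()

# standard softmax of the same scores
e = np.exp(a - a.max())
p_softmax = e / e.sum()
# the two distributions coincide
```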
3 A motivating illustration: rare words
We now come back to our starting point in the introduction, the problem of unknown or rare words, and indicate a way to handle this problem with LL-RNNs, which may also help build intuition about these models.
Let us consider some moderately-sized corpus of English sentences, tokenized at the word level, and then consider the vocabulary V, of size 10K, consisting of the 9999 most frequent words occurring in this corpus plus one special symbol UNK used for tokens not among those words ("unknown words").
After replacing the unknown words in the corpus by UNK, we can train a language model for the corpus by training a standard RNN, say of the LSTM type. Note that, if translated into an LL-RNN according to section 2.4, this model has 10K features (9999 features for identity with a specific frequent word, the last one for identity with the symbol UNK), along with a uniform background b.
This model however has some serious shortcomings, in particular:

Suppose that neither of the two tokens Grenoble and 37 belongs to V (i.e. to the 9999 most frequent words of the corpus); then the learnt model cannot distinguish the probabilities of the two test sentences: the cost was 37 euros / the cost was Grenoble euros.

Suppose that several sentences of the form the cost was NN euros appear in the corpus, with NN taking (say) the values 9, 13, 21, all belonging to V, and that on the other hand 15 also belongs to V, but appears in non-cost contexts; then the learnt model cannot give a reasonable probability to the cost was 15 euros, because it is unable to notice the similarity between 15 and the tokens 9, 13, 21.
Let’s see how we can improve the situation by moving to an LL-RNN.
We start by extending V to a much larger finite set of words V', in particular one that includes all the words in the union of the training and test corpora (we will see later that the restriction that V' is finite can be lifted), and we keep b uniform over V'. Concerning the input features ψ, for now we keep them at their standard RNN values (namely, as in (17)). Concerning the output features φ, we keep the 9999 word-identity features that we had, but not the UNK-identity one; however, we do add some new features:

A binary feature φ_number that tells us whether the token can be a number;

A binary feature φ_location that tells us whether the token can be a location, such as a city or a country;

A few binary features φ_adj, φ_noun, φ_verb, ..., covering the main POS's for English tokens. Note that a single word may have several such features firing simultaneously; for instance flies is both a noun and a verb. (Rather than indexing these features by numbers, we use the mnemonic notation φ_number, φ_adj, ..., for obvious reasons of clarity.)

Some other features, covering other important classes of words.
Each of the features has a corresponding weight in a_t that we index in a similar way: a_number, a_location, etc.
Note again that we allow the features to overlap freely, nothing preventing a word from being both a location and an adjective, for example (e.g. Nice in We visited Nice / Nice flowers were seen everywhere), and also from appearing in the 9999 most frequent words. For exposition reasons (i.e. in order to simplify the explanations below), we will suppose that a number N will always fire the feature φ_number, but no other feature, apart from the case where it also belongs to V, in which case it will also fire the word-identity feature that corresponds to it, which we will denote by φ_N, with associated weight a_N.
Why is this model superior to the standard RNN one?
To answer this question, let's consider the encoding φ(N) of N in feature space, when N is a number. There are two slightly different cases to look at:

N does not belong to V. Then we have φ_number(N) = 1, and φ_j(N) = 0 for all other j's.

N belongs to V. Then we have φ_number(N) = φ_N(N) = 1, and φ_j(N) = 0 for all other j's.
Let us now consider the behavior of the LL-RNN during training when, at a certain point, let's say after having observed the prefix the cost was, it is coming to the prediction of the next item x_t, which we assume is actually a number N in the training sample.
We start by assuming that N does not belong to .
Let us consider the current value of the weight vector a_t calculated by the network at this point. According to equation (9), the gradient at N of the log-likelihood (i.e., of minus the cross-entropy loss) relative to a_t is:

    ∇_{a_t} log p_{a_t}(N) = φ(N) - E_{v ~ p_{a_t}}[φ(v)],

where p_{a_t} is the probability distribution associated with the log-linear weights a_t.

In our case the first term φ(N) is a vector that is null everywhere but on the number coordinate, on which it is equal to 1. As for the second term, it can be seen as the model average of the feature vector φ(v) when v is sampled according to p_{a_t}. One can see that this vector has all its coordinates in the interval [0, 1], and in fact strictly between 0 and 1 (this last fact is because, for a vector a_t with finite coordinates, p_{a_t}(v) can never be 0, and also because we are making the mild assumption that no feature is constant over the whole vocabulary; the strict inequalities follow immediately). As a consequence, the gradient is strictly positive on the number coordinate and strictly negative on all the other coordinates. In other words, the backpropagation signal sent to the neural network at this point is that it should modify its parameters in such a way as to increase the a_number weight, and decrease all the other weights in a_t.
A slightly different situation occurs if we assume now that N belongs to V. In that case φ(N) is null everywhere but on its two coordinates number and N, on which it is equal to 1. By the same reasoning as before, we see that the gradient is then strictly positive on the two corresponding coordinates, and strictly negative everywhere else. Thus, the signal sent to the network is to modify its parameters towards increasing the a_number and a_N weights, and decreasing the weights everywhere else.
Overall, on each occurrence of a number in the training set, the network is then learning to increase the weights corresponding to the features firing on this number (either both a_number and a_N, or only a_number, depending on whether N is in V or not), and to decrease the weights for all the other features. This contrasts with the behavior of the previous RNN model, where only in the case N ∈ V did any weight change. This means that, at the end of training, when predicting the word that follows the prefix The cost was, the LL-RNN will have a tendency to produce a weight vector a_t with an especially high weight on a_number, some positive weights on features firing on words that have appeared in similar contexts, and negative weights on features not firing in similar contexts. (If only numbers appeared in the context The cost was, this would mean all "non-numeric" features, but such words as high, expensive, etc. may of course also appear, and their associated features would also receive positive increments.)
Now, to come back to our initial example, let us compare the situation with the two next-word predictions The cost was 37 and The cost was Grenoble. The LL-RNN model predicts the next word v with probability:

    p(v) ∝ b(v) e^{⟨a_t, φ(v)⟩}.

While the prediction 37 fires the feature φ_number, the prediction Grenoble does not fire any of the features that tend to be active in the context of the prefix The cost was, and therefore p(37) ≫ p(Grenoble). This is in stark contrast to the behavior of the original RNN, for which both 37 and Grenoble were indistinguishable unknown words.
We note that, while the model is able to capitalize on the generic notion of number through its feature φ_number, it is also able to learn to privilege certain specific numbers belonging to V if they tend to appear more frequently in certain contexts. A log-linear model has the important advantage of being able to handle redundant features, such as φ_number and φ_37, which both fire on 37. (This property of log-linear models was what permitted a fundamental advance in Statistical Machine Translation beyond the initial limited noisy-channel models, by allowing a freer combination of different assessments of translation quality, without having to bother about overlapping assessments (Berger et al., 1996; Och and Ney, 2002).) Depending on prior expectations about typical texts in the domain being handled, it may then be useful to introduce features distinguishing between different classes of numbers, for instance "small numbers" or "year-like numbers", allowing the LL-RNN to make useful generalizations based on these features. Such features need not be binary; for example, a small-number feature could take values decreasing from 1 to 0, with the higher values reserved for the smaller numbers.
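The contrast between the two predictions can be reproduced in a toy setting: a single number feature is enough to separate 37 from Grenoble, even though neither word-identity feature was ever trained. All feature inventories and weight values below are invented for illustration:

```python
import numpy as np

# output features: [phi_number, phi_location, phi_adj]
features = {
    "37":        np.array([1.0, 0.0, 0.0]),   # fires phi_number only
    "Grenoble":  np.array([0.0, 1.0, 0.0]),   # fires phi_location only
    "expensive": np.array([0.0, 0.0, 1.0]),
}
b = 1.0  # uniform background over the (finite) extended vocabulary

# adaptor vector the network might produce after the prefix "the cost was":
# high weight on phi_number, lower weights on features unseen in this context
a = np.array([2.0, -1.5, 0.5])

u = {w: b * np.exp(a @ phi) for w, phi in features.items()}
Z = sum(u.values())
p = {w: v / Z for w, v in u.items()}   # p("37") far exceeds p("Grenoble")
```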
While our example focussed on the case of numbers, it is clear that our observations equally apply to other features that we mentioned, such as φ_location, which can serve to generalize predictions in such contexts as We are travelling to.
More generally, any features that can support generalization, such as features representing semantic classes (e.g. nodes in the WordNet hierarchy), morphosyntactic classes (lemma, gender, number, etc.) or the like, can be useful.
4 Some potential applications
The extension from softmax to log-linear outputs, while formally simple, opens a significant range of potential applications beyond the handling of rare words. We now briefly sketch a few directions.

A priori constrained sequences

For some applications, the sequences to be generated may have to respect certain a priori constraints. One such case is the approach to semantic parsing of Xiao et al. (2016), where, starting from a natural-language question, an RNN decoder produces a sequential encoding of a logical form, which has to conform to a certain grammar. The model used is implicitly a simple case of LL-RNN, where (in our present terminology) the output feature vector φ remains the usual one-hot, but the background b is no longer uniform: it constrains the generated sequence to conform to the grammar.
Language model adaptation

We saw earlier that by taking b to be uniform and φ to be a one-hot, an LL-RNN is just a standard RNN. The opposite extreme case is obtained by supposing that we already know the exact generative process for producing x from the context C. If we define b to be identical to this true underlying process, then, in order to have the best performance in test, it is sufficient for the adaptor vector a_t to be the null vector, because then, according to (13), the model is equal to the underlying process. The task for the RNN of learning a θ such that a_t is null or close to null is an easy one (just take the higher-level parameter matrices to be null or close to null), and in this case the adaptor has actually nothing to adapt to.

A more interesting, intermediary, case is when b is not too far from the true process. For example, b could be a word-based language model (ngram type, LSTM type, etc.) trained on some large monolingual corpus, while the current focus is on modeling a specific domain for which much less data is available. Then training the RNN-based adaptor a_t on the specific domain data would still be able to rely on b for test words not seen in the specific data, but learn to upweight the prediction of words often seen in these specific data. (For instance, focussing on the simple case of an adaptor over a one-hot φ, as soon as a_t is positive on the coordinate of a certain word, the probability of this word is increased relative to what the background indicates.)

Input features

In a standard RNN, a word is vector-encoded through a one-hot representation both when it is produced as the current output of the network and when it is used as the next input to the network. In section 3, we saw the interest of defining the "output" features φ to go beyond word-identity features (i.e. beyond φ(v) = onehot(v)), but we kept the "input" features ψ as in standard RNNs, namely as in (17). However, let us note an issue there. This usual encoding of the input means that, if the previous word has rarely (or not at all) been seen in the training data, then the network will have few clues to distinguish this word from another rarely observed word (for example the adjective preposterous) when computing h_t in equation (11). The network, in the context of the prefix the cost was, is able to give a reasonable probability to 37 thanks to φ_number. However, when assessing the probability of euros in the context of the prefix the cost was 37, this prefix is not distinguished by the network from the prefix the cost was preposterous, which would not allow euros as the next word. A promising way to solve this problem is to encode the input word through the same features φ as used for the output. This allows the network to "see" that 37 is a number and that preposterous is an adjective, and to compute its hidden state based on this information. We should note, however, that there is no requirement that ψ be tied to φ in general; the point is that we can include in ψ features which can help the network predict the next word.
 Infinite domains

In the example of section 3, the vocabulary V' was large, but finite. This is quite artificial, especially if we want to account for words representing numbers, or words taken in some open-ended set, such as entity names. Let us go back to equation (5) defining log-linear models, and let us ignore the context C for simplicity: p(x) = (1/Z) b(x) e^{⟨λ, φ(x)⟩}, with Z = Σ_{x ∈ X} b(x) e^{⟨λ, φ(x)⟩}. When X is finite, the normalization factor Z is also finite, and therefore the probability is well defined; in particular, it is well defined when b(x) = 1 uniformly. However, when X is (countably) infinite, this is unfortunately not true anymore. For instance, with b(x) = 1 uniformly and with λ = 0, Z is infinite and the probability is undefined. By contrast, let's assume that the background function b is in L1(X), i.e. Σ_{x ∈ X} b(x) < ∞. Let's also suppose that the feature vector is uniformly bounded, that is, all its coordinates satisfy |φ_j(x)| ≤ M for some constant M. Then, for any λ, Z is finite, and therefore the probability is well defined.

Thus standard RNNs, which (implicitly) have a uniform background b, have no way to handle infinite vocabularies, while LL-RNNs, by using a finite-mass b, can. One simple way to ensure that property on tokens representing numbers, for example, is to associate them with a geometric background distribution, decaying fast with their length; a similar treatment can be applied to named entities.
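To make the L1 condition concrete, here is a sketch over an infinite set of "number" tokens indexed by n = 1, 2, 3, ..., under the simplifying assumption of a single bounded feature φ(x) = 1 (the indicator "is a number"); with a geometric background the partition sum converges, and a short truncation already recovers it:

```python
import math

rho = 0.5
lam = 1.7                       # adaptor weight on the single bounded feature
phi = 1.0                       # phi(x) = 1 for every number token

def b(n):
    """Geometric background over tokens 1, 2, 3, ...; sums to 1, so b is in L1."""
    return (1 - rho) * rho ** (n - 1)

# truncated partition sum Z = sum_n b(n) exp(lam * phi); converges because
# the features are bounded and b has finite total mass
Z_trunc = sum(b(n) * math.exp(lam * phi) for n in range(1, 200))

# closed form here: Z = exp(lam) * sum_n b(n) = exp(lam)
Z_exact = math.exp(lam)

# with a uniform background b(n) = 1 instead, every term would equal
# exp(lam) > 0 and the sum over the infinite vocabulary would diverge
```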
Condition-based priming

Many applications of RNNs, such as machine translation (Sutskever et al., 2014) or natural language generation (Wen et al., 2015), depend on a condition C (source sentence, semantic representation, etc.). When translated into LL-RNNs, this condition is taken into account through the input feature vector ψ, see (17), but does not appear in b or φ.
However, there is an opportunity for exploiting the condition inside b or φ. To sketch a simple example, in NLG one may be able to predefine some weak unigram language model for the realization that depends on the semantic input C, for example by constraining named entities that appear in the realization to have some evidence in the input. Such a language model can be usefully represented through the background b, providing a form of "priming" for the combined LL-RNN, helping it to avoid irrelevant tokens.
A similar approach was recently exploited in Goyal et al. (2016), in the context of a character-based seq2seq LSTM for generating utterances from input "dialog acts". In this approach, the background b, formulated as a weighted finite-state automaton over characters, is used both for encouraging the system to generate character strings that correspond to possible dictionary words, and for allowing it to generate strings corresponding to such non-dictionary tokens as named entities, numbers, addresses, and the like, but only when such strings have evidence in the input dialog act.
5 Experiments: French language model using morphological features
5.1 Datasets
Our datasets are based on the annotated French corpora provided by the Universal Dependencies initiative (http://universaldependencies.org, Version 1; French section at http://universaldependencies.org/#fr). These corpora are tagged at the POS level as well as at the dependency level. In our experiments, we only exploit the POS annotations, and we use lowercased versions of the corpora.
Table 1 shows the sentence counts of our different datasets, and Table 2 overall statistics in terms of word tokens and word types.

Table 1: Number of sentences per dataset.
  Training   Validation   Test1   Test2
  13,826     728          298     1596

Table 2: Overall corpus statistics.
  Total sentences   Tokens   Avg. sent. length   Types   Token/type ratio
  16448             463069   28.15               42894   10.80
5.2 Features
The corpora provide POS and morphological tags for each word token in the context of the sentence in which it appears. Table 3 shows the 52 tags that we use, which we treat as binary features. In addition, we select a set of the most frequent word types appearing in the entire corpus, and use additional binary word-identity features which indicate whether a given word is identical to one of these frequent words, or whether it is outside this set. In total, we then use the 52 tag features plus these word-identity features as our binary features.
Table 3: The 52 POS and morphological tags used as binary features.
  POS:ADJ     Case:Abl       PronType:Dem
  POS:ADP     Case:Acc       PronType:Ind
  POS:ADV     Case:Nom       PronType:Int
  POS:AUX     Case:Voc       PronType:Neg
  POS:CONJ    Definite:Def   PronType:Prs
  POS:DET     Definite:Ind   PronType:Rel
  POS:INTJ    Degree:Cmp     Reflex:Yes
  POS:NIL     Gender:Fem     Tense:Fut
  POS:NOUN    Gender:Masc    Tense:Imp
  POS:NUM     Gender:Neut    Tense:Past
  POS:PART    Mood:Cnd       Tense:Pres
  POS:PRON    Mood:Imp       VerbForm:Fin
  POS:PROPN   Mood:Ind       VerbForm:Inf
  POS:PUNCT   Mood:Sub       VerbForm:Part
  POS:SCONJ   Number:Plur
  POS:SYM     Number:Sing
  POS:VERB    Person:1
  POS:X       Person:2
  POS:_       Person:3
We collect all the word types appearing in the entire corpus, and we associate with each a binary vector of size 52 which is the boolean union of the binary tag vectors associated with all the tokens for that type. In case of an ambiguous word, the binary vector may have ones on several POS simultaneously. (Thus, for instance, currently, the vector associated with the word le has ones not only on the DET (determiner) and the PRON (pronoun) features, but also on the PROPN (proper noun) feature, due to the appearance of the name Le in the corpus.) Thus, here, we basically use the corpus as a proxy for a morphological analyser of French, and we do not use the contextual information provided by the token-level tags.
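The type-level vectors are simply the boolean union of the token-level tag vectors; a sketch of this aggregation, with the tag inventory abbreviated and the annotations invented for illustration:

```python
import numpy as np

TAGS = ["POS:DET", "POS:PRON", "POS:PROPN", "POS:NOUN", "Gender:Masc", "Number:Sing"]
IDX = {t: i for i, t in enumerate(TAGS)}

def tag_vector(tags):
    """Binary vector for one annotated token."""
    v = np.zeros(len(TAGS))
    for t in tags:
        v[IDX[t]] = 1.0
    return v

# token-level annotations for occurrences of the (lowercased) type "le"
tokens = [
    ["POS:DET", "Gender:Masc", "Number:Sing"],    # "le chat"
    ["POS:PRON", "Number:Sing"],                  # "je le vois"
    ["POS:PROPN", "Gender:Masc"],                 # the name "Le"
]

# boolean union over all tokens of the type
type_vec = np.zeros(len(TAGS))
for t in tokens:
    type_vec = np.maximum(type_vec, tag_vector(t))
```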
5.3 Models
In these experiments, we use a finite word vocabulary consisting of the 42,894 types found in the entire corpus (including the validation and test sets); thus none of the models has unknown words. We then compare our LL-RNN with a vanilla RNN, both over this vocabulary. Both models are implemented in Keras (Chollet, 2015) over a Theano (Theano Development Team, 2016) backend.
The baseline RNN uses one-hot encodings for the words in the vocabulary, and consists of an embedding layer of dimension 256, followed by two LSTM (Hochreiter and Schmidhuber, 1997) layers of dimension 256, followed by a dense layer and finally a softmax layer, both of dimension equal to the vocabulary size. The LSTM sequence length for predicting the next word is fixed at 8 words. SGD is done through rmsprop and the learning rate is fixed at 0.001.
The LL-RNN has the same architecture and parameters, but for the following differences. First, the direct embedding of the input words is replaced by an embedding of dimension 256 of the representation of the words in the space of the features (that is, the input to the embedding is the binary feature vector of the word). This is followed by the same two LSTMs as before, both of dimension 256. This is now followed by a dense layer whose output dimension equals the number of features (its outputs are the weights over the output feature vector, here identical to the input one). This layer is then transformed in a deterministic way into a probability distribution over the vocabulary, after incorporation of a fixed background probability distribution, precomputed as the unigram distribution of the word types over the entire corpus.
[17] We thus also use the test corpora to estimate these unigram probabilities. This is because the background requires some estimate of the probability of all the words it may encounter, not only in training but also in test. However, this method is only a proxy for a proper estimate of the background over all possible words, which we leave to future development. We note that, similarly, for the baseline RNN, we need to know beforehand all the words it may encounter; otherwise we would have to resort to the UNK category, which we did not want to do, in order to allow a direct comparison of perplexities.
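The deterministic step that turns the dense layer's outputs into a distribution can be sketched in a few lines of numpy (the variable names are ours; this is a sketch of the computation, not the actual implementation):

```python
import numpy as np

def loglinear_distribution(a, psi, background):
    """p(w) proportional to background(w) * exp(a . psi(w)).

    a          : (F,)  feature weights produced by the dense layer
    psi        : (V,F) binary output-feature matrix, one row per word type
    background : (V,)  fixed unigram distribution over the vocabulary
    """
    logits = np.log(background) + psi @ a
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# With all weights at zero, the model falls back on the background:
psi = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
bg = np.array([0.5, 0.3, 0.2])
p0 = loglinear_distribution(np.zeros(2), psi, bg)
```

With nonzero weights, all words sharing an active feature are boosted jointly, which is the source of the generalization behaviour discussed in section 6.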
5.4 Results
Table 4 shows the perplexity results we obtain for different configurations of the LL-RNN, compared to the baseline RNN. We use a notation such as LL-RNN (2500) to indicate the number of frequent word types considered as features. For each model, we stopped the training after the validation loss (not shown) had not improved for three epochs.
                Training  Test-1  Test-2
RNN             4.17      6.07    6.15
LL-RNN (10000)  4.68      5.17    5.17
LL-RNN (5000)   4.33      5.13    5.12
LL-RNN (3000)   4.65      5.11    5.09
LL-RNN (2500)   4.74      5.08    5.07
LL-RNN (2000)   4.77      5.11    5.10
LL-RNN (1000)   4.84      5.13    5.13
LL-RNN (500)    4.93      5.20    5.18
LL-RNN (10)     5.30      5.45    5.44
We observe a considerable improvement in perplexity between the baseline and all the LL-RNN models, the largest being for LL-RNN (2500), where the test perplexity drops from about 6.1 to about 5.1 (a factor of roughly 1.2), with some tendency of the models to degrade when the number of frequent-word features becomes either very large or very small.
An initial, informal, qualitative look at the sentences generated by the RNN model on the one hand and by the best LL-RNN model on the other seems to indicate a much better ability of the LL-RNN to account for agreement in gender and number at moderate distances (see Table 5), but a proper evaluation has not yet been performed.
elle  TOPFORM:elle, POS:PRON, POS:PROPN, Gender:Fem, Number:Sing, Person:3, PronType:Prs 
est  TOPFORM:est, POS:ADJ, POS:AUX, POS:NOUN, POS:PROPN, POS:SCONJ, POS:VERB, POS:X, Gender:Fem, Gender:Masc, Mood:Ind, Number:Sing, Person:3, Tense:Pres, VerbForm:Fin 
très  TOPFORM:très, POS:ADV 
souvent  TOPFORM:souvent, POS:ADV 
représentée  TOPFORM:@notTop, POS:VERB, Gender:Fem, Number:Sing, Tense:Past, VerbForm:Part 
en  TOPFORM:en, POS:ADP, POS:ADV, POS:PRON, Person:3 
réaction  TOPFORM:réaction, POS:NOUN, Gender:Fem, Number:Sing 
à  TOPFORM:à, POS:ADP, POS:AUX, POS:NOUN, POS:VERB, Mood:Ind, Number:Sing, Person:3, Tense:Pres, VerbForm:Fin 
l’  TOPFORM:l’, POS:DET, POS:PART, POS:PRON, POS:PROPN, Definite:Def, Gender:Fem, Gender:Masc, Number:Sing, Person:3, PronType:Prs 
image  TOPFORM:image, POS:NOUN, Gender:Fem, Number:Sing 
de  TOPFORM:de, POS:ADP, POS:DET, POS:PROPN, POS:X, Definite:Ind, Gender:Fem, Gender:Masc, Number:Plur, Number:Sing, PronType:Dem 
la  TOPFORM:la, POS:ADV, POS:DET, POS:NOUN, POS:PRON, POS:PROPN, POS:X, Definite:Def, Gender:Fem, Gender:Masc, Number:Sing, Person:3, PronType:Prs 
république  TOPFORM:république, POS:NOUN, POS:PROPN, Gender:Fem, Number:Sing 
en  TOPFORM:en, POS:ADP, POS:ADV, POS:PRON, Person:3 
1999  TOPFORM:1999, POS:NUM 
.  TOPFORM:., POS:PUNCT 
6 Discussion
LL-RNNs simply extend RNNs by replacing the softmax parametrization of the output with a log-linear one, but this elementary move has two major consequences.
The first consequence is that two elements of the vocabulary, rather than being individuals without connections, can now share attributes. This is a fundamental property for linguistics, where classical approaches represent words as combinations of “linguistic features”, such as POS, lemma, number, gender, case, tense, aspect, register, etc. With the standard RNN softmax approach, two words that differ on even a single dimension have to be predicted independently, which can only be done effectively in the presence of large training sets. In the LL-RNN approach, by associating different model features with the different linguistic “features”, the model can learn to predict a plural number based on observations of plural numbers, an accusative based on observations of accusatives, and so on, and then predict word forms that are combinations never observed in the training data. We saw an example of this phenomenon in the experiments of section 5.
[20] Similar observations have been made in the quite different “factored” model recently proposed by García-Martínez et al. (2016).
If the linguistic features encompass semantic classes (possibly provided by Wordnet, or else by semantically oriented embeddings), then generalizations become possible over these semantic classes as well. By contrast, in the softmax case, not only are the models deficient in the presence of sparse training data for word forms, but they also waste capacity of the RNN parameters on mapping to the large vectors required to discriminate between the many elements of the vocabulary; with LL-based RNNs, the parametrization can in principle be smaller, because fewer features need to be specified to obtain word-level predictions.
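A toy numeric illustration of this generalization (all numbers and forms are invented): a form whose feature combination never occurred in training can still be the model's top prediction, because each of its features received weight from other forms.

```python
import numpy as np

# Hypothetical features: [Tense:Past participle, Gender:Fem, Number:Plur]
PSI = np.array([
    [1, 0, 0],   # représenté    (Masc Sing) - seen in training
    [1, 1, 0],   # représentée   (Fem  Sing) - seen in training
    [1, 1, 1],   # représentées  (Fem  Plur) - combination never seen
], dtype=float)
background = np.full(3, 1.0 / 3.0)
a = np.array([1.0, 2.0, 2.0])   # context calls for feminine and plural

scores = background * np.exp(PSI @ a)
p = scores / scores.sum()
# p.argmax() == 2: the unseen combination wins, since it matches
# both contextual features at once.
```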
The second consequence is that we can exploit rich prior knowledge through the input features, the background, and the output features. We already gave some illustrations of incorporating prior knowledge in this way, but there are many other possibilities. For example, in a dialogue application that requires some answer utterances to contain numerical data that can only be obtained by access to a knowledge base, a certain binary “expert feature” could take the value 1 if and only if the word is either a non-number word or the specific number obtained by some (more or less complex) process exploiting the context in conjunction with the knowledge base. In combination with the background and the other features, which would be responsible for the linguistic quality of the answer utterance, this feature, when activated, would ensure that if a number is produced at this point, it equals the knowledge-base value, but it would not try to decide at exactly which point a number should be produced (this is better left to the “language specialists”: the background and the other features). Whether the feature is activated would be decided by the RNN: a large value of the corresponding coordinate of the weight vector would activate the feature, a small (close to null) value would deactivate it.
[21] The idea is reminiscent of the approach of Le et al. (2016), who use LSTM-based mixtures of experts for a similar purpose; the big difference is that here, instead of using a linear mixture, we use a “log-linear mixture”, i.e. our features are combined multiplicatively rather than additively, with exponents given by the RNN; that is, they are “collaborating”, while in their approach the experts are “competing”: their knowledge-base expert needs to decide on its own at which exact point it should produce the number, rather than relying on the linguistic specialist to do it.
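A sketch of such an expert feature (the vocabulary, scores, and KB value below are all invented): exponentiating the binary feature with a large RNN-produced weight multiplicatively suppresses every number except the knowledge-base one, while a weight near zero leaves the linguistic distribution untouched.

```python
import numpy as np

VOCAB = ["the", "flight", "costs", "euros", "100", "250", "999"]
KB_NUMBER = "250"   # hypothetical value fetched from the knowledge base

# Binary expert feature: 1 for non-number words and for the sanctioned number.
phi = np.array([1.0 if (not w.isdigit() or w == KB_NUMBER) else 0.0
                for w in VOCAB])

# Stand-in for the "linguistic" part of the score (background + other features).
lm_scores = np.array([0.2, 0.1, 0.1, 0.1, 0.2, 0.2, 0.1])

def combine(a_expert):
    """Log-linear combination: score(w) = lm_scores(w) * exp(a_expert * phi(w))."""
    s = lm_scores * np.exp(a_expert * phi)
    return s / s.sum()

p_on = combine(10.0)    # feature activated: the wrong numbers nearly vanish
p_off = combine(0.0)    # feature deactivated: plain linguistic distribution
```

Note that the expert never dictates *where* the number appears; it only rescales the relative probability of number tokens wherever the linguistic component proposes one.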
This “multiplicative” aspect of LL-RNNs can be related to the product of experts introduced by Hinton (2002). However, in his case the focus is on learning the individual experts, which are then combined through a direct product, not involving exponentiations, and therefore not in the log-linear class. In our case, the focus is on exploiting predefined experts (or features), while letting a “controlling” RNN decide their exponents.
We conclude with a remark concerning the complementarity of the log-linear component and the neural network component in the LL-RNN approach. On its own, as has been amply demonstrated in recent years, a standard softmax-based RNN is already quite powerful. On its own, a standalone log-linear model is also quite powerful, as older research demonstrated. Roughly, the difference between a log-linear model and a LL-RNN model is that in the first, the log-linear weights are fixed after training, while in the LL-RNN they vary dynamically under the control of the neural network component.
[22] Note how a standard log-linear model with one-hot features over the vocabulary would not make sense: with fixed weights, it would always predict the same distribution for the next word. By contrast, a LL-RNN over the same features does make sense: it is a standard RNN. Standard log-linear models have to employ more interesting features.
However, the strengths of the two classes of models lie in different areas. The log-linear model is very good at exploiting prior knowledge in the form of complex features, but it has no ability to discover new combinations of features. The RNN, on the other hand, is very good at discovering which combinations of characteristics of its input are predictive of the output (representation learning), but is ill-equipped for exploiting prior knowledge. We argue that the LL-RNN approach is a way to capitalize on these complementary qualities.
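The footnote's point about one-hot features can be made concrete in a few lines (toy vocabulary size and weights, invented for illustration):

```python
import numpy as np

V = 4
psi_onehot = np.eye(V)          # one-hot output features over the vocabulary

def predict(a):
    """Log-linear distribution with one-hot features: an ordinary softmax."""
    s = np.exp(psi_onehot @ a)
    return s / s.sum()

# Standard log-linear model: the weight vector a is FIXED after training,
# so the predicted next-word distribution is the same whatever the history.
a_fixed = np.array([0.5, 1.0, -0.3, 0.0])
p_ctx1 = predict(a_fixed)
p_ctx2 = predict(a_fixed)       # identical, regardless of context

# In an LL-RNN, a varies with the context under control of the network,
# so with one-hot features the model reduces to a standard softmax RNN.
```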
Acknowledgments
We thank Salah Ait-Mokhtar, Matthias Gallé, Claire Gardent, Éric Gaussier, Raghav Goyal and Florent Perronnin for discussions at various stages of this research.
References
 Bahdanau et al. (2015) Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. ICLR, pages 1–15.

 Berger et al. (1996) Berger, A. L., Della Pietra, S. A., and Della Pietra, V. J. (1996). A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71.
 Cho et al. (2014) Cho, K., van Merrienboer, B., Bahdanau, D., and Bengio, Y. (2014). On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111.
 Chollet (2015) Chollet, F. (2015). Keras. https://github.com/fchollet/keras.
 Elkan (2008) Elkan, C. (2008). Log-linear models and conditional random fields. Tutorial notes at CIKM, 8:1–12.
 García-Martínez et al. (2016) García-Martínez, M., Barrault, L., and Bougares, F. (2016). Factored Neural Machine Translation. arXiv:1609.04621.
 Gers et al. (2000) Gers, F. A., Schmidhuber, J. A., and Cummins, F. A. (2000). Learning to forget: Continual prediction with LSTM. Neural Comput., 12(10):2451–2471.
 Goodfellow et al. (2016) Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. Book in preparation for MIT Press.
 Goyal et al. (2016) Goyal, R., Dymetman, M., and Gaussier, E. (2016). Natural Language Generation through Character-based RNNs with Finite-State Prior Knowledge. In Proc. COLING, Osaka, Japan.
 Graves (2012) Graves, A. (2012). Supervised sequence labelling with recurrent neural networks, volume 385. Springer.
 Hastie et al. (2001) Hastie, T., Tibshirani, R., and Friedman, J. (2001). The Elements of Statistical Learning. Springer Series in Statistics. Springer New York Inc., New York, NY, USA.
 Hinton (2002) Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800.
 Hochreiter and Schmidhuber (1997) Hochreiter, S. and Schmidhuber, J. (1997). Long shortterm memory. Neural computation, 9(8):1735–1780.
 Jaynes (1957) Jaynes, E. T. (1957). Information theory and statistical mechanics. Phys. Rev., 106:620–630.
 Jean et al. (2015) Jean, S., Cho, K., Memisevic, R., and Bengio, Y. (2015). On Using Very Large Target Vocabulary for Neural Machine Translation. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 000:1–10.
 Jebara (2013) Jebara, T. (2013). Log-Linear Models, Logistic Regression and Conditional Random Fields. Lecture notes: www.cs.columbia.edu/~jebara/6772/notes/notes4.pdf.
 Jordan (20XX) Jordan, M. (20XX). The exponential family: Basics. Lecture notes: http://people.eecs.berkeley.edu/~jordan/courses/260spring10/otherreadings/chapter8.pdf.
 Lafferty et al. (2001) Lafferty, J. D., McCallum, A., and Pereira, F. C. N. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
 Le et al. (2016) Le, P., Dymetman, M., and Renders, J.-M. (2016). LSTM-based mixture-of-experts for knowledge-aware dialogues. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 94–99, Berlin, Germany. Association for Computational Linguistics.
 Ling et al. (2015) Ling, W., Trancoso, I., Dyer, C., and Black, A. W. (2015). Character-based Neural Machine Translation. ICLR’16, pages 1–11.
 Luong and Manning (2016) Luong, M.-T. and Manning, C. D. (2016). Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models. arXiv:1604.00788v2.
 Luong et al. (2014) Luong, M.-T., Sutskever, I., Le, Q. V., Vinyals, O., and Zaremba, W. (2014). Addressing the Rare Word Problem in Neural Machine Translation. arXiv:1410.8206v3.
 Mikolov et al. (2010) Mikolov, T., Karafiat, M., Burget, L., Cernocky, J., and Khudanpur, S. (2010). Recurrent Neural Network based Language Model. Interspeech, (September):1045–1048.

 Morin and Bengio (2005) Morin, F. and Bengio, Y. (2005). Hierarchical probabilistic neural network language model. Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, pages 246–252.
 Nielsen and Garcia (2009) Nielsen, F. and Garcia, V. (2009). Statistical exponential families: A digest with flash cards. arXiv:0911.4863.
 Och and Ney (2002) Och, F. J. and Ney, H. (2002). Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 295–302, Stroudsburg, PA, USA. Association for Computational Linguistics.
 Rosenfeld (1996) Rosenfeld, R. (1996). A maximum entropy approach to adaptive statistical language modelling. Computer Speech & Language, 10(3):187–228.
 Sutskever et al. (2011) Sutskever, I., Martens, J., and Hinton, G. (2011). Generating Text with Recurrent Neural Networks. Neural Networks, 131(1):1017–1024.
 Sutskever et al. (2014) Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112.
 Theano Development Team (2016) Theano Development Team (2016). Theano: A Python framework for fast computation of mathematical expressions. arXiv eprints, abs/1605.02688.
 Vinyals et al. (2014) Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I., and Hinton, G. (2014). Grammar as a Foreign Language. arXiv:1412.7449v3.
 Vinyals and Le (2015) Vinyals, O. and Le, Q. (2015). A neural conversational model. arXiv:1506.05869.
 Wen et al. (2015) Wen, T., Gasic, M., Mrksic, N., Su, P., Vandyke, D., and Young, S. J. (2015). Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17–21, 2015, pages 1711–1721.
 Xiao et al. (2016) Xiao, C., Dymetman, M., and Gardent, C. (2016). Sequencebased structured prediction for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1341–1350, Berlin, Germany. Association for Computational Linguistics.