1 Introduction
Many tasks in natural language processing, computational biology, and reinforcement learning rely on estimating functions that map sequences of observations to real numbers.
Weighted finite automata (WFA) are finite state machines that allow one to succinctly represent such functions. WFA have been widely used in many fields such as grammatical parsing (mohri1998dynamic), sequence modeling and prediction (cortes2004rational) and bioinformatics (allauzen2008sequence). A probabilistic WFA (PFA) is a WFA satisfying some constraints that computes a probability distribution over strings; PFA are expressively equivalent to
Hidden Markov Models (HMM) (dupont2005links), which have been successfully applied in many tasks such as speech recognition (gales2008application) and human activity recognition (nazabal2015discriminative). Recently, the so-called spectral method has been proposed as an alternative to EM-based algorithms to learn HMM (hsuspectral), WFA (bailly2009grammatical), predictive state representations (boots2011closing), and related models. Compared to EM-based methods, the spectral method has the benefits of providing consistent estimators and reducing computational complexity.

Although WFA have been successfully applied in various areas of machine learning, they are inherently linear models: their computation boils down to the composition of linear maps. Recent positive results in machine learning have shown that models based on composing nonlinear functions are both very expressive and able to capture complex structure in data. For example, by leveraging the expressive power of deep convolutional neural networks in the context of reinforcement learning, agents can be trained to outperform humans in Atari games
(mnih2013playing) or to defeat world-class Go players (silver2016mastering). Deep convolutional networks have also recently led to considerable breakthroughs in computer vision
(krizhevsky2012imagenet), where they showed their ability to disentangle the complex structure of the data by learning a representation which unfolds the original complex feature space (where the data lies on a low-dimensional manifold) into a representation space where the structure has been linearized. It is thus natural to wonder to what extent introducing nonlinearity in WFA could be beneficial. We will show that both of these advantages of nonlinear models, namely their expressiveness and their ability to learn rich representations, can be brought to the classical WFA computational model.

In this paper, we propose a nonlinear WFA model (NLWFA) based on neural networks, along with a learning algorithm. In contrast with WFA, the computation of a NLWFA relies on successive compositions of nonlinear mappings. This model can be seen as an extension of dynamical recognizers (moore1997dynamical)
— which are in some sense a nonlinear extension of deterministic finite automata — to the quantitative setting. In contrast with the training of recurrent neural networks (RNN), our learning algorithm does not rely on backpropagation through time. It is inspired by the spectral learning algorithm for WFA, which can be seen as a two-step process: first find a low-rank factorization of the so-called
Hankel matrix, leading to a natural embedding of the set of words into a low-dimensional vector space, and then perform regression in this representation space to recover the transition matrices. Similarly, our learning algorithm first finds a nonlinear factorization of the Hankel matrix using an autoencoder network, thus learning a rich nonlinear representation of the set of strings, and then performs nonlinear regression using a feedforward network to recover the transition operators in the representation space.
Related work. NLWFA and RNN are closely related: the computation of both relies on the composition of nonlinear mappings directed by a sequence of observations. In this paper, we explore a direction somewhat orthogonal to the recent RNN literature by trying to connect such models back with classical computational models from formal language theory. Such connections have been explored in the past in the non-quantitative setting with dynamical recognizers (moore1997dynamical), whose inference has been studied in e.g. (pollack1991induction). The ability of RNN to learn classes of formal languages has also been investigated, see e.g. (avcu2017subregular) and references therein. It is well known that predictive state representations (PSR) (littman2002predictive) are strongly related to WFA (thon2015links). A nonlinear extension of PSR has been proposed for deterministic controlled dynamical systems in (rudary2004nonlinear). More recently, building upon reproducing kernel Hilbert space embeddings of PSR (boots2013hilbert), nonlinearity has been introduced into PSR using recurrent neural networks (downey2017predictive; venkatraman2017predictive). One of the main differences with these approaches is that our learning algorithm does not rely on backpropagation through time; we instead investigate how the spectral learning method for WFA can be beneficially extended to the nonlinear setting.
2 Preliminaries
We first introduce notions on weighted automata and the spectral learning method.
2.1 Weighted finite automaton
Let $\Sigma^*$ denote the set of strings over a finite alphabet $\Sigma$ and let $\lambda$ be the empty word. A weighted finite automaton (WFA) with $n$ states is a tuple $A = \langle \boldsymbol{\alpha}, \{\mathbf{A}^\sigma\}_{\sigma\in\Sigma}, \boldsymbol{\omega} \rangle$ where $\boldsymbol{\alpha}, \boldsymbol{\omega} \in \mathbb{R}^n$ are the initial and final weight vectors respectively, and $\mathbf{A}^\sigma \in \mathbb{R}^{n\times n}$ is the transition matrix for each symbol $\sigma \in \Sigma$. A WFA $A$ computes a function $f_A : \Sigma^* \to \mathbb{R}$ defined for each word $x = x_1 x_2 \cdots x_k \in \Sigma^*$ by
$$f_A(x) = \boldsymbol{\alpha}^\top \mathbf{A}^{x_1} \mathbf{A}^{x_2} \cdots \mathbf{A}^{x_k} \boldsymbol{\omega}.$$
By letting $\mathbf{A}^x = \mathbf{A}^{x_1}\cdots\mathbf{A}^{x_k}$ for any word $x = x_1\cdots x_k \in \Sigma^*$, we will often use the shorter notation $f_A(x) = \boldsymbol{\alpha}^\top \mathbf{A}^x \boldsymbol{\omega}$. A WFA $A$ with $n$ states is minimal if its number of states is minimal, i.e., any WFA $B$ such that $f_B = f_A$ has at least $n$ states. A function $f : \Sigma^* \to \mathbb{R}$ is recognizable if it can be computed by a WFA. In this case the rank of $f$ is the number of states of a minimal WFA computing $f$. If $f$ is not recognizable we let $\mathrm{rank}(f) = \infty$.
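As a quick illustration of this definition, the product $f_A(x) = \boldsymbol{\alpha}^\top \mathbf{A}^{x_1}\cdots\mathbf{A}^{x_k}\boldsymbol{\omega}$ can be sketched in a few lines of numpy. The one-state automaton below is a toy example (not from the paper) computing $x \mapsto 0.5^{|x|}$:

```python
import numpy as np

def wfa_eval(alpha, transition, omega, word):
    """Evaluate f_A(word) = alpha^T A^{x_1} ... A^{x_k} omega."""
    v = alpha  # running row vector alpha^T A^{x_1 ... x_i}
    for symbol in word:
        v = v @ transition[symbol]
    return float(v @ omega)

# Toy one-state WFA over the alphabet {a}: f(x) = 0.5^{|x|}
alpha = np.array([1.0])
omega = np.array([1.0])
transition = {"a": np.array([[0.5]])}
```

The empty word yields $\boldsymbol{\alpha}^\top\boldsymbol{\omega}$, since the product over symbols is empty.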
2.2 Hankel matrix
The Hankel matrix $\mathbf{H}_f \in \mathbb{R}^{\Sigma^*\times\Sigma^*}$ associated with a function $f : \Sigma^* \to \mathbb{R}$ is the bi-infinite matrix with entries $(\mathbf{H}_f)_{u,v} = f(uv)$ for all words $u, v \in \Sigma^*$. The spectral learning algorithm for WFA relies on the following fundamental relation between the rank of $f$ and the rank of the Hankel matrix (carlyle1971realizations; fliess1974matrices):
Theorem 1.
For any $f : \Sigma^* \to \mathbb{R}$, $\mathrm{rank}(f) = \mathrm{rank}(\mathbf{H}_f)$.
In practice, one deals with finite sub-blocks of the Hankel matrix. Given a basis $B = (\mathcal{P}, \mathcal{S})$, where $\mathcal{P} \subseteq \Sigma^*$ is a set of prefixes and $\mathcal{S} \subseteq \Sigma^*$ is a set of suffixes, we denote the corresponding sub-block of the Hankel matrix by $\mathbf{H} \in \mathbb{R}^{\mathcal{P}\times\mathcal{S}}$. Among all possible bases, we are particularly interested in the ones with the same rank as $\mathbf{H}_f$. We say that a basis is complete if $\mathrm{rank}(\mathbf{H}) = \mathrm{rank}(\mathbf{H}_f)$.
For an arbitrary basis $B = (\mathcal{P}, \mathcal{S})$, we define its p-closure by $B' = (\mathcal{P}', \mathcal{S})$, where $\mathcal{P}' = \mathcal{P} \cup \mathcal{P}\Sigma$. It turns out that a Hankel matrix over a p-closed basis can be partitioned into $|\Sigma| + 1$ blocks of the same size (balle2014spectral):
$$\mathbf{H}^\top = \left[ (\mathbf{H}^\lambda)^\top, (\mathbf{H}^{\sigma_1})^\top, \cdots, (\mathbf{H}^{\sigma_{|\Sigma|}})^\top \right],$$
where for each $\sigma \in \Sigma \cup \{\lambda\}$ the matrix $\mathbf{H}^\sigma \in \mathbb{R}^{\mathcal{P}\times\mathcal{S}}$ is defined by $(\mathbf{H}^\sigma)_{u,v} = f(u\sigma v)$.
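To make the blocks $\mathbf{H}^\sigma$ concrete, the small sketch below (a hypothetical helper, not from the paper) builds finite sub-blocks from black-box access to $f$. For the recognizable function $f(x) = 0.5^{|x|}$, every sub-block has rank 1, consistent with Theorem 1:

```python
import numpy as np

def hankel_block(f, prefixes, suffixes, sigma=""):
    """Finite Hankel sub-block with entries H^sigma_{u,v} = f(u sigma v)."""
    return np.array([[f(u + sigma + v) for v in suffixes] for u in prefixes])

f = lambda x: 0.5 ** len(x)                    # a rank-1 recognizable function
prefixes, suffixes = ["", "a", "aa"], ["", "a"]
H = hankel_block(f, prefixes, suffixes)        # H^lambda
Ha = hankel_block(f, prefixes, suffixes, "a")  # H^a
```

Note that here $\mathbf{H}^a = 0.5\,\mathbf{H}^\lambda$, since appending one extra symbol halves the value of $f$.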
2.3 Spectral learning
It is easy to see that the rank of the Hankel matrix $\mathbf{H}_f$ is upper bounded by the rank of $f$: if $A = \langle \boldsymbol{\alpha}, \{\mathbf{A}^\sigma\}_{\sigma\in\Sigma}, \boldsymbol{\omega} \rangle$ is a WFA with $n$ states computing $f$, then $\mathbf{H}_f$ admits the rank $n$ factorization $\mathbf{H}_f = \mathbf{P}\mathbf{S}$ where the matrices $\mathbf{P} \in \mathbb{R}^{\Sigma^*\times n}$ and $\mathbf{S} \in \mathbb{R}^{n\times\Sigma^*}$ are defined by $\mathbf{P}_{u,:} = \boldsymbol{\alpha}^\top\mathbf{A}^u$ and $\mathbf{S}_{:,v} = \mathbf{A}^v\boldsymbol{\omega}$ for all $u, v \in \Sigma^*$. Moreover, one can check that $\mathbf{H}^\sigma = \mathbf{P}\mathbf{A}^\sigma\mathbf{S}$ for each $\sigma \in \Sigma$. The spectral learning algorithm relies on the nontrivial observation that this construction can be reversed: given any rank $n$ factorization $\mathbf{H} = \mathbf{P}\mathbf{S}$ of a complete sub-block of $\mathbf{H}_f$, the WFA $A = \langle \boldsymbol{\alpha}, \{\mathbf{A}^\sigma\}_{\sigma\in\Sigma}, \boldsymbol{\omega} \rangle$ defined by
$$\boldsymbol{\alpha}^\top = \mathbf{P}_{\lambda,:}, \qquad \boldsymbol{\omega} = \mathbf{S}_{:,\lambda}, \qquad \mathbf{A}^\sigma = \mathbf{P}^+ \mathbf{H}^\sigma \mathbf{S}^+$$
is a minimal WFA computing $f$ (balle2014spectral, Lemma 4.1), where the $\mathbf{H}^\sigma$ for $\sigma \in \Sigma$ denote the finite matrices defined above for a prefix-closed complete basis.
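This recovery procedure can be sketched with an SVD-based rank factorization. The snippet below is a minimal illustration (not the paper's implementation), under the assumption that $\lambda$ indexes the first row and first column of the sub-block, so that $\boldsymbol{\alpha}^\top$ and $\boldsymbol{\omega}$ can be read off $\mathbf{P}$ and $\mathbf{S}$ directly:

```python
import numpy as np

def spectral_learn(H, H_sigmas, rank):
    """Recover a WFA from Hankel blocks via the rank factorization H ~= P S.

    Assumes the empty word indexes row 0 and column 0 of H."""
    U, d, Vt = np.linalg.svd(H, full_matrices=False)
    P = U[:, :rank] * np.sqrt(d[:rank])            # H ~= P @ S
    S = np.sqrt(d[:rank])[:, None] * Vt[:rank, :]
    Pinv, Sinv = np.linalg.pinv(P), np.linalg.pinv(S)
    alpha = P[0, :]                                # alpha^T = P_{lambda,:}
    omega = S[:, 0]                                # omega   = S_{:,lambda}
    A = {s: Pinv @ Hs @ Sinv for s, Hs in H_sigmas.items()}
    return alpha, A, omega

# Hankel blocks of f(x) = 0.5^{|x|} over prefixes/suffixes ("", "a")
H = np.array([[1.0, 0.5], [0.5, 0.25]])
Ha = 0.5 * H                                       # H^a_{u,v} = f(u a v)
alpha, A, omega = spectral_learn(H, {"a": Ha}, rank=1)
```

The SVD sign ambiguity may flip the signs of $\boldsymbol{\alpha}$ and $\boldsymbol{\omega}$, but the signs cancel in the computed products, so the recovered function is unchanged.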
3 Nonlinear Weighted Finite Automata
The WFA model assumes that the transition operators are linear. It is natural to wonder whether this linearity assumption sometimes induces too strong a model bias (e.g. if one tries to learn a function that is not recognizable by a WFA). Moreover, even for recognizable functions, introducing nonlinearity could potentially reduce the number of states needed to represent the function. Consider the following example: given a WFA $A = \langle \boldsymbol{\alpha}, \{\mathbf{A}^\sigma\}_{\sigma\in\Sigma}, \boldsymbol{\omega} \rangle$, the function $f_A^2 : x \mapsto f_A(x)^2$ is recognizable and can be computed by the WFA $\tilde{A} = \langle \boldsymbol{\alpha}\otimes\boldsymbol{\alpha}, \{\mathbf{A}^\sigma\otimes\mathbf{A}^\sigma\}_{\sigma\in\Sigma}, \boldsymbol{\omega}\otimes\boldsymbol{\omega} \rangle$, where $\otimes$ denotes the Kronecker product. One can check that if $\mathrm{rank}(f_A) = n$, then $\mathrm{rank}(f_A^2)$ can be as large as $n^2$, but intuitively the true dimension of the model is $n$ using nonlinearity.^1 These two observations motivate us to introduce nonlinear WFA (NLWFA).

^1 By applying the spectral method on the componentwise square root of the Hankel matrix of $f_A^2$, one would recover the WFA $A$ of rank $n$.
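The squared-function construction can be checked numerically. The sketch below (a toy random WFA, assuming the Kronecker-product construction stated above) verifies that $\langle \boldsymbol{\alpha}\otimes\boldsymbol{\alpha}, \{\mathbf{A}^\sigma\otimes\mathbf{A}^\sigma\}, \boldsymbol{\omega}\otimes\boldsymbol{\omega} \rangle$ computes $f_A(x)^2$; the check relies on the mixed-product property of $\otimes$:

```python
import numpy as np

def wfa_eval(alpha, A, omega, word):
    """Evaluate f_A(word) = alpha^T A^{x_1} ... A^{x_k} omega."""
    v = alpha
    for s in word:
        v = v @ A[s]
    return float(v @ omega)

rng = np.random.default_rng(0)
alpha, omega = rng.standard_normal(2), rng.standard_normal(2)
A = {s: rng.standard_normal((2, 2)) for s in "ab"}

# WFA with n^2 = 4 states computing f_A(x)^2
alpha2, omega2 = np.kron(alpha, alpha), np.kron(omega, omega)
A2 = {s: np.kron(A[s], A[s]) for s in "ab"}
```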
3.1 Definition of NLWFA
We will use a tilde to stress that a function may be nonlinear. We define a NLWFA with $k$ states as a tuple $\tilde{A} = \langle \boldsymbol{\alpha}, \{\tilde{g}^\sigma\}_{\sigma\in\Sigma}, \tilde{\Omega} \rangle$, where $\boldsymbol{\alpha} \in \mathbb{R}^k$ is a vector of initial weights, $\tilde{g}^\sigma : \mathbb{R}^k \to \mathbb{R}^k$ is a transition function for each $\sigma \in \Sigma$ and $\tilde{\Omega} : \mathbb{R}^k \to \mathbb{R}$ is a termination function. A NLWFA $\tilde{A}$ computes a function $f_{\tilde{A}} : \Sigma^* \to \mathbb{R}$ defined by
$$f_{\tilde{A}}(x) = \tilde{\Omega}\big(\tilde{g}^{x_k}(\tilde{g}^{x_{k-1}}(\cdots \tilde{g}^{x_1}(\boldsymbol{\alpha}) \cdots))\big)$$
for any word $x = x_1 x_2 \cdots x_k \in \Sigma^*$. Similarly to the linear case, we will sometimes use the shorthand notation $\tilde{g}^x = \tilde{g}^{x_k} \circ \cdots \circ \tilde{g}^{x_1}$, so that $f_{\tilde{A}}(x) = \tilde{\Omega}(\tilde{g}^x(\boldsymbol{\alpha}))$. This nonlinear model can be seen as a generalization of dynamical recognizers (moore1997dynamical) to the quantitative setting. It is easy to see that one recovers the classical WFA model by restricting the functions $\tilde{g}^\sigma$ and $\tilde{\Omega}$ to be linear. Of course some restrictions on these nonlinear functions have to be imposed in order to control the expressiveness of the model. In this paper, we consider nonlinear functions computed by neural networks.
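In code, the computation of a NLWFA is just function composition. The sketch below uses a hypothetical tanh-based transition map and a linear termination, chosen only for illustration:

```python
import numpy as np

def nlwfa_eval(alpha, g, Omega, word):
    """f(x) = Omega(g^{x_k}( ... g^{x_1}(alpha) ... ))."""
    h = alpha
    for sigma in word:
        h = g[sigma](h)   # compose the per-symbol nonlinear maps
    return float(Omega(h))

# Toy 2-state NLWFA: a tanh transition for 'a' and a linear termination
alpha = np.array([1.0, 0.0])
W = np.array([[0.5, -0.2], [0.3, 0.1]])
g = {"a": lambda h: np.tanh(W @ h)}
Omega = lambda h: h.sum()
```

Replacing `np.tanh(W @ h)` with `W @ h` and `Omega` with a linear functional recovers the classical WFA computation.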
3.2 A representation learning perspective on the spectral algorithm
Our learning algorithm is inspired by the spectral learning method for WFA. In order to give some insights and further motivate our approach, we will first show how the spectral method can be interpreted as a representation learning scheme.
The spectral method can be summarized as a two-stage process consisting of a factorization step and a regression step: first find a low-rank factorization $\mathbf{H} = \mathbf{P}\mathbf{S}$ of the Hankel matrix, and then perform regression to estimate the transition operators $\mathbf{A}^\sigma$.
First focusing on the factorization step, let us observe that one can naturally embed the set of prefixes into the vector space $\mathbb{R}^\mathcal{S}$ by mapping each prefix $u$ to the corresponding row $\mathbf{h}_u = \mathbf{H}_{u,:}$ of the Hankel matrix. However, it is easy to check that this representation is highly redundant when the Hankel matrix is of low rank. In the factorization step of the spectral learning algorithm, the rank factorization $\mathbf{H} = \mathbf{P}\mathbf{S}$ can be seen as finding a low-dimensional representation $\mathbf{P}_{u,:} \in \mathbb{R}^n$ for each prefix $u$, from which the original Hankel representation can be recovered using the linear map $\mathbf{S}$ (indeed $\mathbf{h}_u = \mathbf{P}_{u,:}\mathbf{S}$). We can formalize this encoder-decoder perspective by defining two maps $\psi : \mathbb{R}^\mathcal{S} \to \mathbb{R}^n$ and $\phi : \mathbb{R}^n \to \mathbb{R}^\mathcal{S}$ by $\psi(\mathbf{h}) = \mathbf{h}\mathbf{S}^+$ and $\phi(\mathbf{g}) = \mathbf{g}\mathbf{S}$. One can easily check that $\phi(\psi(\mathbf{h}_u)) = \mathbf{h}_u$, which implies that $\psi(\mathbf{h}_u)$ encodes all the information sufficient to predict the value $f(uv)$ for any suffix $v$ (indeed $f(uv) = (\phi(\psi(\mathbf{h}_u)))_v$).
The regression step of the spectral algorithm consists in recovering the matrices $\mathbf{A}^\sigma$ satisfying $\mathbf{H}^\sigma = \mathbf{P}\mathbf{A}^\sigma\mathbf{S}$. From our encoder-decoder perspective, this can be seen as recovering the compositional mappings $\mathbf{g} \mapsto \mathbf{g}\mathbf{A}^\sigma$ satisfying $\psi(\mathbf{h}_{u\sigma}) = \psi(\mathbf{h}_u)\mathbf{A}^\sigma$ for each $\sigma \in \Sigma$.
It follows from the previous discussion that nonlinearity could be beneficially brought to WFA and into the spectral learning algorithm in two ways: first by using nonlinear methods to perform the factorization of the Hankel matrix, thus discovering a potentially nonlinear embedding of the Hankel representation, and second by allowing the compositional feature maps associated to each symbol to be nonlinear.
4 Learning NLWFA
Introducing nonlinearity can be achieved in several ways. In this paper, we use neural networks due to their ability to discover relevant nonlinear low-dimensional representation spaces and their expressive power as function approximators.
4.1 Nonlinear factorization
Introducing nonlinearity in the factorization step boils down to finding two mappings $\tilde{\psi} : \mathbb{R}^\mathcal{S} \to \mathbb{R}^k$ and $\tilde{\phi} : \mathbb{R}^k \to \mathbb{R}^\mathcal{S}$ such that $\tilde{\phi}(\tilde{\psi}(\mathbf{h}_u)) \simeq \mathbf{h}_u$ for any prefix $u$. Briefly going back to the linear case, one can check that if $\mathbf{H} = \mathbf{P}\mathbf{S}$, then we have $\mathbf{h}_u = \mathbf{h}_u\mathbf{S}^+\mathbf{S}$ for each prefix $u$, implying that the encoder-decoder maps satisfy $\psi : \mathbf{h} \mapsto \mathbf{h}\mathbf{S}^+$ and $\phi : \mathbf{g} \mapsto \mathbf{g}\mathbf{S}$. Thus the factorization step can essentially be interpreted as finding an autoencoder able to project down the Hankel representation to a low-dimensional space while preserving the relevant information captured by $\mathbf{H}$.
How to extend the factorization step to the nonlinear setting should now be clear: by training an autoencoder to learn a low-dimensional representation of the Hankel representations $\mathbf{h}_u$, one will potentially unravel a rich representation of the set of prefixes from which a NLWFA can be recovered.
Let $\tilde{\psi} : \mathbb{R}^\mathcal{S} \to \mathbb{R}^k$ and $\tilde{\phi} : \mathbb{R}^k \to \mathbb{R}^\mathcal{S}$ be the encoder and decoder maps respectively. We will train the autoencoder shown in Figure 1 (left) to achieve
$$\tilde{\phi}(\tilde{\psi}(\mathbf{h}_u)) \simeq \mathbf{h}_u \quad \text{for all prefixes } u.$$
More precisely, the model is trained to map the original Hankel representation $\mathbf{h}_u \in \mathbb{R}^\mathcal{S}$ of each prefix $u$ to a latent representation vector in $\mathbb{R}^k$, where $k \ll |\mathcal{S}|$, and then map this vector back to the original representation $\mathbf{h}_u$. This is achieved by minimizing the reconstruction error (i.e. the distance between the original representation and its reconstruction). Instead of linearly factorizing the Hankel matrix, we use an autoencoder framework consisting of two networks whose hidden layer activation functions are nonlinear.^2
^2 We use a componentwise nonlinear activation function in our experiments.

More precisely, if we denote the nonlinear activation function by $z$, and we let $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}$ be the weight matrices from the left to the right of the neural net shown in Figure 1 (left), the function computed by the autoencoder can be written as
$$\mathbf{h} \mapsto \tilde{\phi}(\tilde{\psi}(\mathbf{h})),$$
where the encoder-decoder functions $\tilde{\psi}$ and $\tilde{\phi}$ are defined by $\tilde{\psi}(\mathbf{x}) = \mathbf{B}\, z(\mathbf{A}\mathbf{x})$ and $\tilde{\phi}(\mathbf{x}) = \mathbf{D}\, z(\mathbf{C}\mathbf{x})$ for vectors $\mathbf{x}$ of the appropriate dimensions.
It is easy to check that if the activation function is the identity, one will exactly recover a rank factorization of the Hankel matrix, thus falling back onto the classical factorization step of the spectral learning algorithm.
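The factorization network can be sketched as follows (the weight shapes and the tanh activation are illustrative assumptions; the paper's exact architecture may differ). With the identity activation, the composition collapses to the linear map $\mathbf{D}\mathbf{C}\mathbf{B}\mathbf{A}$, whose rank is at most $k$, i.e. a linear rank-$k$ factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
dim_S, k = 6, 2                       # |S| suffixes, k latent states (k << |S|)
A = rng.standard_normal((4, dim_S))
B = rng.standard_normal((k, 4))       # encoder  psi(h) = B z(A h)
C = rng.standard_normal((4, k))
D = rng.standard_normal((dim_S, 4))   # decoder  phi(g) = D z(C g)

def autoencode(h, act=np.tanh):
    psi = B @ act(A @ h)              # latent representation in R^k
    return D @ act(C @ psi)           # reconstruction in R^{|S|}
```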
4.2 Nonlinear regression
Given the encoder-decoder maps $\tilde{\psi}$ and $\tilde{\phi}$, we then move on to recovering the transition functions. Recall that we wish to find, for each $\sigma \in \Sigma$, a compositional feature map $\tilde{g}^\sigma$ satisfying $\tilde{g}^\sigma(\tilde{\psi}(\mathbf{h}_u)) \simeq \tilde{\psi}(\mathbf{h}_{u\sigma})$ for all prefixes $u$. Using the encoder map $\tilde{\psi}$ obtained in the factorization step, both the inputs $\tilde{\psi}(\mathbf{h}_u)$ and the targets $\tilde{\psi}(\mathbf{h}_{u\sigma})$ of this regression problem can be computed from the Hankel matrix.
In order to learn these transition maps, we will thus train one neural network for each symbol $\sigma \in \Sigma$ to minimize the following squared error loss function
$$\sum_{u} \left\| \tilde{g}^\sigma(\tilde{\psi}(\mathbf{h}_u)) - \tilde{\psi}(\mathbf{h}_{u\sigma}) \right\|^2.$$
The structure of the simple feedforward network used to learn the transition maps is shown in Figure 1 (right). Letting $\mathbf{U}, \mathbf{V}$ denote its two weight matrices, the function computed by this network can be written as
$$\mathbf{g} \mapsto z(\mathbf{V}\, z(\mathbf{U}\mathbf{g})).$$
We want to point out that both hidden units and output units of this network are nonlinear. Since this network will be trained to map between latent representations computed by the factorization network, the output units of the transition network and the units corresponding to the latent representation in the factorization network should be of the same nature to facilitate the optimization process.
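One transition network and its training objective can be sketched as follows (the matrix names and the tanh activation are illustrative assumptions), matching the requirement that both hidden and output units are nonlinear:

```python
import numpy as np

def transition_net(h, U, V, act=np.tanh):
    """g^sigma(h) = act(V act(U h)): nonlinear hidden AND output units."""
    return act(V @ act(U @ h))

def transition_loss(latent, latent_next, U, V):
    """Squared error between g^sigma(psi(h_u)) and psi(h_{u sigma})."""
    preds = np.array([transition_net(h, U, V) for h in latent])
    return float(np.sum((preds - latent_next) ** 2))

rng = np.random.default_rng(0)
k = 3
U, V = rng.standard_normal((5, k)), rng.standard_normal((k, 5))
latent = rng.standard_normal((8, k))   # stand-ins for psi(h_u) over 8 prefixes
targets = np.array([transition_net(h, U, V) for h in latent])
```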
4.3 Overall learning algorithm
Let $B = (\mathcal{P}, \mathcal{S})$ be a basis of prefixes and suffixes such that $\lambda \in \mathcal{P} \cap \mathcal{S}$. Let $B' = (\mathcal{P}', \mathcal{S})$ be its p-closure (i.e. $\mathcal{P}' = \mathcal{P} \cup \mathcal{P}\Sigma$). For reasons that will be clarified in the next section, we assume that $\mathcal{P}$ is prefix-closed (i.e. for any $u \in \mathcal{P}$, all prefixes of $u$ also belong to $\mathcal{P}$). The first step consists in building the estimate $\hat{\mathbf{H}}$ of the Hankel matrix from the training data (by using e.g. the empirical frequencies in the train set), where the rows of $\hat{\mathbf{H}}$ are indexed by prefixes in $\mathcal{P}'$ and its columns by suffixes in $\mathcal{S}$; for a prefix $u$, we denote by $\hat{\mathbf{h}}_u$ the corresponding row of $\hat{\mathbf{H}}$. The learning algorithm for NLWFA then consists of two steps:

1. Train the factorization network to obtain a nonlinear decomposition of the Hankel matrix through the mappings $\tilde{\psi}$ and $\tilde{\phi}$ satisfying
$$\tilde{\phi}(\tilde{\psi}(\hat{\mathbf{h}}_u)) \simeq \hat{\mathbf{h}}_u \quad \text{for all } u \in \mathcal{P}'. \qquad (1)$$

2. Train one transition network for each symbol $\sigma \in \Sigma$ to learn the transition map $\tilde{g}^\sigma$ satisfying
$$\tilde{g}^\sigma(\tilde{\psi}(\hat{\mathbf{h}}_u)) \simeq \tilde{\psi}(\hat{\mathbf{h}}_{u\sigma}) \quad \text{for all } u \in \mathcal{P}. \qquad (2)$$
The resulting NLWFA is then given by $\tilde{A} = \langle \boldsymbol{\alpha}, \{\tilde{g}^\sigma\}_{\sigma\in\Sigma}, \tilde{\Omega} \rangle$, where $\boldsymbol{\alpha} = \tilde{\psi}(\hat{\mathbf{h}}_\lambda)$ and $\tilde{\Omega}$ is defined by
$$\tilde{\Omega}(\mathbf{g}) = \tilde{\phi}(\mathbf{g})^\top \mathbf{e}_\lambda,$$
where $\mathbf{e}_\lambda \in \mathbb{R}^{\mathcal{S}}$ is the one-hot encoding of the empty suffix $\lambda$.

4.4 Theoretical analysis
While the definitions of the initial vector $\boldsymbol{\alpha}$ and termination function $\tilde{\Omega}$ given above may seem ad hoc, we will now show that the learning algorithm we derived corresponds to minimizing an error loss function between the estimated Hankel values and the values computed by the NLWFA over all prefixes in $\mathcal{P}'$. Intuitively, this means that our learning algorithm aims at minimizing the empirical squared error loss over the training set. More formally, we show in the following theorem that if both the factorization network and the transition networks are trained to optimality (i.e. they both achieve zero training error), then the resulting NLWFA exactly recovers the values given in the first column of the estimate $\hat{\mathbf{H}}$ of the Hankel matrix.
Theorem 2.
Suppose that Eqs. (1) and (2) hold with equality, i.e. $\tilde{\phi}(\tilde{\psi}(\hat{\mathbf{h}}_u)) = \hat{\mathbf{h}}_u$ for all $u \in \mathcal{P}'$ and $\tilde{g}^\sigma(\tilde{\psi}(\hat{\mathbf{h}}_u)) = \tilde{\psi}(\hat{\mathbf{h}}_{u\sigma})$ for all $u \in \mathcal{P}$, $\sigma \in \Sigma$. Then the NLWFA $\tilde{A}$ defined in Section 4.3 satisfies $f_{\tilde{A}}(u) = \hat{\mathbf{H}}_{u,\lambda}$ for all $u \in \mathcal{P}'$.
Proof.
We first show by induction on the length of a word $u \in \mathcal{P}'$ that $\tilde{g}^u(\boldsymbol{\alpha}) = \tilde{\psi}(\hat{\mathbf{h}}_u)$.
If $|u| = 0$, using the fact that $\boldsymbol{\alpha} = \tilde{\psi}(\hat{\mathbf{h}}_\lambda)$ we have $\tilde{g}^\lambda(\boldsymbol{\alpha}) = \boldsymbol{\alpha} = \tilde{\psi}(\hat{\mathbf{h}}_\lambda)$. Now if $u = u'\sigma$, we can apply the induction hypothesis on $u'$ (since $\mathcal{P}$ is prefix-closed) to obtain $\tilde{g}^u(\boldsymbol{\alpha}) = \tilde{g}^\sigma(\tilde{g}^{u'}(\boldsymbol{\alpha})) = \tilde{g}^\sigma(\tilde{\psi}(\hat{\mathbf{h}}_{u'})) = \tilde{\psi}(\hat{\mathbf{h}}_u)$ by Eq. (2).
To conclude, for any $u \in \mathcal{P}'$ we have $f_{\tilde{A}}(u) = \tilde{\Omega}(\tilde{g}^u(\boldsymbol{\alpha})) = \tilde{\phi}(\tilde{\psi}(\hat{\mathbf{h}}_u))^\top \mathbf{e}_\lambda = \hat{\mathbf{h}}_u^\top \mathbf{e}_\lambda = \hat{\mathbf{H}}_{u,\lambda}$ by Eq. (1).
∎
Intuitively, it follows that the learning algorithm described in Section 4.3 aims at minimizing the following loss function
$$\sum_{u \in \mathcal{P}'} \left( \hat{f}(u) - f_{\tilde{A}}(u) \right)^2,$$
where $\hat{f}(u) = \hat{\mathbf{H}}_{u,\lambda}$ is the estimated value of the target function on the word $u$, and where the NLWFA $\tilde{A}$ is a function of the encoder-decoder maps $\tilde{\psi}, \tilde{\phi}$ and of the transition maps $\tilde{g}^\sigma$ as described in Section 4.3.
Even though Theorem 2 seems to suggest that our learning algorithm is prone to overfitting, this is not the case. Indeed, akin to the linear spectral learning algorithm, the restriction on the number of states of the NLWFA (which corresponds to the size of the latent representation layer in the factorization network) induces regularization and forces the learning process to discriminate between signal and noise (i.e. in practice, the networks will not achieve zero error, due to the bottleneck structure of the factorization network).
4.5 Applying nonlinearity independently in the factorization and transition networks
We have shown that nonlinearity can be introduced into the two steps of our learning algorithm. We can thus consider three variants of this algorithm where we apply nonlinearity either in the factorization step only, in the regression step only, or in both steps. It is easy to check that these three settings correspond to three different NLWFA models depending on whether only the termination function is nonlinear, only the transition functions are nonlinear, or both the termination and transition functions are nonlinear. Indeed, recall that a NLWFA is defined as a tuple $\tilde{A} = \langle \boldsymbol{\alpha}, \{\tilde{g}^\sigma\}_{\sigma\in\Sigma}, \tilde{\Omega} \rangle$. If no nonlinearity is introduced in the factorization network, the termination function will have the form $\tilde{\Omega}(\mathbf{g}) = (\mathbf{D}\mathbf{C}\mathbf{g})^\top \mathbf{e}_\lambda$ (using the notations from the previous sections), which is linear. Similarly, if no nonlinearity is used in the transition networks, the resulting maps $\tilde{g}^\sigma$ will be linear.
One may argue that only applying nonlinearity in the termination function would not lead to an expressive enough model. However, it is worth noting that in this case, after the nonlinear factorization step, even though the transition functions are linear they are operating on a nonlinear feature space. This is similar in spirit to the kernel trick, where a linear model is learned in a feature space resulting from a nonlinear transformation of the initial input space. Moreover, if we go back to the example of the squared function $f_A^2$ for some WFA $A$ with $n$ states (see the beginning of Section 3), even though $f_A^2$ may have rank up to $n^2$, one can easily build a NLWFA with $n$ states computing $f_A^2$ where only the termination function is nonlinear.
5 Experiments
We compare the classical spectral learning algorithm with the three configurations of our neural-net-based NLWFA learning algorithm: applying nonlinearity only in the factorization step (denoted by fac.non), only in the regression step (denoted by tran.non), and in both steps (denoted by both.non). We perform experiments on a grammatical inference task (i.e. learning a distribution over $\Sigma^*$ from samples drawn from this distribution) with both synthetic and real data.
5.1 Metrics
We use two metrics to evaluate the trained models on a test set: Pautomac score and word error rate.

- The Pautomac score was first proposed for the Pautomac challenge (verwer2014pautomac) and is defined by
$$2^{-\sum_{x \in T} p^*(x) \log_2 \hat{p}(x)},$$
where $\hat{p}(x)$ is the normalized probability assigned to $x$ by the learned model and $p^*(x)$ is the normalized true probability (both $\hat{p}$ and $p^*$ are normalized to sum to $1$ over the test set $T$). Since the models returned by both our method and the spectral learning algorithm are not guaranteed to output positive values, and the logarithm of a negative value is not defined, we take the absolute values of all negative outputs.

- The word error rate (WER) measures the percentage of incorrectly predicted symbols when, given each prefix of the strings in the test set, the most likely next symbol is predicted.
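Both metrics can be sketched as follows (a minimal implementation under our reading of the definitions above; `predict_next` is a hypothetical callable returning the model's most likely next symbol for a given prefix):

```python
import numpy as np

def pautomac_score(p_true, p_model):
    """2^{-sum_x p*(x) log2 phat(x)}, both distributions normalized over the test set."""
    p_true = np.asarray(p_true, dtype=float)
    p_model = np.abs(np.asarray(p_model, dtype=float))  # abs of any negative outputs
    p_true, p_model = p_true / p_true.sum(), p_model / p_model.sum()
    return float(2.0 ** (-np.sum(p_true * np.log2(p_model))))

def word_error_rate(strings, predict_next):
    """Fraction of prefixes whose most likely next symbol is mispredicted."""
    errors = total = 0
    for s in strings:
        for i in range(len(s)):           # prefix s[:i], true next symbol s[i]
            total += 1
            errors += predict_next(s[:i]) != s[i]
    return errors / total
```

For a perfect model on a uniform test set of $|T|$ strings, the Pautomac score equals $|T|$, and by Gibbs' inequality any mismatch can only increase it.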
5.2 Synthetic data: probabilistic Dyck language
For the synthetic data experiment, we generate data from a probabilistic Dyck language: we consider the language generated by a probabilistic context-free grammar whose rules rewrite the start symbol $S$ into a pair of matching brackets enclosing $S$, into $SS$, or into the empty word, each with a fixed probability. That is, starting from the symbol $S$, we draw one of the rules according to their probabilities and apply it to transform $S$ into the corresponding right-hand side; this process is repeated until no nonterminal symbol is left. One can check that this distribution generates balanced strings of brackets. It is well known that this distribution cannot be computed by a WFA (since its support is a context-free language that is not regular). However, as a WFA can compute any distribution with finite support, it can model the restriction of this distribution to words of length less than some threshold. By using this distribution for our synthetic experiments, we want to showcase the fact that NLWFA can lead to models with better predictive accuracy when the number of states is limited and that they can better capture the complex structure of this distribution.
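Sampling from such a grammar can be sketched as follows (the bracket symbols and rule probabilities below are hypothetical choices, not the ones used in the paper; the probabilities are chosen subcritical so that derivations terminate with probability 1):

```python
import random

# Hypothetical rules for S: S -> (S) | S S | empty word, with illustrative probabilities
RULES = [("(S)", 0.4), ("SS", 0.2), ("", 0.4)]

def sample_dyck(rng):
    """Rewrite the leftmost nonterminal until no S is left; yields a balanced string."""
    s = "S"
    while "S" in s:
        rhs = rng.choices([r for r, _ in RULES], weights=[p for _, p in RULES])[0]
        s = s.replace("S", rhs, 1)
    return s

rng = random.Random(0)
samples = [sample_dyck(rng) for _ in range(100)]
```

Every sample is a balanced string of brackets: the nesting depth never goes negative and returns to zero at the end.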
In our experiments, we use empirical frequencies in a training data set to estimate the Hankel matrix $\hat{\mathbf{H}}$, where the p-closed basis is obtained by selecting the most frequent prefixes and suffixes in the training data. We first assess the ability of NLWFA to better capture the structure in the data when the number of states is limited, comparing the models over a range of model sizes, where the model size is the number of states of the learned WFA and NLWFA. For the latter, we used a three-hidden-layer structure for the factorization network, and a network with a single hidden layer for the transition maps.^3 We used Adamax (kingma2014adam) with learning rates 0.015 and 0.001 respectively to train these two networks.

^3 These hyperparameters are not finely tuned, thus some optimization might potentially improve the results.
All models are trained on the same training set, and the Pautomac score and WER on a held-out test set are reported in Figures 2 and 3 respectively. For both metrics, we see that NLWFA give better results for small model sizes. While NLWFA and WFA tend to perform similarly on the Pautomac score for larger model sizes, NLWFA clearly outperform WFA in terms of WER in this regime. This shows that including nonlinearity can increase the predictive power of WFA by discovering the underlying nonlinear structure, and can be beneficial when dealing with a small number of states.
We then compared the sample complexity of learning NLWFA and WFA by training the different models on training sets of increasing size. For all models the rank is chosen by cross-validation. In Figure 4 and Figure 5, we show the performance of the four models on a test set, reporting the average and standard deviation over several runs of this experiment. We can see that NLWFA achieve better results on small sample sizes for the Pautomac score and consistently outperform the linear model across all sample sizes for WER. This shows that NLWFA can use the training data more efficiently and, again, that the expressiveness of NLWFA is beneficial to this learning task.

5.3 Real data: Penn Treebank
The Penn Treebank (taylor2003penn) is a well-known benchmark dataset for natural language processing. It consists of approximately 7 million words of part-of-speech tagged text, 3 million words of skeletally parsed text, over 2 million words of text parsed for predicate-argument structure, and 1.6 million words of transcribed spoken text annotated for speech disfluencies. In this experiment, we use a small portion of the Treebank dataset: the character level of English verbs, which was used in the SPICE challenge (balle2017results). This dataset contains 5,987 sentences over an alphabet of 33 symbols as the training set. It also provides two test sets of size 750. We used one of the test sets as a validation set and then tested our models on the other.
For this experiment, the prefixes and suffixes indexing the Hankel matrix have again been selected by taking the most frequent ones in the training data. We used a five-layer factorization network whose middle (bottleneck) layer has as many units as the number of states of the NLWFA. The structure of the transition networks is the same as in the previous experiment. For all models, the rank is selected using the validation set.
In Table 1, we report the results for the two metrics on the test set. We can see that for both metrics, one of the NLWFA models outperforms linear spectral learning. Individually, tran.non gives the best performance for modeling the distribution (i.e. the perplexity metric), while fac.non shows a significant advantage on the prediction task.
             | SP      | Tran.non | Fac.non | Both.non
log(Pauto)^4 | 21.3807 | 12.2571  | 13.8311 | 13.6604
WER          | 0.8033  | 0.8841   | 0.7061  | 0.8334

^4 Since we do not have access to the true probabilities, $p^*$ is estimated using the empirical frequencies in the test set.
6 Discussion
We believe that trying to combine models from formal language theory (such as weighted automata) with models that have recently led to several successes in machine learning (e.g. neural networks) is an exciting and promising line of research, from both the theoretical and practical sides. This work is a first step in this direction: we proposed a novel nonlinear weighted automata model along with a learning algorithm inspired by the spectral learning method for classical WFA. We showed that nonlinearity can be introduced in two ways in WFA, in the termination function or in the transition maps, which directly translates into the two steps of our learning algorithm.
In our experiments, we showed on both synthetic and real data that (i) NLWFA can lead to models with better predictive accuracy than WFA when the number of states is limited, (ii) NLWFA are able to capture the complex underlying structure of challenging languages (such as the Dyck language used in our experiments) and (iii) NLWFA exhibit better sample complexity when learning on data with a complex grammatical structure.
In the future, we intend to investigate further the properties of NLWFA from both the theoretical and experimental perspectives. For the former, one natural question is whether we could obtain learning guarantees for some specific classes of nonlinear functions. Indeed, one of the main advantages of the spectral learning algorithm is that it provides consistent estimators. While it may be difficult to obtain such guarantees when considering functions computed by neural networks, we believe that studying the case of more tractable nonlinear functions (e.g. polynomials) could be very insightful. We also plan on thoroughly investigating connections between NLWFA and RNN. From the practical perspective, we want to first tune the hyperparameters for NLWFA more extensively on the current datasets to potentially improve the results. In addition, we intend to run further experiments on real data and on different kinds of tasks beside language modeling (e.g. classification, regression). Moreover, due to the strong connection between WFA and PSR, it will be very interesting to use NLWFA in the context of reinforcement learning.
It is worth mentioning that the spectral learning algorithm cannot straightforwardly be used to learn functions that are not probability distributions. Indeed, while it makes sense in the probabilistic setting to set to 0 the entries of the Hankel matrix corresponding to words that are not in the training data, it is not clear how to fill these entries when one wants to learn a function that is not a probability distribution, e.g. in a regression task. One way to circumvent this issue is to first use matrix completion techniques to fill these missing entries before performing the low-rank decomposition of the Hankel matrix (balle2012spectral). In contrast, our learning algorithm can directly be applied to this setting by simply adapting the loss function of the factorization network (i.e. simply ignoring the missing entries in the loss function).