Variational Recurrent Auto-Encoder using LSTM encoder/decoder networks
In this paper we propose a model that combines the strengths of RNNs and SGVB: the Variational Recurrent Auto-Encoder (VRAE). Such a model can be used for efficient, large scale unsupervised learning on time series data, mapping the time series data to a latent vector representation. The model is generative, such that data can be generated from samples of the latent space. An important contribution of this work is that the model can make use of unlabeled data in order to facilitate supervised training of RNNs by initialising the weights and network state.
Recurrent Neural Networks (RNNs) exhibit dynamic temporal behaviour which makes them suitable for capturing time dependencies in temporal data. Recently, they have been successfully applied to handwriting recognition (Graves et al., 2009) and music modelling (Boulanger-Lewandowski et al., 2012). In another more recent development, Cho et al. (2014) introduced a new model structure consisting of two RNN networks, an encoder and a decoder. The encoder encodes the input to an intermediate representation which forms the input for the decoder. The resulting model was able to obtain a state-of-the-art BLEU score.
We propose a new RNN model based on Variational Bayes: the Variational Recurrent Auto-Encoder (VRAE). This model is similar to an auto-encoder in the sense that it learns an encoder that maps data to a latent representation, and a decoder that maps the latent representation back to data. However, the Variational Bayesian approach maps the data to a distribution over latent variables. This type of network can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB), introduced last year at ICLR by Kingma & Welling (2013), and our resulting model has similarities to the Variational Auto-Encoder presented in their paper. Combining RNNs with SGVB is partly inspired by the work of Justin Bayer, the first results of which were presented at a NIPS 2014 workshop (Bayer & Osendorfer, 2014).
A VRAE maps time sequences to a latent representation and enables efficient, large scale unsupervised variational learning on time sequences. In addition, a trained VRAE gives a sensible initialisation of the weights and network state of a standard RNN. In general, network states are initialised at zero; however, Pascanu et al. (2013) have shown that the network state is a large factor in explaining the exploding gradients problem. Initialising a standard RNN with weights and a network state obtained from the VRAE will likely make training more efficient, possibly avoiding the exploding gradients problem and enabling better scores.
Stochastic Gradient Variational Bayes (SGVB) is a way to train models in which the data is assumed to be generated using some unobserved continuous random variable $z$. In general, the marginal likelihood is intractable for these models, and sampling-based methods are too computationally expensive even for small datasets. SGVB solves this by approximating the true posterior $p_\theta(z|x)$ with $q_\phi(z|x)$ and then optimizing a lower bound on the log-likelihood. Similar to the nomenclature in Kingma's paper, we call $q_\phi(z|x)$ the encoder and $p_\theta(x|z)$ the decoder.
The log-likelihood of a datapoint $i$ can be written as a sum of the lower bound and the KL divergence term between the true posterior and the approximation, with $\theta$ and $\phi$ the parameters of the model:

$$\log p_\theta(x^{(i)}) = D_{KL}\!\left(q_\phi(z|x^{(i)}) \,\|\, p_\theta(z|x^{(i)})\right) + \mathcal{L}(\theta, \phi; x^{(i)})$$
Since the KL divergence is non-negative, $\mathcal{L}(\theta, \phi; x^{(i)})$ is a lower bound on the log-likelihood. This lower bound can be expressed as:

$$\mathcal{L}(\theta, \phi; x^{(i)}) = -D_{KL}\!\left(q_\phi(z|x^{(i)}) \,\|\, p_\theta(z)\right) + \mathbb{E}_{q_\phi(z|x^{(i)})}\!\left[\log p_\theta(x^{(i)}|z)\right]$$
If we want to optimize this lower bound with gradient ascent, we need gradients with respect to all the parameters. Obtaining the gradients of the encoder is relatively straightforward, but obtaining the gradients of the decoder is not. In order to solve this, Kingma & Welling (2013) introduced the "reparametrization trick", in which they reparametrize the random variable $z \sim q_\phi(z|x)$ as a deterministic variable $z = g_\phi(\epsilon, x)$. In our model the latent variables are univariate Gaussians, so the reparametrization is $z = \mu + \sigma\epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$.
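The reparametrization trick can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's code; the function name and the use of a global generator are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparametrize(mu, log_sigma):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    All randomness lives in eps, so mu and log_sigma enter the
    computation deterministically and gradients can flow through them.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

mu = np.zeros(2)
log_sigma = np.log(np.full(2, 0.5))
z = reparametrize(mu, log_sigma)  # one sample from q(z|x)
```

Averaged over many samples, z has mean mu and standard deviation exp(log_sigma), exactly the Gaussian the encoder parametrizes.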
Modelling the latent variables in this way allows the KL divergence to be integrated analytically, resulting in the following estimator:

$$\mathcal{L}(\theta, \phi; x^{(i)}) \simeq \frac{1}{2}\sum_{j=1}^{J}\left(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\right) + \frac{1}{L}\sum_{l=1}^{L}\log p_\theta\!\left(x^{(i)} \mid z^{(i,l)}\right)$$
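The estimator above can be sketched directly, assuming a Bernoulli decoder for binary data (as with the MIDI piano rolls used later). The function names and the `decode` callback are hypothetical placeholders for a trained decoder:

```python
import numpy as np

def kl_term(mu, log_sigma):
    """Analytic -KL(q(z|x) || N(0, I)) for a diagonal Gaussian q:
    0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)."""
    return 0.5 * np.sum(1.0 + 2.0 * log_sigma - mu**2 - np.exp(2.0 * log_sigma))

def bernoulli_log_lik(x, p):
    """log p(x|z) for binary x under Bernoulli means p."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # avoid log(0)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

def lower_bound(x, mu, log_sigma, decode, n_samples=1, rng=None):
    """Single-datapoint SGVB estimate: analytic KL + Monte Carlo reconstruction."""
    rng = rng or np.random.default_rng(0)
    rec = 0.0
    for _ in range(n_samples):
        z = mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)
        rec += bernoulli_log_lik(x, decode(z))
    return kl_term(mu, log_sigma) + rec / n_samples
```

Note that when the posterior equals the prior (mu = 0, sigma = 1) the KL term vanishes, leaving only the reconstruction term.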
For more details, including an elaborate derivation of this estimator, refer to Kingma & Welling (2013) and the appendix of their paper.
The encoder contains one set of recurrent connections, such that the state $h_t$ is calculated based on the previous state $h_{t-1}$ and on the data $x_t$ of the corresponding time step. The distribution over $z$ is obtained from the last state of the RNN, $h_T$, such that:

$$h_{t} = \tanh\!\left(W^{enc}_h h_{t-1} + W^{enc}_x x_{t} + b^{enc}\right)$$
$$\mu_z = W_\mu h_T + b_\mu, \qquad \log\sigma_z = W_\sigma h_T + b_\sigma$$

where $h_0$ is initialised as a zero vector.
Using the reparametrization trick, $z$ is sampled from this encoding and the initial state of the decoding RNN is computed from $z$ with one set of weights. Hereafter the state is once again updated as in a traditional RNN:

$$h_0 = \tanh\!\left(W^{dec}_z z + b^{dec}_z\right)$$
$$h_{t} = \tanh\!\left(W^{dec}_h h_{t-1} + W^{dec}_x x_{t-1} + b^{dec}\right), \qquad x_t = \mathrm{sigmoid}\!\left(W^{out} h_t + b^{out}\right)$$
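A minimal forward pass through this encoder/decoder pair can be sketched as follows. The weight names and dimensions are our own illustrative choices, and for brevity this sketch generates each output purely from the hidden state rather than also feeding back the previous time step's data:

```python
import numpy as np

rng = np.random.default_rng(1)
x_dim, h_dim, z_dim, T = 49, 16, 2, 50  # pitch dims, hidden, latent, time steps

# Hypothetical parameters, small random init for illustration only.
W_in  = rng.normal(0, 0.1, (h_dim, x_dim))
W_enc = rng.normal(0, 0.1, (h_dim, h_dim))
W_mu  = rng.normal(0, 0.1, (z_dim, h_dim))
W_sig = rng.normal(0, 0.1, (z_dim, h_dim))
W_z   = rng.normal(0, 0.1, (h_dim, z_dim))
W_dec = rng.normal(0, 0.1, (h_dim, h_dim))
W_out = rng.normal(0, 0.1, (x_dim, h_dim))

def encode(x):
    """Run the encoder RNN over x (T, x_dim); the last state parametrizes q(z|x)."""
    h = np.zeros(h_dim)                      # h_0 initialised at zero
    for t in range(x.shape[0]):
        h = np.tanh(W_in @ x[t] + W_enc @ h)
    return W_mu @ h, W_sig @ h               # mu_z, log sigma_z

def decode(z, n_steps):
    """Map z to the initial decoder state, then unroll n_steps outputs."""
    h = np.tanh(W_z @ z)
    out = []
    for _ in range(n_steps):
        out.append(1.0 / (1.0 + np.exp(-(W_out @ h))))  # Bernoulli mean per pitch
        h = np.tanh(W_dec @ h)
    return np.stack(out)

x = (rng.random((T, x_dim)) < 0.1).astype(float)   # toy binary piano roll
mu_z, log_sigma_z = encode(x)
z = mu_z + np.exp(log_sigma_z) * rng.standard_normal(z_dim)
x_hat = decode(z, T)                               # (T, x_dim) reconstruction means
```

Training would plug `mu_z`, `log_sigma_z` and `x_hat` into the lower-bound estimator above and follow its gradient.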
For our experiments we used 8 MIDI files (binary data with one dimension for each pitch) of well-known 80s and 90s video game songs (Tetris, Spongebob Theme Song, Super Mario, Mario Underworld, Mario Underwater, Mariokart 64 Choco Mountain, Pokemon Center and Pokemon Surf), sampled at 20Hz. Upon inspection, only 49 of the 88 dimensions contained a significant amount of notes, so the other dimensions were removed. The songs are divided into short parts, where each part becomes one data point. In order to have an equal number of data points from each song, only the first 520 data points from each song were used.
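Slicing a song's binary piano roll into fixed-length parts can be sketched as below. The function name, the default part length of 50 time steps (used by the first model; the second uses 40 with 50% overlap), and the cap of 520 parts follow the setup described here, but the implementation details are our own:

```python
import numpy as np

def slice_song(roll, part_len=50, overlap=False, max_parts=520):
    """Cut a binary piano roll (time, pitch) into fixed-length data points.

    overlap=False gives disjoint parts; overlap=True starts each part
    halfway through the previous one, so transitions between parts are
    also seen during training.
    """
    step = part_len // 2 if overlap else part_len
    parts = [roll[s:s + part_len]
             for s in range(0, roll.shape[0] - part_len + 1, step)]
    return np.stack(parts[:max_parts])
```

For a 400-step roll and parts of 50 steps, this yields 8 disjoint parts, or 15 parts when each one overlaps the previous by half.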
The choice of optimizer proved vital for making the VRAE learn a useful representation; adaptive gradients and momentum in particular are important. In our experiments we used Adam (Kingma & Ba, 2014), an optimizer inspired by RMSprop that incorporates momentum and a correction factor for the initial zero bias.
Figure 1: On the left, the lower bound of the log-likelihood per datapoint per time step during training; the first 10 epochs were cut off for scale reasons. On the right, the organisation of all data points in latent space: each datapoint is encoded and visualized at the location of the resulting two-dimensional mean of the encoding. "Mario Underworld" (green triangles), "Mario" (red triangles) and "Mariokart" (blue triangles) occupy the most distinct regions.
With a model that has only a two-dimensional latent space, it is possible to show the position of each data point in latent space. The data points are only a few seconds long and therefore cannot capture all the characteristics of a song. Nevertheless, Figure 1 shows some clustering, as certain songs occupy distinct regions in latent space.
A two-dimensional latent space, however, is suboptimal for modelling the underlying distribution of the data. Therefore we also trained a model with twenty latent variables. For this model, we used sequences of 40 time steps with overlap, such that the start of each data point is halfway through the previous data point. This way the model not only learns the individual data points but also the transitions between them, which enables generating music of arbitrary length. As in training the first model, Adam parameters used are and ; the learning rate was and was adjusted to after epochs. The resulting lower bound is shown in Figure 2.
Similar to Figure 1, the organisation of the data in latent space under this model is shown in Figure 2. In order to visualize the twenty-dimensional latent representations we used t-SNE (Van der Maaten & Hinton, 2008).
Given a latent space vector, the decoding part of the trained models can be used for generating data. The first model described in this chapter was trained on non-overlapping sequences of 50 time steps. Therefore, it cannot be expected that generating longer sequences will yield data from the same distribution as the training data. However, since we know for each data point its latent representation in two dimensions and we can inspect their positions (see Figure 1), we use the model to interpolate between parts of different songs. The resulting music, which only lasts for a few seconds, clearly has elements of both parts. The model trained on overlapping data points was used to generate music of 1000 time steps (50 seconds) with various (20-dimensional) latent state vectors. It is possible to obtain latent vectors by encoding a data point, or to sample randomly from latent space. Doing this creates what one might call a "medley" of the songs used for training. A generated sample from a randomly chosen point in latent space is available on YouTube (http://youtu.be/cu1_uJ9qkHA).
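Interpolating between two songs amounts to walking a straight line between their latent vectors and decoding each point along the way. A minimal sketch, where the endpoint vectors are hypothetical encodings of two song parts:

```python
import numpy as np

def interpolate(z_a, z_b, n_steps=9):
    """Linearly interpolate between two latent vectors, endpoints included."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

z_a = np.array([1.5, -0.5])   # e.g. the encoding of a "Mario" part
z_b = np.array([-1.0, 2.0])   # e.g. the encoding of a "Mariokart" part
path = interpolate(z_a, z_b)
# feed each path[i] to the trained decoder to render the transition
```

Each intermediate vector decodes to a sequence that blends characteristics of both endpoints, which is how the few-second transitions described above were produced.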
We have shown that it is possible to train RNNs with SGVB for effective modeling of time sequences. An important difference from earlier, similar approaches is that our model maps time sequences to one latent vector, as opposed to latent state sequences.
A first possible improvement over the current model is dividing each song into as many data points as possible for training (i.e. one datapoint starting at each time step) instead of data points that only have 50% overlap. Another improvement is to reverse the order of the input, such that the first time steps are more strongly related to the latent space than the last time steps. This will likely increase the length of the time dependency that can be captured, which was around 100 time steps with our current approach. Another way to train on longer time sequences is to incorporate the LSTM framework (Hochreiter & Schmidhuber, 1997).
Direct applications of our approach include recognition, denoising and feature extraction. The model can be combined with other (supervised or unsupervised) models for sequential data, for example to improve on current music genre tagging methods, e.g. Sigtia et al. (2014). In addition, this method could complement current methods for supervised training of RNNs by providing initial hidden states.
Cho, Kyunghyun, van Merrienboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014.

Rezende, Danilo J., Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.