1 Introduction
Representation learning remains an outstanding research problem in machine learning and computer vision. Recently there has been rising interest in disentangled representations, in which each component of the learned features refers to a semantically meaningful concept. In the example of video sequence modelling, an ideal disentangled representation would separate time-independent concepts (e.g. the identity of the object in the scene) from dynamical information (e.g. the time-varying position and the orientation or pose of that object). Such disentangled representations would open new efficient ways of compression and style manipulation, among other applications.
Recent work has investigated disentangled representation learning for images within the frameworks of variational autoencoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) and generative adversarial networks (GANs) (Goodfellow et al., 2014). Some of it, e.g. the beta-VAE method (Higgins et al., 2016), proposed new objective functions/training techniques that encourage disentanglement. On the other hand, network architecture designs that directly enforce factored representations have also been explored, e.g. by Siddharth et al. (2017) and Bouchacourt et al. (2017). These two types of approaches are often mixed; e.g. the InfoGAN approach (Chen et al., 2016) partitioned the latent space and added a mutual information regularisation term to the vanilla GAN loss. Mathieu et al. (2016) also partitioned the encoding space into style and content components, and performed adversarial training to encourage datapoints from the same class to have similar content representations but diverse style features.

Less research has been conducted on unsupervised learning of disentangled representations of sequences. For video sequence modelling, Villegas et al. (2017) and Denton & Birodkar (2017) used different networks to encode the content and dynamics information separately, and trained the autoencoders with a combination of reconstruction loss and GAN loss. Structured VAEs (Johnson et al., 2016) and Factorised VAEs (Deng et al., 2017) used hierarchical priors to learn more interpretable latent variables. Hsu et al. (2017) designed a structured VAE in the context of speech recognition; their VAE architecture is trained using a combination of the standard variational lower-bound and a discriminative regulariser to further encourage disentanglement. More related work is discussed in Section 3.
In this paper, we propose a generative model for unsupervised structured sequence modelling, such as video or audio. We show that, in contrast to previous approaches, a disentangled representation can be achieved by careful design of the probabilistic graphical model. In the proposed architecture, we explicitly use a latent variable f to represent content, i.e. information that is invariant through the sequence, and a latent variable z_t associated with each frame to represent dynamical information, such as pose and position. Compared to the previous models mentioned above, which usually predict future frames conditioned on observed sequences, we focus on learning the distribution of the video/audio content and dynamics to enable sequence generation without conditioning. Our model therefore also generalises to unseen sequences, which is confirmed by our experiments. In more detail, our contributions are as follows:

Controlled generation. Our architecture allows us to approximately control for content and dynamics when generating videos. We can generate random dynamics for fixed content, and random content for fixed dynamics. This gives us a controlled way of manipulating a video/audio sequence, such as swapping the identity of moving objects or the voice of a speaker.

Efficient encoding. Our representation is more data efficient than encoding a video frame by frame. By factoring out a separate variable that encodes content, our dynamical latent variables can have smaller dimensions. This may be promising when it comes to end-to-end neural video encoding methods.

We design a new metric that allows us to verify disentanglement of the latent variables, by investigating the stability of an object classifier over time.

We give empirical evidence, based on video data of a physics simulator, that for long sequences, a stochastic transition model generates more realistic dynamics.
The paper is structured as follows. Section 2 introduces the generative model and the problem setting. Section 3 discusses related work. Section 4 presents three experiments on video and speech data. Finally, Section 5 concludes the paper and discusses future research directions.
2 The model
Let x_{1:T} = (x_1, x_2, ..., x_T) denote a high-dimensional sequence, such as a video with T consecutive frames. Also, assume the data distribution of the training sequences is p_D(x_{1:T}). In this paper, we model the observed data with a latent variable model that separates the representation of time-invariant concepts (e.g. object identities) from those of time-varying concepts (e.g. pose information).
Generative model.
Consider the following probabilistic model, which is also visualised in Figure 1:
p_θ(x_{1:T}, z_{1:T}, f) = p(f) ∏_{t=1}^{T} p(z_t | z_{<t}) p_θ(x_t | z_t, f).   (1)

We use the convention that p(z_1 | z_{<1}) = p(z_1). The generation of frame x_t at time t depends on the corresponding latent variables z_t and f; θ denotes the model parameters.
Ideally, f will be capable of modelling global aspects of the whole sequence which are time-invariant, while z_{1:T} will encode time-varying features. This separation may be achieved by choosing the dimensionality of z_t to be small enough, thus reserving z_t only for time-dependent features while compressing everything else into f. In the context of video encoding, z_t would thus encode a "morphing transformation", which describes how a frame at time t-1 is morphed into the frame at time t.
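To make the generative process concrete, the following is a minimal numpy sketch of ancestral sampling from Eq. (1). The linear-Gaussian stand-ins (W_z, W_xz, W_xf) and all dimensionalities are illustrative assumptions, not the paper's trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

F_DIM, Z_DIM, X_DIM, T = 16, 4, 64, 8  # illustrative sizes, not the paper's

# Hypothetical linear stand-ins for the trained transition and decoder networks.
W_z = rng.normal(size=(Z_DIM, Z_DIM)) * 0.5   # p(z_t | z_{t-1}) mean map
W_xz = rng.normal(size=(X_DIM, Z_DIM))        # decoder weights applied to z_t
W_xf = rng.normal(size=(X_DIM, F_DIM))        # decoder weights applied to f

def sample_sequence(f=None):
    """Ancestral sampling from p(f) prod_t p(z_t | z_{<t}) p(x_t | z_t, f)."""
    if f is None:
        f = rng.normal(size=F_DIM)            # f ~ p(f) = N(0, I)
    z = np.zeros(Z_DIM)
    frames = []
    for _ in range(T):
        z = W_z @ z + rng.normal(size=Z_DIM)  # z_t ~ N(W_z z_{t-1}, I)
        frames.append(W_xz @ z + W_xf @ f)    # decoder mean, linear for clarity
    return f, np.stack(frames)

# Fixing f and resampling z_{1:T} yields "same content, new dynamics".
f, x1 = sample_sequence()
_, x2 = sample_sequence(f=f)
```

Fixing f across calls (as in the last two lines) is exactly the mechanism later used for controlled generation.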
Inference models.
We use variational inference to learn an approximate posterior over latent variables given data (Jordan et al., 1999). This involves an approximating distribution q_φ(z_{1:T}, f | x_{1:T}). We train the generative model with the VAE algorithm (Kingma & Welling, 2013):
L(θ, φ) = E_{q_φ(z_{1:T}, f | x_{1:T})}[ log p_θ(x_{1:T} | z_{1:T}, f) ] − KL( q_φ(z_{1:T}, f | x_{1:T}) || p(z_{1:T}, f) ).   (2)
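When both q and the prior are diagonal Gaussians, the KL term in Eq. (2) decomposes over the latent dimensions and has a closed form. The function below is a generic numpy sketch of that closed form, not tied to the paper's exact parameterisation:

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p=0.0, logvar_p=0.0):
    """KL( N(mu_q, exp(logvar_q)) || N(mu_p, exp(logvar_p)) ), summed over dims.

    This closed form is what makes the KL term in the variational lower-bound
    cheap to evaluate when q and the prior are diagonal Gaussians.
    """
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return float(np.sum(kl))
```

For identical distributions the KL is exactly zero, and it grows as the posterior mean or variance drifts from the prior.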
To quantify the effect of the architecture of on the learned generative model, we test with two types of factorisation structures as follows.
The first architecture constructs a factorised distribution

q_φ(z_{1:T}, f | x_{1:T}) = q_φ(f | x_{1:T}) ∏_{t=1}^{T} q_φ(z_t | x_t)   (3)

as the amortised variational distribution. We refer to this as "factorised q" in the experiments section. This factorisation assumes that content features are approximately independent of motion features. Furthermore, note that the distribution over content features is conditioned on the entire time series, whereas the dynamical features are only conditioned on the individual frames.
The second encoder assumes that the variational posterior of z_{1:T} depends on f, and the distribution has the following structure:

q_φ(z_{1:T}, f | x_{1:T}) = q_φ(f | x_{1:T}) q_φ(z_{1:T} | f, x_{1:T}),   (4)

where q_φ(z_{1:T} | f, x_{1:T}) is conditioned on the entire time series. It can be implemented by e.g. a bi-directional LSTM (Graves & Schmidhuber, 2005) conditioned on f, followed by an RNN taking the bi-LSTM hidden states as inputs. We provide a visualisation of the corresponding computation graph in the appendix. This encoder is referred to as "full q". The idea behind the structured approximation is that content may affect dynamics: in video, the shape of objects may be informative about their motion patterns, thus z is conditionally dependent on f. The architectures of the generative model and both encoders are visualised in Figure 1.
Unconditional generation.
After training, one can use the generative model to synthesise video or audio sequences by sampling the latent variables from the prior and decoding them. Furthermore, the proposed generative model allows generation of multiple sequences entailing the same global information (e.g. the same object in a video sequence), simply by fixing f, sampling different z_{1:T}, and generating the observations x_{1:T}. Generating sequences with similar dynamics is done analogously, by fixing z_{1:T} and sampling f from the prior.
Conditional generation.
Together with the encoder, the model also allows conditional generation of sequences. As an example, given a video sequence as reference, one can manipulate the latent variables and generate new sequences preserving either the object identity or the pose/movement information. This is done by conditioning on the z_{1:T} inferred from a given x_{1:T} and randomising f from the prior, or the other way around.
Feature swapping.
One might also want to generate a new video sequence that combines the object identity and the pose information encoded from two different sequences. Given two sequences x^{(1)}_{1:T} and x^{(2)}_{1:T}, the synthesis process first infers the latent variables f^{(1)} and z^{(2)}_{1:T} (for the full q encoder this also requires f^{(2)}), then produces a new sequence by sampling x_t ~ p_θ(x_t | z^{(2)}_t, f^{(1)}). This allows us to control both the content and the dynamics of the generated sequence, which can be applied to e.g. conversion of the speaker's voice in a speech sequence.
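The feature-swapping procedure can be sketched as follows; the linear encoders and decoder are hypothetical stand-ins for the trained networks, and only the wiring (content from one sequence, dynamics from another) reflects the method described above:

```python
import numpy as np

rng = np.random.default_rng(2)
T, X_DIM, F_DIM, Z_DIM = 8, 64, 16, 4

# Hypothetical stand-ins for the trained encoder/decoder networks.
Enc_f = rng.normal(size=(F_DIM, X_DIM)) * 0.1   # content encoder
Enc_z = rng.normal(size=(Z_DIM, X_DIM)) * 0.1   # per-frame dynamics encoder
Dec_z = rng.normal(size=(X_DIM, Z_DIM))
Dec_f = rng.normal(size=(X_DIM, F_DIM))

def swap_features(x_content, x_dynamics):
    """Decode with f from one sequence and z_{1:T} from another."""
    f = Enc_f @ x_content.mean(axis=0)      # f: identity / content
    z = x_dynamics @ Enc_z.T                # z_{1:T}: pose / dynamics
    return z @ Dec_z.T + f @ Dec_f.T        # frame-wise decoding

x1 = rng.normal(size=(T, X_DIM))
x2 = rng.normal(size=(T, X_DIM))
x_swapped = swap_features(x1, x2)           # content of x1, motion of x2
```

Swapping the argument order produces the complementary sequence: the content of x2 animated with the motion of x1.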
3 Related work
Research on learning disentangled representations has mainly focused on two aspects: the training objective and the generative model architecture. Regarding loss function design for VAE models, Higgins et al. (2016) propose the beta-VAE, which scales up the KL term in the variational lower-bound by a coefficient beta > 1 to encourage learning of independent attributes (as the prior is usually factorised). While the beta-VAE has been shown to be effective in learning better representations for natural images and might further improve the performance of our model, we do not test this recipe here, in order to demonstrate that disentanglement can be achieved by careful model design alone.

For sequence modelling, a number of prior publications have extended the VAE to video and speech data (Fabius & van Amersfoort, 2014; Bayer & Osendorfer, 2014; Chung et al., 2015). These models, although able to generate realistic sequences, do not explicitly disentangle the representation of time-invariant and time-dependent information. It is therefore inconvenient for these models to perform tasks such as controlled generation and feature swapping.
For GAN-like models, both Villegas et al. (2017) and Denton & Birodkar (2017) proposed autoencoder architectures for next-frame prediction, with two separate encoders responsible for content and pose information at each time step. While in Villegas et al. (2017) the pose information is extracted from the difference between two consecutive frames, Denton & Birodkar (2017) encoded each frame directly for both pose and content, and further designed a training objective to encourage learning of disentangled representations. On the other hand, Vondrick et al. (2016) used a spatio-temporal convolutional architecture to disentangle a video scene's foreground from its background. Although this approach successfully achieved disentanglement, we note that the time-invariant information in this model is predefined to represent the background, rather than learned from the data automatically. Also, this architecture is suitable for video sequences only, unlike our model, which can be applied to any type of sequential data.
Very recent work (Hsu et al., 2017) introduced the factorised hierarchical variational autoencoder (FHVAE) for unsupervised learning of disentangled representations of speech data. Given a speech sequence that has been partitioned into segments {x^{(n)}}, FHVAE models the joint distribution of the segments and two sets of latent variables z_1 and z_2: each segment x^{(n)} is generated from its own z_1^{(n)} and z_2^{(n)}, and the z_2^{(n)} variables share a hierarchical prior p(z_2 | μ_2) governed by a sequence-level variable μ_2. The authors showed that by having different prior structures for z_1 and z_2, the model encodes speech sequence-level attributes (e.g. pitch of a speaker) with z_2, and other residual information with z_1. A discriminative training objective (see the discussion in Section 4.2) is added to the variational lower-bound, which has been shown to further improve the quality of the disentangled representation. Our model could also benefit from hierarchical prior distributions, e.g. on f, and we leave this investigation to future work.
4 Experiments
We carried out experiments on both video data (Section 4.1) and speech data (Section 4.2). In both setups, we find strong evidence that our model learns an approximately disentangled representation that allows for conditional generation and feature swapping. We further investigated the efficiency of encoding long sequences with a stochastic transition model in Section 4.3. The detailed architectures of the networks used in each experiment are reported in the appendix.
4.1 Video sequence: Sprites
We present an initial test of the proposed VAE architecture on a dataset of video game "sprites", i.e. animated cartoon characters whose clothing, pose, hairstyle, and skin colour we can fully control. This dataset comes from an open-source video game project called Liberated Pixel Cup (http://lpc.opengameart.org/), and has also been considered by Reed et al. (2015) and Mathieu et al. (2016) for image processing experiments. Our experiments show that static attributes such as hair colour and clothing are well preserved over time for randomly generated videos.
Data and preprocessing.
We downloaded and selected the sprite sheets available online (https://github.com/jrconway3/Universal-LPC-spritesheet), and organised them into 4 attribute categories (skin colour, tops, pants and hairstyle) and 9 action categories (walking, casting spells and slashing, each with three viewing angles). To avoid a combinatorial explosion, each attribute category contains 6 possible variants (see Figure 2), leading to 6^4 = 1296 unique characters in total. We used the majority of them for training/validation and held out the rest for testing. Note that we did not use the labels for training the generative model. Instead, the labels on the data frames are used to train a classifier that is later deployed to produce quantitative evaluations of the VAE; see below.
Qualitative analysis.
We start with a qualitative evaluation of our VAE architecture. Figure 3 shows both reconstructed and generated video sequences from our model. Each panel shows three video sequences with time running from left to right. Panel (a) shows parts of the original data from the test set, and (b) shows its reconstruction.
The sequences visualised in panel (c) are generated using z_{1:T} inferred from the data but f sampled from the prior. Hence, the dynamics are imposed by the encoder, but the identity is sampled from the prior. We see that panel (c) reveals the same motion patterns as (a), but different character identities. Conversely, in panel (d) we take the identity from the encoder but sample the dynamics from the prior. Panel (d) reveals the same characters as (a), but different motion patterns.
Panels (e) and (f) focus on feature swapping. In (e), the frames are constructed by encoding the dynamics z_{1:T} from one input sequence and the content f from another. These panels demonstrate that the encoder and the decoder have learned a factored representation for content and pose.
Panels (g) and (h) focus on conditional generation, showing randomly generated sequences that share the same f or the same z_{1:T} sample from the prior. Thus, in panel (g) we see the same character performing different actions, and in (h) different characters performing the same motion. This again illustrates that the prior model disentangles the representation.
Quantitative analysis.
Next we perform quantitative evaluations of the generative model, using a classifier trained on the labelled frames. Empirically, we find that the fully factorised and structured inference networks produce almost identical results here, presumably because in this dataset the object identity and pose information are truly independent. Therefore we only report results for the fully factorised q distribution.
The first evaluation task reconstructs the test sequences with encoded f and randomly sampled z_{1:T} (in the same way as panel (d) in Figure 3 was produced). We then compare the classifier outputs on the original frames and the reconstructed frames. If the character's identity is preserved over time, the classifier should produce identical probability vectors on the data frames and the reconstructed frames (denoted p_data and p_recon, respectively). We evaluate the similarity between the original and reconstructed sequences both in terms of the disagreement of the predicted class labels and the KL-divergence KL(p_recon || p_data). We also compute the two metrics on the action predictions, using reconstructed sequences with randomised f and inferred z_{1:T}. The results in Table 1 indicate that the learned representation is indeed factorised. For example, in the fixed-f generation test, the skin colour of the characters differs between data and reconstruction in only 3.98% of the frame pairs, and for hairstyle the disagreement rate is only 0.06%. The KL-recon metric is also much smaller than KL-random, the KL-divergence computed against predictions on frames reconstructed with fully random latent variables, indicating that our results are significant.
In the second evaluation, we test whether static attributes of generated sequences, such as clothing or hairstyle, are preserved over time. We sample 200 video sequences from the generator, using the same f but different latent dynamics z_{1:T}. We use the trained classifier to predict both the attributes and the action classes for each of the generated frames. Results are shown in Figure 4(a), where we plot the prediction of the classifiers for each frame over time. For example, a trajectory curve in the "skin colour" panel in Figure 4(a) corresponds to the skin colour classification results for the frames of one generated video sequence. We repeat this process 5 times with different samples of f, where each f corresponds to one colour.
It becomes evident that lines with the same colour are clustered together, confirming that f mainly controls the generation of time-invariant attributes. Also, most character attributes are preserved over time; e.g. for the attribute "tops", the trajectories are mostly straight lines. However, some of the attribute trajectories drift away from the majority class. We conjecture that this is due to the mass-covering behaviour of (approximate) maximum likelihood training, which makes the trained model generate characters that do not exist in the dataset. Indeed, the middle row of panel (c) in Figure 3 contains a character with an unseen hairstyle, showing that our model is able to generalise beyond the training set. On the other hand, the sampling process returns sequences with diverse actions, as depicted in the action panel, meaning that f contains little information regarding the video dynamics.
We performed similar tests on sequences generated with shared latent dynamics z_{1:T} but different f, shown in Figure 4(b). The experiment is again repeated 5 times, and trajectories with the same colour encoding correspond to videos generated with the same z_{1:T} (but different f). Here we observe diverse trajectories for the attribute categories. In contrast, the characters' actions are mostly the same. These two test results again indicate that the model has successfully learned disentangled representations of character identities and actions. Interestingly, we observe multi-modality in the action domain for the generated sequences, e.g. the trajectories in the action panel of Figure 4(b) jump between different levels. We also visualise in Figure 5 generated sequences of a "turning" action that is not present in the dataset. This again shows that the trained model generalises to unseen cases.
attributes  disagreement  KL-recon  KL-random
skin colour  3.98%  0.7847  8.8859
pants  1.82%  0.3565  8.9293
tops  0.34%  0.0647  8.9173
hairstyle  0.06%  0.0126  8.9566
action  8.11%  0.9027  13.7510
4.2 Speech data: TIMIT
We also experiment on audio sequence data. Our disentangled representation allows us to convert speaker identities into each other while preserving the content of the speech. We also show that our model supports speaker verification, where we outperform a recent probabilistic baseline model.
Data and preprocessing.
The TIMIT data (Garofolo et al., 1993) contains broadband 16kHz recordings of phonetically-balanced read speech. A total of 6300 utterances (5.4 hours) are presented, with 10 sentences from each of the 630 speakers (70% male and 30% female). We follow Hsu et al. (2017) for data preprocessing: the raw speech waveforms are first split into sub-sequences of 200ms, and then preprocessed with sparse fast Fourier transform to obtain a 200-dimensional log-magnitude spectrum, computed every 10ms. This implies T = 20 for each observed segment x_{1:T}, with each x_t a 200-dimensional feature vector.

Qualitative analysis.
We perform voice conversion experiments to demonstrate the disentanglement of the learned representation. The goal is to convert a male voice to a female voice (and vice versa) while preserving the speech content. Assuming that f has learned to represent the speaker's identity, the conversion is done by first encoding two sequences x^{(1)}_{1:T} and x^{(2)}_{1:T} to obtain the representations (f^{(1)}, z^{(1)}_{1:T}) and (f^{(2)}, z^{(2)}_{1:T}), then constructing the converted sequence by feeding f^{(2)} and z^{(1)}_{1:T} to the decoder. Figure 6 shows the reconstructed spectrograms after this swapping of the f features. We also provide the reconstructed speech waveforms, obtained with the Griffin-Lim algorithm (Griffin & Lim, 1984), in the appendix.
The experiments show that the harmonics of the converted speech sequences shift to higher frequencies in the "male to female" test, and vice versa. Also, the pitch (the red arrow in Figure 6, indicating the fundamental frequency, i.e. the first harmonic) of the converted sequence (b) is close to the pitch of (c), and likewise for the comparison between (d) and (a). An informal listening test on the speech sequence pairs (a, d) and (b, c) confirms that the speech content is preserved. These results show that our model successfully learns disentangled representations of speech sequences.
Quantitative analysis.
We further follow Hsu et al. (2017) in using speaker verification for quantitative evaluation. Speaker verification is the process of verifying the claimed identity of a speaker, usually by comparing "features" of the test utterance with those of the target utterance from the claimed identity. The claimed identity is confirmed if the cosine similarity between the two feature vectors is greater than a given threshold ε (Dehak et al., 2009). By varying ε, we report the verification performance in terms of the equal error rate (EER), at which the false rejection rate equals the false acceptance rate.

The extraction of the "features" is crucial for the performance of this speaker verification system. Given a speech sequence containing N segments {x^{(n)}_{1:T}}, we constructed two types of features: one by averaging the mean of q(f | x^{(n)}_{1:T}) across the segments, and the other by averaging the means of the q distributions over z_t across both time and segments. In formulas,

f_bar = (1/N) Σ_n μ_f(x^{(n)}_{1:T}),    z_bar = (1/(N T)) Σ_n Σ_t μ_z(x^{(n)}_t),

where μ_f and μ_z denote the means of the corresponding q distributions.
We also include two baseline results from Hsu et al. (2017): one uses the i-vector method (Dehak et al., 2011) for feature extraction, and the other uses μ_1 and μ_2 (analogous to z_bar and f_bar in our case) from an FHVAE model trained on Mel-scale filter bank (FBank) features. The test data were created from the test set of TIMIT, containing 24 unique speakers and 18,336 pairs for verification. Table 2 presents the EER results of the proposed model and the baselines. (Hsu et al. (2017) did not report EER results for all of these features in the 16-dimensional case.) It is clear that the f_bar feature performs significantly better than the i-vector method, indicating that the variable f has learned to represent a speaker's identity. On the other hand, using z_bar as the feature returns considerably worse EER rates than both the i-vector method and the f_bar feature. This is good news, as it indicates that the z variables contain little information about the speaker's identity, again validating the success of disentangling time-varying and time-invariant information. Note that the EER results for z_bar get worse when using the full q encoder, and in the 64-dimensional case the verification performance of f_bar improves slightly. This shows that for real-world data it is useful to use a structured inference network to further improve the quality of the disentangled representation.
Our results are competitive with (or slightly better than) the FHVAE results obtained without the discriminative objective. The better FHVAE results are obtained by adding a discriminative training objective (scaled by a weighting hyperparameter) to the variational lower-bound. In a nutshell, the time-invariant information in FHVAE is encoded in a latent variable z_2, and the discriminative objective encourages the z_2 encoded from a segment of one sequence to be close to the corresponding sequence-level mean μ_2 while staying far away from the μ_2 of other sequences. However, we do not test this idea here because (1) our goal is to demonstrate that the proposed architecture is a minimalistic framework for learning disentangled representations of sequential data; and (2) this discriminative objective is specifically designed for the hierarchical VAE, and in general the assumption behind it might not always hold (consider encoding two speech sequences coming from the same speaker). Similar ideas for discriminative training have been considered in e.g. Mathieu et al. (2016), but that discriminative objective can only be applied to two sequences that are known to entail different time-invariant information (e.g. two sequences with different labels), which implicitly uses supervision. Nevertheless, a better design for an unsupervised discriminative objective could further improve the disentanglement of the learned representations, and we leave this to future work.
model  feature  dim  EER
--  i-vector  200  9.82%
FHVAE  μ_2  16  5.06%
FHVAE (+ discriminative objective)  μ_2  32  2.38%
FHVAE  μ_1  32  22.47%
factorised q  f_bar  16  4.78%
factorised q  z_bar  16  17.84%
factorised q  f_bar  64  4.94%
factorised q  z_bar  64  17.49%
full q  f_bar  16  5.64%
full q  z_bar  16  19.20%
full q  f_bar  64  4.82%
full q  z_bar  64  18.89%
4.3 Comparing stochastic & deterministic dynamics
Lastly, although not a main focus of the paper, we show that using a stochastic transition model for the prior leads to more realistic dynamics in the generated sequences. For comparison, we consider another class of models in which the parameters of p_θ(x_t | h_t) are defined by a neural network, with h_t computed by a deterministic RNN conditioned on f. We experiment with two types of deterministic dynamics. The first model uses an LSTM with f as the initial state: h_t = LSTM(h_{t-1}), h_0 = f. In later experiments we refer to these dynamics as LSTM-f, as the latent variable f is forward-propagated in a deterministic way. The second deploys an LSTM conditioned on f at every step (i.e. h_t = LSTM(h_{t-1}, f)), so we refer to it as LSTM-c. This is identical to the transition dynamics used in the FHVAE model (Hsu et al., 2017). For comparison, we refer to our model (Eq. 1) as the "stochastic" model.

The LSTM models encode temporal information in the global latent variable f. Therefore, small differences/errors in f will accumulate over time, which may result in unrealistic long-time dynamics. In contrast, the stochastic model (Eq. 1) keeps track of the time-varying aspects of x_t in z_t for every t, making the reconstruction time-local and therefore much easier. The stochastic model is thus better suited when the sequences are long and complex. We give empirical evidence to support this claim.
Data preprocessing & hyperparameters.
We follow Fraccaro et al. (2017) and simulate video sequences of a ball (or a square) bouncing inside an irregular polygon using Pymunk (http://www.pymunk.org/en/latest/). For simplicity we disabled rotation of the square when hitting the wall, by setting its inertia to infinity. The irregular shape was chosen because it induces chaotic dynamics: small deviations from the initial position and velocity of the ball create exponentially diverging trajectories at long times. This makes memorising the dynamics of a prototypical sequence challenging. We randomly sampled the initial position and velocity of the ball, but did not apply any force to the ball, except for the fully elastic collisions with the walls. We generated 5,000 sequences in total (1,000 of them for testing). For the deterministic LSTMs, we fixed the dimensionality of f to 64, and set the LSTM internal states to 512 dimensions. The latent variables z_t of the stochastic model are kept low-dimensional.
Qualitative & quantitative analyses.
We consider both reconstruction and missing-data imputation tasks for the learned generative models. For the latter, the models observe the first k frames of a sequence and predict the remaining frames using the prior dynamics. We visualise in Figure 7 the ground truth, reconstructed, and predicted sequences from all models. As a quantitative metric, we further consider the average fraction of incorrectly reconstructed/predicted pixels, to evaluate how well the ground-truth dynamics is recovered given a run of consecutive missing frames. The results are reported in Figure 8. The stochastic model outperforms the deterministic models both qualitatively and quantitatively: the shape of the ball is better preserved over time, and the trajectories are more physical. This explains the lower errors of the stochastic model, and the advantage is significant when the number of missing frames is small.

Our experiments give evidence that the stochastic model is better suited to modelling long, complex sequences than the deterministic dynamical models. We expect that a better design for the stochastic transition dynamics, e.g. by combining deep neural networks with well-studied linear dynamical systems (Krishnan et al., 2015; Fraccaro et al., 2016; Karl et al., 2016; Johnson et al., 2016; Krishnan et al., 2017; Fraccaro et al., 2017), can further enhance the quality of the learned representations.
5 Conclusions and outlook
We presented a minimalistic generative model for learning disentangled representations of high-dimensional time series. Our model consists of a global latent variable for content features, and a stochastic RNN with time-local latent variables for dynamical features. The model is trained using standard amortised variational inference. We carried out experiments on both video and audio data. Our approach allows us to perform full and conditional generation as well as feature swapping, such as voice conversion and video content manipulation. We also showed that a stochastic transition model generally outperforms a deterministic one.

Future work may investigate whether a similar model applies to more complex video and audio sequences. Disentanglement may be further improved by additional cross-entropy terms or discriminative training. A promising avenue of research is to explore the use of this architecture for neural compression. An advantage of the model is that it separates dynamical from static features, allowing the latent space for the dynamical part to be low-dimensional.
Acknowledgements
We thank Robert Bamler, Rich Turner, Jeremy Wong and Yu Wang for discussions and feedback on the manuscript. We also thank WeiNing Hsu for helping reproduce the FHVAE experiments. Yingzhen Li thanks Schlumberger Foundation FFTF fellowship for supporting her PhD study.
References
 Bayer & Osendorfer (2014) Bayer, J. and Osendorfer, C. Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610, 2014.
 Bouchacourt et al. (2017) Bouchacourt, D., Tomioka, R., and Nowozin, S. Multilevel variational autoencoder: Learning disentangled representations from grouped observations. arXiv preprint arXiv:1705.08841, 2017.
 Chen et al. (2016) Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172–2180, 2016.
 Chung et al. (2015) Chung, J., Kastner, K., Dinh, L., Goel, K., Courville, A. C., and Bengio, Y. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pp. 2980–2988, 2015.
 Dehak et al. (2009) Dehak, N., Dehak, R., Kenny, P., Brümmer, N., Ouellet, P., and Dumouchel, P. Support vector machines versus fast scoring in the lowdimensional total variability space for speaker verification. In Tenth Annual conference of the international speech communication association, 2009.
 Dehak et al. (2011) Dehak, N., Kenny, P. J., Dehak, R., Dumouchel, P., and Ouellet, P. Front-end factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing, 19(4):788–798, 2011.

 Deng et al. (2017) Deng, Z., Navarathna, R., Carr, P., Mandt, S., Yue, Y., Matthews, I., and Mori, G. Factorized variational autoencoders for modeling audience reactions to movies. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 6014–6023. IEEE, 2017.
 Denton & Birodkar (2017) Denton, E. and Birodkar, V. Unsupervised learning of disentangled representations from video. arXiv preprint arXiv:1705.10915, 2017.
 Fabius & van Amersfoort (2014) Fabius, O. and van Amersfoort, J. R. Variational recurrent autoencoders. arXiv preprint arXiv:1412.6581, 2014.
 Fraccaro et al. (2016) Fraccaro, M., Sønderby, S. K., Paquet, U., and Winther, O. Sequential neural models with stochastic layers. In Advances in neural information processing systems, pp. 2199–2207, 2016.
 Fraccaro et al. (2017) Fraccaro, M., Kamronn, S., Paquet, U., and Winther, O. A disentangled recognition and nonlinear dynamics model for unsupervised learning. In Advances in Neural Information Processing Systems, pp. 3604–3613, 2017.
 Garofolo et al. (1993) Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., and Pallett, D. S. TIMIT Acoustic-Phonetic Continuous Speech Corpus LDC93S1. Web Download. Philadelphia: Linguistic Data Consortium, 1993.
 Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
 Graves & Schmidhuber (2005) Graves, A. and Schmidhuber, J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5–6):602–610, 2005.

 Griffin & Lim (1984) Griffin, D. and Lim, J. Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(2):236–243, 1984.
 Higgins et al. (2016) Higgins, I., Matthey, L., Glorot, X., Pal, A., Uria, B., Blundell, C., Mohamed, S., and Lerchner, A. Early visual concept learning with unsupervised deep learning. arXiv preprint arXiv:1606.05579, 2016.
 Hochreiter & Schmidhuber (1997) Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
 Hsu et al. (2017) Hsu, W.-N., Zhang, Y., and Glass, J. Unsupervised learning of disentangled and interpretable representations from sequential data. In Advances in Neural Information Processing Systems, pp. 1876–1887, 2017.
 Johnson et al. (2016) Johnson, M., Duvenaud, D. K., Wiltschko, A., Adams, R. P., and Datta, S. R. Composing graphical models with neural networks for structured representations and fast inference. In Advances in neural information processing systems, pp. 2946–2954, 2016.
 Jordan et al. (1999) Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233, 1999.
 Karl et al. (2016) Karl, M., Soelch, M., Bayer, J., and van der Smagt, P. Deep variational bayes filters: Unsupervised learning of state space models from raw data. arXiv preprint arXiv:1605.06432, 2016.
 Kingma & Welling (2013) Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 Krishnan et al. (2015) Krishnan, R. G., Shalit, U., and Sontag, D. Deep kalman filters. arXiv preprint arXiv:1511.05121, 2015.
 Krishnan et al. (2017) Krishnan, R. G., Shalit, U., and Sontag, D. Structured inference networks for nonlinear state space models. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
 Mathieu et al. (2016) Mathieu, M. F., Zhao, J. J., Zhao, J., Ramesh, A., Sprechmann, P., and LeCun, Y. Disentangling factors of variation in deep representation using adversarial training. In Advances in Neural Information Processing Systems, pp. 5040–5048, 2016.
 Reed et al. (2015) Reed, S. E., Zhang, Y., Zhang, Y., and Lee, H. Deep visual analogymaking. In Advances in neural information processing systems, pp. 1252–1260, 2015.
 Rezende et al. (2014) Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
 Siddharth et al. (2017) Siddharth, N., Paige, B., van de Meent, J.-W., Desmaison, A., Wood, F., Goodman, N. D., Kohli, P., Torr, P. H., et al. Learning disentangled representations with semi-supervised deep generative models. arXiv preprint arXiv:1706.00400, 2017.
 Villegas et al. (2017) Villegas, R., Yang, J., Hong, S., Lin, X., and Lee, H. Decomposing motion and content for natural video sequence prediction. In ICLR, 2017.
 Vondrick et al. (2016) Vondrick, C., Pirsiavash, H., and Torralba, A. Generating videos with scene dynamics. In Advances In Neural Information Processing Systems, pp. 613–621, 2016.
Appendix A Computation graph for the full inference network
In Figure 9 we show the computation graph of the full inference framework. The inference model first computes the mean and variance parameters of q(f|x_{1:T}) with a bidirectional LSTM (Graves & Schmidhuber, 2005), and samples f from the corresponding Gaussian distribution (see Figure 9(a)). Then f and the features of x_{1:T} are fed into another bidirectional LSTM to compute the hidden state representations for the variables z_{1:T} (see Figure 9(b)), where at each time step both LSTMs take the feature of x_t as input and update their hidden and internal states. Finally, the parameters of q(z_t|x_{1:T}, f) are computed by a simple RNN taking the bidirectional LSTM states as input at time t.
Appendix B Sound files for the speech conversion test
We provide sound files to demonstrate the conversion of female/male speech sequences at https://drive.google.com/file/d/1zpiZJNjGWw9pGPYVxgSeoipiZdeqHatY/view?usp=sharing. Given a spectrum (magnitude information), the sound waveform is reconstructed using the Griffin-Lim algorithm (Griffin & Lim, 1984), which initialises the phase randomly, then iteratively refines the phase information by looping the STFT/inverse STFT transformation until convergence or until some stopping criterion is reached. We note that the sound quality can be further improved by, e.g., conjugate gradient methods. We also found that, in general, it is more challenging to convert female speech to male speech than the other way around, which was also observed by Hsu et al. (2017).
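The phase-recovery loop described above can be sketched as follows (a minimal NumPy/SciPy implementation, assuming magnitudes produced by scipy.signal.stft; the window length and iteration count are illustrative, not the settings used in our experiments):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=30, nperseg=256, seed=0):
    """Sketch of the Griffin-Lim loop (Griffin & Lim, 1984): start from a
    random phase, then alternate inverse-STFT / STFT, keeping the given
    magnitude spectrum and retaining only the updated phase."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))  # random initial phase
    for _ in range(n_iter):
        _, x = istft(mag * phase, nperseg=nperseg)      # candidate waveform
        _, _, spec = stft(x, nperseg=nperseg)           # re-analyse it
        phase = np.exp(1j * np.angle(spec))             # keep phase, drop magnitude
    _, x = istft(mag * phase, nperseg=nperseg)
    return x

# toy usage: recover a waveform from the magnitudes of a pure tone
t = np.linspace(0, 1, 4096)
_, _, Z = stft(np.sin(2 * np.pi * 440 * t), nperseg=256)
y = griffin_lim(np.abs(Z))
```

In practice the loop is run until the change in the reconstructed spectrum falls below a tolerance rather than for a fixed number of iterations.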
We also note here that the phase information is not modelled in our experiments, nor in the FHVAE tests. First, as phase is a circular variable ranging over [-π, π), a Gaussian distribution is inappropriate; instead a von Mises distribution is required. However, fast computation of the normalising constant of a von Mises distribution – which is a Bessel function – remains a challenging task, let alone differentiation and optimisation of the concentration parameters.
Appendix C Network architecture
Sprite.
The prior dynamics over z_{1:T} is Gaussian, with parameters computed by an LSTM (Hochreiter & Schmidhuber, 1997). Then x_t is generated by a deconvolutional neural network, which first transforms f and z_t with a one-hidden-layer MLP, then applies 4 deconvolutional layers with 256 channels and up-sampling. We use the l2 loss for the likelihood term, i.e. the log-likelihood is proportional to the negative squared reconstruction error.
For the inference model, we first use a convolutional neural network, with an architecture symmetric to the deconvolutional one, to extract visual features. Then q(f|x_{1:T}) is a Gaussian distribution parametrised by an LSTM that depends on the entire sequence of these visual features. For the factorised encoder, q(z_t|x_t) is also Gaussian, parametrised by a one-hidden-layer MLP taking the visual feature of x_t as input. The dimensions of f and z_t are 256 and 32, respectively, and the hidden layer sizes are fixed to 512.
Timit.
We use an almost identical architecture to that of the Sprite experiment, except that the likelihood term is defined as a Gaussian whose mean and variance are determined by a two-hidden-layer MLP taking both f and z_t as inputs. The dimensions of f and z_t are 64 if not specifically stated, and the hidden layer sizes are fixed to 256.
For the full inference model we use the architecture visualised in Figure 9. Again, the bi-LSTM networks take the features of x_{1:T} as inputs, where those features are extracted using a one-hidden-layer MLP.
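The two-stage inference network of Figure 9 might be sketched as follows (a hypothetical PyTorch rendering, assuming latent variables named f and z as above; the feature and hidden dimensions here are placeholders, not the experiment settings):

```python
import torch
import torch.nn as nn

class InferenceNet(nn.Module):
    """Sketch: a bi-LSTM infers the content variable f from the whole
    sequence; a second bi-LSTM conditioned on f feeds a simple RNN that
    outputs the per-timestep parameters of z_t."""

    def __init__(self, x_dim=64, f_dim=32, z_dim=16, hidden=128):
        super().__init__()
        self.hidden = hidden
        self.f_lstm = nn.LSTM(x_dim, hidden, batch_first=True, bidirectional=True)
        self.f_mean = nn.Linear(2 * hidden, f_dim)
        self.f_logvar = nn.Linear(2 * hidden, f_dim)
        self.z_lstm = nn.LSTM(x_dim + f_dim, hidden, batch_first=True, bidirectional=True)
        self.z_rnn = nn.RNN(2 * hidden, hidden, batch_first=True)
        self.z_mean = nn.Linear(hidden, z_dim)
        self.z_logvar = nn.Linear(hidden, z_dim)

    def forward(self, x):                                 # x: (batch, T, x_dim)
        h, _ = self.f_lstm(x)                             # (batch, T, 2*hidden)
        # sequence summary: last forward state and first backward state
        summary = torch.cat([h[:, -1, :self.hidden], h[:, 0, self.hidden:]], -1)
        f_mu, f_lv = self.f_mean(summary), self.f_logvar(summary)
        f = f_mu + torch.randn_like(f_mu) * (0.5 * f_lv).exp()  # reparameterise
        f_tiled = f.unsqueeze(1).expand(-1, x.size(1), -1)
        g, _ = self.z_lstm(torch.cat([x, f_tiled], -1))   # second bi-LSTM, conditioned on f
        r, _ = self.z_rnn(g)                              # simple RNN on top
        return f_mu, f_lv, self.z_mean(r), self.z_logvar(r)

net = InferenceNet()
f_mu, f_lv, z_mu, z_lv = net(torch.zeros(2, 10, 64))      # batch of 2, T = 10
```

Note that f is sampled once per sequence while the z parameters are produced at every time step, mirroring the global/time-local split of the model.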
Bouncing ball.
We use an RNN (instead of an LSTM as in the previous experiments) to parametrise the stochastic prior dynamics of our model, and set the dimensionality of z_t to 16. For the deterministic models we set the corresponding deterministic state to be 64-dimensional. We use a 64-dimensional variable f and a Bernoulli likelihood for all models.
For the inference models, we use the full model in the stochastic dynamics case. For the generative models with deterministic dynamics, we also use bi-LSTMs of the same architecture to infer the parameters of f and z_{1:T}. Again, a convolutional neural network is deployed to compute visual features for the LSTM inputs.
All models share the same architecture for the (de)convolutional network components. The deconvolutional neural network has 3 deconvolutional layers with 64 channels and up-sampling. The convolutional neural network for the encoder has an architecture symmetric to the deconvolutional one. The hidden layer sizes in all networks are fixed to 512.
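For concreteness, the shared decoder described above might look as follows in PyTorch (a hedged sketch: the kernel sizes, strides, and 32x32 frame size are assumptions not stated in the text, and the input concatenates the 16-d z_t with the 64-d f):

```python
import torch
import torch.nn as nn

# Sketch of the 3-layer, 64-channel deconvolutional decoder with up-sampling.
# Input: concatenation of the dynamics variable z_t (16-d) and the content
# variable f (64-d); output: the per-pixel mean of a Bernoulli likelihood.
decoder = nn.Sequential(
    nn.Linear(16 + 64, 64 * 4 * 4), nn.ReLU(),            # MLP lift to a 4x4 map
    nn.Unflatten(1, (64, 4, 4)),
    nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 8x8
    nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x16
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(), # -> 32x32
)
frame = decoder(torch.zeros(2, 80))   # a batch of 2 latent codes [z_t, f]
```

The encoder would mirror this stack with strided Conv2d layers of the same channel widths, as stated above.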