Towards Recurrent Autoregressive Flow Models

by John Mern, et al.
Stanford University

Stochastic processes generated by non-stationary distributions are difficult to represent with conventional models such as Gaussian processes. This work presents Recurrent Autoregressive Flows as a method toward general stochastic process modeling with normalizing flows. The proposed method defines a conditional distribution for each variable in a sequential process by conditioning the parameters of a normalizing flow with recurrent neural connections. Complex conditional relationships are learned through the recurrent network parameters. In this work, we present an initial design for a recurrent flow cell and a method to train the model to match observed empirical distributions. We demonstrate the effectiveness of this class of models through a series of experiments in which models are trained on three complex stochastic processes. We highlight the shortcomings of our current formulation and suggest some potential solutions.




1 Introduction

Normalizing flows are a class of invertible transforms with tractably computable Jacobian determinants. They are used to estimate complex probability distributions by transforming a simple base distribution, such as a Gaussian, via the change of variables theorem [rezende2015]. These transformed distributions retain many of the attractive features of the base distribution, such as tractable density calculation and efficient sampling, while being flexible enough to model complex, multi-modal densities. Because of this, normalizing flows have been used in a variety of learning contexts, particularly variational inference [kingma2016] and deep generative modeling [papamakarios2017].

Existing methods can model complex static distributions but are limited in their ability to model stochastic processes or other complex conditional distributions [garnelo2018]. Many normalizing flow models use highly parameterized function approximators, such as neural networks, to transform distributions. Modeling conditional distributions is typically accomplished by changing the parameters of the base distribution, such as the mean of a base Gaussian. This limits how much the learned transform may be changed by the conditioning term.

This work seeks to overcome this limitation by introducing a more flexible conditioning mechanism based on Recurrent Neural Networks (RNNs). The output of an RNN at a given time step is a function of the input at that time step and the history of previous inputs in the sequence. This is achieved through recurrent cells that maintain hidden states over each step in a sequence. In this work, we propose a bijective recurrent cell based on the Gated Recurrent Unit (GRU) [cho2014]. We present the Recurrent Autoregressive Flow (RAF), based on this modified GRU, which conditions the transform at a given step on the preceding history, allowing for more expressive conditioning than previous methods [van2016].

We present the results of several experiments in which RAF graphs were trained to model various complex processes. We show that the RAF is generally able to outperform baseline methods on the selected tasks. The experiments also revealed shortcomings of the existing approach and suggest potential improvements to the current RAF model and training process.

2 Background

A normalizing flow is a function f : X → Z that transforms one random variable into another. To be a useful flow, it must meet the following criteria.

  1. It is invertible, such that x = f⁻¹(f(x)).

  2. It is a bijection, such that for all x, f maps x to only one z and vice versa.

  3. It has a tractably computable Jacobian determinant det(∂f/∂x).

The probability density of a variable x related to a base variable z through a normalizing flow z = f(x) may be calculated by the change of variables formula, shown in eq. 1.

    p_x(x) = p_z(f(x)) |det(∂f(x)/∂x)|    (1)
Normalizing Flow Graphs (NFGs) are computational graphs composed of layers of simple normalizing flows. The probability distribution of a variable transformed by an NFG can be calculated by applying the change of variables theorem with the chain rule. This is shown for an NFG with K layers in eq. 3, where z_0 = x, z_k = f_k(z_{k-1}), and z_K = z.

    p_x(x) = p_z(z) ∏_{k=1}^{K} |det(∂f_k(z_{k-1})/∂z_{k-1})|    (3)

Samples from p_x are generated by sampling z ~ p_z and then transforming z by the inverse of the NFG as x = f_1⁻¹ ∘ ⋯ ∘ f_K⁻¹(z). In general, calculating the determinant of a Jacobian is O(n³) for an n × n matrix. In order to tractably compute the density of x, normalizing flows are designed such that the Jacobian takes a special form with an easy-to-calculate determinant.
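For intuition, the density computation in eqs. 1 and 3 can be sketched numerically, with a 1-D affine flow standing in for a learned transform (the flow f(x) = a·x + b and its parameters below are illustrative, not from the paper):

```python
import numpy as np

def standard_normal_pdf(z):
    # Base density p_z: standard normal in 1-D.
    return np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

def flow_density(x, a, b):
    # Change of variables for a single affine flow f(x) = a*x + b:
    # p_x(x) = p_z(f(x)) * |det df/dx| = p_z(a*x + b) * |a| in 1-D.
    z = a * x + b
    return standard_normal_pdf(z) * abs(a)

def graph_density(x, layers):
    # A K-layer graph composes transforms and accumulates the
    # per-layer log |det| terms, as in eq. 3.
    z, log_det = x, 0.0
    for a, b in layers:
        z = a * z + b
        log_det += np.log(abs(a))
    return standard_normal_pdf(z) * np.exp(log_det)
```

Composing two affine layers is equivalent to a single affine layer with the composed slope, so `graph_density` agrees with `flow_density` on stacked affine maps.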

3 Prior Work

Normalizing flows have been proposed in various forms for variational inference and generative modeling. A popular early model was the planar flow, in which variables were transformed through a series of simple additive operations [rezende2015]. Nonlinear Independent Components Estimation (NICE) uses additive coupling layers similar to planar flows, alternating with multiplicative rescaling layers [dinh2014]. In NICE models, the input is first partitioned into two parts x = [x_1, x_2]. The output is calculated from each part separately such that z_1 = x_1 and z_2 = x_2 + m(x_1), where m is an arbitrary function such as a neural network. The Jacobian of a NICE transform is then a triangular matrix.

Both planar flows and NICE flows are volume-preserving, meaning they have a Jacobian determinant of 1. While these models can translate the density of a distribution, they cannot scale its variance. This limits their ability to map variables onto distributions with much smaller or larger support volume than the base distribution.

In order to overcome this limitation, RealNVP flows [dinh2016] were proposed as an extension to NICE. RealNVP flows partition and shift the input just as in NICE, adding a scale term such that z_2 = s(x_1) ⊙ x_2 + t(x_1). MADE flows use a similar approach [germain2015]. With this scale factor, the Jacobian is lower-triangular and the determinant is no longer necessarily 1. The transforms of the RealNVP and MADE models can be represented as

    z_1 = x_1
    z_2 = s(x_1) ⊙ x_2 + t(x_1)

Decomposing the input into [x_1, x_2], we can write the Jacobian as

    ∂z/∂x = [ I              0            ]    (7)
            [ ∂z_2/∂x_1      diag(s(x_1)) ]

This Jacobian is lower triangular, and the determinant can be easily calculated as the product of its diagonal entries in O(n) time.
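A minimal sketch of this coupling transform, with toy stand-ins for the learned conditioners s(·) and t(·) (the fixed functions below are illustrative placeholders, not learned networks):

```python
import numpy as np

def s(x1):
    # Scale conditioner: stand-in for a neural network; kept positive.
    return 1.0 + 0.5 * np.tanh(x1)

def t(x1):
    # Shift conditioner: stand-in for a neural network.
    return 0.1 * x1

def coupling_forward(x):
    x1, x2 = np.split(x, 2)
    z1 = x1                          # identity on the first partition
    z2 = s(x1) * x2 + t(x1)          # scale-and-shift on the second
    # Lower-triangular Jacobian: log|det| is the sum of log diagonal scales.
    log_det = np.sum(np.log(np.abs(s(x1))))
    return np.concatenate([z1, z2]), log_det

def coupling_inverse(z):
    z1, z2 = np.split(z, 2)
    x1 = z1
    x2 = (z2 - t(x1)) / s(x1)        # invert the affine part exactly
    return np.concatenate([x1, x2])
```

The inverse needs no inversion of s or t themselves, which is what makes coupling layers cheap in both directions.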

Inverse Autoregressive Flows (IAFs) introduced the concept of autoregressive models as normalizing flows [kingma2016]. These models assume that sample ordering is sequential, such that p(x) = ∏_i p(x_i | x_{1:i-1}). The IAF flow transform is shown in eq. 9. The Masked Autoregressive Flow (MAF) [papamakarios2017] uses the same transform as IAF but makes the sequential assumption on the latent variables z. Both IAF and MAF have a Jacobian of the form given in eq. 7.

    x_i = μ_i(x_{1:i-1}) + σ_i(x_{1:i-1}) ⊙ z_i    (9)

Despite the name, these autoregressive models were not specifically developed for sequential data. In many cases, flows operate along the sample dimension such that, for a sample x ∈ ℝⁿ, any given element x_i can be conditionally dependent only on the elements x_{1:i-1}. Each sample, however, is treated as independently drawn. It has been empirically shown that, even with this assumption, autoregressive flows can be effective generative models [van2016]. Commonly, autoregressive flow graphs alternate autoregressive layers with permutation layers to reduce the effect of the ordering assumption. By changing the variable order at each transform, the order dependence across the entire graph is reduced. Permutations are themselves normalizing flows.

All of the above transforms use simple Hadamard products and affine transforms at each layer. The Neural Autoregressive Flow (NAF) [huang2018] model instead uses a feed-forward neural network as the transform layer. By composing the network with strictly positive weights and strictly monotonic activation functions, the resulting transform is invertible. Unfortunately, as pointed out in the Block NAF extension [DeCao2019], there is no easy analytic method to invert the neural network transform. As a result, a given NAF model can be used either for inference or for generative sampling, but not both, as is possible with other models.

Some work has been done to develop flows to directly model random sequences. Latent normalizing flows [ziegler2019] used autoregressive flows with a latent state variable to represent sequential processes. These flows, however, are limited to sequences of discrete variables.

4 Methods

Existing autoregressive flow methods do not directly address temporally correlated sequences. Some methods have introduced conditioning vectors into the transform, such that z_t = f(x_t; x_{t-1}). These methods typically use simple affine functions to condition the transforms, limiting how expressive the dynamic models can be. They are also limited by including only the single prior time step as a conditioning signal. The current work extends these approaches to allow conditioning on an arbitrary-length history of previous steps.

One mechanism well suited to encoding temporal data into compact representations is the Recurrent Neural Network. RNNs learn temporal relations by propagating signals forward in time through cell hidden states. These hidden states carry information from one time step to the next and also introduce a path by which training gradients may be back propagated. In this way, RNNs can learn optimal embeddings of sequences as hidden-states, which can be viewed as a form of short-term memory.

We incorporate this ability by developing a normalizing flow with a recurrent connection, as shown in fig. 1. Similar to the IAF and MAF, we represent our transforms as conditioning functions coupled with transformer functions. The recurrent conditioning function is based on the Gated Recurrent Unit (GRU), which uses gate functions to control the evolution of the hidden state in time. The GRU was selected for its tendency to stabilize gradients in the temporal direction during training [cho2014]. At each time step t, the Normalizing Flow Graph (NFG) models the distribution of the future state x_{t+1}. Each recurrent layer of the NFG transforms the predicted variable using the transformer function conditioned on the cell hidden state h_t.

Figure 1: Recurrent normalizing flow cell. The cell state h_t evolves as a function of the observed input x_t at each time step and the previous cell state h_{t-1}. The transformer function outputs the transformed variable value given its input, conditioned on the hidden-state value. The transform may be applied in the forward direction f or the inverse f⁻¹. This figure shows the cell in the inverse configuration.

Each layer takes as input the hidden state of the previous time step as well as the current sample. Each layer outputs the cell hidden state and either z_t or x_t, depending on whether it is operating in forward or inverse mode. When generating samples of x from samples of z, the NFG operates in forward mode. When estimating the density of a given sample x by transforming it to z, the NFG operates in inverse mode.

The hidden-state update follows the standard GRU [cho2014] and is shown in eq. 10.

    r_t = σ(W_r x_t + U_r h_{t-1})
    u_t = σ(W_u x_t + U_u h_{t-1})
    h̃_t = tanh(W x_t + U (r_t ⊙ h_{t-1}))
    h_t = (1 − u_t) ⊙ h_{t-1} + u_t ⊙ h̃_t    (10)
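The GRU-based hidden-state update can be sketched as follows, assuming the standard GRU form [cho2014]; the weight shapes, initialization, and dimensions below are illustrative, not the paper's:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev, params):
    # Standard GRU cell update on a single time step.
    Wr, Ur, Wu, Uu, W, U = params
    r = sigmoid(Wr @ x + Ur @ h_prev)            # reset gate
    u = sigmoid(Wu @ x + Uu @ h_prev)            # update gate
    h_cand = np.tanh(W @ x + U @ (r * h_prev))   # candidate state
    return (1.0 - u) * h_prev + u * h_cand       # gated interpolation

# Illustrative parameter shapes: input dim 3, hidden dim 4.
rng = np.random.default_rng(0)
d_in, d_h = 3, 4
params = [rng.normal(scale=0.1, size=s) for s in
          [(d_h, d_in), (d_h, d_h)] * 3]
```

The gated interpolation keeps the hidden state bounded when initialized at zero, which is one reason the GRU stabilizes gradients through time.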
The RAF cell can use any bijective flow as a transformer function. The proposed transformer function first splits the hidden-state vector into a weight vector w_t and a bias vector b_t. The weight vector is reshaped into a lower-triangular matrix W_t. An affine transform is applied with the weight matrix and bias vector, and an invertible nonlinear function g is applied to the resulting value. The complete transform is

    z_t = g(W_t x_t + b_t)
The conditioning signal h_t is a function only of the past inputs x_{1:t} for the transform of x_{t+1}, so the Jacobian of the transform with respect to x_{t+1} is still lower triangular. This enables complex temporal dependencies to be learned without increasing the computational complexity of the change of variables operations. Additionally, the use of a triangular matrix for the affine transform enables the learning of more complex relationships between the elements of each forward sample than previously proposed element-wise affine transforms. Like the element-wise affine transform, the triangular matrix does imply some ordering dependency that may not exist in the real data. In this sense, these transforms are autoregressive, and some permutation layers are still required.

We can derive the Jacobian of the recurrent flow by applying the chain rule to the Jacobians of the affine and nonlinearity transforms. For an element-wise nonlinearity g (such as the sigmoid), the value of each output dimension is a function only of the corresponding input value, so its Jacobian is a diagonal matrix. For the affine transform, the Jacobian is the weight matrix W_t. The complete Jacobian can be calculated as

    ∂z_t/∂x_t = diag(g′(W_t x_t + b_t)) W_t

for a given nonlinear activation function g. We can then find the determinant as the product of the diagonal values,

    det(∂z_t/∂x_t) = ∏_i g′(W_t x_t + b_t)_i (W_t)_{ii}

where i indexes the diagonal elements of the weight matrix. Unlike NAFs, RAF inverse transforms can be calculated analytically, by inverting the nonlinearity element-wise and solving the triangular affine system.
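The transformer step and its log-determinant can be sketched as follows. The dimensions, the hidden-state layout (weights first, then bias), and the choice of sigmoid for g are illustrative; the diagonal of W must be nonzero for the transform to be invertible:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def transformer_forward(x, h):
    # Split hidden state into lower-triangular weights and a bias.
    n = x.size
    n_w = n * (n + 1) // 2
    w, b = h[:n_w], h[n_w:n_w + n]
    W = np.zeros((n, n))
    W[np.tril_indices(n)] = w        # reshape into lower-triangular matrix
    a = W @ x + b                    # affine transform
    z = sigmoid(a)                   # invertible elementwise nonlinearity
    # log|det J| = sum_i log( g'(a_i) * W_ii ), diagonal times diagonal.
    g_prime = sigmoid(a) * (1.0 - sigmoid(a))
    log_det = np.sum(np.log(np.abs(g_prime * np.diag(W))))
    return z, log_det

def transformer_inverse(z, h):
    # Analytic inverse: undo the sigmoid, then solve the triangular system.
    n = z.size
    n_w = n * (n + 1) // 2
    w, b = h[:n_w], h[n_w:n_w + n]
    W = np.zeros((n, n))
    W[np.tril_indices(n)] = w
    a = np.log(z) - np.log(1.0 - z)  # logit, the inverse of sigmoid
    return np.linalg.solve(W, a - b)
```

A triangular solve makes the inverse O(n²), in contrast to NAF transforms, which have no analytic inverse.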

5 Experiments

We ran three experiments testing the performance of RAF graphs in modeling stochastic processes with various challenging features. The process in the first experiment is a hierarchical stochastic process in 2D space. In the second experiment, the RAF predicts the trajectory of a stochastic agent performing a simple navigation task. In the third experiment, the RAF models the evolution of a fluid flow represented as a non-stationary bimodal distribution.

5.1 Experiment 1: Hierarchical Stochastic Process

Previous flow methods have been able to model complex stationary distributions through topology transforms. This experiment compares the performance of an existing flow method to RAF in modeling a non-stationary distribution generated through a simple stochastic process.

The process generates data in batches, or episodes. At the start of each episode, the process draws an index i from a Bernoulli distribution i ~ Bern(p). The observed data points are then drawn by sampling from Gaussian distributions as

    x ~ N(μ_i, Σ)

where the mean μ_i is selected by the episode index i. The resulting distribution is shown in fig. 2.
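The episodic generative process can be sketched as follows; the mode means, the covariance scale, and the Bernoulli parameter are illustrative placeholders, not the values used in the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mode means for the two index values (placeholders).
MEANS = {0: np.array([-2.0, 0.0]), 1: np.array([2.0, 0.0])}

def sample_episode(n_samples, p=0.5):
    # The index is drawn ONCE per episode (the hierarchical step),
    # then every sample in the episode comes from that mode.
    i = int(rng.random() < p)
    pts = rng.normal(loc=MEANS[i], scale=0.5, size=(n_samples, 2))
    return i, pts
```

The key property a model must capture is that samples within one episode never mix modes, even though the pooled data over many episodes is bimodal.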

We trained a Real-NVP and an RAF to model this process. Both graphs had five layers and transformed a standard normal base distribution. Each was trained with 10,000 samples generated across 100 episodes. The training loss was the negative log-likelihood of the data. The Adam optimizer [Kingma2014] was used.

After training, each model was run for ten episodes of 1,000 samples each. As can be seen in fig. 2, while the Real-NVP approximately recovers the shape of the training-data distribution, samples from a single generative batch appear in both mode regions. A single episode from the true process would contain points from only a single mode. This suggests the Real-NVP was unable to learn the hierarchical generative process. Further, the Real-NVP graph was unable to completely separate the base normal distribution into the two components of the original distribution. The static topology transform left a manifold connecting the two modes.

(a) True Distribution
(b) Real-NVP Model
(c) RAF Model
Figure 2: Hierarchical process modeling results. Each subplot shows data generated in one episode, with one index value in blue and the other in red.

For the RAF, we visualized the data and selected one episode for each index value. As with the Real-NVP, the RAF accurately captures the shape of the training-data distribution. In addition, it was able to learn the underlying generative process: each episode contains samples from only a single mode of the distribution. A few points are generated between the modes at the start of each episode. This is likely due to the model starting each episode with a zero hidden state, which accumulates information only as samples are observed. For generative modeling, this can be overcome with a short burn-in period.

5.2 Experiment 2: Stochastic Trajectory Prediction

In the second experiment, we trained an RAF model to predict the position of an agent completing a simple navigation task, in which a mouse navigates a maze toward cheese at the goal position. At each time step, the mouse agent moves toward the goal along the corridor in which it is currently located. At each step it moves a distance of 1 unit with probability 0.8 or 2 units with probability 0.2. When the mouse reaches a corridor intersection, it chooses to change directions with probability 0.5. This procedure is a hierarchical stochastic process operating on varying timescales, with the agent speed sampled at every time step and the direction sampled only at intersections.
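The per-step stochasticity of the agent can be sketched as follows; the maze geometry is omitted, and the direction-handling helper is a hypothetical stand-in for the intersection logic:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_distance():
    # Move 1 unit with probability 0.8, otherwise 2 units.
    return 1.0 if rng.random() < 0.8 else 2.0

def maybe_turn(at_intersection, heading, options):
    # At an intersection, change direction with probability 0.5,
    # choosing uniformly among the other available corridors.
    # (Hypothetical helper; the real maze logic is not shown in the paper.)
    if at_intersection and rng.random() < 0.5:
        return rng.choice([o for o in options if o != heading])
    return heading
```

The two sources of randomness operate on different timescales: the speed is resampled every step, while direction changes occur only at the sparse intersection events, which is what makes the intersection modes hard to learn.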

We trained an RAF graph to predict the distribution of the next state of the mouse given the previously observed trajectory. The graph had 8 RAF layers of 64 hidden units each. The model was trained to maximize the average log probability density of observed positions across each episode. The Adam optimizer was used over 10,000 epochs.

As a baseline, we also trained an RNN to output parameters for a multi-variate Gaussian distribution over the next state.


After training, we ran the RAF and RNN each for 50 episodes and measured the average log-probability density of the predictions at each time step. The RAF had an average log probability density of . The RNN had an average log probability density of .

Qualitative inspection of the test episodes showed that the RAF was able to learn to predict the bimodal position distribution caused by the speed distribution. However, the graph was only partly successful in learning multi-modal distributions at the corridor intersections. Figure 3 shows two instances in which the agent has entered the same intersection travelling from left to right. We would expect approximately equal probability mass to be assigned to continuing to travel right as to traveling vertically. We see, however, that in both cases significantly more mass is assigned to the vertical travel direction, and that significant probability mass is assigned to unreachable positions.

Figure 3: Visualization of the maze navigation agent experiment. The two images show instances where the agent approached the same corridor intersection traveling from the left to the right. As can be seen, the RAF only partially maps the distribution to the correct shape, with different results at each approach.

This shortcoming is likely due to the difficulties associated with RNN predictions over multiple time-scales, as intersections occur far less frequently than the base time step. Only two modes were observed in the distributions at the intersections, where three were expected. This may suggest that the fixed transform parameters learned by the model were tuned more to modeling the bimodal speed distribution and were not adequately affected by the conditioning signal. Incorporating importance sampling into the training process, to better account for sparse events, may overcome this deficiency.

5.3 Experiment 3: Fluid flow modeling

For our final experiment, we trained a RAF to model a simple fluid-flow. This was selected to test the ability of the RAF to model non-stationary continuous distributions under unknown dynamics.

For this test, we built a 2D flow simulator using simplified linear heat and mass transfer equations. In each episode, a mass of hot fluid is initialized below the vertical mid-plane of the environment and offset from the horizontal mid-plane by a random distance sampled from a zero-mean Gaussian distribution. A second, cold fluid mass is initialized the same way above the mid-plane. During the episode, the cold and hot masses sink and rise, respectively, at rates approximately proportional to their temperatures. Heat is also transferred to neighboring fluid, causing temperatures to equalize over time. Snapshots from an episode are shown in the top row of fig. 4.

In order to train an RAF to model this process, we created a 2D distribution from each simulation step. To do this, we numerically integrated the exponential of the fluid temperature over the flow area and normalized the resulting quantity so that the field formed a proper distribution. No heat was allowed to exit the control volume, so the normalization constant remained fixed throughout the episode. The resulting distribution is difficult to fit: it has not only a strong maximum at the high-temperature point, but also a strong minimum at the low-temperature point, with the surrounding area having moderate probability density.


Figure 4: Visualization of the flow modeling task. The top row shows the true fluid motion evolving over multiple time steps. The bottom row shows the predicted fluid motion from the RAF. The RAF is able to model this physical phenomenon by casting it as a distribution modeling problem, with some loss of accuracy.

At each step, our RAF predicted the next distribution over positions in the plane, conditioned on the fields observed up to the current time. To train this, we minimized the average KL divergence between the predicted distribution q and the true distribution p. We estimated the KL divergence through a discrete 2D approximation as

    D_KL(p ∥ q) ≈ ∑_i p(x_i) log( p(x_i) / q(x_i) )

where p(x_i) is the value of the field at grid point x_i.
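This discrete KL estimate can be sketched as follows; the renormalization of the input fields and the epsilon guard are our additions for numerical safety:

```python
import numpy as np

def kl_divergence_2d(p, q, eps=1e-12):
    # Discrete 2-D approximation of KL(p || q) over a grid of field values.
    p = p / p.sum()                  # normalize fields to distributions
    q = q / q.sum()
    # Sum over grid cells of p * log(p / q); eps avoids log(0).
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

The estimate is zero when the predicted field matches the true field and strictly positive otherwise, as required of a divergence used as a training loss.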

An autoencoder was pre-trained to vectorize the 2D field so it could be used by the RAF graph. An autoencoder is a neural network that encodes an input to a compressed latent space and decodes the latent representation back to an estimate of the original input, typically with some data loss. The latent state of the autoencoder was a 32-dimensional vector. The autoencoder was trained to minimize the empirical L1 reconstruction loss

    L_recon = (1/N) ∑_{n=1}^{N} | y_n − ŷ_n |

where N is the number of samples in a training batch, y_n is the true field value, and ŷ_n is the reconstructed value.

The autoencoder had five convolutional layers each for the encoder and decoder, with stride 2. The flow graph had 16 layers with hidden size 128. The base distribution was a standard normal distribution.

The end-to-end network was trained on a composite loss

    L = L_recon + λ D_KL

where λ is a hyperparameter weighting the KL-divergence loss relative to the reconstruction loss. The network was trained for 3,000 epochs using the Adam optimizer.
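The composite loss can be sketched as follows; the default λ value and the epsilon guard in the KL term are illustrative, not the experiment's settings:

```python
import numpy as np

def l1_reconstruction(y_true, y_rec):
    # Mean absolute error between true and reconstructed field values.
    return float(np.mean(np.abs(y_true - y_rec)))

def composite_loss(y_true, y_rec, p_true, p_pred, lam=1.0):
    # L = L_recon + lambda * D_KL, combining the autoencoder
    # reconstruction term with the distribution-matching term.
    p = p_true / p_true.sum()
    q = p_pred / p_pred.sum()
    kl = float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))
    return l1_reconstruction(y_true, y_rec) + lam * kl
```

Both terms vanish when the reconstruction is exact and the predicted field matches the true field, so the loss is zero only at a perfect fit.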

After training, the RAF was run for 100 episodes for evaluation. The average log probability density of each step was . The predicted field and actual field for an example episode are shown for predictions at times 0, 2, 4, and 6 in fig. 4.

As can be seen in the figure, the flow graph is able to produce samples that are perceptually similar to the original fluid simulation, despite the challenging multi-modal dynamics. This suggests that having dense data available over the whole problem domain at each time step aided the training process. There is, however, significant blur on the modes at the later time steps. Some of this blur is likely a result of the loss from encoding the 2D field into the vector representation used by the RAF.

6 Conclusion

This work introduced Recurrent Autoregressive Flows (RAFs) as a new class of normalizing flow model. Through the introduction of a recurrent connection, RAFs allow flow models to learn more complex temporal relationships over sequential data than previous conditional flow methods. In this work, we showed that our proposed transform meets the required characteristics of a normalizing flow in that it is analytically invertible, bijective, and has an efficiently computable Jacobian determinant.

Our experiments show that RAFs improve on the performance of existing flow methods in modeling stochastic processes. Future work will study how to better compose the conditioning signal of the RAF with more complex flow layers, such as the block neural autoregressive flow. Work will also be done to better condition on events occurring at varying time-scales. One approach to be considered is temporal dilation, as in the clockwork RNN [koutnik2014]. Another is the use of an importance-weighted replay buffer to enable learning of rare stochastic events [mnih2016].


DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.

This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering.

© 2020 Massachusetts Institute of Technology.

Delivered to the U.S. Government with Unlimited Rights, as defined in DFARS Part 252.227-7013 or 7014 (Feb 2014). Notwithstanding any copyright notice, U.S. Government rights in this work are defined by DFARS 252.227-7013 or DFARS 252.227-7014 as detailed above. Use of this work other than as specifically authorized by the U.S. Government may violate any copyrights that exist in this work.

The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing (HPC, database, consultation) resources that have contributed to the research results reported within this paper/report.