1 Introduction
Probabilistic programming is concerned with the symbolic specification of probabilistic models in which inference can be performed automatically. Stochastic gradient-based variational methods are gradually replacing MCMC as the default inference technique in (differentiable) probabilistic programming languages (Wingate & Weber, 2013; Kucukelbir et al., 2017; Tran et al., 2016; Kucukelbir et al., 2015; Bingham et al., 2019). This trend is a consequence of the increasing automation of variational inference (VI) techniques, which have evolved from highly mathematically sophisticated and model-specific tools into generic algorithms that can be applied to a broad class of problems without model-specific derivations (Hoffman et al., 2013; Kingma & Welling, 2013; Hernández-Lobato & Adams, 2015; Ranganath et al., 2014).
However, the application of VI relies on the choice of a parameterized variational family, and this reliance on user input arguably violates the spirit of probabilistic programming. In general, it is relatively easy to automate the construction of the variational family under the mean-field approximation, where the approximate posterior distribution factorizes as a product of univariate distributions. While there is a substantial amount of model-specific research on structured variational families, few existing methods can be used to automatically construct an appropriate, scalable structured variational approximation for an arbitrarily chosen probabilistic model. Furthermore, these existing methods either ignore most of the prior structure of the model (e.g. ADVI with a multivariate Gaussian distribution (Kucukelbir et al., 2017)) or require strict assumptions such as local conjugacy (e.g. structured stochastic VI (Hoffman & Blei, 2015)). In addition, several of these methods require the use of ad hoc gradient estimators or variational lower bounds
(Tran et al., 2015; Ranganath et al., 2016). In this paper, we introduce an automatic procedure for constructing variational approximations that incorporate the structure (forward pass) of the probabilistic model while being flexible enough to capture the distribution of the observed data. The construction of these variational approximations is fully automatic, and the resulting variational distribution has the same time/memory complexity as the input probabilistic program. The new family of variational models, which we call pseudo-conjugate variational families, interpolates the evidence coming from the observed data with the probabilistic structure of the prior model. Specifically, the parameters of the posterior distribution of each latent variable are a convex combination of the parameters induced by the probabilistic program and a term reflecting the influence of the data. This mimics the evidence update of the expectation parameters in conjugate exponential family models, where this posterior form is exact. Pseudo-conjugate variational families can be trained using standard inference techniques and gradient estimators, and can therefore be used as a drop-in replacement for the mean-field approach in automatic differentiation stochastic VI. We call this new form of fully automatic inference automatic structured variational inference (ASVI).

2 Background on variational inference and exact parameter update
VI approximates the posterior over the latent variables $\mathbf{z}$ of a probabilistic program $p(\mathbf{x}, \mathbf{z})$ with a member $q_{\phi}(\mathbf{z})$ of a parameterized family of probability distributions. The vector of variational parameters $\phi$ is obtained by maximizing the ELBO:

$\mathcal{L}(\phi) = \mathbb{E}_{q_{\phi}(\mathbf{z})}\left[ \log p(\mathbf{x} \mid \mathbf{z}) + \log p(\mathbf{z}) - \log q_{\phi}(\mathbf{z}) \right]$,  (1)

where $p(\mathbf{x} \mid \mathbf{z})$ is the likelihood function. The resulting variational posterior (i.e. the maximum of this optimization problem) depends on the choice of the parameterized family, and it is equal to the exact posterior only when the latter is included in the family. In this paper, we restrict our attention to probabilistic programs that are specified in terms of conditional probabilities and densities chained together by deterministic functions:
$p(\mathbf{z}) = \prod_{j=1}^{N} \rho_j\left(z_j \mid \theta_j(\mathrm{pa}_j)\right)$,  (2)

where each $\rho_j$ is a family of probability distributions and $\mathrm{pa}_j$ is a subset of parent variables such that the resulting graphical model is a directed acyclic graph (DAG). The vector-valued function $\theta_j$ specifies the values of the parameters of the distribution of the $j$-th latent variable given the values of all its parents.
An automatic variational family is determined by an algorithm that takes a probabilistic program as input and outputs a parameterized variational family. The most commonly used algorithm consists of creating, for each latent variable in the program, a random variable that follows the same distribution but with uncoupled variational parameters (i.e. the mean-field (MF) approximation). This approach is somewhat reminiscent of the parameter update in conjugate models, where the posterior is in the same family as the prior. In this paper, we go further by introducing an explicit parameterization of the variational family that mimics the update rule of (expectation) parameters in exactly solvable conjugate models. This approach leads to a flexible structured family that includes the prior probabilistic program as a special case.
2.1 Parameter update in conjugate models
Exponential family distributions play a central role in Bayesian statistics, as they are the only distributions that admit conjugate priors, for which inference can be performed in closed form. An exponential family distribution can be parameterized by a vector of expectation parameters $\mu = \mathbb{E}[\mathbf{s}(x)]$, where $\mathbf{s}(x)$ is the vector of sufficient statistics of the data. We can assign to these parameters a conjugate prior distribution, which in turn is parameterized by the prior expectations $\mu_0$. Upon observing $N$ independently sampled datapoints, it can be shown that the posterior expectation parameters are a convex combination of the prior parameters and the maximum likelihood estimator:

$\mu_{\mathrm{post}} = \lambda \odot \mu_0 + (1 - \lambda) \odot \hat{\mu}_{\mathrm{ML}}$,  (3)
where $\lambda$ is a vector of convex combination coefficients, $\odot$ denotes the elementwise product, and $\hat{\mu}_{\mathrm{ML}}$ is the maximum likelihood estimator. For example, in a Gaussian model with known likelihood precision $\nu$ and a Gaussian prior over the mean with precision $\nu_0$, the mean parameter updates as

$\mu_{\mathrm{post}} = \frac{\nu_0}{\nu_0 + N\nu}\, \mu_0 + \frac{N\nu}{\nu_0 + N\nu}\, \bar{x}$,  (4)

where $\bar{x}$ is the sample mean. This formula shows that the posterior parameters are a trade-off between the prior hyperparameters and the value induced by the data.
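As an illustration, the update rule of Eq. 4 can be computed directly. The following sketch (our own illustration; the function and parameter names are hypothetical, not from the paper) implements the posterior mean as a convex combination of the prior mean and the sample mean:

```python
import numpy as np

def posterior_mean(mu0, nu0, x, nu):
    """Posterior mean of a Gaussian model with known likelihood precision nu
    and Gaussian prior N(mu0, 1/nu0): a convex combination (Eq. 4) of the
    prior mean and the sample mean."""
    x = np.asarray(x, dtype=float)
    n = x.size
    lam = nu0 / (nu0 + n * nu)          # weight on the prior mean
    return lam * mu0 + (1.0 - lam) * x.mean()

# With many precise observations the posterior mean approaches the sample mean.
mu = posterior_mean(mu0=0.0, nu0=1.0, x=[2.0, 2.0, 2.0, 2.0], nu=100.0)
```

As the data precision grows, the weight on the prior vanishes and the posterior mean collapses onto the maximum likelihood estimate, which is exactly the behavior the pseudo-conjugate family is designed to mimic.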
3 Pseudo-conjugate variational families
We are now ready to introduce the central innovation of the paper. Consider the following probabilistic model:

$p(\mathbf{x}, z) = p(\mathbf{x} \mid z)\, \pi(z \mid \theta)$,  (5)

where $p(\mathbf{x} \mid z)$ is a likelihood function and $\pi(z \mid \theta)$ is a prior distribution parameterized by a vector of parameters $\theta$. We do not assume that the likelihood or the prior is in the exponential family. Nevertheless, we can construct a pseudo-conjugate parameterized variational family by copying the form of the parameter update rule in conjugate models:

$q(z) = \pi\left(z \mid \lambda \odot \theta + (1 - \lambda) \odot \alpha\right)$,  (6)

where $\lambda$ is now a vector of learnable parameters with entries ranging from $0$ to $1$, and $\alpha$ is a vector of learnable parameters with the same domain of definition as the parameters $\theta$.
In a model with a single latent variable, the pseudo-conjugate parameterization is overparameterized. However, the power of this approach becomes evident in multivariate models constructed by chaining basic probability distributions. Consider a probabilistic program specified in the form of Eq. 2. We can construct a structured variational family by applying the pseudo-conjugate form to each latent conditional distribution in the model:
$q(\mathbf{z}) = \prod_{j=1}^{N} \rho_j\left(z_j \mid \lambda_j \odot \theta_j(\mathrm{pa}_j) + (1 - \lambda_j) \odot \alpha_j\right)$,  (7)

which we rewrite succinctly using the convex update operator $f_{\lambda_j, \alpha_j}$:

$q(\mathbf{z}) = \prod_{j=1}^{N} \rho_j\left(z_j \mid f_{\lambda_j, \alpha_j}(\theta_j(\mathrm{pa}_j))\right)$, with $f_{\lambda, \alpha}(\theta) = \lambda \odot \theta + (1 - \lambda) \odot \alpha$.  (8)
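To make the construction concrete, here is a minimal NumPy sketch of the convex update operator of Eq. 8 applied to a two-variable Gaussian chain. This is our own illustration under the paper's definitions; the variable names (`u_lambda`, `alpha`) are hypothetical:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def convex_update(theta, u_lambda, alpha):
    """Convex update operator (Eq. 8): lambda * theta + (1 - lambda) * alpha,
    with lambda = sigmoid(u_lambda) constrained to (0, 1)."""
    lam = sigmoid(u_lambda)
    return lam * theta + (1.0 - lam) * alpha

# Sample from a two-variable Gaussian chain z1 -> z2, where the prior mean
# of z2 is theta2(z1) = z1 and the variational family perturbs each mean.
rng = np.random.default_rng(0)
u1, a1 = 2.0, 0.5    # hypothetical learnable parameters for z1 (prior mean 0)
u2, a2 = 2.0, -0.3   # hypothetical learnable parameters for z2
z1 = rng.normal(convex_update(0.0, u1, a1), 1.0)
z2 = rng.normal(convex_update(z1, u2, a2), 1.0)
```

Driving `u_lambda` towards large positive values recovers the prior program (lambda → 1), while large negative values yield the mean-field limit (lambda → 0), which matches the first property discussed in Section 3.1.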
3.1 Theoretical and practical justifications of structured pseudo-conjugate families
The multivariate structured distributions induced by this family have several appealing theoretical properties that justify their usage in structured inference problems:

The family always contains the original probabilistic program (i.e. the prior distribution). This is trivial to see, as we can obtain the prior by setting all the lambdas equal to $1$. On the other hand, setting all the lambdas equal to $0$ leaves us with the standard mean-field approximation. Note that none of the commonly used automatic structured variational approaches share this simple property.

In pseudo-conjugate stochastic VI, the gradient estimator can backpropagate through the forward pass of the probabilistic program. Consequently, all the variational variables can be updated from the very first stochastic gradient update. Conversely, MF stochastic VI can update variables that are not directly connected to observations only by updating all the intermediary variables through multiple gradient updates. This phenomenon is formally analogous to the bootstrapping of policy updates in model-free reinforcement learning
(Sutton & Barto, 2018). The phenomenon is visualized in Fig. 1 which shows the magnitude of the gradients during the first updates of stochastic VI training of a timeseries experiment (described in Appendix A). 
The family includes both the filtering and the smoothing exact posteriors of univariate linear Gaussian time-series models such as

$x_t = a x_{t-1} + w_t$, $\quad y_t = x_t + \epsilon_t$,  (9)

where $w_t$ and $\epsilon_t$ are Gaussian noise variables. In this case, the filtering conditional posterior is given by the Kalman filter update:

$\mu_{t|t} = (1 - k_t)\, a \mu_{t-1|t-1} + k_t\, y_t$,  (10)

where $k_t$ is the Kalman gain, which in our case corresponds to $1 - \lambda_t$. The smoothing update has a similar form, where the data term is augmented with an estimate integrating the observations of all future time points.
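The convex-combination form of the filtering update (Eq. 10) can be checked with a scalar Kalman filter. The sketch below is a standard textbook implementation, not code from the paper:

```python
def kalman_filter_means(ys, a, q, r, mu0, p0):
    """Scalar Kalman filter for x_t = a x_{t-1} + w_t, y_t = x_t + eps_t,
    with process variance q and observation variance r. The filtered mean
    is a convex combination of the propagated prior mean and the observation."""
    mu, p = mu0, p0
    means = []
    for y in ys:
        mu_pred, p_pred = a * mu, a * a * p + q      # propagate the prior
        k = p_pred / (p_pred + r)                    # Kalman gain (data weight)
        mu = (1.0 - k) * mu_pred + k * y             # convex update, as in Eq. 10
        p = (1.0 - k) * p_pred
        means.append(mu)
    return means
```

In the limit of noiseless observations (r → 0) the filtered mean follows the data exactly, while for very noisy observations (r → ∞) it follows the prior dynamics: the two extremes of the convex update.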

The pseudo-conjugate family has a very parsimonious parameterization compared with other structured families. The number of parameters is $2M$, where $M$ is the total number of parameters of the conditional distributions. Conversely, the multivariate normal approach scales quadratically with the number of latent variables. However, this parsimonious parameterization implies that the pseudo-conjugate family cannot capture dependencies that are not already present in the prior probabilistic program. Specifically, the pseudo-conjugate family cannot model correlations originating from colliding arrows in the DAG ("explaining away" dependencies).
[Table 1: average and standard error of the root mean squared errors between the posterior means and the ground truth curves, computed over the simulations. Rows: ASVI, ADVI (MF), ADVI (MN), NN. Columns: BR (Full), BR (Bridge), OS (Full), OS (Bridge), LZ (Full), LZ (Bridge).]

3.2 Pseudo-conjugate families for stochastic processes
The pseudo-conjugate family can be extended to discrete-time and continuous-time stochastic processes. Stochastic processes can be seen as probabilistic programs with a potentially infinite number of variables. As an example, consider a discrete-time Markov process defined by marginals of the following form:

$p(x_{t_0}, \dots, x_{t_n}) = p(x_{t_0}) \prod_{j=1}^{n} \pi\left(x_{t_j} \mid \theta(x_{t_{j-1}})\right)$,  (11)

for all sets of ordered contiguous time points starting from $t_0$. Assume that we collected noisy observations of the process at an arbitrary ordered set of time points. We can construct a pseudo-conjugate variational process by applying the convex update operator to the active set of all time points prior to the last observation:

$q(x_{t_0}, \dots, x_{t_n}) = q(x_{t_0}) \prod_{j=1}^{n} \pi\left(x_{t_j} \mid f_{\lambda_j, \alpha_j}(\theta(x_{t_{j-1}}))\right)$.  (12)
It is straightforward to dynamically expand the active set simply by adding the appropriate update operators. This suggests the use of pseudo-conjugate families in Bayesian nonparametric methods that combine sampling of the DAG structure with VI in the DAG parameters (Wang & Blei, 2012). This can be particularly useful when combined with nonparametric models that can learn the graphical structure of the DAG (Patrick et al., 2020).
We can also define pseudo-conjugate variational families for diffusion processes defined as solutions of stochastic differential equations (SDEs). Consider the distribution induced by the following SDE:

$\mathrm{d}x_t = \mu(x_t, t)\,\mathrm{d}t + \sigma(x_t, t)\,\mathrm{d}B_t$,  (13)

where $\mu$ is the drift function, $\sigma$ is the volatility function and $B_t$ is a standard Brownian motion process. The corresponding variational SDE is obtained by applying the convex update operator to the drift function:

$\mathrm{d}x_t = \left[\lambda(x_t, t)\,\mu(x_t, t) + (1 - \lambda(x_t, t))\,\alpha(x_t, t)\right]\mathrm{d}t + \sigma(x_t, t)\,\mathrm{d}B_t$,  (14)

where $\lambda$ and $\alpha$ are now functions of the state and time. Note that in this context $\lambda$ and $\alpha$ can be interpreted as control variables which can redirect the paths of the SDE towards the datapoints. Variational inference in these SDE models can be performed either by discretizing the time axis or by using more sophisticated continuous stochastic backpropagation methods (Li et al., 2020).
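A variational SDE of the form of Eq. 14 can be simulated with a simple Euler–Maruyama scheme. The sketch below is our own illustration (the function names are hypothetical, and we assume `lam` returns values in (0, 1)); it shows how the control drift `alpha` can steer the paths:

```python
import numpy as np

def simulate_variational_sde(mu, sigma, lam, alpha, x0, dt, n_steps, rng):
    """Euler-Maruyama simulation of the variational SDE (Eq. 14), whose drift
    is the convex update lam(t, x) * mu(t, x) + (1 - lam(t, x)) * alpha(t, x).
    mu, sigma, lam, alpha are callables of (t, x)."""
    xs = [x0]
    x = x0
    for i in range(n_steps):
        t = i * dt
        drift = lam(t, x) * mu(t, x) + (1.0 - lam(t, x)) * alpha(t, x)
        x = x + drift * dt + sigma(t, x) * np.sqrt(dt) * rng.normal()
        xs.append(x)
    return np.array(xs)
```

For instance, with lam ≡ 0, sigma ≡ 0 and alpha(t, x) = target − x, the drift is a pure pull towards the target, so the path converges to it: this is the sense in which alpha acts as a control variable redirecting the process towards the data.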
4 Automatic structured variational inference
ASVI is a form of automatic differentiation variational inference in which the variational family is the pseudo-conjugate family constructed from the input probabilistic program. The family is constructed by copying the input probabilistic program and applying the convex update operator to each function that specifies the parameters of a node given the values of its parents (Eq. 8), which yields the variational conditional distributions. The lambda variables are constrained to lie between $0$ and $1$. This constraint is implemented by passing an unconstrained variable through a sigmoid function. The alpha and lambda parameters are trained by maximizing the ELBO, which in our case has the following form:

$\mathcal{L}(\lambda, \alpha) = \mathbb{E}_{q}\left[\log p(\mathbf{x} \mid \mathbf{z})\right] - \sum_{j} \mathbb{E}_{q}\left[\mathrm{KL}\left(q(z_j \mid \mathrm{pa}_j) \,\|\, p(z_j \mid \mathrm{pa}_j)\right)\right]$,  (15)

where the expectations are taken with respect to the variational family. Note that, if a conditional distribution is a tractable exponential family distribution, we can automatically compute its KL divergence analytically, thereby reducing the variance of the gradient estimator.
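As a sanity check on the ELBO of Eq. 15, it can be estimated by Monte Carlo with the pathwise (reparameterization) estimator. The toy model below is our own illustration, not from the paper: prior z ~ N(0, 1) and likelihood x | z ~ N(z, 1), so the exact posterior is N(x/2, 1/2) and evaluating the estimator there gives the log evidence with zero variance:

```python
import numpy as np

def elbo_estimate(x, q_mean, q_std, n_samples, rng):
    """Pathwise Monte Carlo estimate of the ELBO for the toy model
    z ~ N(0, 1), x | z ~ N(z, 1), with Gaussian q(z) = N(q_mean, q_std^2)."""
    eps = rng.normal(size=n_samples)
    z = q_mean + q_std * eps                         # reparameterized samples
    log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (x - z) ** 2
    log_prior = -0.5 * np.log(2 * np.pi) - 0.5 * z ** 2
    log_q = -0.5 * np.log(2 * np.pi * q_std ** 2) - 0.5 * eps ** 2
    return np.mean(log_lik + log_prior - log_q)

# At the exact posterior N(x/2, 1/2), every sample equals log p(x) exactly.
rng = np.random.default_rng(0)
est = elbo_estimate(1.0, 0.5, np.sqrt(0.5), 100, rng)
```

Away from the exact posterior the estimator has nonzero variance; integrating the tractable KL terms analytically, as suggested above, removes part of that variance.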
5 Related work
Structured VI is commonly applied in time-series models such as hidden Markov models and autoregressive models. In these models, the posterior distributions inherit strong statistical dependencies from the sequential nature of the prior. Structured VI for time series usually relies on structured variational families that capture the temporal dependencies while being fully factorized in the non-temporal variables (Eddy, 1996; Foti et al., 2014; Johnson & Willsky, 2014; Karl et al., 2016; Fortunato et al., 2017). This differs from pseudo-conjugate families, where both temporal and non-temporal dependencies are preserved. Furthermore, these approaches typically require model-specific derivations and variational bounds.

Several forms of model-agnostic structured variational distributions have been introduced. Hierarchical VI accounts for dependencies between latent variables by coupling the parameters of their factorized distributions through a joint variational prior (Ranganath et al., 2016). While this method is very general, it requires user input in order to define the variational prior and the use of a modified variational lower bound. Copula VI models the dependencies between latent variables using a vine copula function (Tran et al., 2015). In the context of probabilistic programming, copula VI shares some of the same limitations as hierarchical VI: it requires the appropriate specification of bivariate copulas and it needs a specialized inference technique. The approach that is closest to our current work is perhaps structured stochastic VI (Hoffman & Blei, 2015). Similarly to our method, its variational posteriors have the same conditional independence structure as the input probabilistic program. However, this method is limited to conditionally conjugate models with exponential family distributions. Furthermore, the resulting ELBO is intractable and needs to be estimated using specialized techniques. Finally, automatic differentiation VI (Kucukelbir et al., 2017) maps the values of all latent variables to an unbounded coordinate space based on the support of each distribution. The variational distribution in this new space is then parameterized as a multivariate Gaussian. While this approach is very generic, it exploits very little information from the original probabilistic model and has scalability problems due to the cubic complexity of Bayesian inference with multivariate Gaussian distributions.
6 Applications
We evaluate the performance of ASVI and relevant baselines on a range of probabilistic inference problems. Since the main goal of this paper is to introduce a new form of fully automatic VI that works in arbitrary probabilistic programs, we only compare against general-purpose variational families. Therefore, we do not include model-tailored variational families among our baselines. Furthermore, since our approach is meant to be generally applicable, we only compare with existing methods that do not require special mathematical tractability assumptions such as local conjugacy (Hoffman & Blei, 2015). Finally, we exclude from our comparisons methods that require the use of ad hoc loss functionals and gradient estimators, since our pseudo-conjugate family is meant as a drop-in replacement in standard stochastic VI settings where path-derivative and REINFORCE estimators are used.
6.1 Time series analysis
As a first application, we focus on time-series models and SDEs. We used three SDE models. The first model (BR) is a Brownian motion without drift. The second model (OS) is a linear second-order Langevin equation with oscillatory dynamics. Finally, the third model (LZ) is a stochastic Lorenz dynamical system. The details of these SDEs are given in Appendix A. The form of our pseudo-conjugate family is given in Eq. 11, where the conditional densities are Gaussian distributions obtained by discretizing the SDE:

$p(x_{t+\Delta t} \mid x_t) = \mathcal{N}\left(x_t + \mu(x_t, t)\,\Delta t,\; \sigma^2(x_t, t)\,\Delta t\right)$,  (16)

where $\mu$ is the drift function and $\sigma$ is the volatility function. As baselines, we implemented mean-field ADVI, multivariate normal ADVI, and a more expressive hierarchical structured variational approach where the dependencies are learned using a fully connected neural network
(Ranganath et al., 2016). The details of this baseline are given in Appendix A. For each time-series model, we performed inference in three experimental situations. In the first case (Full), the processes were observed at all time points with a Gaussian likelihood (BR: sd = 0.15, OS: sd = 0.2, LZ: sd = 1). In the second case (Bridge), the processes were observed only at the first and last time points. Finally, in the third case (Past), the processes were observed only at the last time points. In the case of the Lorenz system, only the $x$ time series was observed while $y$ and $z$ were latent variables.

Table 1 reports the performance of ASVI and the baselines, quantified as the root mean squared deviation of the posterior mean (rMSE) from the generated ground truth curves. ASVI achieves the highest performance in almost all comparisons, with the gain being more pronounced in the bridge experiments. Figure 2 shows the analysis of an example trial of the LZ Bridge experiment. Figure 2 (top panel) shows the variational parameter $\lambda$ as a function of the time step. In a pseudo-conjugate distribution, $\lambda$ can be interpreted as a surprise detector, with low values being associated with high surprise. In this example, the dynamics of $\lambda$ are particularly interpretable. In the first observed regime, $\lambda$ stays at an intermediate value, as the trajectory needs to be corrected in order to keep track of the data. Subsequently, in the unobserved regime, $\lambda$ increases, as the extrapolation has to rely on the prior dynamics. Eventually the second observed regime is reached and $\lambda$ suddenly spikes down in order to correct the trajectory to fit the new datapoints. Interestingly, after this first correction $\lambda$ shoots back to very high values, as in this new high-gain regime the Lorenz dynamics can fit the data without the need for much interference.
This situation exemplifies how pseudo-conjugate optimization works in general: if the prior model is structured, the variational parameters $\lambda$ and $\alpha$ only need to nudge the prior dynamics towards the observations at selected points.
6.2 Deep Bayesian smoothing with neural SDEs
So far, we performed inference in simple time-series models with low-dimensional state spaces. We will now test the performance of ASVI on high-dimensional problems with complex nonlinearities parameterized by deep networks. As latent model, we used neural stochastic differential equations (SDEs) (Chen et al., 2018; Li et al., 2020):

$\mathrm{d}\mathbf{x}_t = \mu_{\mathbf{w}}(\mathbf{x}_t, t)\,\mathrm{d}t + \mathrm{d}\mathbf{B}_t$,  (17)

where $\mu_{\mathbf{w}}$ is a nonlinear function parameterized by a neural network and $\mathbf{B}_t$ is a standard multivariate Brownian motion (see Appendix B for the details of the architecture). This latent process generates noise-corrupted observations through a deep network $g$:

$\mathbf{y}_t = g(\mathbf{x}_t) + \boldsymbol{\epsilon}_t$.  (18)

In our examples, $g$ is a generator which converts latent vectors into RGB images. The details of the networks and generators are given in Appendix B. We tested two kinds of pretrained generator: a DCGAN trained on CIFAR10 and a DCGAN trained on FashionGEN (Radford et al., 2015) (see Appendix B for the details of the architecture). We considered two kinds of inference problems. In smoothing problems, we aim to remove the noise from a series of images generated by a trajectory in the latent process. In bridge problems, we reconstruct a series of intermediate images given the beginning (first three time points) and the end (last two time points) of a trajectory. For both problems, we assumed the dynamical and generative models to be known; we discretized the neural SDE using an Euler–Maruyama scheme and backpropagated through the integrator (Chen et al., 2018).
Figure 3 shows the performance of ASVI and two baselines (mean-field and a linear Gaussian model; see Appendix B for the details) in a filtering problem. The quantitative results (negative ELBOs) are shown in Figure 4. ASVI always reaches tighter lower bounds, except in the Fashion bridge experiment, where it performs slightly worse than the linear coupling baseline. The tighter variational bound results in discernibly higher-quality filtered images on both CIFAR10 and Fashion, as shown in Figure 3. Figure 5 shows several samples from the ASVI bridge posterior. As expected, the generated images diverge in the unobserved period and reconverge at the end.
6.3 Deep amortized generative modeling
Finally, we apply an amortized form of ASVI to a deep generative modeling problem. The goal is to model the joint distribution of a set of binary images paired with class labels. To this aim, we use a deep variational autoencoder with three layers of latent variables coupled in a feed-forward fashion through ReLU fully connected networks:
$p(\mathbf{x}, \mathbf{z}_1, \mathbf{z}_2, \mathbf{z}_3) = p(\mathbf{x} \mid \mathbf{z}_1)\, p(\mathbf{z}_1 \mid \mathbf{z}_2)\, p(\mathbf{z}_2 \mid \mathbf{z}_3)\, p(\mathbf{z}_3)$,  (19)

where the conditional means are given by fully connected two-layer networks with ReLU activations and linear output units, while the output probabilities are given by linear layers. Figure 6 shows the graphical model associated with this probabilistic model. The amortized pseudo-conjugate distribution applies the convex update operator to each conditional, with the data-dependent terms produced by an inference network:

$q(\mathbf{z}_j \mid \mathrm{pa}_j, \mathbf{x}) = \rho_j\left(\mathbf{z}_j \mid f_{\lambda_j, \alpha_j(\mathbf{x})}(\theta_j(\mathrm{pa}_j))\right)$,  (20)

where the mean vectors $\alpha_j(\mathbf{x})$ are the post-ReLU activations of the corresponding layers of a fully connected six-layer ReLU inference network taking the image as input. The scale parameter vectors were obtained by applying a linear layer to the corresponding activations followed by a softplus transformation. The details of all architectures are given in Appendix C. The amortized family was parameterized by the lambdas and by the weights and biases of the inference network. The mean-field baseline had the same form given in Eq. 20, but with the inference network fully determining the expectation of the distribution. We did not include comparisons with the other baselines as they are computationally unfeasible in this larger scale experiment.
We tested the performance of these deep variational generative models on three computer vision datasets: MNIST, Fashion-MNIST and KMNIST (LeCun et al., 1998; Xiao et al., 2017; Clanuwat et al., 2018). Furthermore, we performed two types of experiment: I) images and labels were generated jointly, and II) only images were generated. Figure 8 shows the performance of ASVI and the mean-field baseline, quantified as the negative ELBO. ASVI achieves tighter bounds in all experiments for all datasets. Figure 7 shows a randomized selection of images generated by the ASVI model together with the corresponding labels.

7 Discussion
In this paper we introduced an automatic algorithm for constructing an appropriate structured variational family given an input probabilistic program. The resulting method can be used on any probabilistic program specified by a directed Bayesian network and always preserves the forward-pass structure of the input program. The main limitation of the pseudo-conjugate family is that it cannot capture dependencies induced by colliding arrows in the input graphical model. Consequently, in a model such as a standard Bayesian neural network, where the prior over the weights is decoupled, the pseudo-conjugate family reduces to a mean-field family.
References

Bingham et al. (2019)
Bingham, E., Chen, J. P., Jankowiak, M., Obermeyer, F., Pradhan, N.,
Karaletsos, T., Singh, R., Szerlip, P., Horsfall, P., and Goodman, N. D.
Pyro: Deep universal probabilistic programming.
The Journal of Machine Learning Research
, 20(1):973–978, 2019. 
Chen et al. (2018)
Chen, T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K.
Neural ordinary differential equations.
In Advances in Neural Information Processing Systems, 2018.
Clanuwat et al. (2018) Clanuwat, T., Bober-Irizar, M., Kitamoto, A., Lamb, A., Yamamoto, K., and Ha, D. Deep learning for classical Japanese literature. arXiv preprint arXiv:1812.01718, 2018.
 Eddy (1996) Eddy, S. R. Hidden markov models. Current Opinion in Structural Biology, 6(3):361–365, 1996.
 Fortunato et al. (2017) Fortunato, M., Blundell, C., and Vinyals, O. Bayesian recurrent neural networks. arXiv preprint arXiv:1704.02798, 2017.
 Foti et al. (2014) Foti, N., Xu, J., Laird, D., and Fox, E. Stochastic variational inference for hidden markov models. In Advances in Neural Information Processing Systems, 2014.
Hernández-Lobato & Adams (2015) Hernández-Lobato, J. M. and Adams, R. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pp. 1861–1869, 2015.
 Hoffman & Blei (2015) Hoffman, M. and Blei, D. Stochastic structured variational inference. In Artificial Intelligence and Statistics, pp. 361–369, 2015.
 Hoffman et al. (2013) Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
 Johnson & Willsky (2014) Johnson, M. and Willsky, A. Stochastic variational inference for bayesian time series models. In International Conference on Machine Learning, 2014.
 Karl et al. (2016) Karl, M., Soelch, M., Bayer, J., and Van der Smagt, P. Deep variational bayes filters: Unsupervised learning of state space models from raw data. arXiv preprint arXiv:1605.06432, 2016.
 Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Kingma & Welling (2013) Kingma, D. P. and Welling, M. Autoencoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 Kucukelbir et al. (2015) Kucukelbir, A., Ranganath, R., Gelman, A., and Blei, D. Automatic variational inference in stan. In Advances in Neural Information Processing Systems, pp. 568–576, 2015.
 Kucukelbir et al. (2017) Kucukelbir, A., Tran, D., Ranganath, R., Gelman, A., and Blei, D. M. Automatic differentiation variational inference. The Journal of Machine Learning Research, 18(1):430–474, 2017.
 LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradientbased learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Li et al. (2020) Li, X., Wong, T. L., Chen, R. T., and Duvenaud, D. Scalable gradients for stochastic differential equations. arXiv preprint arXiv:2001.01328, 2020.
Patrick et al. (2020) Patrick, D., Ambrogioni, L., Trottier, L., Güçlü, U., Hinne, M., Giguère, P., Chaib-Draa, B., van Gerven, M., and Laviolette, F. The Indian chefs process. arXiv preprint arXiv:2001.10657, 2020.
 Radford et al. (2015) Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
 Ranganath et al. (2014) Ranganath, R., Gerrish, S., and Blei, D. Black box variational inference. In Artificial Intelligence and Statistics, pp. 814–822, 2014.
 Ranganath et al. (2016) Ranganath, R., Tran, D., and Blei, D. Hierarchical variational models. In International Conference on Machine Learning, pp. 324–333, 2016.
 Sutton & Barto (2018) Sutton, R. S. and Barto, A. G. Reinforcement learning: An introduction. MIT press, 2018.
 Tran et al. (2015) Tran, D., Blei, D., and Airoldi, E. M. Copula variational inference. In Advances in Neural Information Processing Systems, pp. 3564–3572, 2015.
 Tran et al. (2016) Tran, D., Kucukelbir, A., Dieng, A. B., Rudolph, M., Liang, D., and Blei, D. M. Edward: A library for probabilistic modeling, inference, and criticism. arXiv preprint arXiv:1610.09787, 2016.
 Wang & Blei (2012) Wang, C. and Blei, D. M. Truncationfree online variational inference for bayesian nonparametric models. In Advances in Neural Information Processing Systems, 2012.
 Wingate & Weber (2013) Wingate, D. and Weber, T. Automated variational inference in probabilistic programming. arXiv preprint arXiv:1301.1299, 2013.
Xiao et al. (2017) Xiao, H., Rasul, K., and Vollgraf, R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
Appendix A Details of the timeseries experiments
a.0.1 Models
As a first application, we focus on time-series models and SDEs. We used the following three models. The first model (BR) is a Brownian motion without drift. The second model (OS) is a linear Langevin equation with oscillatory dynamics:
$\mathrm{d}x_t = v_t\,\mathrm{d}t, \quad \mathrm{d}v_t = \left(-\omega^2 x_t - \gamma v_t\right)\mathrm{d}t + \mathrm{d}W_t$,  (21)

where $\omega$ is the angular frequency, $\gamma$ is the damping coefficient and $W_t$ is a Gaussian white noise process. Finally, the third model (LZ) is a stochastic Lorenz system (nonlinear SDE):

$\mathrm{d}x = s(y - x)\,\mathrm{d}t + \mathrm{d}W^{(x)}_t, \quad \mathrm{d}y = \left(x(r - z) - y\right)\mathrm{d}t + \mathrm{d}W^{(y)}_t, \quad \mathrm{d}z = \left(xy - bz\right)\mathrm{d}t + \mathrm{d}W^{(z)}_t$,  (22)

where $W^{(x)}_t$, $W^{(y)}_t$ and $W^{(z)}_t$ are Gaussian white noise processes.
a.0.2 Baselines
The ADVI (MF) baseline was obtained by replacing all the conditional Gaussian distributions in the probabilistic program with Gaussian distributions with uncoupled trainable mean and standard deviation parameters. The optimization over the positive-valued standard deviations was performed by transforming real-valued trainable parameters with a softplus function.

In the ADVI (MN) baseline, the distribution over all the variables was modeled as a multivariate Gaussian parameterized by its mean vector and the lower-triangular factor of the covariance (with positive-valued diagonal).
The NN baseline was a hierarchical variational distribution (Ranganath et al., 2016). The mean parameters of all the Gaussian variables in the mean-field model were obtained by transforming a standard normal noise vector with a trainable fully connected perceptron:

$m_j = h_j(\boldsymbol{\eta})$,  (23)

where $h_j(\boldsymbol{\eta})$ is the $j$-th component of the linear output of a fully connected two-layer perceptron, without biases and with sigmoid activations in the hidden units.
a.0.3 Experiment details
All processes were discretized with an Euler–Maruyama method and the transition probabilities were approximated as Gaussian distributions (this approximation is exact as the step size tends to zero). Each experiment was repeated with different synthetic data generated from the joint model. The gradients of the ELBOs were estimated using path-derivative gradient estimators (20 samples, with the entropy term integrated analytically). The models were optimized using Adam (Kingma & Ba, 2014) with parameters: lr=0.05 (0.015 for MN), betas=(0.9, 0.999), eps=1e-08. In the OS and BR experiments, the models were trained for a fixed number of iterations, chosen such that all the models showed convergence within this range. Conversely, the more challenging LZ problem required longer training for ASVI, NN, ADVI (MF) and ADVI (MN). The training of this latter model was unstable, leading to some suboptimal local optima.
Appendix B Details of the neural SDE experiment
b.0.1 Models
The drift function $\mu_{\mathbf{w}}$ had the following form:

(24)

where $A$ and $B$ were matrices whose entries were sampled in each of the repetitions from a centered normal distribution. These matrices encode the forward dynamical model, and they were assumed to be known during the experiment. This is a Kalman-filter-like setting where the form of the forward model is known and inference is performed on the latent units. The neural SDE was integrated using Euler–Maruyama integration, and we trained the model by backpropagating through the integrator.

We used two DCGAN generators as emission models. The networks were the DCGAN implementations provided in PyTorch. The network pretrained on CIFAR was obtained from the GitHub repository csinva/ganpretrainedpytorch. The FashionGEN network was downloaded from the PyTorch GAN zoo repository. The architectural details are given in (Radford et al., 2015).
b.0.2 Baselines
The ADVI (MF) baseline was obtained by replacing all the conditional Gaussian distributions in the probabilistic program with Gaussian distributions with uncoupled trainable mean and standard deviation parameters. ADVI (MN) was not computationally feasible in this larger scale experiment. Therefore, we implemented a linear Gaussian model with conditional densities:

$q(\mathbf{x}_{t+1} \mid \mathbf{x}_t) = \mathcal{N}\left(A \mathbf{x}_t + \mathbf{b},\; \operatorname{diag}(\mathbf{s}^2)\right)$,  (25)

where the matrix $A$ and the vectors $\mathbf{b}$ and $\mathbf{s}$ are learnable parameters.
Appendix C Details of the autoencoder experiment
c.0.1 Models
[Architecture tables for Decoder 1, Decoder 2, Decoder 3, Decoder 4, and the inference network.]