Disentangling ODE parameters from dynamics in VAEs

08/26/2021 · by Stathi Fotiadis, et al.

Deep networks have become increasingly of interest in dynamical system prediction, but generalization remains elusive. In this work, we consider the physical parameters of ODEs as factors of variation of the data generating process. By leveraging ideas from supervised disentanglement in VAEs, we aim to separate the ODE parameters from the dynamics in the latent space. Experiments show that supervised disentanglement allows VAEs to capture the variability in the dynamics and extrapolate better to ODE parameter spaces that were not present in the training data.


1 Introduction

Robust prediction of dynamical systems remains an open problem in machine learning and engineering in general. Such capabilities would enable innovations in several fields, including system control, autonomous agents and computer-aided engineering. The use of deep networks for sequence modelling has recently gained significant traction (Girin et al., 2020), aided also by advances in self-supervision (Wei et al., 2018). Accurate long-term prediction, though, can be notoriously difficult, especially for some dynamical systems where errors can accumulate in finite time (Zhou et al., 2020; Fotiadis et al., 2020; Raissi et al., 2019). One reason why the prediction of dynamical systems is hard is the variability of the solution space. Even simple ODEs, like the swinging pendulum or the n-body system, can have multiple continuous parameters that affect their evolution. Capturing the whole range of such parameters in a single training set is unrealistic, and further inductive biases are required for robustness (Fotiadis et al., 2020; Bird and Williams, 2019; Barber et al., 2021; Miladinović et al., 2019).

Many neural-network-based approaches try to jointly learn the dynamics and the physical parameters, which results in convoluted representations and usually leads to overfitting (Bengio et al., 2012). System identification can be used to extract parameters, but requires knowledge of the underlying system to be computationally effective (Ayyad et al., 2020). We leverage advances in Variational Autoencoders (Kingma and Welling, 2014) to learn representations in which the ODE parameters are disentangled from the dynamics. Disentanglement enables distinct latent variables to focus on different factors of variation of the data distribution, and has been successfully applied in the context of image generation (Higgins et al., 2017; Kim and Mnih, 2018). We translate this idea to dynamics modelling by treating ODE parameters as factors of variation. Recent findings (Locatello et al., 2018, 2019) emphasize the vital role of inductive biases from models or data for useful disentanglement. We tap into the wealth of ground-truth values of ODE parameters, which are cheaply collected in simulations. Furthermore, while non-trivial, using simulated data to train real-world models is an increasingly appealing option (Peng et al., 2017). With supervised disentanglement, VAEs can achieve better generalization in parameter spaces they have not been exposed to during training.

Contributions By treating the ODE parameters as factors of variation of the data and applying supervised disentanglement, we enforce several inductive biases. First, in addition to prediction, the encoder also performs "soft" system identification, which acts as a regularizer. Second, it creates an implicit hierarchy such that some latent variables correspond to sequence-wide ODE parameters and the rest capture instantaneous dynamics; this also renders the latent space more interpretable. Third, the extracted parameters condition the decoder, bringing it closer to numerical integrators where the ODE parameters are known. We assess our method on three dynamical systems and demonstrate that disentangled VAEs better capture the variability of dynamical systems compared to baseline models. We also assess out-of-distribution (OOD) generalization under increasing degrees of ODE parameter shift, and find that disentanglement provides an important advantage in this case.

2 Related Work

VAEs and disentanglement While supervised disentanglement in generative models is a long-standing idea (Mathieu et al., 2016), information-theoretic properties can be leveraged to allow unsupervised disentanglement in VAEs (Higgins et al., 2017; Kim and Mnih, 2018). The impossibility result of Locatello et al. (2018) demonstrated that disentangled learning is only possible with inductive biases coming either from the model or the data. Hence, the focus has shifted back to semi- or weakly-supervised disentanglement approaches (Locatello et al., 2019, 2020). While most of these methods focus on assessing the disentanglement itself, we assess performance directly on the downstream prediction task.

Disentanglement in sequence modelling While disentanglement techniques are mainly tested in a static setting, there is growing interest in applying them to sequence dynamics. Using a bottleneck based on physical knowledge, Iten et al. (2018) learn an interpretable representation that requires conditioning the decoder on time, but it can return physically inconsistent predictions on OOD data (Barber et al., 2021). Deep state-space models (SSMs) have also employed techniques for disentangling content from dynamics (Fraccaro et al., 2017; Li and Mandt, 2018), but focus mostly on modelling variations in the content, failing to take dynamics into account. In hierarchical approaches (Karl et al., 2017), different layers of latent variables correspond to different timescales: for example, in speech analysis for separating voice characteristics from phoneme-level attributes (Hsu et al., 2017). In an approach similar to our work, Miladinović et al. (2019) separate the dynamics from sequence-wide properties in dynamical systems like Lotka-Volterra, but do so in an unsupervised way, which dismisses a wealth of cheap information, and assess OOD generalization only in a very limited way.

Feed-forward models for sequence modelling Deep SSMs are difficult to train as they require non-trivial inference schemes and a careful design of the dynamic model (Krishnan et al., 2015; Karl et al., 2017). Feed-forward models, with the necessary inductive biases, have been used for sequence modelling both in language (Bai et al., 2018) and in dynamical systems (Greydanus et al., 2019; Fotiadis et al., 2020). Disentanglement has not been successfully addressed in these models; together with Barber et al. (2021), our work is an attempt in this direction.

3 Supervised disentanglement of ODE parameters in VAEs

Figure 1: The VAE-SD model. From a fixed-length input window, a prediction of future time-steps is derived. The loss function has three parts: the reconstruction loss is replaced by a prediction loss, the KL-divergence enforces the prior onto the latent space, and the extra loss term enforces the supervised disentanglement of the ODE parameters in the latent space.

Figure 2: Mean Absolute Error at 200 time-steps. The bars represent the 10 runs of each model with the lowest MAE. In all three systems, disentangled VAEs provide an advantage over the other baselines. Disentanglement in the MLP does not increase performance as consistently. The scaling in VAE-SSD allows it to better capture the parameter space of the original test set, but in most cases VAE-SD extrapolates better OOD.

Variational autoencoders (VAEs) (Kingma and Welling, 2014) offer a principled approach to latent variable modelling by combining a variational inference model $q_\phi(\mathbf{z}|\mathbf{x})$ with a generative model $p_\theta(\mathbf{x}|\mathbf{z})$. As in other approximate inference methods, the goal is to maximize the evidence lower bound (ELBO) over the data:

$$\mathcal{L}_{\text{ELBO}} = \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\big[\log p_\theta(\mathbf{x}|\mathbf{z})\big] - D_{KL}\big(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p(\mathbf{z})\big).$$

The first part of the ELBO is the reconstruction loss (in our case the prediction loss) and the second part is the Kullback-Leibler divergence, which quantifies how close the approximate posterior is to the prior.

Design choices for the model We use an isotropic unit Gaussian prior $p(\mathbf{z}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$, which helps to disentangle the learned representation (Higgins et al., 2017). The approximate posterior (encoder) distribution is a Gaussian with diagonal covariance, allowing a closed-form KL-divergence, while the decoder has a Laplace distribution with constant diagonal covariance which is tuned empirically. This leads to an $L_1$ prediction loss, which provides improved results in some problems (Mathieu et al., 2018) and empirically works better in our case. The parameters of the approximate posterior and of the decoder distribution are computed via feed-forward neural networks.
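
The following PyTorch sketch illustrates these design choices, assuming plain fully connected networks; the class names, layer sizes and interfaces are illustrative and not the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Feed-forward encoder producing a diagonal-Gaussian posterior q(z|x)."""
    def __init__(self, in_dim, latent_dim, hidden=(400, 200)):
        super().__init__()
        layers, d = [], in_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.LeakyReLU()]
            d = h
        self.body = nn.Sequential(*layers)
        self.mu = nn.Linear(d, latent_dim)
        self.log_var = nn.Linear(d, latent_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

class Decoder(nn.Module):
    """Feed-forward decoder; a Laplace likelihood with a fixed, empirically tuned
    scale reduces the reconstruction term to an L1 (prediction) loss."""
    def __init__(self, latent_dim, out_dim, hidden=(200, 400)):
        super().__init__()
        layers, d = [], latent_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.LeakyReLU()]
            d = h
        layers += [nn.Linear(d, out_dim)]
        self.body = nn.Sequential(*layers)

    def forward(self, z):
        return self.body(z)  # location parameter of the Laplace likelihood
```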

Disentanglement of ODE parameters in latent space Apart from the disentanglement that stems from the choice of prior $p(\mathbf{z})$, we explicitly disentangle part of the latent space so that it corresponds to the ODE parameters of each input sequence. We achieve this by using a regression loss term between the ground-truth ODE parameters $\mathbf{c}$ and the output of the corresponding latents $\mathbf{z}_c$. We opted for an $L_1$ loss, corresponding to a Laplace distribution centred on the ground-truth parameters with unit covariance. Previous methods have reported that binary cross-entropy works better than regression losses (Locatello et al., 2019), but this does not fit well in a setting like ours. We hypothesize that BCE works better because of the implicit scaling. To address this, we propose applying a function $s(\cdot)$ which linearly scales the latent output between the min and max values of the corresponding factor of variation. In all cases, the regression term is weighted by a parameter $\gamma$ which is empirically tuned. Plugging these choices in results in the following loss function:

$$\mathcal{L} = -\,\mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\big[\log p_\theta(\mathbf{y}|\mathbf{z})\big] + D_{KL}\big(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p(\mathbf{z})\big) + \gamma\,\big\| s(\mathbf{z}_c) - \mathbf{c} \big\|_1,$$

where $\mathbf{y}$ denotes the prediction target.

Using the reparameterization trick (Kingma and Welling, 2014), the loss is amenable to stochastic gradient descent optimization with mini-batches of data points. The model architecture can be seen in Figure 1.
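
A minimal PyTorch sketch of this objective is given below, assuming the encoder and decoder interfaces from the previous sketch; the function name, the default weight, and the choice of applying the regression term to the posterior mean are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def vae_sd_loss(x, y, c, encoder, decoder, gamma=0.1, scale_fn=None, n_params=None):
    """Sketch of the VAE-SD objective.

    x: input window, y: prediction target (future time-steps),
    c: ground-truth ODE parameters of the sequence,
    n_params: number of latent dimensions reserved for the ODE parameters.
    """
    mu, log_var = encoder(x)

    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps

    # Prediction loss: L1, corresponding to a Laplace decoder likelihood.
    pred_loss = F.l1_loss(decoder(z), y, reduction="mean")

    # KL divergence between the diagonal-Gaussian posterior and the unit-Gaussian
    # prior (here averaged over batch and latent dimensions for simplicity).
    kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())

    # Supervised disentanglement: the first n_params latents regress the ODE
    # parameters, optionally rescaled to the factor range (as in VAE-SSD).
    # Applying the term to mu rather than the sampled z is an assumption.
    n_params = c.shape[-1] if n_params is None else n_params
    z_params = mu[:, :n_params]
    if scale_fn is not None:
        z_params = scale_fn(z_params)
    sup_loss = F.l1_loss(z_params, c, reduction="mean")

    return pred_loss + kl + gamma * sup_loss
```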

4 Experiments

Figure 3: Model predictions (taken from the OOD Test-Set Hard of each system)

4.1 Datasets

Models were compared on three dynamical systems: the swinging pendulum, the Lotka-Volterra (predator-prey) equations and the 3-body problem.

The systems were chosen for varied complexity in terms of degrees of freedom, number of ODE equations and factors of variation. For the pendulum we consider one factor of variation, its length; Lotka-Volterra has 4 factors of variation and the 3-body system also has 4 factors of variation. Factors are drawn uniformly from a predetermined range which is the same across the training, validation and test sets. To further assess the OOD prediction accuracy, we create two additional test sets with factor values outside of the original range. We denote these datasets as OOD Test-set Easy and OOD Test-set Hard, representing a smaller and a bigger deviation from the original range. The data were additionally corrupted with Gaussian noise. Dataset details can be found in Table 1 of the Appendix.
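
The sketch below illustrates how factors of variation and OOD splits can be organised, using the pendulum length as an example; the numeric ranges are placeholders, not the values used to generate the actual datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative factor ranges (placeholders, not the paper's actual values).
TRAIN_RANGE = {"length": (0.5, 1.5)}   # pendulum: one factor of variation
OOD_EASY    = {"length": (1.5, 2.0)}   # small shift beyond the training range
OOD_HARD    = {"length": (2.0, 3.0)}   # larger shift

def sample_factors(ranges, n):
    """Draw each factor uniformly from its range, one value set per sequence."""
    return {k: rng.uniform(lo, hi, size=n) for k, (lo, hi) in ranges.items()}

train_factors    = sample_factors(TRAIN_RANGE, 8000)
ood_easy_factors = sample_factors(OOD_EASY, 1000)
ood_hard_factors = sample_factors(OOD_HARD, 1000)
```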

4.2 Models and training

The main goal of this work is to assess whether OOD prediction can be improved by using ODE parameters to disentangle the latent representation in VAEs. We opted for simple models to allow more experiments and comparisons. Our main baseline is the VAE, upon which we propose two enhancements that leverage supervised disentanglement using the loss function described in Section 3. The first model, called VAE with Supervised Disentanglement (VAE-SD), uses an identity scaling function. The second, termed VAE-SSD, uses a linear scaling function that maps each supervised latent to the range between the minimum and maximum values of the corresponding ODE parameter in the training set. Another baseline is a multilayer perceptron (MLP) autoencoder, which allows comparison with a deterministic counterpart of the VAE. We additionally apply supervised disentanglement to the latent neurons of the MLP, a model we refer to as MLP-SD. This enables us to assess whether the parameter information can improve other models. Lastly, we include a stacked LSTM model, a popular choice for low-dimensional sequence modelling (Yu et al., 2019), as a representative recurrent method.
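
One plausible form of the VAE-SSD scaling function is sketched below; it assumes the raw latent output is treated as lying roughly in [0, 1], which is an assumption rather than the exact parameterization used.

```python
import torch

def make_linear_scaler(c_min: torch.Tensor, c_max: torch.Tensor):
    """Linearly map a latent output (assumed to lie roughly in [0, 1]) onto the
    [min, max] range of each ODE parameter observed in the training set."""
    def scale(z_params):
        return c_min + z_params * (c_max - c_min)
    return scale
```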

Early experiments revealed a significant variance in the performance of the models, depending on hyperparameters. Under these conditions, we took several steps to make model comparisons as fair as possible. Firstly, all models have similar capacity in terms of neuron count. Secondly, we tune various hyperparameter dimensions, some shared and others model-specific, as detailed in Tables 3, 4 and 5 of the Appendix. Lastly, we conduct a thorough grid search over the hyperparameters to avoid undermining any model. We train the same number of experiments for all models, which amounts to 1440 trained models in total, as summarized in Table 2 of the Appendix.
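
A grid search of this kind can be expressed compactly, as in the sketch below; the hyperparameter names and values are a hypothetical subset of the grids in Tables 3, 4 and 5, shown only to illustrate how the configurations are enumerated.

```python
from itertools import product

# Hypothetical grid for one model/dataset pair.
grid = {
    "latent_size": [4, 8, 16],
    "hidden": [[400, 200]],
    "batch_size": [16, 32],
    "sched_patience": [20, 30, 40],
    "supervision_weight": [0.01, 0.1],
}

# Exhaustive enumeration: every combination of the listed values.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs), "configurations for this model")
```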

4.3 Results

For each dynamical system we focus on the performance on three test sets: the in-distribution test set and the two OOD test sets, which represent an increasing shift from the training data. Models are compared on the cumulative Mean Absolute Error (MAE) between prediction and ground truth over 200 predicted time-steps. This is at least 20 times longer than the training supervision. Long predictions are obtained by re-feeding the model outputs back as input. This approach has been shown to work well in systems where the dynamics are locally deterministic (Fotiadis et al., 2020). A summary of the quantitative results can be found in Figure 2. To account for the variability in the results, we present the 10 runs of each model with the lowest MAE.
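
The autoregressive rollout and the 200-step MAE can be computed as in the following sketch; the single-step interface and tensor shapes are assumptions about the model wrapper, not a prescription.

```python
import torch

@torch.no_grad()
def rollout_mae(model, seed_window, ground_truth, n_steps=200):
    """Autoregressive rollout: re-feed predictions as inputs and compute the MAE.

    seed_window: (input_len, state_dim) initial observations,
    ground_truth: (n_steps, state_dim) reference trajectory,
    model: maps a flattened window to a flattened block of future steps.
    """
    window = seed_window.clone()
    preds = []
    while sum(p.shape[0] for p in preds) < n_steps:
        step = model(window.flatten().unsqueeze(0)).reshape(-1, window.shape[-1])
        preds.append(step)
        # Slide the window forward, keeping the most recent input_len steps.
        window = torch.cat([window, step], dim=0)[-seed_window.shape[0]:]
    preds = torch.cat(preds, dim=0)[:n_steps]
    return torch.mean(torch.abs(preds - ground_truth)).item()
```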

As expected, the MAE is positively correlated with the data distribution shift of the test sets for all systems and models. Among the non-disentangled models, the MLP is generally better than the VAE, while the LSTM is only comparable on the pendulum dataset for small OOD shifts. Disentangled VAE models offer a substantial and consistent improvement over the VAE. It is also important to note that the improvement is more pronounced for the OOD test sets, where the distribution shift is greater. This holds true across all three dynamical systems, a strong sign that disentanglement of ODE parameters is an inductive bias that can lead to better generalization. On the other hand, results for the MLP-SD are mixed, with overfitting observed in some cases, especially OOD. Probabilistic models appear better suited to capture the variation in the data. In any case, the contrast between VAE-SD and MLP-SD illustrates that making use of privileged information is not trivial, and more work is needed to understand what works in practice and why.

Comparing the disentangled VAEs, we see that the scaling in VAE-SSD allows it to better model the data, yielding a lower in-distribution error. This seems to come at a slight overfitting cost, because VAE-SD provides better OOD extrapolation in most cases. A likely explanation is that the extra scaling depends on the min and max values of the factors in the training set: the extra information allows the model to better capture the training data but sacrifices some generalization capacity. Qualitative predictions can be found in Figure 3. All models produce plausible trajectories; nevertheless, the error in some experiments explodes after a finite number of steps.

5 Conclusions

Supervised disentanglement of ODE parameters in the latent space of VAEs is a helpful inductive bias that improves OOD generalization in the modelling of dynamical systems. Disentanglement acts as a regularizer for the encoder, enforces an implicit hierarchy in the latent space, makes the model more explainable, and conditions the decoder. Disentanglement in MLP autoencoders does not yield equally consistent improvements, indicating that using extra information is not straightforward and requires further exploration. While transferring models trained on simulated data to the real world is far from trivial, simulated data are cheap, which motivates related efforts such as sim2real. In that light, supervised disentanglement can provide a pathway to improved robustness in real-world applications where dynamical system prediction is critical. Applying the method to high-dimensional spatiotemporal data from more complicated dynamical systems can further increase its relevance. Sequence-wide parameters could also be exploited through self-supervision.

References

  • A. Ayyad, M. Chehadeh, M. I. Awad, and Y. Zweiri (2020) Real-time system identification using deep learning for linear processes with application to unmanned aerial vehicles. IEEE Access 8 (), pp. 122539–122553. External Links: Document Cited by: §1.
  • S. Bai, J. Z. Kolter, and V. Koltun (2018) Trellis Networks for Sequence Modeling. arXiv. External Links: Link Cited by: §2.
  • G. Barber, M. A. Haile, and T. Chen (2021) Joint Parameter Discovery and Generative Modeling of Dynamic Systems. External Links: Link Cited by: §1, §2.
  • Y. Bengio, A. Courville, and P. Vincent (2012) Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8), pp. 1798–1828. External Links: Link Cited by: §1.
  • A. Bird and C. K.I. Williams (2019) Customizing sequence generation with multitask dynamical systems. arXiv (i). External Links: ISSN 23318422 Cited by: §1.
  • S. Fotiadis, E. Pignatelli, M. Lino Valencia, C. D. Cantwell, A. Storkey, and A. A. Bharath (2020) Comparing recurrent and convolutional neural networks for predicting wave propagation. In ICLR 2020 Workshop on Deep Differential Equations, External Links: Link Cited by: §1, §2, §4.3.
  • M. Fraccaro, S. Kamronn, U. Paquet, and O. Winther (2017) A disentangled recognition and nonlinear dynamics model for unsupervised learning. Advances in Neural Information Processing Systems 2017, pp. 3602–3611. External Links: ISSN 10495258 Cited by: §2.
  • L. Girin, S. Leglaive, X. Bie, J. Diard, T. Hueber, and X. Alameda-Pineda (2020) Dynamical Variational Autoencoders: A Comprehensive Review. External Links: Link Cited by: §1.
  • S. Greydanus, M. Dzamba, and J. Yosinski (2019) Hamiltonian Neural Networks. pp. 1–15. External Links: Link Cited by: §2.
  • I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner (2017) beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations (ICLR), External Links: Document Cited by: §1, §2, §3.
  • W. Hsu, Y. Zhang, and J. Glass (2017) Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data. Advances in Neural Information Processing Systems 2017-December, pp. 1879–1890. External Links: Link Cited by: §2.
  • R. Iten, T. Metger, H. Wilming, L. del Rio, and R. Renner (2018) Discovering physical concepts with neural networks. External Links: Link Cited by: §2.
  • M. Karl, M. Soelch, J. Bayer, and P. Van Der Smagt (2017) Deep variational Bayes filters: Unsupervised learning of state space models from raw data. 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings (ii), pp. 1–13. Cited by: §2, §2.
  • H. Kim and A. Mnih (2018) Disentangling by Factorising. 35th International Conference on Machine Learning, ICML 2018 6, pp. 4153–4171. External Links: Link Cited by: §1, §2.
  • D. P. Kingma and M. Welling (2014) Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings, External Links: Link Cited by: §1, §3, §3.
  • R. G. Krishnan, U. Shalit, and D. Sontag (2015) Deep Kalman Filters. External Links: Link Cited by: §2.
  • Y. Li and S. Mandt (2018) Disentangled Sequential Autoencoder. 35th International Conference on Machine Learning, ICML 2018 13, pp. 8992–9001. External Links: Link Cited by: §2.
  • F. Locatello, S. Bauer, M. Lucic, G. Rätsch, S. Gelly, B. Schölkopf, and O. Bachem (2018) Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations. 36th International Conference on Machine Learning, ICML 2019 2019-June, pp. 7247–7283. External Links: Link Cited by: §1, §2.
  • F. Locatello, B. Poole, G. Rätsch, B. Schölkopf, O. Bachem, and M. Tschannen (2020) Weakly-Supervised Disentanglement Without Compromises. arXiv. External Links: Link Cited by: §2.
  • F. Locatello, M. Tschannen, S. Bauer, G. Rätsch, B. Schölkopf, and O. Bachem (2019) Disentangling Factors of Variation Using Few Labels. External Links: Link Cited by: §1, §2, §3.
  • E. Mathieu, T. Rainforth, N. Siddharth, and Y. W. Teh (2018) Disentangling Disentanglement in Variational Autoencoders. External Links: Link Cited by: §3.
  • M. Mathieu, J. Zhao, P. Sprechmann, A. Ramesh, and Y. LeCun (2016) Disentangling factors of variation in deep representations using adversarial training. External Links: Link Cited by: §2.
  • Đ. Miladinović, M. W. Gondal, B. Schölkopf, J. M. Buhmann, and S. Bauer (2019) Disentangled State Space Representations. External Links: Link Cited by: §1, §2.
  • X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel (2017) Sim-to-real transfer of robotic control with dynamics randomization. CoRR abs/1710.06537. External Links: Link, 1710.06537 Cited by: §1.
  • M. Raissi, P. Perdikaris, and G. E. Karniadakis (2019) Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, pp. 686–707. External Links: Document, ISSN 10902716 Cited by: §1.
  • D. Wei, J. Lim, A. Zisserman, and W. T. Freeman (2018) Learning and using the arrow of time. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8052–8060. External Links: Document Cited by: §1.
  • Y. Yu, X. Si, C. Hu, and J. Zhang (2019) A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Computation 31 (7), pp. 1235–1270. External Links: ISSN 0899-7667, Document, Link Cited by: §4.2.
  • H. Zhou, S. Zhang, J. Peng, S. Zhang, J. Li, H. Xiong, and W. Zhang (2020) Informer: Beyond efficient transformer for long sequence time-series forecasting. arXiv. External Links: ISSN 23318422 Cited by: §1.

Appendix A Datasets

For simulations, we use an adaptive Runge-Kutta integrator with a timestep of 0.01 seconds. Each simulated sequence has a different combination of factors of variation. The pendulum simulation uses an initial angle drawn uniformly at random from a fixed range, while the initial angular velocity is 0. For the other two systems the initial conditions are always the same, to avoid pathological configurations.
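
A sketch of this data-generation procedure for the pendulum, using SciPy's adaptive RK45 integrator, is shown below; the initial-angle range, the gravitational constant and the form of the noise are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_pendulum(length, n_steps=2000, dt=0.01, g=9.81, noise_std=0.05, seed=0):
    """Integrate the pendulum ODE with adaptive RK45, sample on a fixed grid,
    and add Gaussian observation noise."""
    rng = np.random.default_rng(seed)
    theta0 = rng.uniform(-np.pi / 2, np.pi / 2)   # random initial angle, zero velocity

    def rhs(t, state):
        theta, omega = state
        return [omega, -(g / length) * np.sin(theta)]

    t_eval = np.arange(n_steps) * dt
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [theta0, 0.0], t_eval=t_eval, method="RK45")
    theta = sol.y[0]
    return theta + rng.normal(0.0, noise_std, size=theta.shape)
```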

                               Pendulum      Lotka-Volterra   3-Body
Number of ODEs                 1             2                6
Timestep (s)                   0.01          0.01             0.01
Sequence length                2000          1000             1000
Noise (std)                    0.05          0.05             0.01
Factors of variation           1 (length)    4                4
Sequences (Train/Val/Test)     8000/1000/1000 per system
Sequences (OOD Test Set Easy)  1000 per system
Sequences (OOD Test Set Hard)  1000 per system
Table 1: Datasets. In the L-V and 3-body OOD test sets, at least one ODE parameter is outside of the parameter range used for training.

Appendix B Training and Hyperparameters

During training, back-propagation is applied after a single forward pass. The input and output of the models are shorter than the sequence length, so to cover the whole sequence we use random starting points per batch, both during training and testing. The validation set is used for early stopping. We use the Adam optimizer, and apply a learning-rate scheduler whose patience and scaling factor are hyperparameters.
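
The sketch below illustrates the random-window sampling and the optimizer/scheduler setup; the function names and default values are illustrative, with the actual hyperparameter values listed in Tables 3, 4 and 5.

```python
import torch

def sample_windows(sequences, input_len, output_len, batch_size, generator=None):
    """Draw random starting points so that batches cover the whole sequence."""
    seq_idx = torch.randint(len(sequences), (batch_size,), generator=generator)
    xs, ys = [], []
    for i in seq_idx:
        seq = sequences[i]  # tensor of shape (T, state_dim)
        start = torch.randint(len(seq) - input_len - output_len, (1,),
                              generator=generator).item()
        xs.append(seq[start:start + input_len].flatten())
        ys.append(seq[start + input_len:start + input_len + output_len].flatten())
    return torch.stack(xs), torch.stack(ys)

def configure_optimization(model, lr=1e-3, sched_factor=0.3, sched_patience=20):
    """Adam plus a reduce-on-plateau scheduler; factor and patience are the
    tuned hyperparameters mentioned above."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, factor=sched_factor, patience=sched_patience)
    return optimizer, scheduler
```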

B.1 Number of experiments

MLP MLP-SD VAE VAE-SD VAE-SSD LSTM Total
Pendulum 72 72 72 72 72 72 432
L-V 72 72 72 72 72 72 432
3-body 96 96 96 96 96 96 576
Total experiments 1440
Table 2: Number of trained models per architecture and dataset. Each corresponds to a distinct configuration of hyperparameters.
MLP MLP-SD VAE VAE-SD LSTM
Input Size 10, 50
Output Size 1, 10
Hidden Layers [400, 200] 50,100,200
Latent Size 4, 8, 16 -
Nonlinearity Leaky ReLU Sigmoid
Num. Layers - - - - 1,2,3
Learning rate
Batch size 16, 32 16 16, 32 16 16, 64
Sched. patience 20, 30, 40 20,30 20 20 30
Sched. factor 0.3 0.3 0.3 0.3 0.3
Gradient clipping No 1.0 1.0
Layer norm (latent) No No Yes Yes No
Teacher Forcing - - - - Partial
Decoder - - -
Sup. scaling - Linear - Linear -
Supervision - 0.1, 0.2, 0.3 - 0.01, 0.1, 0.2 -
# of experiments 72 72 72 72 72
Table 3: Pendulum hyperparameters.
MLP MLP-SD VAE VAE-SD LSTM
Input Size 50
Output Size 10
Hidden Layers [400, 200] 50,100
Latent Size 8, 16, 32 -
Nonlinearity Leaky ReLU Sigmoid
Num. Layers - - - - 1,2,3
Learning rate
Batch size 16, 32, 64 16, 32 16, 32 16 10, 64, 128
Sched. patience 20, 30 20, 30 20 20 20, 30
Sched. factor 0.3, 0.4 0.3 0.3 0.3 0.3
Gradient clipping No No 0.1, 1.0 0.1, 1.0 No
Layer norm (latent) No No No No No
Teacher Forcing - - - - Partial, No
Decoder - - -
Sup. scaling - Linear - Linear -
Supervision - 0.1, 0.2, 0.3 - 0.01, 0.1, 0.2, 0.3 -
# of experiments 72 72 72 72 72
Table 4: Lotka-Volterra hyperparameters
MLP MLP-SD VAE VAE-SD LSTM
Input Size 50
Output Size 10
Hidden Layers [400, 200] 50,100
Latent Size 8, 16, 32 -
Nonlinearity Leaky ReLU Sigmoid
Learning rate
Batch size 16, 32 16 16 16 16, 64, 128
Sched. patience 30, 40, 50, 60 30, 40, 50, 60 30, 40, 50, 60 30, 40, 50, 60 20, 30
Sched. factor 0.3, 0.4 0.3 0.3, 0.4 0.3, 0.4 0.3
Gradient clipping No No No No No
Layer norm (latent) No No No No No
Decoder - - -
Sup. scaling - Linear - Linear -
Supervision - 0.05, 0.1, 0.2, 0.3 - 0.1, 0.2 -
# of experiments 96 96 96 96 96
Table 5: 3-body system hyperparameters

Appendix C Additional results

Figure 4: Mean Absolute Error at 200 time-steps. The bars represent the 10 runs of each model with the lowest MAE. In all three systems, disentangled VAEs provide an advantage over the other baselines. Disentanglement in the MLP does not increase performance as consistently. The scaling in VAE-SSD allows it to better capture the parameter space of the original test set, but in most cases VAE-SD extrapolates better OOD.
Figure 5: Model predictions (taken from the OOD Test-Set Hard of each system)