Physics has traditionally relied upon human ingenuity to identify key variables, discover physical laws, and model dynamical systems. With the recent explosion of available data coupled with advances in machine learning Goodfellow-et-al-2016, fundamentally new methods of discovery are now possible. However, a major issue with applying these novel techniques to scientific and industrial applications is their interpretability: neural networks and deep learning are often seen as inherently black-box methods. To make progress, we must incorporate scientific domain knowledge into our network architecture design and algorithms without sacrificing the flexibility provided by deep learning models Karpatne:ec; 1710.11431; 1901.11103. In this work, we show that we can leverage unsupervised learning techniques in a physics-informed architecture to build models that learn both to identify relevant interpretable parameters and to perform prediction. Because relevant parameters are necessary for predictive success, the two tasks of extracting parameters and creating a predictive model are closely linked, and we exploit this relationship to do both using a single architecture.
We focus our attention on spatiotemporal systems with dynamics governed by partial differential equations (PDEs). These systems are ubiquitous in nature and include physical phenomena such as hydrodynamics and electromagnetism. Recently, there has been significant interest in the data-driven analysis and discovery of PDEs pmlr-v80-long18a; Rudy:2017cs; RAISSI2019686; BERG2019239; JMLR:v19:18-046. However, previous works on PDE discovery and parameter extraction often assume the entire dataset is governed by the same dynamics and also explicitly provide the key dynamical parameters. In more complex scenarios, we may have limited control over the systems that we are studying and yet still want to model them and extract relevant physical features from their dynamics. If we attempt to study such systems by naïvely training a predictive model, we are likely to fail in one of two ways: first, a single explicit PDE model will be unable to capture the variations in the dynamics caused by uncontrolled variables in the data, and second, a generic deep learning method for time-series prediction such as long short-term memory (LSTM)-based models hochreiter1997long; cp_19991218 will not be interpretable or provide any physical insight and may also result in unphysical solutions at later times due to overfitting. To avoid these problems and gain a better understanding of the physical system, we must first identify important parameters or variables that are uncontrolled and that change in the raw data, producing varying dynamics. Recent work on learning parametric PDEs has taken steps toward addressing this issue doi:10.1137/18M1191944. We will use an unsupervised learning method to automate the process of determining the relevant parameters that control the system dynamics and constructing a predictive model.
We propose a model architecture (Figure 1a) based on variational autoencoders (VAEs) 1312.6114. VAEs are widely used for dimensionality reduction and unsupervised learning tasks Goodfellow-et-al-2016 and have been shown to be effective for studying a wide variety of physical phenomena, e.g. discovering simple representations of systems in classical and quantum mechanics 1807.10300, modeling protein folding and molecular dynamics 1710.10629; Hernandez:2018kk; Wehmeyer:2018hy, and identifying condensed matter phase transitions Wetzel:2017ko; 1705.09524. In terms of interpretability, the VAE architecture and its derivatives have also been shown to disentangle independent factors of variation higgins2017beta; 1804.03599; NIPS2018_7527.
Our architecture consists of an encoder (Figure 1b) that extracts physical parameters characterizing the system dynamics and a decoder (Figure 1c) that acts as a predictive model and propagates an initial condition forward in time given the extracted parameters. This differs from a traditional VAE due to the additional initial condition provided to the decoder, allowing the encoder to focus on extracting latent parameters that parameterize the dynamics of the system rather than the physical state. Our architecture can be thought of as a conditional VAE NIPS2015_5775, although only the decoder is conditional. While similar architectures have been recently proposed for physical systems such as interacting particles 1807.09244 and moving objects xue2018visual, our model is specifically designed to study spatiotemporal phenomena, which have a continuous set of degrees of freedom.
To take advantage of the spatiotemporal structure of PDE-governed systems, we use convolutional layers—commonly used for image recognition tasks NIPS2012_4824; He_2016_CVPR—which efficiently represent local features in both the encoder and decoder portions of our architecture. The translation invariance of the convolutions also allows us to train on small patches of data and then evaluate on arbitrarily large systems with arbitrary boundary conditions (Appendix F). In addition, we construct the decoder to efficiently parameterize the dynamics of a PDE by making its convolutional kernels and biases a function of the extracted latent parameters using a fully-connected neural network. In this way, the latent parameters directly control the local propagation of the physical states in the decoder, resulting in more stable predictions for the future propagation of the system and a more physical encoding of the dynamics. Using convolutions to represent PDE propagation in the decoder is also analogous to applying a finite difference approximation pmlr-v80-long18a.
To demonstrate the capabilities of this approach, we test our method on simulated data from PDE models for chaotic wave dynamics, optical nonlinearities, and convection and diffusion (Section III). These numerical experiments show that our method can accurately identify and extract relevant physical parameters that characterize variations in the observed dynamics of a spatiotemporal system, while at the same time constructing a flexible and transferable predictive model. We further show that the parameter extraction is robust to noisy data and can still be effective for chaotic systems where accurate prediction is difficult. The goal of this approach is to provide an additional tool for studying complex spatiotemporal systems when there are unknown and uncontrolled variables present.
II Model Architecture
Figure 1: (a) Overview of the model architecture. (b) The dynamics encoder (DE) extracts the latent parameter distributions from the input series using dilated convolutions and inverse-variance weighted (IVW) averaging. During training, the latent parameters are sampled from the extracted distribution. (c) The propagating decoder (PD) then uses a fully-connected latent-to-kernel network to map the latent parameters to the kernels and biases of the dynamic convolutional layers, which are used in a recurrent fashion to predict the propagation of the system starting from the initial condition. The model is trained end-to-end using the mean squared error (MSE) loss between the predicted propagation series and target series along with VAE regularization. Time-series limits are dropped in the figure labels for conciseness.
Our model (Figure 1) has an encoder–decoder architecture based on a variational autoencoder (VAE) 1312.6114; higgins2017beta. Given a dataset of time-series from a spatiotemporal system, the dynamics encoder (DE) extracts latent parameters which parameterize the varying dynamics in the dataset. These latent parameters are then used by the propagating decoder (PD) to simulate the system given an initial condition and boundary conditions. During training, the model is optimized to match the output of the PD to a time-series example from the dataset. The goal of the VAE architecture is to allow the PD to push the DE to extract useful and informative latent parameters.
For training, the network requires time-series data that are grouped in pairs: the input series is the input to the DE, and the target series provides the initial condition and the training targets for the PD. Each pair of time-series must follow the same dynamics and thus correspond to the same latent parameters. We can construct such a dataset from the raw data by cropping each original time-series to produce a pair of smaller time-series. This cropping can be performed randomly in both the time and space dimensions, which allows the network to train on a reduced system size while still making use of all the available data. In our examples, we also choose to crop dynamically during training, akin to data augmentation methods used in image recognition He_2016_CVPR.
In detail, the DE network takes the full input series $x$ and outputs a mean $\mu_i$ and a variance $\sigma_i^2$ representing a normal distribution for each latent parameter $z_i$. During training, each $z_i$ is sampled from its corresponding distribution using the VAE reparameterization trick: $z_i = \mu_i + \sigma_i \epsilon_i$, where $\epsilon_i \sim \mathcal{N}(0, 1)$ is independently sampled for every training example during each training step. During evaluation, we simply take $z_i = \mu_i$. These parameters along with an initial condition—the first state in the target series—are then used by the PD network to predict the future propagation of the system. The predicted propagation series produced by the PD can be computed up to an arbitrary future time, which during training is chosen to match the length of the target series. By providing the PD with an initial condition, we allow the DE to focus on encoding parameters that characterize the dynamics of the data rather than encoding a particular state of the system. This is further reinforced by training on randomly cropped pairs of time-series as well as by the VAE regularization term (Appendix D).
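As a minimal sketch of the sampling step (written in NumPy for self-containment rather than the PyTorch used for the actual model; the function name is illustrative):

```python
import numpy as np

def sample_latents(mu, logvar, training=True, rng=None):
    """VAE reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1).

    During evaluation we simply return the mean, as described in the text.
    """
    if not training:
        return mu
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.exp(0.5 * logvar)          # log-variance -> standard deviation
    eps = rng.standard_normal(mu.shape)   # fresh noise for every training step
    return mu + sigma * eps
```

Sampling through this deterministic transformation, rather than from the distribution directly, is what keeps the operation differentiable with respect to the encoder outputs.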
The full architecture is trained end-to-end using a mean-squared error loss between the predicted propagation series $\hat{y}_t$ from the PD and the target series $y_t$. We also add the VAE regularization loss—the KL divergence term $D_{\mathrm{KL}}\big(q(z \mid x) \,\|\, p(z)\big)$—which encourages each latent parameter distribution generated by the DE to approach the standard normal prior distribution $p(z) = \mathcal{N}(0, 1)$. The total loss function is given by
$$\mathcal{L} = \frac{1}{T} \sum_{t=1}^{T} \|\hat{y}_t - y_t\|^2 + \beta\, D_{\mathrm{KL}}\big(q(z \mid x) \,\|\, p(z)\big),$$
where $T$ is the length of the target series without the initial condition $y_0$, and $\beta$ is a regularization hyperparameter which we tune for each dataset. (The hyperparameter $\beta$ can be interpreted either as $\beta$-VAE regularization higgins2017beta or as a particular choice of output distribution for the propagating decoder.) The parameter $\beta$ is key to learning disentangled representations higgins2017beta; 1804.03599. By using the VAE sampling method and regularizer, we compel the model to learn independent and interpretable latent parameters (Appendix D). For additional training details, see Appendix B.
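For diagonal Gaussian posteriors, the KL term has a simple closed form, $\tfrac{1}{2}\sum_i (\mu_i^2 + \sigma_i^2 - \log\sigma_i^2 - 1)$, so the loss can be sketched as follows (function and argument names are illustrative):

```python
import numpy as np

def vae_loss(pred, target, mu, logvar, beta):
    """MSE reconstruction loss plus beta-weighted KL divergence to N(0, 1).

    For a diagonal Gaussian posterior N(mu, sigma^2), the KL term is
    0.5 * sum(mu^2 + sigma^2 - log sigma^2 - 1), averaged over the batch here.
    """
    mse = np.mean((pred - target) ** 2)
    kl = 0.5 * np.mean(np.sum(mu**2 + np.exp(logvar) - logvar - 1.0, axis=-1))
    return mse + beta * kl
```

Note that the KL term vanishes exactly when every latent collapses to the prior (mu = 0, sigma = 1), which is the mechanism behind the latent-collapse behavior discussed in Section IV.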
II.1 Dynamics Encoder (DE)
The dynamics encoder (DE) network is designed to take advantage of existing symmetry or structure in the time-series data. We implement the DE as a deep convolutional network in both the time and space dimensions to allow the network to efficiently extract relevant features. To ensure the DE can handle arbitrary system sizes and time-series lengths, the architecture only contains convolutional layers with a weighted average applied at the output to obtain the latent parameters. The weights for the final averaging are also learned by the network and interpreted as variances so that the overall variance can also be computed. In this way, the network is able to focus on areas of the input series that are most important for estimating the latent parameters, akin to a visual attention mechanism pmlr-v37-xuc15.
Explicitly, we first compute local quantities $\mu(\mathbf{r}, t) = f_\mu(x)$ and $\sigma^2(\mathbf{r}, t) = f_{\sigma^2}(x)$, where $f_\mu$ and $f_{\sigma^2}$ are multilayer convolutional networks in space and time (see Appendix A for details). Then, instead of using a fully-connected layer to compute the final mean and variance for each latent parameter, we combine the local quantities by performing an inverse-variance weighted average using weights $w(\mathbf{r}, t) \propto 1/\sigma^2(\mathbf{r}, t)$, giving
$$\mu_z = \frac{\sum w\,\mu}{\sum w}, \qquad \sigma_z^2 = c \left(\sum \frac{1}{\sigma^2}\right)^{-1},$$
where $c$ is a constant chosen to correct for the correlations between nearby points. This averaging serves two purposes: it allows the DE to scale to arbitrary system sizes and geometries, and it improves the parameter extraction by placing greater emphasis on regions of high confidence. Assuming that non-overlapping patches should be treated as independent while overlapping patches are increasingly correlated, we take $s$ to be the size of the receptive field of the convolutional networks and set $c = s^d$ in $d$ spacetime dimensions.
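A minimal NumPy sketch of this averaging over a flattened set of local estimates (the function name and the scalar treatment are illustrative; the model applies this per latent parameter over all spacetime positions):

```python
import numpy as np

def ivw_average(local_mu, local_var, c=1.0):
    """Combine local estimates (mu_i, sigma_i^2) into one latent estimate by
    inverse-variance weighting; `c` is the correlation-correction constant
    described in the text, which inflates the combined variance because
    overlapping patches are not statistically independent."""
    local_mu = np.asarray(local_mu, dtype=float)
    local_var = np.asarray(local_var, dtype=float)
    w = 1.0 / local_var                        # inverse-variance weights
    mu = np.sum(w * local_mu) / np.sum(w)      # weighted mean
    var = c / np.sum(w)                        # corrected combined variance
    return mu, var
```

Low-variance (high-confidence) regions dominate the weighted mean, which is the attention-like behavior described above.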
II.2 Propagating Decoder (PD)
The propagating decoder (PD) network is designed as a predictive model for spatiotemporal systems. We structure the PD as a multilayered convolutional network $f_{\mathrm{PD}}$ (see Appendix A for details) with a residual skip connection that maps a state $y_t$ to the next state $y_{t+1}$ in the time-series. Thus, each propagation step is given by
$$y_{t+1} = y_t + f_{\mathrm{PD}}(y_t; z).$$
To generate the predicted propagation series $\hat{y}_t$ for comparison with the target series $y_t$, we begin with the initial condition $y_0$ and then recursively apply the PD to propagate the state forward in time, forming a recurrent network. The PD acts as a physics simulator or, in this case, a PDE integrator with explicit time stepping. This architecture reflects both the spatial and temporal structure of PDE-governed systems and incorporates boundary conditions by properly padding each state at each time step before applying the convolutional layers. For example, to use periodic boundary conditions during evaluation, we apply periodic padding at each time step. During training, we treat the edges of the target series—cropped from a full training example—as a boundary condition by using the spatial boundary of each state in the target series to pad the corresponding state before propagation.
Unlike the convolutional layers of the DE, the kernels and biases of the PD convolutional layers are not directly trained. Instead, the kernel weights and biases are a function of the latent parameters $z$. This type of layer is known as a dynamic convolution NIPS2016_6578; wu2018pay or a cross-convolution xue2018visual. Each convolutional kernel and corresponding bias is constructed by a separate fully-connected latent-to-kernel network that maps the latent parameters to each kernel or bias, forming a multiplicative connection in the PD. Thus, we can interpret the PD convolutional kernels and biases as encoding the dynamics of the system parameterized by $z$.
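To make the dynamic-convolution idea concrete, here is a minimal NumPy sketch of one PD propagation step with a single 1D kernel generated from the latent parameters (the single linear latent-to-kernel layer and all shapes are illustrative; the real network uses multiple layers and channels):

```python
import numpy as np

def latent_to_kernel(z, W, b):
    """Hypothetical latent-to-kernel map, reduced to one linear layer:
    latent parameters z -> a 1D convolution kernel."""
    return W @ z + b

def pd_step(u, kernel):
    """One propagating-decoder step: periodic padding, a local convolution,
    and the residual skip connection u_{t+1} = u_t + conv(u_t)."""
    k = len(kernel) // 2
    padded = np.concatenate([u[-k:], u, u[:k]])      # periodic boundary
    conv = np.convolve(padded, kernel, mode="valid")  # local stencil update
    return u + conv
```

Because the kernel is a function of the latents, changing z directly changes the local update rule, which is how the extracted parameters come to govern the propagation dynamics.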
III Simulated PDE Datasets
To study the ability of our architecture to perform parameter extraction, we generate simulated datasets of spatiotemporal systems that have spatially uniform, time-independent local dynamics in a box with periodic boundary conditions, i.e. we consider PDEs of the form
$$\frac{\partial u}{\partial t} = F\!\left(u, \nabla u, \nabla^2 u, \ldots\right),$$
where $F$ is a general space- and time-independent, nonlinear local operator acting on $u(\mathbf{r}, t)$. This allows us to design an optimized, physics-informed model architecture. We test our model on a variety of spatiotemporal systems by creating the following three datasets that cover linear, nonlinear, and chaotic dynamics as well as giving both 1D and 2D examples. For details on the generation of the simulated datasets, see Appendix C.
III.1 1D Kuramoto–Sivashinsky
The Kuramoto–Sivashinsky equation
$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + \frac{\partial^2 u}{\partial x^2} + \gamma\frac{\partial^4 u}{\partial x^4} = 0$$
is a nonlinear scalar wave equation with a viscosity damping parameter $\gamma$. This is a key example of a chaotic PDE manneville1985liapounov due to the instability caused by the negative second derivative term and was originally derived to model laminar flame fronts kuramoto1978diffusion; sivashinsky1980flame. The 1D Kuramoto–Sivashinsky dataset has a training set with 5,000 examples and a test set with 10,000 examples.
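As a sketch of the dynamics being learned, the Kuramoto–Sivashinsky right-hand side can be evaluated pseudospectrally on a periodic domain (assuming the standard form $u_t = -u u_x - u_{xx} - \gamma u_{xxxx}$ with viscosity damping parameter $\gamma$; the datasets themselves are generated with the stiff ETDRK4 integrator rather than naive explicit stepping):

```python
import numpy as np

def ks_rhs(u, L, gamma):
    """Kuramoto-Sivashinsky right-hand side u_t = -u*u_x - u_xx - gamma*u_xxxx,
    with derivatives computed spectrally on a periodic domain of length L."""
    n = len(u)
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers (imaginary)
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(k * u_hat))
    u_xx = np.real(np.fft.ifft(k**2 * u_hat))
    u_xxxx = np.real(np.fft.ifft(k**4 * u_hat))
    return -u * u_x - u_xx - gamma * u_xxxx
```

The destabilizing $-u_{xx}$ term pumps energy into long wavelengths while $-\gamma u_{xxxx}$ damps short ones, which is the competition that produces the chaotic behavior.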
III.2 1D Nonlinear Schrödinger
The nonlinear Schrödinger equation
$$i\frac{\partial \psi}{\partial t} = -\frac{1}{2}\frac{\partial^2 \psi}{\partial x^2} + \kappa |\psi|^2 \psi$$
is a complex scalar wave equation with a cubic nonlinearity controlled by the coefficient $\kappa$. In our data, we represent the complex field $\psi$ as a real two-component vector. This equation can be used to model the evolution of wave-packets in nonlinear optics and is known to exhibit soliton solutions ablowitz_2011. The 1D nonlinear Schrödinger dataset has a training set with 5,000 examples and a test set with 10,000 examples.
III.3 2D Convection–Diffusion
The 2D convection–diffusion equation
$$\frac{\partial u}{\partial t} = D \nabla^2 u - \mathbf{v} \cdot \nabla u$$
is a linear scalar wave equation consisting of a diffusion term with constant $D$ and a velocity-dependent convection term with velocity field $\mathbf{v}$. The equation describes a diffusing quantity that is also affected by the flow or drift of the system, e.g. dye diffusing in a moving fluid. We consider the case of a constant velocity field. The 2D convection–diffusion dataset has a training set with 1,000 examples and a test set with 1,000 examples.
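Because the equation is linear with constant coefficients, each Fourier mode evolves independently, so the exact solution used for dataset generation (Appendix C) can be sketched directly (assuming the form $u_t = D\nabla^2 u - \mathbf{v}\cdot\nabla u$ on a periodic square):

```python
import numpy as np

def convection_diffusion_exact(u0, t, D, v, L):
    """Exact solution on a periodic square of side L: each Fourier mode
    is damped by exp(-D|k|^2 t) and translated by the constant drift v
    through the phase exp(-i k.v t)."""
    n = u0.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    decay = np.exp(-D * (kx**2 + ky**2) * t)
    phase = np.exp(-1j * (kx * v[0] + ky * v[1]) * t)
    return np.real(np.fft.ifft2(np.fft.fft2(u0) * decay * phase))
```

In this representation the three physical parameters appear cleanly separated: $D$ only in the real decay factor and $(v_x, v_y)$ only in the phase, mirroring the separation the model is expected to discover.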
IV Numerical Experiments
[Tables 1 and 2: linear-fit results for the extracted latent parameters; column headers: Param. | LP 1 | LP 2 | LP 5 and Param. | LP 1 | LP 2 | LP 3.]
We perform numerical experiments by training the model on both the original noiseless datasets and the datasets with added Gaussian noise—corresponding to 10% noise relative to the initial conditions. Then, we evaluate the trained models on the full-size noiseless test set examples (no cropping). By also training on noisy datasets, we test the robustness of our method and show the effect of noise on the extracted parameters and prediction performance.
IV.1 Parameter Extraction
Figure 3: Extracted latent parameter versus the true viscosity damping parameter $\gamma$ from the 1D Kuramoto–Sivashinsky dataset. The light blue shaded bars are the 95% confidence intervals produced by the model. Results are shown for models trained on (a) the original noiseless dataset as well as (b) the dataset with added Gaussian noise.
Figure 5: The trained model encodes the drift velocity of the 2D convection–diffusion dataset in a rotated two-dimensional latent subspace (Appendix E), so we instead show a multivariate linear regression in the subspace of two relevant latent parameters, which extracts the linear combination of latent parameters that corresponds to the $v_x$ and $v_y$ velocity components. The light blue shaded bars are the 95% confidence intervals produced by the model. Results are shown for models trained on (a) the original noiseless dataset as well as (b) the dataset with added Gaussian noise.
During training, the model will only use a minimal set of latent parameters to encode the variation in the dynamics and will align each latent parameter in this subspace with an independent factor of variation due to the VAE regularization higgins2017beta; 1804.03599. Intuitively, the regularization encourages each latent parameter to independently collapse to a non-informative prior $\mathcal{N}(0, 1)$, and so the model prefers to minimize its use of the latent parameters and maintain their independence (Appendix D). Therefore, the number of latent parameters provided to the model is not critical as long as it is greater than the number of independent factors of variation. In our experiments, we allow the model to use five latent parameters. Because the 1D datasets have only one varying physical parameter and the 2D dataset has three varying physical parameters, the trained model will only make use of one or three latent parameters, respectively, and the rest will collapse to the prior.
We can determine the number of relevant latent parameters and empirically verify this claim by examining the statistics of the extracted distribution parameters $\mu_z$ and $\sigma_z^2$ from the dynamics encoder (DE) for each dataset. A latent parameter that is useful to the propagating decoder (PD) for predicting the target series will have high variance in $\mu_z$ across the dataset and a low mean $\sigma_z^2$, implying that the extracted parameter is precise and informative. A parameter which has collapsed to the prior and is non-informative will have low variance in $\mu_z$ and high mean $\sigma_z^2$. These statistics indeed show that our model can correctly determine the number of relevant parameters for each dataset (Figure 2). In real applications, we will not have access to the ground truth physical parameters, so we must rely on these metrics to identify the relevant parameters extracted by the model.
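This relevance check can be sketched as follows (the threshold rule is a hypothetical heuristic for illustration, not a criterion taken from the text):

```python
import numpy as np

def relevant_latents(mu, logvar, threshold=0.5):
    """Flag informative latent parameters: a relevant latent has a large spread
    of mu across the dataset and a small average sigma^2, while a collapsed
    latent has Var[mu] ~ 0 and sigma^2 ~ 1 (the prior).

    mu, logvar: arrays of shape (num_examples, num_latents).
    """
    mu_spread = np.var(mu, axis=0)             # variance of means across examples
    avg_var = np.mean(np.exp(logvar), axis=0)  # mean predicted variance
    return mu_spread > threshold * avg_var     # boolean mask of relevant latents
```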
To evaluate the performance of our parameter extraction method, we compare the extracted latent parameters from the model with the true physical parameters used to generate our simulated datasets: the viscosity damping parameter $\gamma$ for the 1D Kuramoto–Sivashinsky dataset, the nonlinearity parameter $\kappa$ for the 1D nonlinear Schrödinger dataset, and the diffusion constant $D$ and drift velocity components $v_x$, $v_y$ for the 2D convection–diffusion dataset. Because these simulated physical parameters are drawn from normal distributions (Section III), we expect the relevant latent parameters—which have prior distribution $\mathcal{N}(0, 1)$—to be linearly related to the true parameters (Appendix D). For real experimental systems, this is also a reasonable assumption for uncontrolled variables because natural parameters tend to be normally distributed due to the central limit theorem. We assess the quality of the extracted parameters by linearly fitting the relevant latent parameters with the ground truth physical parameters to obtain parameter predictions and $R^2$ correlation coefficients. Our numerical experiments show excellent parameter extraction on all three datasets (Figures 3–5), with high $R^2$ values for all parameters (Table 1) and no degradation in performance with added Gaussian noise. However, we do observe some nonlinear behavior at the edges of the parameter range, likely due to data sparsity in those regions.
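The linear fit and $R^2$ evaluation can be sketched with an ordinary least-squares solve (function and variable names are illustrative):

```python
import numpy as np

def fit_latent_to_param(z, p):
    """Least-squares linear fit of latent parameters z (n x d) to a true
    physical parameter p (n,), returning predictions and the R^2 score."""
    A = np.column_stack([z, np.ones(len(p))])    # design matrix with bias term
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)
    pred = A @ coef
    ss_res = np.sum((p - pred) ** 2)
    ss_tot = np.sum((p - np.mean(p)) ** 2)
    return pred, 1.0 - ss_res / ss_tot
```

Using all relevant latents as regressors also covers the multivariate case of the drift velocity, where the physical parameter lives in a rotated latent subspace.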
Looking more closely at the results for the three-parameter convection–diffusion dataset, we see that the trained model correctly extracts three relevant latent parameters: one latent parameter corresponds to the diffusion constant $D$ and the remaining two-dimensional latent subspace corresponds to the drift velocity vector (Table 2). In particular, the model learns a rotated representation of the drift velocity vector as a two-dimensional latent subspace due to the inherent rotational symmetry of the dynamics (Appendix E), so we can recover the $v_x$, $v_y$ components of the drift velocity by performing a multivariate linear fit (Figure 5). The successful separation of diffusion from drift velocity in the extracted parameters demonstrates our model’s ability to distinguish distinct and interpretable factors of variation in the dynamics of the system.
IV.2 Prediction Performance
In addition to testing parameter extraction, we evaluate the prediction performance of the trained models on their corresponding test sets. Due to training speed and stability considerations, our models are initially trained with a PD architecture containing only 16 hidden channels (Appendix A). To show the potential for further model refinement, we fix the weights of the DEs trained on the original noiseless datasets and then train additional PDs, each with an expanded 64 hidden channels. These refined predictive models perform better than the original predictive models used during end-to-end training. For comparison, the datasets are also evaluated with a stiff exponential integrator for semilinear differential equations (ETDRK4 kassam2005fourth) using a second order finite difference discretization on the same time and space meshes provided in the datasets. Note that using a non-stiff integrator fails on many of the examples in the datasets, so a stiff integrator is required.
The models trained on the 1D Kuramoto–Sivashinsky and 1D nonlinear Schrödinger datasets both perform reasonably when compared with the traditional finite difference method (Figures 6a, 6b), with the model trained on the Kuramoto–Sivashinsky dataset maintaining a higher accuracy than its traditional counterpart. The prediction error of the 2D convection–diffusion model is dominated by the uncertainty in the parameter extraction, so the prediction performance is comparable to a finite difference exponential integrator with similar noise in the PDE parameters (Figure 6c). For the models trained on datasets with added noise, we do see some negative impact on prediction performance (Figure 6), but there is no effect on the parameter extraction quality (Tables 1, 2).
The refined models, trained on the noiseless datasets, demonstrate that the PD—the predictive network—can be improved independently of the DE—the parameter extraction network. Moreover, the solutions generated by these models remain stable and physically reasonable well beyond the number of time steps propagated during training, suggesting that the models have indeed learned physically meaningful propagators of the PDE-governed systems (Figures 7–9).
We have developed a general unsupervised learning method for extracting unknown dynamical parameters from noisy spatiotemporal data without detailed knowledge of the underlying system or the exact form of the governing PDE. While we do not explicitly extract the governing PDE, our method provides a set of tunable relevant parameters, which characterize the system dynamics in independent and physically interpretable ways, coupled with a highly transferable predictive model.
The flexibility and robustness of our model come from using a generic physics-informed neural network model for nonlinear PDEs. The interpretability of the resulting extracted parameters is a result of the variational autoencoder (VAE) training and regularization (Appendix D) as well as the inductive biases imposed by our physics-informed network design. By using appropriate spatial averaging in the dynamics encoder (DE) and dynamic convolutions in the propagating decoder (PD), we ensure that both the parameter extraction and the propagation prediction from our model are physically motivated and generalizable to arbitrary system sizes and geometries. The dynamic convolutions, in particular, are an important physical inductive bias for encouraging the model to learn latent parameters which govern the propagation dynamics. As a result, the learned parameter-to-kernel mappings in the trained predictive model are fully transferable, which we can demonstrate by evaluating the predictive model without retraining on a different set of boundary conditions (see Appendix F).
Our strategy for modeling spatiotemporal systems is to retain the expressiveness of a neural network model while imposing general constraints—such as locality through using convolutional layers—to help the network learn more efficiently. For particular applications, this could also include spatial symmetries cohen2017steerable; pmlr-v48-cohenc16; NIPS2018_8239, e.g. properly transforming fluid flow vectors, as well as additional symmetries of the internal degrees of freedom, e.g. the global phase of the nonlinear Schrödinger equation. These architecture-based constraints encourage the model to learn physically relevant representations and can be tailored to individual applications, allowing us to incorporate domain knowledge into the model. This also lets us use datasets that are much smaller than is traditionally required by deep learning methods. In fact, we have been able to successfully train the model with as few as 10 examples (Appendix G). Additionally, the model’s robustness to noise is a powerful feature of deep learning methods and provides a promising avenue for studying dynamical systems in a data-driven fashion RUDY2019.
We believe that this unsupervised learning method can be further adapted in the future for a more general class of spatiotemporal systems by incorporating spatially inhomogeneous latent parameters, and it will also be able to make use of data with incomplete physical states by inferring the missing information. Parameter extraction will improve following the rapid advances in unsupervised learning and disentangling representations, e.g. through a deeper theoretical understanding of the $\beta$-VAE alemi18a; 1804.03599; 1702.08658 and alternative formulations NIPS2018_7527; infoVAE. Physics-informed inductive biases, however, will remain the key ingredient for ensuring the representations are interpretable pmlr-v97-locatello19a. The predictive model will also likely achieve better accuracies by using more sophisticated architectures, which have been shown to perform extremely well on even chaotic PDEs PhysRevLett.120.024102, and by explicitly incorporating differentiable PDE solvers with derivatives computed by the adjoint method NIPS2018_7892. This would allow the PD network to focus on encoding the PDE rather than also learning a stable integration method, and thus improve the interpretability of the model.
The ultimate goal of this work is to provide additional insight into complex spatiotemporal dynamics using a data-driven approach. Our method is an example of a new machine learning tool for studying the physics of spatiotemporal systems with an emphasis on interpretability.
We would like to acknowledge useful discussions with Jason Fleischer, Rumen Dangovski, and Li Jing. This research is sponsored in part by the Army Research Office and was accomplished under Cooperative Agreement Number W911NF-18-2-0048. This work is further supported in part by the U.S. Department of Defense through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program and by the MIT–SenseTime Alliance on Artificial Intelligence. This material is also based in part upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00111890042.
Appendix A Model Implementation Details
For the 1D datasets, the dynamics encoder (DE) uses 2D convolutions with output channel sizes , linear kernel sizes , and dilation factors . For the 2D dataset, the DE uses 3D convolutions with output channel sizes with the same kernel sizes and dilation factors. The mean and variance networks share the same convolution weights for the first four layers and have distinct final layers to produce $\mu_z$ and $\sigma_z^2$. The final output channel size is determined by the number of latent parameters; in our tests, we use five latent parameters.
For the 1D datasets, the propagating decoder (PD) architecture uses three 1D dynamic convolutional layers with output channel sizes , linear kernel size , and periodic padding. For the 2D datasets, the PD uses three 2D dynamic convolutional layers with the same output channel sizes, kernels, and padding. The refined models increase the number of hidden channels in the PD from 16 to 64, resulting in output channel sizes .
The latent-to-kernel network, which maps the latent parameters to each kernel or bias in the PD, consists of two fully-connected layers, i.e. one hidden layer. For the 1D datasets, the hidden layers have size for kernels and for biases, where the input and output channel sizes refer to the corresponding dynamic convolution in the PD. For the 2D dataset, the hidden layers have size for kernels and for biases.
Our architecture uses ReLU activations throughout except for the unactivated output layers of the DE, PD, and latent-to-kernel networks. The output of the PD convolutional network uses a tanh activation with a learnable scaling parameter (initialized to 1) to stabilize the recurrent architecture. The network also has an additional fixed multiplicative pre-factor to improve the initial training stability.
Our model is implemented using PyTorch v1.0 paszke2017automatic.
Appendix B Training Details
All models are trained using batch size 50 and the Adam optimizer kingma2015adam. Models for the 1D datasets and the noisy 2D dataset were trained for 2,000 epochs; the model for the noiseless 2D dataset was trained for 4,000 epochs; and the refined models were all trained for 2,000 epochs. The VAE regularization hyperparameter $\beta$ is set separately for the 1D datasets and for the 2D dataset. All hyperparameter tuning was done using the training set for validation.
For the 1D Kuramoto–Sivashinsky dataset, we train the model using a crop—in the time and space dimensions, respectively—for the input series and a crop for the target series. For the 1D nonlinear Schrödinger dataset, we train using a crop for the input series and a crop for the target series. For the 2D convection–diffusion dataset, we train using a crop—in the one time and two space dimensions, respectively—for the input series and a crop for the target series. During evaluation on each dataset, we use the same full size time-series from the test set for both the input and target series.
Appendix C Dataset Generation Details
For each time-series example in the 1D Kuramoto–Sivashinsky dataset, we sample the viscosity damping parameter $\gamma$ from a truncated normal distribution (, , and truncation interval ). We then use the ETDRK4 integrator kassam2005fourth to generate each time-series to within a local relative error of . Each time-series consists of a uniform time mesh with points for a total time and a space mesh with points for an unit cell. These are produced by solving on a finer time and space mesh to ensure convergence and then resampling to the dataset mesh sizes. Each initial state is generated from independently sampled, normally distributed Fourier components with a Gaussian band-limiting envelope of varying widths (uniformly sampled in the interval ) and then normalized to unit variance.
For each time-series example in the 1D nonlinear Schrödinger dataset, we sample the nonlinearity coefficient from a truncated normal distribution. We then use the ETDRK4 integrator kassam2005fourth to generate each time series to within a fixed local relative error. Each time series consists of a uniform time mesh and a uniform space mesh over the unit cell, again produced by solving on a finer mesh to ensure convergence and then resampling to the dataset mesh sizes. The initial states are generated in a manner analogous to the Kuramoto–Sivashinsky dataset.
For each time-series example in the 2D convection–diffusion dataset, we vary the parameters by sampling the diffusion constant from a truncated normal distribution and each velocity component from a normal distribution. Because the convection–diffusion equation is linear, we use the exact solution to generate the dataset. Each time series consists of a uniform time mesh and a uniform 2D space mesh over the unit cell. Each initial state is generated from independently sampled, normally distributed Fourier components with a Gaussian band-limiting envelope of varying widths (uniformly sampled from a fixed interval) and then normalized to unit variance.
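Because each Fourier mode of the periodic convection–diffusion equation evolves independently as $\exp(-(D|k|^2 + i\,k\cdot v)\,t)$, the exact solution can be computed spectrally. A minimal sketch (grid size and domain length are illustrative assumptions) is:

```python
import numpy as np

def convection_diffusion_exact(u0, D, v, t, L=2 * np.pi):
    """Evolve a periodic 2D field u0 under u_t = D * laplacian(u) - v . grad(u)
    exactly in Fourier space. Domain is an L x L periodic unit cell."""
    n = u0.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Each mode decays by the diffusion rate and is advected by the drift velocity.
    factor = np.exp(-(D * (kx**2 + ky**2) + 1j * (kx * v[0] + ky * v[1])) * t)
    return np.real(np.fft.ifft2(factor * np.fft.fft2(u0)))
```

This avoids any time-stepping error, which is why the linearity of the equation makes dataset generation exact.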
Appendix D Understanding the Effects of VAE Regularization
Using VAE 1312.6114 or β-VAE higgins2017beta regularization in our model provides three main benefits for learning physically interpretable representations: the regularization encourages the model to minimize its use of the latent parameters, enforces independence among the learned latent parameters, and matches the marginal latent distribution to a standard normal prior. We can see these effects explicitly by decomposing the data-averaged VAE regularization term in the following way NIPS2018_7527; hoffman2016elbo:
\begin{align}
\mathbb{E}_{p(x)}\!\left[D_{\mathrm{KL}}\big(q(z \mid x)\,\|\,p(z)\big)\right]
&= I_q(z; x) \tag{13}\\
&\quad + D_{\mathrm{KL}}\Big(q(z)\,\Big\|\,\prod_i q(z_i)\Big) \tag{14}\\
&\quad + \sum_i D_{\mathrm{KL}}\big(q(z_i)\,\|\,p(z_i)\big), \tag{15}
\end{align}
where $p(x)$ is the data distribution, $p(z) = \prod_i p(z_i)$ are the standard normal priors for the latent parameters $z = (z_1, z_2, \ldots)$, $q(z \mid x)$ is the output distribution of the dynamics encoder, $q(z, x) = q(z \mid x)\,p(x)$ is the joint distribution of the encoded latent parameters and the data, and $q(z)$ and $q(z_i)$ are the marginal distributions of the latent parameters or a single latent parameter $z_i$, respectively. The three terms in this decomposition correspond directly to the three effects: the first term (13) represents the mutual information between the latent parameters and the data; the second term (14) represents the total correlation between the latent parameters; and the third term (15) consists of KL divergences between the marginal distributions of individual latent parameters and the standard normal prior.
By minimizing the mutual information between the latent space and the data (13) as well as correlations among the latent parameters (14), the model is compelled to learn a latent space with minimal information and independent parameters, i.e. the model will use a minimal set of independent relevant latent parameters to capture only the information necessary for better prediction performance. The remaining unused latent parameters will collapse to the prior. Furthermore, by matching the marginal latent parameter distributions to the standard normal priors (15), the VAE regularizer encourages a linear relationship between the relevant learned latent parameters and the true physical parameters if the physical parameters are normally distributed in the data. Even if a physical parameter $\theta$ is non-normally distributed with cumulative distribution function $F$, the VAE regularization will still compel the model to learn a monotonic relationship between $\theta$ and a corresponding latent parameter $z$ given by $z = \Phi^{-1}(F(\theta))$, where $\Phi$ is the standard normal cumulative distribution function. (Caveat: In addition to ambiguities introduced by symmetries of the physical parameters, the relationship may not be monotonic for physical parameter distributions that have support on a topologically distinct space from the real line, e.g. a uniformly distributed periodic angle parameter. However, the result may still be interpretable, e.g. an angle parameter may be encoded as a ring in a two-dimensional latent subspace.)
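To make this monotonic-map argument concrete, the following sketch (an illustration of the probit transform $z = \Phi^{-1}(F(\theta))$, using a uniform distribution as an example of a non-normal physical parameter) shows how matching the marginal to a standard normal yields a monotone encoding:

```python
import random
from statistics import NormalDist

random.seed(0)
std_normal = NormalDist()

# Example non-normal physical parameter: uniform on [0, 2], so F(theta) = theta / 2.
thetas = [random.uniform(0.0, 2.0) for _ in range(10000)]

def probit_map(theta):
    # z = Phi^{-1}(F(theta)); clip the CDF value away from {0, 1} for safety.
    p = min(max(theta / 2.0, 1e-12), 1.0 - 1e-12)
    return std_normal.inv_cdf(p)

zs = [probit_map(theta) for theta in thetas]
mean = sum(zs) / len(zs)
var = sum((z - mean) ** 2 for z in zs) / len(zs)
```

The transformed values are approximately standard normal while preserving the ordering of the original parameter, which is exactly the relationship the regularizer encourages.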
Although this decomposition is suggestive of the effects of VAE regularization, the study of the performance of VAE-based models and of the relative importance and model dependence of each of these effects is still very much ongoing alemi18a; 1804.03599; 1702.08658; NIPS2018_7527; infoVAE; 1802.06847. While training our model, we empirically observe that the latent parameters retain their independence and that their marginal distributions match the standard normal priors, so only an increase in the information stored in the latent space is traded for better prediction performance. We attribute this to the physics-informed inductive biases present in our architecture, which allow our model to achieve its best performance using a minimal set of independent and normally distributed latent parameters.
Appendix E Raw Parameter Extraction Results
We can explicitly see the relevant and collapsed latent parameters in the raw data by plotting the latent parameters versus the true physical parameters (Figure 10). The latent parameters that show a correlation with the true physical parameters also have small variances and correspond to the identified relevant latent parameters (Figure 2), while the remaining latent parameters have collapsed to the prior. Note that for the collapsed parameters, we see variances that are less than one, the value expected for parameters that have collapsed to the standard normal prior. This is because, to average over systems of different sizes, the model assumes that patches separated far apart in space or time provide independent estimates of the extracted parameters and computes the total variance accordingly. This assumption is reasonable for relevant parameters but will artificially lower the extracted variances for collapsed parameters. During testing, we choose to evaluate on the full system size, resulting in this artifact. If we were to evaluate on smaller patches that match the size of the crops used during training, we would indeed see that the collapsed parameters have unit variance.
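The variance-shrinking effect of treating patches as independent can be sketched with the standard precision-weighted combination of Gaussian estimates (the paper's exact pooling rule is not spelled out here; this is the usual convention for combining independent measurements):

```python
def combine_gaussian_estimates(means, variances):
    """Precision-weighted combination of independent Gaussian estimates."""
    precisions = [1.0 / v for v in variances]
    total_precision = sum(precisions)
    mean = sum(m * p for m, p in zip(means, precisions)) / total_precision
    return mean, 1.0 / total_precision

# Four "patches" whose estimates have each collapsed to the prior N(0, 1):
mean, var = combine_gaussian_estimates([0.0] * 4, [1.0] * 4)
```

Combining four unit-variance estimates yields a pooled variance of 1/4, illustrating why collapsed parameters show variances below one when evaluated on full-size systems.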
We also note that, for the model trained on the 2D convection–diffusion dataset, the latent parameters associated with the drift velocity are not aligned with the individual velocity components. This is an expected result due to the inherent ambiguity in choosing a coordinate basis, introduced by the rotational symmetry of the velocity vector, and it makes judging the extraction performance more difficult. Instead of examining one latent parameter at a time, we must consider the two-dimensional latent subspace associated with the velocity vector. Taking the two relevant latent parameters that are correlated with the drift velocity (Table 2), we can perform a multivariate linear regression of the velocity components in this two-dimensional latent subspace to verify that the model has indeed learned a simple rotated representation of the velocity vector (Figure 5).
Appendix F Alternative Boundary Conditions
The fully convolutional structure of the propagating decoder (PD) means that we are able to evaluate our model on arbitrary geometries and boundary conditions. By training on small crops and evaluating on the full-size examples in the test set (Section IV), we have already shown that the trained model can be directly evaluated on larger system sizes. To show direct evaluation with an alternative boundary condition, we test the refined predictive model, originally trained on the 1D Kuramoto–Sivashinsky dataset with periodic boundaries, on a new test example generated with Dirichlet hard-wall boundary conditions (Figure 11). In general, we can apply alternative boundary conditions by adjusting the padding scheme of each propagation step in the PD. For Dirichlet boundaries, this corresponds to applying constant padding at each propagation step. This preliminary test suggests that we can achieve similar prediction performance using an alternative boundary condition that the model has never previously seen, and it demonstrates the transferability of the learned convolutional kernels.
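The padding-scheme swap can be sketched directly with PyTorch's built-in padding modes (the helper function name is ours; the mechanism of swapping `circular` for `constant` padding is as described above):

```python
import torch
import torch.nn.functional as F

def pad_for_boundary(u, pad, boundary="periodic"):
    """Pad a 1D field of shape (batch, channels, width) before a propagation step.

    Swapping the padding mode swaps the boundary condition without retraining.
    """
    if boundary == "periodic":
        # Circular padding wraps the field, enforcing periodic boundaries.
        return F.pad(u, (pad, pad), mode="circular")
    elif boundary == "dirichlet":
        # Constant (zero) padding imposes hard-wall Dirichlet boundaries.
        return F.pad(u, (pad, pad), mode="constant", value=0.0)
    raise ValueError(f"unknown boundary: {boundary}")
```

Because the learned convolutional kernels are unchanged, only this padding step differs between the periodic training setup and the Dirichlet evaluation.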
Appendix G Performance Scaling with Dataset Size
Due to the significant physics-informed inductive biases in our architecture, our model still achieves usable results even when trained on very small datasets. We test the dataset size dependence of our method using the 1D Kuramoto–Sivashinsky system and find that the model is still able to identify the relevant latent parameter even with a dataset of just 10 examples (Figure 12). The accuracy and precision of the extracted parameter and the prediction performance do begin to suffer when using such extremely small datasets, but the model is still able to provide some insight into the dynamics of the spatiotemporal system represented by the data.