Accelerating PDE-constrained Inverse Solutions with Deep Learning and Reduced Order Models

Inverse problems are pervasive mathematical methods for inferring knowledge from observational and experimental data by leveraging simulations and models. Unlike direct inference methods, inverse problem approaches typically require many solves of a forward model, usually governed by Partial Differential Equations (PDEs). This is a crucial bottleneck in determining the feasibility of such methods. While machine learning (ML) methods, such as deep neural networks (DNNs), can be employed to learn nonlinear forward models, designing a network architecture that preserves accuracy while generalizing to new parameter regimes is a daunting task. Furthermore, due to the computationally expensive nature of forward models, state-of-the-art black-box ML methods would require an unrealistic amount of work to obtain an accurate surrogate model. On the other hand, standard Reduced-Order Models (ROMs) accurately capture the important physics of the forward model in the reduced subspaces, but can be inaccurate elsewhere. In this paper, we propose to enlarge the validity of ROMs, and hence improve their accuracy outside the reduced subspaces, by incorporating a data-driven ML technique. In particular, we focus on a goal-oriented approach that substantially improves the accuracy of reduced models by learning the error between the forward model and the ROM outputs. Once an ML-enhanced ROM is constructed, it can accelerate the performance of solving many-query problems in parametrized forward and inverse problems. Numerical results for inverse problems governed by elliptic PDEs and parametrized neutron transport equations are presented to support our approach.






1 Introduction

Data-driven machine learning techniques provide a useful framework for modeling and control of complex systems of scientific and industrial interest. Examples include physics-informed neural networks [17, 16], data-driven discovery of governing equations [3], a deep learning-based numerical algorithm for solving variational problems [24], and more [15, 14, 23].

Bridging the gap between numerical simulations and data-driven machine learning techniques is important in addressing the increasing need for better accuracy and faster computational performance when dealing with large-scale complex systems. Model order reduction provides a framework to construct efficient and low-dimensional models to alleviate the computational burden of simulating many parameter realizations of systems by capturing the dominant modes of the physical system [2]. Recent efforts provide evidence supporting the combination of machine learning techniques with low-dimensional physical models to perform efficient and accurate modeling of physical systems. Examples include modeling statistical error surrogates of reduced order models [7], neural network closure models for projection-based reduced models [19, 20], and deep learning model order reduction methods [8].

In this paper, we shall combine the effectiveness of traditionally physics-preserving approaches with deep learning. To achieve this objective, we propose first to learn the important/dominant physics of the problem under consideration by a well-established approach, and then to use deep learning to learn the discrepancy of the former with respect to high-fidelity simulations and observation data. In particular, we will leverage the projection-based reduced-order modeling (ROM) approach [2]. The ROM is constructed not only to be fast to evaluate but also to respect the underlying important physics. While reduced models are designed to be as accurate as possible, their accuracy is limited by the dimension of the reduced subspace and degrades with the complexity of the problem under consideration [2].

The proposed DL-enhanced ROM approach induces several advantages over existing methods: 1) compared to standard DL techniques, it preserves physical properties including conservation and stability (e.g., via a constrained least-squares ROM approach); 2) compared to conventional ROM approaches, ours is corrected by a NN in a data-driven fashion and is hence more accurate; 3) it is significantly more reliable than standard DL surrogate methods, as its accuracy is first obtained through reduced models and then enhanced by a NN; and 4) our approach decouples physics-embedding (via reduced models) and accuracy (via NN), thus allowing standard DL methods to be employed in a straightforward fashion.

2 Parameter-to-Observable Maps and Many-Query Problems

Consider the parameter-to-observable map,

$$ m \mapsto y(m), $$

where the state $u$ depends on the parameters $m$ through a forward model usually represented as a partial differential equation, and $y(m)$ represents the measurements of the quantities of interest. The parameter-to-observable map is evaluated by first solving a dynamical system for the state $u$ such that,

$$ R(u; m) = 0, $$

and extracting observations from the state,

$$ y(m) = C\,u(m), $$

where $R$ is some discrete operator, for example a residual operator resulting from a numerical discretization of a set of partial differential equations, and $C$ is the linear operator that computes the quantities of interest given the state variables $u$.

2.1 Model Order Reduction

Model order reduction techniques aim to reduce the computational burden of evaluating the parameter-to-observable map by constructing reduced models that are faster and cheaper to evaluate. Projection-based model order reduction techniques employ a reduced-space basis to form a linear subspace defined to be the span of the reduced-basis onto which the governing equations are projected. An approximate solution is computed in the reduced space:



is the vector of coordinates of the approximate solution

and is the solution to reduced system,


where is the basis for the test space . To efficiently construct the reduced trial basis, a model-constrained adaptive sampling procedure (see, e.g., [4] and the references therein) is used to obtain high-fidelity state solution snapshots by greedily exploring the parameter space in order to find points leading to the largest errors in the quantities of interest.
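For a linear forward model $A u = b$, the projection step above amounts to forming and solving a small $n_r \times n_r$ system. The following NumPy sketch illustrates the idea; the function name and the Galerkin choice $W = V$ are illustrative assumptions, not the paper's code:

```python
import numpy as np

def galerkin_rom(A, b, V, W=None):
    # Project an n x n system A u = b onto an n x r trial basis V
    # (Galerkin: test basis W = V), solve the r x r reduced system,
    # and lift the reduced coordinates back to the full space.
    W = V if W is None else W
    Ar = W.T @ A @ V          # r x r reduced operator
    br = W.T @ b              # r-dimensional reduced right-hand side
    ur = np.linalg.solve(Ar, br)
    return V @ ur, ur
```

With $W = V$, the reduced residual is orthogonal to the trial space, i.e. $V^T (A \hat{u} - b) = 0$, which the sketch preserves up to solver precision.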

Although reduced models can provide an accurate estimate of $y(m)$, the inherent limitation introduced by approximating the state space with a linear subspace can introduce errors in the quantity of interest:

$$ \epsilon(m) := y(m) - y_r(m), \qquad y_r(m) = C\,V u_r(m). $$

3 Improving Reduced Order Models with Deep Learning

3.1 Data-driven Discrepancy Function

We propose to learn the error term of the ROM using a deep learning model, equipped with the capacity to learn the possibly nonlinear dependency of the quantity of interest as a function of its parameters. The DL-enhanced quantity of interest incorporates the physics preserved by the ROM with the approximation capacity of a deep neural network [11] to provide a computationally efficient parameter-to-observable map. The current work focuses on improving parameter-to-observable maps for time-independent systems. The proposed DL-enhanced ROM quantity of interest reads

$$ \tilde y(m) = C\,V u_r(m) + e_{\mathrm{NN}}(m; \theta), $$

where $V$ is the reduced trial basis and $e_{\mathrm{NN}}(\cdot\,; \theta)$ is the neural network discrepancy model.

Learning a corrective term $e_{\mathrm{NN}}(m; \theta)$, through finding the "best" network hyper-parameters $\theta$, to augment a reduced model provides a mechanism to approximate possibly nonlinear dependencies between the observables and the parameters in an efficient manner. Furthermore, the learned neural network model is data-driven, providing the ability to incorporate experimental observation data and high-fidelity simulation data during the offline stage of model creation. The learned model can then be incrementally improved in an online fashion during the prediction phase.

A purely data-driven surrogate model relies on a rich training dataset to learn the dynamics of the system, whereas reduced order models, particularly projection-based reduced models generated with model-constrained adaptive sampling [4], efficiently albeit intrusively learn to approximate the high-fidelity model. The corrective term is therefore easier to train than a purely data-driven surrogate model, since the dominant physics of the system has already been captured in the ROM and is transferred to the combined prediction.

3.2 Training the Model

To train the deep learning corrective model, an offline data-driven stage is required. The training set is formed by simultaneously running the high-fidelity forward model and the ROM for the same parameters $m_i$, yielding input-target pairs $(m_i,\; y(m_i) - y_r(m_i))$. Each quantity of interest $y(m_i)$ could even be obtained from experimental data in the case where a feasible high-fidelity model is not present.

For parameters in infinite-dimensional function spaces, a finite element discretization scheme is used enabling this corrective mechanism to be used in a wide range of physical applications. The deep learning model admits the nodal values of the parameters in the input layer.
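The offline stage described above can be mimicked end to end on a toy problem. In this sketch, a least-squares quadratic fit stands in for the neural network discrepancy model, and the two QoI functions are invented stand-ins for the high-fidelity model and the ROM (all names are illustrative):

```python
import numpy as np

# Toy stand-ins for the high-fidelity and reduced parameter-to-observable
# maps; the "ROM" misses the quadratic term, which the correction must learn.
def full_qoi(m):
    return np.sin(m) + 0.1 * m**2

def rom_qoi(m):
    return np.sin(m)

# Offline stage: run both models on the same parameter samples and record
# the QoI errors as regression targets.
m_train = np.linspace(-1.0, 1.0, 50)
err = full_qoi(m_train) - rom_qoi(m_train)

# A least-squares quadratic fit plays the role of the NN discrepancy model.
coef = np.polyfit(m_train, err, deg=2)

def corrected_qoi(m):
    # y_tilde(m) = y_r(m) + e(m): ROM output plus learned correction.
    return rom_qoi(m) + np.polyval(coef, m)
```

Because the toy error here is exactly quadratic, the fitted correction recovers the high-fidelity QoI almost exactly; in practice the NN only reduces, rather than eliminates, the ROM error.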

3.3 Hyper-parameter Optimization

Learning the error between the full model and the reduced model is a high-dimensional deep learning regression problem whose performance is sensitive to the particular architecture of the deep neural network model. The pertinent hyper-parameters dictating the architecture of the model include:

  • the choice of optimizer and its learning rate (including the cooling schedule);
  • the training batch size;
  • the number of residual blocks (network depth);
  • the number of neurons per dense layer;
  • the choice of activation function;
  • the use of batch normalization [12];
  • the dropout rate [22];
  • the use of skip-connections within residual blocks [9].
We use a Bayesian optimization framework to pick hyper-parameters for the NN discrepancy function [21]. A Gaussian process [18] surrogate model is employed to parametrize the performance of the neural network as a function of the hyper-parameters. New hyper-parameters are sampled by minimizing an acquisition function. For this work, scikit-optimize's gp_minimize [10] was used as the optimizer, which chooses an acquisition function in a probabilistic manner so as to minimize either the negative expected improvement, the lower confidence bound of the Gaussian process, or the negative probability of improvement at each sampling step. This approach to picking hyper-parameters improves on grid search and random search, as the sampling procedure better explores the hyper-parameter space by leveraging information from previous neural network architectures.
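The sequential model-based idea behind gp_minimize can be sketched in a few dozen lines of NumPy: fit a Gaussian process to the evaluations so far, then pick the next point by minimizing a lower-confidence-bound acquisition over random candidates. This is a self-contained illustration of the principle (fixed RBF kernel, inputs assumed scaled to the unit box), not scikit-optimize's implementation:

```python
import numpy as np

def rbf(X1, X2, ls=0.3):
    # Squared-exponential kernel on points stored row-wise.
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * np.sum(d**2, axis=-1) / ls**2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    # Standard GP regression posterior mean and standard deviation.
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    mu = Ks.T @ np.linalg.solve(K, ytr)
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def bayes_opt(objective, dim, n_init=5, n_iter=15, kappa=2.0, seed=0):
    # Minimize objective over the unit box [0, 1]^dim with a GP surrogate
    # and a lower-confidence-bound (LCB) acquisition function.
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n_init, dim))
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        cand = rng.uniform(size=(256, dim))
        mu, sd = gp_posterior(X, y, cand)
        x_next = cand[np.argmin(mu - kappa * sd)]  # most promising candidate
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    best = np.argmin(y)
    return X[best], y[best]
```

In practice each objective evaluation is a full network training run, so the surrogate's sample efficiency matters far more than in this toy setting.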

4 Parameter Inference

4.1 Deterministic Inverse Problems with Reduced Models and Discrepancy Functions

Given observations of the state variable, a common many-query problem is to infer the relevant parameters that caused the observations. The following PDE-constrained optimization procedure aims to reconstruct the parameters by solving

$$ \min_m\; J(m) := \frac{1}{2}\,\|\tilde y(m) - d\|^2 + \frac{\beta}{2}\,\|\mathcal{R}\,m\|^2 $$

s. t. $\; W^T R(V u_r; m) = 0,$

where $d$ represents the measured observations, $\mathcal{R}$ is the regularization operator introduced to make the inverse problem well-posed, $\beta$ is the coefficient of regularization, and the corrected prediction of the observation $\tilde y(m) = y_r(m) + e_{\mathrm{NN}}(m; \theta)$ combines the reduced quantity of interest and the learned corrective term.

To minimize the above objective function, the gradient with respect to the parameters is computed and the objective is optimized using first-order methods (e.g. L-BFGS). Each gradient evaluation requires solving an adjoint equation and computing the parameter-to-observable map for a new parameter value by solving the reduced system. Writing the reduced system as $g(u_r, m) := W^T R(V u_r; m) = 0$, the gradient of the objective function with respect to $m$ is given by,

$$ \nabla_m J = \left(\frac{\partial e_{\mathrm{NN}}}{\partial m}\right)^{\!T}\!\left(\tilde y(m) - d\right) + \left(\frac{\partial g}{\partial m}\right)^{\!T}\lambda + \beta\,\mathcal{R}^T\mathcal{R}\,m, $$

where $\lambda$ is the solution to the adjoint equation (see Appendix A),

$$ \left(\frac{\partial g}{\partial u_r}\right)^{\!T}\lambda = -\left(C\,V\right)^{T}\!\left(\tilde y(m) - d\right). $$

The gradient of the DL discrepancy function $\partial e_{\mathrm{NN}}/\partial m$ is computed using automatic differentiation of the network using Tensorflow [1], and the partial derivatives $\partial g/\partial u_r$ and $\partial g/\partial m$ are computed using the relevant reduced equations.
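For intuition, the adjoint trick can be verified on a small linear model with affine parameter dependence, $A(m) = A_0 + \sum_i m_i A_i$ and $J(m) = \frac{1}{2}\|C u(m) - d\|^2$ (regularization omitted; all matrices and names here are invented for illustration). One forward solve plus one adjoint solve yields every component of the gradient:

```python
import numpy as np

def adjoint_gradient(m, A0, A_list, b, C, d):
    # Forward solve with the affine operator A(m) = A0 + sum_i m_i A_i.
    A = A0 + sum(mi * Ai for mi, Ai in zip(m, A_list))
    u = np.linalg.solve(A, b)
    misfit = C @ u - d
    J = 0.5 * misfit @ misfit
    # Adjoint solve: A(m)^T lam = C^T (C u - d).
    lam = np.linalg.solve(A.T, C.T @ misfit)
    # dJ/dm_i = -lam^T (dA/dm_i) u, with dA/dm_i = A_i.
    g = np.array([-lam @ (Ai @ u) for Ai in A_list])
    return J, g
```

A finite-difference check confirms the formula; the same structure carries over when $A$ is replaced by the reduced operator and the NN term contributes its automatic-differentiation gradient.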

5 Numerical Experiments

5.1 Steady Heat Conduction Problem

In this numerical experiment, a reduced order model corrected with the deep learning discrepancy function is used to compute quantities of interest derived from the temperature in a steady-state heat conduction problem. The temperature distribution within the fin, $u$, is governed by the following elliptic partial differential equation:

$$ -\nabla\cdot(k\,\nabla u) = 0 \quad \text{in } \Omega, \qquad (10) $$
$$ k\,\nabla u\cdot\mathbf{n} + \mathrm{Bi}\,u = 0 \quad \text{on } \Gamma_{\mathrm{ext}}, \qquad (11) $$
$$ -k\,\nabla u\cdot\mathbf{n} = 1 \quad \text{on } \Gamma_{\mathrm{root}}, \qquad (12) $$

where $k$ denotes the thermal heat conductivity, $\mathrm{Bi}$ is the Biot number, $\Omega$ is the physical domain describing the thermal fin, $\Gamma_{\mathrm{root}}$ is the bottom edge of the fin, and $\Gamma_{\mathrm{ext}}$ is the set of exterior edges of the fin. Equation (11) models convective heat losses to the external surface, and equation (12) models the heat source at the root. The parameters of interest are the thermal heat conductivity values of the fin, and observations are functions of the temperature $u$.

Figure 1:

Finite element mesh for the thermal fin with 1446 degrees of freedom

The above system of equations can be discretized using the finite element method and the solution can be obtained by solving the linear system,

$$ A(k)\,u = b, \qquad y = C\,u, $$

where $k \in \mathbb{R}^n$ is the discretized thermal conductivity, $u \in \mathbb{R}^n$ is the nodal value vector of the temperature distribution, $A(k) \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^n$, $y \in \mathbb{R}^{n_o}$ is the quantity of interest, and $C \in \mathbb{R}^{n_o \times n}$, with $n$ being the number of degrees of freedom and $n_o$ being the number of observables.

We define the residual for the reduced system given the reduced trial basis $V$ and test basis $W$ as,

$$ r(u_r; k) = W^T\!\left(A(k)\,V u_r - b\right), $$

and the projection-based model order reduction technique yields the reduced system of the form

$$ A_r(k)\,u_r = b_r, $$

where $A_r(k) = W^T A(k)\,V \in \mathbb{R}^{n_r \times n_r}$, $b_r = W^T b \in \mathbb{R}^{n_r}$, $u_r \in \mathbb{R}^{n_r}$, and $y_r = C\,V u_r$ is the reduced quantity of interest.

The existence of an affine decomposition of the matrix $A(k)$ can further improve the computational complexity of the reduced model:

$$ A(k) = \sum_{q=1}^{Q} \theta_q(k)\,A_q, $$

where each $A_q$ does not depend on the parameters and $\theta_q(k)$ is a scalar function of the parameters. The form of the reduced matrix then becomes,

$$ A_r(k) = \sum_{q=1}^{Q} \theta_q(k)\,W^T A_q\,V, $$

yielding a further improvement in computational efficiency by enabling the precomputation of the matrices $W^T A_q V$.
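The offline/online split enabled by the affine decomposition is straightforward to sketch (function names are illustrative):

```python
import numpy as np

def precompute_reduced_terms(A_terms, V, W):
    # Offline: project each parameter-independent matrix A_q once.
    return [W.T @ Aq @ V for Aq in A_terms]

def assemble_reduced_operator(theta, Ar_terms):
    # Online: A_r(k) = sum_q theta_q(k) * (W^T A_q V), assembled at a cost
    # that depends only on the reduced dimension, not the full dimension n.
    return sum(t * Ar for t, Ar in zip(theta, Ar_terms))
```

The online assembly agrees exactly with projecting the full affine operator, which is what makes the decomposition attractive: the $n$-dependent work happens once, offline.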

For the thermal fin heat conduction problem, we average the thermal conductivity over each sub-fin to compress the parameter space in order to obtain an efficient approximate affine decomposition of $A(k)$, relying on the capability of the deep learning error model to correct the introduced approximation error. This approximation further accelerates the construction of the reduced order model, as the model-constrained optimization problem to identify the reduced basis now involves a search over the low-dimensional compressed parameter space (one conductivity value per sub-fin) as opposed to the much larger $n$-dimensional nodal parameter space.

The corrected prediction of the quantity of interest thus becomes,

$$ \tilde y(k) = C\,V u_r(k) + e_{\mathrm{NN}}(k; \theta). $$
The reduced space is spanned by an 80-dimensional basis computed by an adaptive model-constrained optimization procedure [4]. The deep neural network is trained on a simulation dataset comprising parameters and their corresponding reduced model errors, obtained by simultaneously solving the high-fidelity model and the reduced model.

5.2 Inferring Thermal Conductivity Parameters with Sparse Temperature Observations

Given observations of the temperature distribution, the aforementioned deterministic inverse problem is posed in order to infer the thermal conductivity parameters:

$$ \min_k\; \frac{1}{2}\,\|\tilde y(k) - d\|^2 + \frac{\beta}{2}\,\|\mathcal{R}\,k\|^2. $$
To evaluate the accuracy of the corrected reduced order model, two different quantities of interest are examined, each verified with two numerical experiments. The first two numerical experiments use average temperature measurements in each sub-fin as the quantity of interest. The last two numerical experiments use random point observations on the surface of the thermal fin. The number of surface observations is limited to 40, and the same finite-element discretization with 1446 degrees of freedom is used in all of the experiments. For each set of numerical experiments, two forms of the true thermal conductivity are used to generate observations from the high-fidelity model. For the first case, the true conductivity is constant on each sub-fin and total variation regularization is used for the deterministic inverse problem. For the second case, the true conductivity was drawn from a Gaussian random field and Tikhonov regularization was used.

The estimate of the thermal conductivity was obtained by minimizing the objective function with a bound-constrained limited-memory BFGS routine [5, 25]. The starting point for the optimization algorithm is drawn from a Gaussian random field, and the bounds for the optimization are chosen to be within 5% of the maximum and minimum values of the true solution in order to contrast the inversion results against the case where the high-fidelity model is used as the parameter-to-observable map.
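A minimal sketch of this bound-constrained inversion, assuming SciPy's L-BFGS-B routine and a linear toy parameter-to-observable map in place of the corrected ROM (all names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def invert(F, d, beta, m0, lb, ub):
    # Minimize 0.5 ||F m - d||^2 + 0.5 beta ||m||^2 subject to lb <= m <= ub,
    # supplying the analytic gradient to the L-BFGS-B routine.
    def obj(m):
        r = F @ m - d
        return 0.5 * r @ r + 0.5 * beta * m @ m, F.T @ r + beta * m
    res = minimize(obj, m0, jac=True, method="L-BFGS-B",
                   bounds=list(zip(lb, ub)))
    return res.x
```

In the paper's setting, the objective and gradient evaluations would instead invoke the reduced solve, the NN discrepancy, and the adjoint equation.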

Experiment 1

The true conductivity is constant on each sub-fin and the observations are averaged temperature per sub-fin. A dense feed-forward neural network was employed to learn the discrepancy model, optimized using Adam [13]. Exponential linear units were used as activation functions in each hidden layer [6]. A residual block consisting of two fully-connected layers with skip-connections [9] followed by batch normalization [12] was used as the primary building block of the deep neural network. The hyper-parameters dictating the architecture of the network were determined by the aforementioned Bayesian optimization procedure: the Adam optimizer with a learning rate of 3e-4 with scheduled cooling of 50% every 500 epochs, a batch size of 400, three residual blocks with 50 neurons per dense layer, and exponential linear unit activations. The total number of trainable parameters amounted to 80,859. The trained network has an average validation error of 0.7% over simulated data generated from the high-fidelity model with parameters drawn from a Gaussian random field.

Table 1 shows relative reconstruction errors of the thermal conductivity compared to the true value. The inversion method using the high-fidelity forward model is the most accurate, as expected. The DL-enhanced ROM performs with similar accuracy and number of optimizer iterations, while the projection-based ROM performs the worst. However, during each optimizer iteration, the high-fidelity model solves an $n \times n$ system (with $n = 1446$) whereas the DL-enhanced ROM solves a much smaller $n_r \times n_r$ system (with $n_r = 80$), along with an efficient forward and backward pass through the DL discrepancy function with a two-orders-of-magnitude smaller number of parameters.

Method Relative Reconstruction Error L-BFGS-B Iterations
High-fidelity model (n = 1446) 0.1193% 468
Reduced-order model (n_r = 80) 1.4576% 139
ROM + error correction (n_r = 80) 0.4721% 434
Table 1: Relative reconstruction error solving the inverse problem where the true solution is piece-wise constant and the observations are average temperatures on each sub-fin
Figure 2: Conductivity field reconstructed by solving the deterministic inverse problem compared to the true piece-wise constant conductivity

Experiment 2

The true conductivity for this experiment is drawn from a Gaussian random field and average temperatures of sub-fins are observed. The discrepancy function trained for the previous experiment was reused.

Table 2 shows relative reconstruction errors of the thermal conductivity compared to the true value. Due to the affine approximation of the ROM and the spatially varying nature of the true conductivity field, the relative reconstruction error is significantly larger than the inversion result with the high-fidelity forward model. The DL-enhanced ROM however is able to effectively correct the quantity of interest output leading to similar accuracy compared to the high-fidelity model in a parsimonious fashion.

Method Relative Reconstruction Error L-BFGS-B Iterations
High-fidelity model (n = 1446) 5.58% 609
Reduced-order model (n_r = 80) 77.41% 150
ROM + error correction (n_r = 80) 5.6676% 1049
Table 2: Relative reconstruction error solving the inverse problem where the true solution is a Gaussian random field and the observations are average temperatures on each sub-fin
Figure 3: Conductivity field reconstructed by solving the deterministic inverse problem compared to the true solution drawn from a Gaussian random field

Experiment 3

The true conductivity is constant on each sub-fin and random point observations are made on the external surface of the thermal fin. The hyper-parameters dictating the architecture of the network were determined by the aforementioned Bayesian optimization procedure: the Adam optimizer with a learning rate of 1e-4 with scheduled cooling of 50% every 500 epochs, a batch size of 100, three residual blocks with 100 neurons per dense layer, and exponential linear unit activations. The total number of trainable parameters amounted to 179,840. The trained network has an average validation error of 15% on simulated data where parameters were drawn from a Gaussian random field. Table 3 shows results similar to experiment 1, with the somewhat worse accuracy of the DL-enhanced inversion attributable to the larger number of observations, which enlarges the regression output the network must fit. The number of neural network parameters is still an order of magnitude smaller than that of the high-fidelity system.

Method Relative Reconstruction Error L-BFGS-B Iterations
High-fidelity model (n = 1446) 0.1680% 747
Reduced-order model (n_r = 80) 1.6098% 265
ROM + error correction (n_r = 80) 0.4962% 521
Table 3: Relative reconstruction error solving the inverse problem where the true solution is piece-wise constant and the observations are random point observations on the external surface
Figure 4: Conductivity field reconstructed by solving the deterministic inverse problem compared to the true piece-wise constant conductivity

Experiment 4

The true conductivity is drawn from a Gaussian random field and random point observations are made on the external surface of the thermal fin. The trained discrepancy function from experiment 3 was reused. Table 4 shows results similar to those of experiment 2, with the ROM inversion being significantly worse than the high-fidelity inversion due to the spatially varying nature of the true solution. Enhancing the ROM with the DL discrepancy yields relative reconstruction errors nearly an order of magnitude smaller, although the number of optimizer iterations increases by roughly 80%.

Method Relative Reconstruction Error L-BFGS-B Iterations
Full-order model (n = 1446) 4.4112% 724
Reduced-order model (n_r = 80) 64.5398% 242
ROM + error correction (n_r = 80) 7.3075% 1321
Table 4: Relative reconstruction error solving the inverse problem where the true solution is a Gaussian random field and the observations are random point observations on the external surface
Figure 5: Conductivity field reconstructed by solving the deterministic inverse problem compared to the true solution drawn from a Gaussian random field

5.3 Physics-Informed Reduced Models for Neutron Transport Equations

High-fidelity radiation transport calculations require the solution of the Boltzmann equation and are computationally expensive. The DL-enhanced ROM technique is applied to efficiently solve neutral-particle transport modeled by the following equation:

$$ \Omega\cdot\nabla\psi_p(\mathbf{r}, \Omega, E) + \sigma_t\,\psi_p(\mathbf{r}, \Omega, E) = \int_0^{\infty}\!\!\int_{4\pi} \sigma_s(\mathbf{r}, \Omega'\!\cdot\Omega, E'\!\to E)\,\psi_p(\mathbf{r}, \Omega', E')\,d\Omega'\,dE' + q_p(\mathbf{r}, \Omega, E), $$

where $\psi_p$ is the angular "flux" or "intensity" for particle type $p$, $\Omega$ is the unit vector (solid angle) in the direction of motion, $\mathbf{r}$ is the position vector, and $E$ represents its energy. $\sigma_t$ is the macroscopic total interaction cross section, $\sigma_s$ is the scattering cross section, and $q_p$ is the source term. Coupling between particles of different types is resolved iteratively. The phase-space is 6-dimensional, with 3 space dimensions, 1 energy dimension, and 2 dimensions for angle; this leads to tens of billions of unknowns even with moderate resolution. In order to compute quantities of interest, it is often sufficient to compute the angular integral of the angular flux, called the scalar flux:

$$ \phi(\mathbf{r}, E) = \int_{4\pi} \psi(\mathbf{r}, \Omega, E)\,d\Omega. $$
Physics-informed reduced order models approximate the high-fidelity transport equations in 6-dimensional phase-space with lower-order transport equations in 4-dimensional phase-space. Further reduction in dimensionality with projection-based reduced order models is used on the lower-order physics models, as they tend to be large-scale systems. Specifically, a diffusion approximation to the collided component of the total transport flux is used. This approximation is further simplified with energy group collapsing to form discrete energy bins from continuous energy. The energy-dependent diffusion approximation is as follows:

$$ -\nabla\cdot\!\left(D_g(\mathbf{r})\,\nabla\phi_g(\mathbf{r})\right) + \sigma_{r,g}(\mathbf{r})\,\phi_g(\mathbf{r}) = \sum_{g'\neq g} \sigma_s^{g'\to g}(\mathbf{r})\,\phi_{g'}(\mathbf{r}) + q_g(\mathbf{r}), \qquad g = 1, \dots, G, $$

with symmetry planes:

$$ \nabla\phi_g\cdot\mathbf{n} = 0 \quad \text{on } \Gamma_{\mathrm{sym}}, $$

and vacuum boundary conditions:

$$ \phi_g + 2\,D_g\,\nabla\phi_g\cdot\mathbf{n} = 0 \quad \text{on } \Gamma_{\mathrm{vac}}, $$

where $g$ is the group index, $G$ the total number of energy groups, $\sigma_{r,g}$ is the removal cross section, $D_g$ is the diffusion coefficient, $\phi_g$ is the flux in group $g$, $\sigma_s^{g'\to g}$ is the scattering cross section from group $g'$ to group $g$, and $q_g$ is the external source of particles in group $g$. The quantities of interest for this problem are typically functionals of the computed flux solution, for example:

$$ Q = \int_{\mathrm{RoI}} \phi(\mathbf{r})\,d\mathbf{r}. $$
The quantities of interest can include dose, dose rates, fluence, fluence rates, and radiation fluxes through boundaries. These quantities of interest can be defined on a subset of the computational domain known as the region of interest, and can be transferred to other physics models to compute important biological and electronic effects. The uncertainties in the quantities of interest can arise from factors such as source position, source spectrum, air humidity, ground composition, and the location and orientation of the region of interest. The following results capture the behavior of the projection-based ROM applied to the diffusion approximation compared to the high-fidelity transport solution for the iron-water benchmark. The iron-water benchmark is a standard 1-group 2D benchmark for transport solution techniques comprising three spatial zones; a sphericized version has been devised and employed to verify our method. The region of interest is the third zone. The diffusion ROM approaches the high-fidelity diffusion approximation with a 4-dimensional reduced subspace and is thereafter limited in accuracy relative to the high-fidelity transport solution.
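To make the low-order physics concrete, a one-group, one-dimensional slab analogue of the diffusion model above can be discretized with second-order finite differences. For simplicity, this sketch imposes a symmetry (reflective) condition at x = 0 and a plain zero-flux condition at x = L instead of the Marshak vacuum condition; the function name and parameters are illustrative:

```python
import numpy as np

def solve_diffusion_1d(D, sig_r, q, L=1.0, n=200):
    # One-group 1D diffusion: -D phi'' + sig_r phi = q on [0, L],
    # reflective (symmetry) at x = 0, zero-flux phi(L) = 0 at x = L.
    h = L / n
    A = np.zeros((n + 1, n + 1))
    b = np.full(n + 1, float(q))
    for i in range(1, n):
        A[i, i - 1] = A[i, i + 1] = -D / h**2
        A[i, i] = 2 * D / h**2 + sig_r
    # Symmetry at x = 0 via the ghost node phi[-1] = phi[1].
    A[0, 0] = 2 * D / h**2 + sig_r
    A[0, 1] = -2 * D / h**2
    # Dirichlet condition phi(L) = 0.
    A[n, n] = 1.0
    b[n] = 0.0
    return np.linalg.solve(A, b)
```

For constant coefficients this has the closed form $\phi(x) = (q/\sigma_r)\left(1 - \cosh(x/L_d)/\cosh(L/L_d)\right)$ with diffusion length $L_d = \sqrt{D/\sigma_r}$, which the discrete solution matches to second order in the mesh size.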

(a) Diffusion-based ROM solution flux compared to the transport FOM solution flux. The reduced order model was limited to 20 basis vectors.
(b) Increasing the dimensionality of the ROM improves relative quantity of interest error compared to the transport solution. The diffusion-based ROM error is limited by the full-order diffusion solution (dotted red line)

The following results incorporate the deep learning discrepancy function to approach accuracy comparable to the high-fidelity transport solution, outperforming the high-fidelity diffusion approximation. The deep learning model is trained using a dataset obtained by simultaneously solving the high-fidelity transport equations and the projection-based diffusion ROM for random removal and scattering cross section values drawn uniformly from the following ranges in each of the three zones:

Zone σ_r range σ_s range
1 [0.0, 0.2] [0.7, 1.1]
2 [0.2, 0.8] [0.5, 1.8]
3 [0.4, 0.8] [0.1, 1.5]
Table 5: Parameter ranges for the removal and scattering cross sections in each zone of the iron-water benchmark

Figures 6(c) and 6(d) show the parameter-to-observable evaluations for a validation set. Each validation-set parameter example is used to evaluate the quantity of interest using the high-fidelity transport solution, the diffusion ROM, and the DL-enhanced diffusion ROM. The discrepancy function is shown to improve even upon the constrained 3-dimensional diffusion ROM.

(c) Improvement using the error emulator over a 3-dimensional reduced basis
(d) Improvement using the error emulator over a 22-dimensional reduced basis
Figure 6: Parameter-to-observable evaluations for the validation set

Table 6 shows the relative quantity of interest error for a validation set in parameter-to-observable maps constructed using the high-fidelity diffusion approximation, diffusion ROMs, and DL-enhanced diffusion ROMs. The ROM variants include a limited 3-dimensional reduced basis and a 22-dimensional basis discovered by the model-constrained adaptive sampling. The DL-enhanced ROM variants outperform the high-fidelity diffusion solution, showcasing the ability of the discrepancy function to accurately model the ROM error with respect to the high-fidelity transport solutions.

Solver Method QoI error QoI error (relative)
Diffusion (FOM) 4.252 1.06%
Diffusion ROM (3-dim) 456.228 114.05%
Diffusion ROM (22-dim) 4.345 1.08%
Diffusion ROM (3-dim) + discrepancy function 3.464 0.86%
Diffusion ROM (22-dim) + discrepancy function 0.041 0.010%
Table 6: Average quantity of interest errors compared to the transport solution in the validation dataset

6 Concluding Remarks

In this work, we presented a data-driven technique to augment the accuracy of reduced order models by learning their error compared to high-fidelity models and experimental data with the goal of accelerating many-query problems in deterministic inverse problems. We presented preliminary results that support our approach in accelerating parameter-to-observable maps for an elliptic PDE and a parametrized neutron transport problem. We solved a deterministic inverse problem using a DL-enhanced reduced model to efficiently reconstruct thermal conductivity parameters in a steady heat conduction problem given sparse temperature observations.

Future work involves (1) incorporating parameter sensitivities to solve a deterministic inverse problem to infer scattering and absorption coefficients in neutron transport equations, (2) utilizing the DL-enhanced reduced order models to perform Bayesian inference, and (3) exploring physics-informed regularization mechanisms to accelerate the training of the DL discrepancy function.


  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. (2016) Tensorflow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. Cited by: §4.1.
  • [2] P. Benner, S. Gugercin, and K. Willcox (2015) A survey of projection-based model reduction methods for parametric dynamical systems. SIAM review 57 (4), pp. 483–531. Cited by: §1, §1.
  • [3] S. L. Brunton, J. L. Proctor, and J. N. Kutz (2016) Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences 113 (15), pp. 3932–3937. Cited by: §1.
  • [4] T. Bui-Thanh, K. Willcox, O. Ghattas, and B. van Bloemen Waanders (2007) Goal-oriented, model-constrained optimization for reduction of large-scale systems. Journal of Computational Physics 224 (2), pp. 880–896. Cited by: §2.1, §3.1, §5.1.
  • [5] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu (1995) A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing 16 (5), pp. 1190–1208. Cited by: §5.2.
  • [6] D. Clevert, T. Unterthiner, and S. Hochreiter (2015) Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289. Cited by: §5.2.
  • [7] M. Drohmann and K. Carlberg (2015) The romes method for statistical modeling of reduced-order-model error. SIAM/ASA Journal on Uncertainty Quantification 3 (1), pp. 116–145. Cited by: §1.
  • [8] D. Hartman and L. K. Mestha (2017) A deep learning framework for model reduction of dynamical systems. In 2017 IEEE Conference on Control Technology and Applications (CCTA), pp. 1917–1922. Cited by: §1.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: 8th item, §5.2.
  • [10] T. Head, L. MechCoder, I. Shcherbatyi, et al. (2018) Scikit-optimize/scikit-optimize: v0.5.2. Zenodo. External Links: Link Cited by: §3.3.
  • [11] K. Hornik, M. Stinchcombe, and H. White (1989) Multilayer feedforward networks are universal approximators. Neural networks 2 (5), pp. 359–366. Cited by: §3.1.
  • [12] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §5.2.
  • [13] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §5.2.
  • [14] M. Milano and P. Koumoutsakos (2002) Neural network modeling for near wall turbulent flow. Journal of Computational Physics 182 (1), pp. 1–26. Cited by: §1.
  • [15] V. M. Nguyen-Thanh, X. Zhuang, and T. Rabczuk (2019) A deep energy method for finite deformation hyperelasticity. European Journal of Mechanics-A/Solids, pp. 103874. Cited by: §1.
  • [16] M. Raissi, P. Perdikaris, and G. E. Karniadakis (2019) Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, pp. 686–707. Cited by: §1.
  • [17] M. Raissi, P. Perdikaris, and G. E. Karniadakis (2017) Physics informed deep learning (part i): data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561. Cited by: §1.
  • [18] C. E. Rasmussen (2003) Gaussian processes in machine learning. In Summer School on Machine Learning, pp. 63–71. Cited by: §3.3.
  • [19] O. San and R. Maulik (2018) Extreme learning machine for reduced order modeling of turbulent geophysical flows. Physical Review E 97 (4), pp. 042322. Cited by: §1.
  • [20] O. San and R. Maulik (2018) Neural network closures for nonlinear model order reduction. Advances in Computational Mathematics 44 (6), pp. 1717–1750. Cited by: §1.
  • [21] J. Snoek, H. Larochelle, and R. P. Adams (2012) Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pp. 2951–2959. Cited by: §3.3.
  • [22] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1), pp. 1929–1958. Cited by: 7th item.
  • [23] J. Wang, J. Wu, and H. Xiao (2016) Physics informed machine learning for predictive turbulence modeling: toward a complete framework. In Proceedings of the Summer Program, pp. 1. Cited by: §1.
  • [24] E. Weinan and B. Yu (2018) The deep ritz method: a deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics 6 (1), pp. 1–12. Cited by: §1.
  • [25] C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal (1997) Algorithm 778: l-bfgs-b: fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software (TOMS) 23 (4), pp. 550–560. Cited by: §5.2.

Appendix A Gradient of the Objective Function

The gradient of the objective function

$$ J(m) = \frac{1}{2}\,\|\tilde y(m) - d\|^2 + \frac{\beta}{2}\,\|\mathcal{R}\,m\|^2 $$

with respect to $m$ is given by,

$$ \nabla_m J = \left(\frac{d\tilde y}{dm}\right)^{\!T}\!\left(\tilde y(m) - d\right) + \beta\,\mathcal{R}^T\mathcal{R}\,m. \qquad (27) $$

The gradient of the corrected prediction of the observable is given by,

$$ \frac{d\tilde y}{dm} = \frac{d y_r}{dm} + \frac{\partial e_{\mathrm{NN}}}{\partial m}. \qquad (28) $$

The reduced model predictions of the observations can be written as

$$ y_r(m) = C\,V u_r(m), $$

leading to the simplification of equation 28,

$$ \frac{d\tilde y}{dm} = C\,V\,\frac{d u_r}{dm} + \frac{\partial e_{\mathrm{NN}}}{\partial m}. $$

The fact that the reduced system $g(u_r(m), m) := W^T R(V u_r(m); m) = 0$ holds everywhere implies that $\frac{dg}{dm} = 0$. Expanding the total derivative yields,

$$ \frac{\partial g}{\partial u_r}\,\frac{d u_r}{dm} + \frac{\partial g}{\partial m} = 0, $$

and rearranging to obtain the gradient of the reduced state with respect to the parameters gives,

$$ \frac{d u_r}{dm} = -\left(\frac{\partial g}{\partial u_r}\right)^{\!-1}\frac{\partial g}{\partial m}. $$

Substituting in equation 27 leads to,

$$ \nabla_m J = \left(\frac{\partial e_{\mathrm{NN}}}{\partial m}\right)^{\!T}\!\left(\tilde y - d\right) - \left(\frac{\partial g}{\partial m}\right)^{\!T}\!\left(\frac{\partial g}{\partial u_r}\right)^{\!-T}\!\left(C\,V\right)^{T}\!\left(\tilde y - d\right) + \beta\,\mathcal{R}^T\mathcal{R}\,m. $$

Identifying the adjoint variables as,

$$ \lambda := -\left(\frac{\partial g}{\partial u_r}\right)^{\!-T}\!\left(C\,V\right)^{T}\!\left(\tilde y - d\right), $$

we solve the adjoint equation

$$ \left(\frac{\partial g}{\partial u_r}\right)^{\!T}\lambda = -\left(C\,V\right)^{T}\!\left(\tilde y - d\right) $$

to evaluate the gradient of the objective function with respect to the parameters,

$$ \nabla_m J = \left(\frac{\partial e_{\mathrm{NN}}}{\partial m}\right)^{\!T}\!\left(\tilde y - d\right) + \left(\frac{\partial g}{\partial m}\right)^{\!T}\lambda + \beta\,\mathcal{R}^T\mathcal{R}\,m. $$