
Learning in Modal Space: Solving Time-Dependent Stochastic PDEs Using Physics-Informed Neural Networks
One of the open problems in scientific computing is the long-time integration of nonlinear stochastic partial differential equations (SPDEs). We address this problem by taking advantage of recent advances in scientific machine learning and the dynamically orthogonal (DO) and bi-orthogonal (BO) methods for representing stochastic processes. Specifically, we propose two new Physics-Informed Neural Networks (PINNs) for solving time-dependent SPDEs, namely the NN-DO/BO methods, which incorporate the DO/BO constraints into the loss function in an implicit form instead of generating explicit expressions for the temporal derivatives of the DO/BO modes. Hence, the proposed methods overcome some of the drawbacks of the original DO/BO methods: we do not need the assumption that the covariance matrix of the random coefficients is invertible as in the original DO method, and we can remove the assumption of no eigenvalue crossing as in the original BO method. Moreover, the NN-DO/BO methods can be used to solve time-dependent stochastic inverse problems with the same formulation and computational complexity as for forward problems. We demonstrate the capability of the proposed methods via several numerical examples: (1) a linear stochastic advection equation with deterministic initial condition, where the original DO/BO methods would fail; (2) long-time integration of the stochastic Burgers' equation with many eigenvalue crossings during the whole time evolution, where the original BO method fails; (3) a nonlinear reaction-diffusion equation, for which we consider both the forward and the inverse problem, including noisy initial data, to investigate the flexibility of the NN-DO/BO methods in handling inverse and mixed-type problems. Taken together, these simulation results demonstrate that the NN-DO/BO methods can be employed to effectively quantify uncertainty propagation in a wide range of physical problems.
05/03/2019 ∙ by Dongkun Zhang, et al.
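The implicit DO constraint mentioned above, ⟨∂u_i/∂t, u_j⟩ = 0 for the spatial modes, can be enforced as a penalty term evaluated on grid-sampled modes. Below is a minimal numpy sketch of such a penalty; the function name, the finite-difference time derivative, and the Riemann-sum inner product are illustrative assumptions, not the paper's actual implementation (which would use automatic differentiation of the mode networks):

```python
import numpy as np

def do_constraint_penalty(modes_t0, modes_t1, dt, dx):
    """Penalty measuring violation of the DO condition <du_i/dt, u_j> = 0.

    modes_t0, modes_t1: arrays of shape (n_modes, n_grid) holding the
    spatial modes at two consecutive times; inner products are
    approximated by a plain Riemann sum with spacing dx.
    """
    dudt = (modes_t1 - modes_t0) / dt    # finite-difference time derivative of each mode
    inner = dudt @ modes_t1.T * dx       # matrix of inner products <du_i/dt, u_j>
    return float(np.sum(inner ** 2))     # squared violation, added to the PINN loss
```

For modes that do not change in time the penalty vanishes; modes rotating into each other are penalized, which is exactly the redundancy the DO condition removes.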

An Atomistic Fingerprint Algorithm for Learning Ab Initio Molecular Force Fields
Molecular fingerprints, i.e., feature vectors describing atomistic neighborhood configurations, are an important abstraction and a key ingredient for data-driven modeling of potential energy surfaces and interatomic forces. In this paper, we present the Density-Encoded Canonically Aligned Fingerprint (DECAF) algorithm, which is robust and efficient, for fitting per-atom scalar and vector quantities. The fingerprint is essentially a continuous density field formed through the superimposition of smoothing kernels centered on the atoms. Rotational invariance of the fingerprint is achieved by aligning, for each fingerprint instance, the neighboring atoms onto a local canonical coordinate frame computed from a kernel minisum optimization procedure. We show that this approach is superior to PCA-based methods, especially when the atomistic neighborhood is sparse and/or contains symmetry. We propose that the `distance' between the density fields be measured using a volume integral of their pointwise difference. This can be efficiently computed using optimal quadrature rules, which require discrete sampling at only a small number of grid points. We also experiment with the choice of weight functions for constructing the density fields, and characterize their performance for fitting interatomic potentials. The applicability of the fingerprint is demonstrated through a set of benchmark problems.
09/26/2017 ∙ by Yu-Hang Tang, et al.
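The density field and its quadrature-based `distance' can be sketched in a few lines: kernels are superimposed at atom positions, and the L2 difference of two fields is integrated over a set of quadrature points. A minimal numpy sketch, with Gaussian kernels and a uniform grid standing in for the optimal quadrature rule (function names and the value of sigma are illustrative assumptions):

```python
import numpy as np

def density_field(points, atoms, sigma=0.5):
    """Superimpose Gaussian smoothing kernels centered on the atoms."""
    d2 = ((points[:, None, :] - atoms[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)

def fingerprint_distance(atoms_a, atoms_b, quad_points, quad_weights):
    """L2 'distance': volume integral of the pointwise field difference,
    approximated by a quadrature rule over a small set of points."""
    diff = density_field(quad_points, atoms_a) - density_field(quad_points, atoms_b)
    return float(np.sqrt(np.sum(quad_weights * diff ** 2)))
```

Identical configurations give zero distance; configurations that differ only by a rotation would first be brought into the canonical frame before comparison, which this sketch omits.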

Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems
Physics-informed neural networks (PINNs) have recently emerged as an alternative way of solving partial differential equations (PDEs) without the need to build elaborate grids; instead, they use a straightforward implementation. In particular, in addition to the deep neural network (DNN) for the solution, a second DNN is considered that represents the residual of the PDE. The residual is then combined with the mismatch in the given data of the solution in order to formulate the loss function. This framework is effective but lacks uncertainty quantification of the solution due to the inherent randomness in the data or due to the approximation limitations of the DNN architecture. Here, we propose a new method with the objective of endowing the DNN with uncertainty quantification for both sources of uncertainty, i.e., the parametric uncertainty and the approximation uncertainty. We first account for the parametric uncertainty when the parameter in the differential equation is represented as a stochastic process. Multiple DNNs are designed to learn the modal functions of the arbitrary polynomial chaos (aPC) expansion of its solution by using stochastic data from sparse sensors. We can then make predictions from new sensor measurements very efficiently with the trained DNNs. Moreover, we employ dropout to correct the overfitting and also to quantify the uncertainty of DNNs in approximating the modal functions. We then design an active learning strategy based on the dropout uncertainty to place new sensors in the domain so as to improve the predictions of the DNNs. Several numerical tests are conducted for both the forward and the inverse problems to quantify the effectiveness of PINNs combined with uncertainty quantification. This NN-aPC new paradigm of physics-informed deep learning with uncertainty quantification can be readily applied to other types of stochastic PDEs in multi-dimensions.
09/21/2018 ∙ by Dongkun Zhang, et al.
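The "arbitrary" in aPC means the basis is built directly from the data rather than from a named distribution: orthonormalizing the monomials 1, ξ, ξ², … with respect to the empirical measure of the sensor samples yields a valid polynomial chaos basis for whatever distribution the data follow. A minimal numpy sketch of this construction (the function name and the use of a QR factorization as the orthonormalization step are illustrative choices):

```python
import numpy as np

def apc_basis_eval(samples, order):
    """Evaluate a data-driven aPC basis at the given samples.

    Orthonormalizes the monomials 1, xi, xi^2, ... with respect to the
    empirical inner product <f, g> = (1/N) sum_k f(xi_k) g(xi_k), so no
    named distribution is assumed for the random input.
    """
    V = np.vander(samples, order + 1, increasing=True)  # columns: 1, xi, xi^2, ...
    Q, _ = np.linalg.qr(V)                              # Q has orthonormal columns
    return Q * np.sqrt(len(samples))                    # rescale to the empirical measure
```

The DNNs in the abstract would then learn the modal functions, i.e., the spatial coefficients multiplying each basis column.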

Physics-Informed Generative Adversarial Networks for Stochastic Differential Equations
We developed a new class of physics-informed generative adversarial networks (PI-GANs) to solve forward, inverse, and mixed stochastic problems in a unified manner based on a limited number of scattered measurements. Unlike standard GANs, which rely only on data for training, here we encoded into the architecture of GANs the governing physical laws in the form of stochastic differential equations (SDEs) using automatic differentiation. In particular, we applied Wasserstein GANs with gradient penalty (WGAN-GP) for their enhanced stability compared to vanilla GANs. We first tested WGAN-GP in approximating Gaussian processes of different correlation lengths based on data realizations collected from simultaneous reads at sparsely placed sensors. We obtained good approximation of the generated stochastic processes to the target ones, even for a mismatch between the input noise dimensionality and the effective dimensionality of the target stochastic processes. We also studied the overfitting issue for both the discriminator and the generator, and we found that overfitting occurs in the generator in addition to the discriminator, as previously reported. Subsequently, we considered the solution of elliptic SDEs, which requires approximations of three stochastic processes, namely the solution, the forcing, and the diffusion coefficient. We used three generators for the PI-GANs: two of them were feed-forward deep neural networks (DNNs), while the other one was the neural network induced by the SDE. Depending on the data, we employed one or multiple feed-forward DNNs as the discriminators in the PI-GANs. Here, we have demonstrated the accuracy and effectiveness of PI-GANs in solving SDEs for up to 30 dimensions, but in principle, PI-GANs could tackle very high dimensional problems given more sensor data, with low-polynomial growth in computational cost.
11/05/2018 ∙ by Liu Yang, et al.
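The gradient penalty that stabilizes WGAN-GP drives the critic's gradient norm toward 1 at points interpolated between real and generated samples. A minimal numpy sketch with the critic's gradient supplied analytically rather than by automatic differentiation (function names and the default λ = 10 are illustrative; in the PI-GAN setting the critic gradients come from backpropagation):

```python
import numpy as np

def gradient_penalty(critic_grad_fn, real, fake, lam=10.0, seed=None):
    """WGAN-GP penalty: penalize deviation of the critic's gradient norm
    from 1 at random interpolates between real and generated samples."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(size=(real.shape[0], 1))   # per-sample interpolation weight
    x_hat = eps * real + (1.0 - eps) * fake      # points between the two distributions
    grads = critic_grad_fn(x_hat)                # (batch, dim) critic gradients
    norms = np.linalg.norm(grads, axis=1)
    return float(lam * np.mean((norms - 1.0) ** 2))
```

A critic that is 1-Lipschitz everywhere (gradient norm 1) incurs no penalty, which is the property the Wasserstein formulation requires.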