Deep ReLU neural network approximation of parametric and stochastic elliptic PDEs with lognormal inputs

11/10/2021
by Dinh Dũng, et al.

We investigate non-adaptive methods of deep ReLU neural network approximation of the solution u to parametric and stochastic elliptic PDEs with lognormal inputs on the non-compact set ℝ^∞. The approximation error is measured in the norm of the Bochner space L_2(ℝ^∞, V, γ), where γ is the tensor-product standard Gaussian probability measure on ℝ^∞ and V is the energy space. The approximation is based on an m-term truncation of the Hermite generalized polynomial chaos (gpc) expansion of u. Under a certain ℓ_q-summability condition on the lognormal inputs (0 < q < ∞), we prove that for every integer n > 1 one can construct a non-adaptive, compactly supported deep ReLU neural network ϕ_n on ℝ^m of size at most n, where m = 𝒪(n/log n), with m outputs, such that the sum obtained by replacing the polynomials in the m-term truncation of the Hermite gpc expansion with these m outputs approximates u with an error bound 𝒪((n/log n)^-1/q). This error bound is comparable to that of the best approximation of u by n-term truncations of the Hermite gpc expansion, which is 𝒪(n^-1/q). We also obtain analogous results for parametric and stochastic elliptic PDEs with affine inputs, based on the Jacobi and Taylor gpc expansions.
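As a concrete illustration of the truncation that the error bounds refer to, the following toy sketch computes an m-term Hermite (probabilists') chaos truncation of a single-variable function against the standard Gaussian measure. The target u(y) = exp(y), the truncation level m, and the quadrature degree are illustrative assumptions, not part of the paper's infinite-dimensional construction, which further replaces each Hermite polynomial by a ReLU network output.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Toy 1-D sketch (the paper works on R^infty): truncate the Hermite
# (probabilists') chaos expansion u(y) ~ sum_k c_k He_k(y), y ~ N(0,1),
# after m terms. The choices u = exp, m = 10, and 60 quadrature nodes
# are illustrative assumptions, not taken from the paper.
u = np.exp
m = 10
nodes, weights = hermegauss(60)      # quadrature for weight exp(-y^2/2)
norm = math.sqrt(2.0 * math.pi)      # integral of exp(-y^2/2) over R

# Coefficients c_k = E[u(Y) He_k(Y)] / k!, since E[He_k(Y)^2] = k!.
coeffs = np.empty(m)
for k in range(m):
    basis = np.zeros(k + 1)
    basis[k] = 1.0                   # selects He_k in hermeval
    He_k = hermeval(nodes, basis)
    coeffs[k] = weights @ (u(nodes) * He_k) / (norm * math.factorial(k))

# L2(gamma) truncation error, estimated with the same quadrature rule.
u_m = hermeval(nodes, coeffs)
err = math.sqrt(weights @ (u(nodes) - u_m) ** 2 / norm)
print(f"m = {m:2d}  estimated L2(gamma) truncation error = {err:.2e}")
# For u = exp one has c_k = e^(1/2) / k!, so the error decays rapidly;
# the paper's rate O(n^-1/q) concerns the much harder R^infty setting.
```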
