Error Bounds of the Invariant Statistics in Machine Learning of Ergodic Itô Diffusions

05/21/2021
by He Zhang, et al.

This paper studies the theoretical underpinnings of machine learning of ergodic Itô diffusions. The objective is to understand the convergence properties of the invariant statistics when the underlying system of stochastic differential equations (SDEs) is empirically estimated with a supervised regression framework. Using the perturbation theory of ergodic Markov chains and the linear response theory, we deduce a linear dependence of the errors of one-point and two-point invariant statistics on the error in the learning of the drift and diffusion coefficients. More importantly, our study shows that the usual L^2-norm characterization of the learning generalization error is insufficient for achieving this linear dependence result. We find that such a linear dependence holds under sufficient conditions on the learning algorithm: it must produce a uniformly Lipschitz and consistent estimator in a hypothesis space that retains certain characteristics of the drift coefficients, such as the usual linear growth condition that guarantees the existence of solutions of the underlying SDEs. We examine these conditions for two well-understood learning algorithms: the kernel-based spectral regression method and shallow random neural networks with the ReLU activation function.
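The supervised-regression setup described in the abstract can be illustrated with a short numerical sketch. The snippet below is a minimal example, not the paper's implementation: it assumes a one-dimensional double-well diffusion with a known constant diffusion coefficient, learns the drift from finite-difference labels with a shallow random ReLU network (one of the two algorithms the paper examines), and then compares a one-point invariant statistic of the true and learned models by long-time averaging. All dynamics, step sizes, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ergodic Ito diffusion (an assumption, not the paper's example):
# dX_t = b(X_t) dt + sigma dW_t with a double-well drift.
def b_true(x):
    return x - x**3

sigma = 0.7  # constant diffusion coefficient, taken as known for brevity
dt = 1e-2

def euler_maruyama(b, x0, n_steps):
    """Simulate one sample path with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise = rng.normal(scale=np.sqrt(dt), size=n_steps)
    for k in range(n_steps):
        x[k + 1] = x[k] + b(x[k]) * dt + sigma * noise[k]
    return x

# Supervised regression data: finite-difference labels whose conditional
# mean approximates the drift, E[(X_{t+dt} - X_t)/dt | X_t] ~ b(X_t).
path = euler_maruyama(b_true, 0.0, 50_000)
X, y = path[:-1], (path[1:] - path[:-1]) / dt

# Shallow random ReLU network: random (frozen) inner weights and biases,
# ridge regression on the outer weights.
m = 200
W = rng.normal(size=m)
c = rng.uniform(-3.0, 3.0, size=m)

def features(x):
    return np.maximum(np.outer(x, W) - c, 0.0)  # ReLU(w_j * x - c_j)

Phi = features(X)
beta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(m), Phi.T @ y)
b_hat = lambda x: (features(np.atleast_1d(x)) @ beta).item()

# One-point invariant statistic E_pi[phi(X)], approximated by a
# long-time average over a single trajectory (ergodicity).
def invariant_stat(b, phi, n_steps=200_000, burn_in=20_000):
    x = euler_maruyama(b, 0.0, n_steps)[burn_in:]
    return phi(x).mean()

phi = lambda x: x**2
print("true model   :", invariant_stat(b_true, phi))
print("learned model:", invariant_stat(b_hat, phi))
```

Note that a finite ReLU feature model of this kind is globally Lipschitz and grows at most linearly, which aligns with the sufficient conditions identified in the abstract, whereas a small L^2 generalization error alone would not control the invariant-statistics error. A two-point statistic such as the time-autocorrelation E_pi[phi(X_0) phi(X_t)] can be estimated from the same trajectories by lagged averages.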


