
Error Bounds of the Invariant Statistics in Machine Learning of Ergodic Itô Diffusions
This paper studies the theoretical underpinnings of machine learning of ergodic Itô diffusions. The objective is to understand the convergence properties of the invariant statistics when the underlying system of stochastic differential equations (SDEs) is empirically estimated with a supervised regression framework. Using the perturbation theory of ergodic Markov chains and the linear response theory, we deduce a linear dependence of the errors of one-point and two-point invariant statistics on the error in the learning of the drift and diffusion coefficients. More importantly, our study shows that the usual L^2-norm characterization of the learning generalization error is insufficient for achieving this linear dependence result. We find that sufficient conditions for such a linear dependence result are through learning algorithms that produce a uniformly Lipschitz and consistent estimator in the hypothesis space that retains certain characteristics of the drift coefficients, such as the usual linear growth condition that guarantees the existence of solutions of the underlying SDEs. We examine these conditions on two well-understood learning algorithms: the kernel-based spectral regression method and the shallow random neural networks with the ReLU activation function.
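The supervised regression framework described above can be sketched on a simple example. The code below is a minimal, hypothetical illustration (not the paper's implementation): it simulates an ergodic 1-D Ornstein–Uhlenbeck process with drift b(x) = -x, forms finite-difference drift targets (X_{t+Δt} − X_t)/Δt ≈ b(X_t), and fits a shallow random ReLU network in which the hidden weights and biases are drawn once and frozen and only the output layer is trained by least squares. All parameter choices (step size, sample count, number of features) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an ergodic 1-D SDE (Ornstein-Uhlenbeck: drift b(x) = -x,
# diffusion coefficient sqrt(2)) with the Euler-Maruyama scheme.
dt, n = 1e-2, 200_000
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(2 * dt) * rng.standard_normal()

# Supervised regression data: finite-difference drift targets,
# (X_{t+dt} - X_t)/dt = b(X_t) + martingale noise.
X, y = x[:-1], (x[1:] - x[:-1]) / dt

# Shallow random ReLU network (random-feature regression): hidden-layer
# parameters are sampled once and frozen; only the outer weights are fit.
m = 200                                  # number of random ReLU features
w = rng.standard_normal(m)               # frozen hidden weights
b = rng.uniform(-3.0, 3.0, m)            # frozen hidden biases
phi = np.maximum(w * X[:, None] + b, 0.0)      # ReLU feature matrix
c, *_ = np.linalg.lstsq(phi, y, rcond=None)    # least-squares outer fit

def drift_hat(z):
    """Learned drift estimator b_hat evaluated at points z."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    return np.maximum(w * z[:, None] + b, 0.0) @ c

# On the bulk of the invariant density, the estimator should roughly
# recover the true drift b(x) = -x.
grid = np.linspace(-1.5, 1.5, 7)
print(np.max(np.abs(drift_hat(grid) - (-grid))))
```

Note that the noise in the finite-difference targets has variance of order 1/Δt, so accurate recovery relies on averaging over many samples from the stationary distribution; the paper's analysis concerns how the resulting estimation error propagates linearly into the invariant statistics.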