Adaptive importance sampling by kernel smoothing

03/20/2019
by Bernard Delyon, et al.

A key determinant of the success of Monte Carlo simulation is the sampling policy, i.e., the sequence of distributions used to generate the particles, and allowing the sampling policy to evolve adaptively during the algorithm provides considerable improvement in practice. The issues related to the adaptive choice of the sampling policy are addressed from a functional estimation point of view, and the adaptive importance sampling approach is revisited. The considered approach models the sampling policy as a mixture between a flexible kernel density estimate, based on the whole set of available particles, and a naive heavy-tailed density. When the share of samples generated according to the naive density goes to zero, but not too quickly, two results are established. First, uniform convergence rates are derived for the sampling policy estimate. Second, a central limit theorem is obtained for the resulting integral estimates. The asymptotic variance is the same as that of an "oracle" procedure in which the sampling policy is chosen as the optimal one, which illustrates the benefits of the proposed approach.
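As a rough illustration of the scheme described above, the following sketch estimates a one-dimensional integral by sampling, at each stage, from a mixture of a weighted Gaussian kernel density estimate over all past particles and a heavy-tailed Student-t "naive" density whose mixture share shrinks toward zero. The integrand `f`, the Student-t degrees of freedom, the schedule for the naive share, and the stage sizes are all illustrative choices, not the paper's exact specification:

```python
import numpy as np
from scipy.stats import gaussian_kde, t as student_t

rng = np.random.default_rng(0)

def f(x):
    # Nonnegative integrand; its true integral is sqrt(2*pi) ~ 2.5066.
    return np.exp(-0.5 * x ** 2)

TRUE = np.sqrt(2.0 * np.pi)

# Initial stage: sample from the naive heavy-tailed density alone.
particles = student_t.rvs(df=3, size=500, random_state=0)
weights = f(particles) / student_t.pdf(particles, df=3)

estimates = []
for stage in range(1, 6):
    lam = 1.0 / (stage + 1)  # naive share: goes to zero, but slowly
    # Flexible KDE built on the whole set of available particles,
    # weighted by their importance weights.
    kde = gaussian_kde(particles, weights=weights)

    n = 500
    n_naive = rng.binomial(n, lam)
    x_naive = student_t.rvs(df=3, size=n_naive, random_state=stage)
    x_kde = kde.resample(n - n_naive, seed=stage)[0]
    x = np.concatenate([x_naive, x_kde])

    # Importance weights under the mixture sampling policy.
    q = lam * student_t.pdf(x, df=3) + (1.0 - lam) * kde(x)
    w = f(x) / q

    particles = np.concatenate([particles, x])
    weights = np.concatenate([weights, w])
    # Each batch is unbiased under its own proposal, so the pooled
    # mean of all weights estimates the integral of f.
    estimates.append(weights.mean())
```

In this sketch the heavy-tailed component guards against the KDE underweighting the tails, which is the role the naive density plays in the mixture policy; as its share vanishes, the policy is driven by the KDE, which concentrates near the optimal proposal proportional to the integrand.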
