Efficiency of adaptive importance sampling
The sampling policy at stage t, formally expressed as a probability density function q_t, represents the distribution of the sample (x_{t,1}, ..., x_{t,n_t}) generated at stage t. Information derived from the past samples, according to some objective, is used to update the sampling policy to q_{t+1}. This generic approach characterizes adaptive importance sampling (AIS) schemes. Each stage t consists of two steps: (i) explore the space with n_t points drawn from q_t, and (ii) exploit the information gathered so far to update the sampling policy. The fundamental question raised in the paper concerns the behavior of empirical sums based on AIS. Since no assumption is made on the allocation policy n_t, the theory developed places no restriction on how computational resources are split between the explore step (i) and the exploit step (ii). It is shown that AIS is efficient: the asymptotic behavior of AIS matches that of an "oracle" strategy that knows the optimal sampling policy from the beginning. From a practical perspective, weighted AIS is introduced, a new method designed to forget poor samples from early stages.
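To make the explore/exploit loop concrete, here is a minimal sketch of a generic AIS scheme, not the paper's method: it assumes a Gaussian parametric family for q_t with a weighted moment-matching update as the exploit step, and the names target_density and ais_estimate are hypothetical.

```python
import numpy as np

def target_density(x):
    # Hypothetical target: density of a Gaussian centered at 2.0.
    return np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2.0 * np.pi)

def ais_estimate(n_schedule, rng=np.random.default_rng(0)):
    """At each stage t, draw n_t points from the current policy q_t
    (here N(mu, sigma^2)), then update (mu, sigma) by weighted moment
    matching -- one possible choice of 'exploit' step."""
    mu, sigma = 0.0, 3.0  # initial sampling policy q_1
    xs, ws = [], []
    for n_t in n_schedule:  # the allocation policy n_t is left free
        # (i) explore: sample n_t points according to q_t
        x = rng.normal(mu, sigma, size=n_t)
        q = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        w = target_density(x) / q  # importance weights
        xs.append(x)
        ws.append(w)
        # (ii) exploit: update the policy from all samples drawn so far
        all_x, all_w = np.concatenate(xs), np.concatenate(ws)
        mu = np.average(all_x, weights=all_w)
        sigma = np.sqrt(np.average((all_x - mu) ** 2, weights=all_w)) + 1e-8
    # Empirical sum: self-normalized estimate of the target mean
    return np.average(np.concatenate(xs), weights=np.concatenate(ws))

print(ais_estimate([100, 200, 400, 800]))  # should approach 2.0
```

This sketch pools all stages uniformly; weighted AIS, as introduced in the paper, would instead downweight samples drawn under poor early policies.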