The nonzero gain coefficients of Sobol's sequences are always powers of two

06/19/2021
by Zexin Pan, et al.

When a plain Monte Carlo estimate based on n samples has variance σ^2/n, scrambled digital nets attain a variance that is o(1/n) as n→∞. For finite n and an adversarially selected integrand, the variance of a scrambled (t,m,s)-net can be at most Γσ^2/n for a maximal gain coefficient Γ < ∞. The most widely used digital nets and sequences are those of Sobol'. It was previously known that Γ ⩽ 2^t 3^s for Sobol' points as well as Niederreiter-Xing points. In this paper we study nets in base 2. We show that Γ ⩽ 2^(t+s-1) for such nets. This bound is a simple, but apparently unnoticed, consequence of a microstructure analysis in Niederreiter and Pirsic (2001). We obtain a sharper bound that is smaller than this for some digital nets. We also show that all nonzero gain coefficients must be powers of two. A consequence of this latter fact is a simplified algorithm for computing the gain coefficients of nets in base 2.
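As a quick illustration (not taken from the paper), the Python sketch below simply tabulates the two worst-case bounds quoted in the abstract, the previously known bound 2^t 3^s and the base-2 bound 2^(t+s-1), together with the resulting worst-case variance bound Γσ^2/n. The (t, s) pairs, the integrand variance σ^2, and the sample size n are hypothetical choices made only for this example.

```python
# Minimal illustrative sketch (not from the paper): tabulate the two worst-case
# gain-coefficient bounds quoted in the abstract for a few hypothetical (t, s).

def previously_known_bound(t: int, s: int) -> int:
    """Previously known bound on the maximal gain coefficient: 2^t * 3^s."""
    return 2 ** t * 3 ** s


def base2_bound(t: int, s: int) -> int:
    """Bound for (t, m, s)-nets in base 2 stated in the abstract: 2^(t + s - 1)."""
    return 2 ** (t + s - 1)


def worst_case_variance(gamma: float, sigma2: float, n: int) -> float:
    """Worst-case variance Gamma * sigma^2 / n of a scrambled (t, m, s)-net."""
    return gamma * sigma2 / n


if __name__ == "__main__":
    sigma2, n = 1.0, 2 ** 10  # hypothetical integrand variance and sample size
    for t, s in [(0, 3), (1, 5), (2, 8)]:  # illustrative parameter choices only
        old, new = previously_known_bound(t, s), base2_bound(t, s)
        gamma = min(old, new)
        print(f"t={t}, s={s}: 2^t*3^s={old}, 2^(t+s-1)={new}, "
              f"worst-case variance <= {worst_case_variance(gamma, sigma2, n):.4g}")
```

For these parameter choices the base-2 bound 2^(t+s-1) is the smaller of the two, in line with the abstract's focus on sharper bounds for nets in base 2.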
