Approximate Function Evaluation via Multi-Armed Bandits
We study the problem of estimating the value of a known smooth function f at an unknown point μ∈ℝ^n, where each component μ_i can be sampled via a noisy oracle. Sampling the components of μ that correspond to directions in which f has larger directional derivatives more frequently is more sample-efficient. However, as μ is unknown, the optimal sampling frequencies are also unknown. We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate and, with probability at least 1-δ, returns an ϵ-accurate estimate of f(μ). We generalize our algorithm to adapt to heteroskedastic noise, and prove asymptotic optimality when f is linear. We corroborate our theoretical results with numerical experiments, showing the dramatic gains afforded by adaptivity.
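To illustrate the idea of importance-based sampling (not the paper's algorithm), the following minimal Python sketch spends a sampling budget across coordinates in proportion to the estimated sensitivities |∂f/∂μ_i|, computed from the current plug-in estimate, and returns f(μ̂). The function f, the noise model, and all names (adaptive_estimate, budget, batch) are illustrative assumptions.

```python
import numpy as np

def f(x):
    # Example smooth function; the approach applies to any known smooth f.
    return np.sum(x ** 2) + 3.0 * x[0]

def grad_f(x):
    # Gradient of the example f, used as a proxy for coordinate importance.
    g = 2.0 * x.copy()
    g[0] += 3.0
    return g

def adaptive_estimate(mu, noise_std=1.0, budget=20_000, batch=100, seed=None):
    """Round-based sketch: after a uniform warm-up, allocate each batch of
    noisy coordinate samples in proportion to the estimated |∂f/∂μ_i|,
    then return the plug-in estimate f(μ̂)."""
    rng = np.random.default_rng(seed)
    n = len(mu)
    sums = np.zeros(n)
    counts = np.zeros(n, dtype=int)

    def pull(i, k):
        # Noisy oracle: k i.i.d. Gaussian samples centered at μ_i.
        sums[i] += rng.normal(mu[i], noise_std, size=k).sum()
        counts[i] += k

    # Uniform warm-up so every coordinate has a preliminary estimate.
    for i in range(n):
        pull(i, batch)
    spent = n * batch

    while spent < budget:
        mu_hat = sums / counts
        weights = np.abs(grad_f(mu_hat)) + 1e-12  # estimated importance per coordinate
        alloc = rng.multinomial(batch, weights / weights.sum())
        for i in range(n):
            if alloc[i]:
                pull(i, int(alloc[i]))
        spent += batch

    return f(sums / counts)

if __name__ == "__main__":
    mu_true = np.array([2.0, 0.1, 0.05, 0.01])
    print("true f(mu):   ", f(mu_true))
    print("adaptive est.:", adaptive_estimate(mu_true, seed=0))
```

Under this toy setup, most of the budget flows to the first coordinate, whose directional derivative dominates; a uniform allocation would waste samples on coordinates that barely affect f(μ).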