On Linear Convergence of Weighted Kernel Herding
We provide a novel convergence analysis of two popular sampling algorithms, Weighted Kernel Herding and Sequential Bayesian Quadrature, which are used to approximate the expectation of a function under a distribution. Existing theoretical analyses were insufficient to explain the empirical success of these algorithms. We improve upon existing convergence rates to show that, under mild assumptions, these algorithms converge linearly. To this end, we also suggest a simplifying assumption that holds in most finite-dimensional cases and that acts as a sufficient condition for linear convergence in the much harder infinite-dimensional case. When this condition is not satisfied, we provide a weaker convergence guarantee. Our analysis also yields a new distributed algorithm for large-scale computation, which we prove converges linearly under the same assumptions. Finally, we provide an empirical evaluation of the proposed algorithm on a real-world application.
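For background, the greedy selection underlying kernel herding can be sketched as follows. The abstract does not give implementation details, so this is a minimal illustration of classic (unweighted) kernel herding over a finite candidate pool, not the weighted variant analyzed in the paper; the function names, the RBF kernel choice, and the pool-based approximation of the kernel mean embedding are all assumptions made for the sketch.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and the rows of b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_herding(candidates, n_select, gamma=1.0):
    """Greedy (unweighted) kernel herding over a finite candidate pool.

    Illustrative sketch: the kernel mean embedding of the target
    distribution is approximated by averaging the kernel over the
    candidate pool itself.
    """
    K = rbf(candidates, candidates, gamma)   # pairwise kernel matrix
    mu = K.mean(axis=1)                      # approx. mean embedding at each candidate
    chosen = []
    for t in range(n_select):
        if chosen:
            # Herding score: mean-embedding term minus a penalty for
            # similarity to the points already selected.
            score = mu - K[:, chosen].sum(axis=1) / (len(chosen) + 1)
        else:
            score = mu
        chosen.append(int(np.argmax(score)))
    return candidates[chosen]
```

The selected points greedily reduce the maximum mean discrepancy between the empirical sample measure and the target; the weighted variants studied in the paper additionally optimize the weights attached to each selected point.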