
Online Sampling from Log-Concave Distributions

02/21/2019
by Holden Lee et al.

Given a sequence of convex functions f_0, f_1, ..., f_T, we study the problem of sampling from the Gibbs distribution π_t ∝ e^{-∑_{k=0}^t f_k} for each epoch t in an online manner. This problem arises in machine learning, Bayesian statistics, and optimization, where one constantly acquires new data and must continuously update the distribution. Our main result is an algorithm that generates independent samples from a distribution within a fixed TV-distance ε of π_t for every t and, under mild assumptions on the functions, makes polylog(T) gradient evaluations per epoch. All previous results for this problem imply a bound on the number of gradient or function evaluations that is at least linear in T. While we assume the functions have bounded second moment, we do not assume strong convexity. In particular, we show that our assumptions hold for online Bayesian logistic regression when the data satisfy natural regularity properties. In simulations, our algorithm achieves accuracy comparable to that of a Markov chain specialized to logistic regression. Our main result also implies the first algorithm to sample from a d-dimensional log-concave distribution π_T ∝ e^{-∑_{k=0}^T f_k}, where the f_k's are not assumed to be strongly convex, using roughly T log(T) + poly(d) gradient evaluations in total, as opposed to the T·poly(d) implied by prior works. Key to our algorithm is a novel stochastic gradient Langevin dynamics Markov chain with a carefully designed variance reduction step built in, using a fixed constant batch size. Technically, the lack of strong convexity is a significant barrier to the analysis, and our main contribution here is a martingale exit-time argument showing that the chain is constrained to a ball of radius roughly poly(log T) for the duration of the algorithm.
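The key ingredient named in the abstract is a stochastic gradient Langevin dynamics (SGLD) chain with a built-in variance reduction step using a fixed constant batch size. As a rough illustration only, here is a minimal Python sketch of one such chain in the SVRG style: the full gradient ∑_k ∇f_k is computed once at an anchor point, and each Langevin step applies a cheap constant-size batch correction. This is a sketch of the general technique, not the paper's exact algorithm; the function name vr_sgld_sample, the gradient-oracle interface, and all parameter defaults are hypothetical.

```python
import numpy as np

def vr_sgld_sample(grads, x0, eta, n_steps, batch_size=8, rng=None):
    """Variance-reduced SGLD sketch (SVRG-style; illustrative, not the paper's algorithm).

    grads      : list of per-function gradient oracles, grads[k](x) ≈ ∇f_k(x)
    x0         : starting point, also used as the anchor for variance reduction
    eta        : step size
    n_steps    : number of Langevin steps
    batch_size : fixed constant batch size, as described in the abstract
    """
    rng = rng or np.random.default_rng()
    n, d = len(grads), len(x0)
    anchor = x0.copy()
    # Full gradient at the anchor, computed once.
    g_anchor = sum(g(anchor) for g in grads)
    x = x0.copy()
    for _ in range(n_steps):
        # Unbiased, variance-reduced estimate of ∑_k ∇f_k(x):
        # g_anchor + (n / b) * ∑_{k in batch} (∇f_k(x) - ∇f_k(anchor)).
        batch = rng.integers(0, n, size=batch_size)
        correction = sum(grads[k](x) - grads[k](anchor) for k in batch)
        g_hat = g_anchor + (n / batch_size) * correction
        # Langevin step: gradient descent plus Gaussian noise.
        x = x - eta * g_hat + np.sqrt(2.0 * eta) * rng.standard_normal(d)
    return x

# Toy usage: f_k(x) = ||x - y_k||^2 / 2, so ∇f_k(x) = x - y_k and the
# target π ∝ e^{-∑_k f_k} is a Gaussian centered at the mean of the y_k.
ys = [np.random.randn(2) for _ in range(100)]
grads = [(lambda x, y=y: x - y) for y in ys]
sample = vr_sgld_sample(grads, x0=np.zeros(2), eta=1e-3, n_steps=500)
```

The point of the variance reduction is that the batch correction vanishes as x stays near the anchor, so the gradient estimate's variance stays small even with a constant batch size, rather than growing with the number of functions as it would for plain minibatch SGLD.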
