Hoeffding's lemma for Markov Chains and its applications to statistical learning

02/01/2018 ∙ by Jianqing Fan, et al.

We establish the counterpart of Hoeffding's lemma for Markov dependent random variables. Specifically, if a stationary Markov chain {X_i}_{i ≥ 1} with invariant measure π admits an L_2(π)-spectral gap 1 − λ, then for any bounded functions f_i: 𝒳 → [a_i, b_i], the sum of f_i(X_i) is sub-Gaussian with variance proxy (1+λ)/(1−λ) · ∑_i (b_i − a_i)²/4. The counterpart of Hoeffding's inequality immediately follows. Our results require neither reversibility, nor a countable state space, nor time-homogeneity of the Markov chain. They are optimal in terms of the multiplicative coefficient (1+λ)/(1−λ), and cover Hoeffding's lemma and inequality for independent random variables as the special case λ = 0. We illustrate the utility of these results by applying them to six problems in statistics and machine learning: linear regression, lasso regression, and sparse covariance matrix estimation with Markov-dependent samples; Markov chain Monte Carlo estimation; respondent-driven sampling; and multi-armed bandit problems with Markovian rewards.
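To make the stated bound concrete, here is a minimal sketch of the two-sided tail bound that follows from the variance proxy above, namely P(|∑ f_i(X_i) − E ∑ f_i(X_i)| ≥ t) ≤ 2 exp(−t²/(2σ²)) with σ² = (1+λ)/(1−λ) · ∑_i (b_i − a_i)²/4. The function name and interface are our own, not from the paper:

```python
import math

def markov_hoeffding_bound(lam, ranges, t):
    """Two-sided Hoeffding tail bound for a stationary Markov chain
    with L_2(pi)-spectral gap 1 - lam, using the paper's variance
    proxy (1 + lam) / (1 - lam) * sum_i (b_i - a_i)^2 / 4.

    lam    -- spectral parameter lambda in [0, 1); lam = 0 recovers
              the classical i.i.d. Hoeffding inequality
    ranges -- list of (a_i, b_i) pairs bounding each f_i
    t      -- deviation threshold
    """
    assert 0.0 <= lam < 1.0, "a positive spectral gap requires lam < 1"
    # Sub-Gaussian variance proxy from the abstract's formula.
    sigma2 = (1 + lam) / (1 - lam) * sum((b - a) ** 2 for a, b in ranges) / 4
    # Standard sub-Gaussian tail: P(|S - E S| >= t) <= 2 exp(-t^2 / (2 sigma^2)).
    return 2 * math.exp(-t ** 2 / (2 * sigma2))
```

With lam = 0 the bound coincides with the classical Hoeffding inequality; a chain mixing more slowly (lam closer to 1) inflates the variance proxy by (1+λ)/(1−λ) and hence weakens the tail bound.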





