
Concentration inequality for U-statistics of order two for uniformly ergodic Markov chains, and applications
We prove a new concentration inequality for U-statistics of order two for uniformly ergodic Markov chains. Working with bounded π-canonical kernels, we show that we can recover the convergence rate obtained by Arcones and Giné (1993), who proved a concentration result for U-statistics of independent random variables and canonical kernels. Our proof relies on an inductive analysis combining martingale techniques, uniform ergodicity, Nummelin splitting, and a Bernstein-type inequality in which the spectral gap of the chain emerges. Our result allows us to present three applications. First, we establish a new exponential inequality for the estimation of spectra of trace-class integral operators with MCMC methods; to the best of our knowledge, this is the first such result covering kernels with both positive and negative eigenvalues. Second, we investigate the generalization performance of online algorithms working with pairwise loss functions and Markov chain samples. We provide an online-to-batch conversion result by showing how to extract a low-risk hypothesis from the sequence of hypotheses generated by any online learner. Finally, we give a non-asymptotic analysis of a goodness-of-fit test on the density of the invariant measure of a Markov chain, identifying the classes of alternatives over which our test based on the L2 distance has a prescribed power.
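As a concrete illustration of the object the inequality controls, an order-two U-statistic of a chain sample X_1, ..., X_n is U_n = (1/(n(n-1))) Σ_{i≠j} h(X_i, X_j). The sketch below computes this for a hypothetical two-state Markov chain (finite, irreducible, and aperiodic, hence uniformly ergodic) with the product kernel h(x, y) = xy; both the chain and the kernel are illustrative choices, not taken from the paper.

```python
import random

def u_statistic_order_two(xs, h):
    """U_n = (1 / (n (n - 1))) * sum over all pairs i != j of h(x_i, x_j)."""
    n = len(xs)
    total = sum(h(xs[i], xs[j]) for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))

def simulate_chain(n, p=0.3, q=0.2, seed=0):
    """Two-state {-1, +1} Markov chain: flip +1 -> -1 with probability p
    and -1 -> +1 with probability q.  A finite, irreducible, aperiodic
    chain like this one is uniformly ergodic."""
    rng = random.Random(seed)
    x, path = 1, []
    for _ in range(n):
        if rng.random() < (p if x == 1 else q):
            x = -x
        path.append(x)
    return path

# Bounded, symmetric kernel (a hypothetical choice, not from the paper).
h = lambda x, y: x * y

xs = simulate_chain(500)
u = u_statistic_order_two(xs, h)  # lies in [-1, 1] since |h| <= 1
```

Note that h here is not π-canonical (π-canonicity requires E_π[h(x, Y)] = 0 for π-almost every x, as assumed in the paper); centering the kernel under the stationary distribution would be needed before the concentration bound applies.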