Concentration (large deviation) inequalities are among the most important subjects of study in probability theory. A class of distributions for which sharp concentration inequalities have been developed is the class of subGaussian distributions.
Definition 1 (subGaussian). A random vector $X \in \mathbb{R}^d$ is subGaussian if there exists $\sigma$ so that:
$$\mathbb{E}\bigl[e^{\langle v, X - \mathbb{E}X\rangle}\bigr] \le e^{\|v\|^2\sigma^2/2}, \quad \forall v \in \mathbb{R}^d.$$
The concentration bounds for subGaussian random vectors/variables depend on the parameter $\sigma$: the smaller $\sigma$, the better the concentration bounds. While subGaussian distributions arise naturally in several applications, there are settings where the random vectors have nice concentration properties but the subGaussian parameter $\sigma$ is very large (so that applying concentration bounds for general subGaussian random vectors gives loose bounds). In this short note, we consider a related but different class of distributions, called norm-subGaussian random vectors, and establish tighter concentration bounds for them.
2 Norm SubGaussian Random Vector
The norm-subGaussian random vector is defined as follows.
Definition 2 (norm-subGaussian). A random vector $X \in \mathbb{R}^d$ is norm-subGaussian (or $\mathrm{nSG}(\sigma)$) if there exists $\sigma$ so that:
$$\mathbb{P}\bigl(\|X - \mathbb{E}X\| \ge t\bigr) \le 2e^{-\frac{t^2}{2\sigma^2}}, \quad \forall t \in \mathbb{R}.$$
Norm-subGaussian random vectors include both subGaussian random vectors (with a smaller parameter) and bounded-norm random vectors as special cases.
Lemma 1. There exists an absolute constant $c$ so that the following random vectors are all $\mathrm{nSG}(c\cdot\sigma)$.
1. A bounded random vector $X \in \mathbb{R}^d$ so that $\|X\| \le \sigma$.
2. A random vector $X = \xi e$, where $e \in \mathbb{R}^d$ is a fixed unit vector and the random variable $\xi \in \mathbb{R}$ is $\sigma$-subGaussian.
3. A random vector $X \in \mathbb{R}^d$ that is $(\sigma/\sqrt{d})$-subGaussian.
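As a quick empirical illustration of the third claim (a sanity check, not a proof), the sketch below samples a $(\sigma/\sqrt{d})$-subGaussian Gaussian vector and compares its norm tail to the $\mathrm{nSG}(c\sigma)$ bound; the constant $c = 2$ is an illustrative choice, not the constant from the lemma.

```python
import numpy as np

# Monte Carlo check: X ~ N(0, (sigma^2/d) * I_d) is (sigma/sqrt(d))-subGaussian,
# and its norm tail should obey P(||X|| >= t) <= 2*exp(-t^2/(2 c^2 sigma^2)).
# c = 2 is an illustrative constant, not the one from the proof.
rng = np.random.default_rng(0)
d, sigma, n_samples, c = 50, 1.0, 100_000, 2.0

X = rng.normal(scale=sigma / np.sqrt(d), size=(n_samples, d))
norms = np.linalg.norm(X, axis=1)  # concentrates around sigma

for t in [1.0, 2.0, 3.0]:
    empirical = np.mean(norms >= t)
    bound = 2 * np.exp(-t**2 / (2 * c**2 * sigma**2))
    print(f"t={t}: empirical tail {empirical:.4f} <= bound {bound:.4f}")
    assert empirical <= bound
```

Note that $\mathbb{E}\|X\|^2 = \sigma^2$ here, so the norm concentrates at the level $\sigma$ rather than $\sigma\sqrt{d}$, which is exactly what the rescaled parameter buys.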
The fact that the first two random vectors are $\mathrm{nSG}(c\cdot\sigma)$ follows immediately from the arguments for their scalar counterparts. For the third random vector, WLOG, assume $\mathbb{E}X = 0$. Let $N$ be a $\frac{1}{2}$-cover of the unit sphere $\mathbb{S}^{d-1}$ (thus, for any unit vector $v$ there exists $u \in N$ with $\|u - v\| \le \frac{1}{2}$). By the property of subGaussian random vectors, we know for each fixed $u \in N$:
$$\mathbb{P}\bigl(\langle u, X\rangle \ge t\bigr) \le e^{-\frac{dt^2}{2\sigma^2}}.$$
Then let $v = X/\|X\|$; since $N$ is a $\frac{1}{2}$-cover, there always exists a $u$ in the cover so that $\|u - v\| \le \frac{1}{2}$. Therefore, we have:
$$\langle u, X\rangle = \langle v, X\rangle + \langle u - v, X\rangle \ge \|X\| - \|u - v\|\cdot\|X\| \ge \frac{\|X\|}{2}.$$
Rearranging gives $\|X\| \le 2\max_{u \in N}\langle u, X\rangle$. Finally, the cardinality of a $\frac{1}{2}$-cover of $\mathbb{S}^{d-1}$ can be upper bounded by $5^d$. Therefore, by union bound:
$$\mathbb{P}\bigl(\|X\| \ge t\bigr) \le \mathbb{P}\Bigl(\max_{u \in N}\langle u, X\rangle \ge \frac{t}{2}\Bigr) \le 5^d e^{-\frac{dt^2}{8\sigma^2}}.$$
Now we are ready to check the third claim of Lemma 1. When $t \le 6\sigma$, we have
$$2e^{-\frac{t^2}{2c^2\sigma^2}} \ge 2e^{-\frac{18}{c^2}} \ge 1 \ge \mathbb{P}\bigl(\|X\| \ge t\bigr) \quad \text{for } c \ge 6;$$
when $t \ge 6\sigma$, we split the exponent as $\frac{dt^2}{8\sigma^2} = \frac{dt^2}{16\sigma^2} + \frac{dt^2}{16\sigma^2}$, where the first term is at least $\frac{9d}{4} \ge d\log 5$ and the second is at least $\frac{t^2}{16\sigma^2}$ (since $d \ge 1$), so:
$$\mathbb{P}\bigl(\|X\| \ge t\bigr) \le 5^d e^{-\frac{9d}{4}}\, e^{-\frac{t^2}{16\sigma^2}} \le 2e^{-\frac{t^2}{2c^2\sigma^2}} \quad \text{for } c \ge 6.$$
In sum, this proves that $X$ is $\mathrm{nSG}(c\cdot\sigma)$. ∎
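The covering step above can be made concrete numerically. The sketch below works in $d = 2$ for visual simplicity and uses 16 equally spaced directions as the cover (an illustrative choice; any $\frac{1}{2}$-cover works, and the $5^d$ bound is far from tight in low dimension), then verifies the key inequality $\max_{u \in N}\langle u, x\rangle \ge \|x\|/2$ on random samples.

```python
import numpy as np

# Covering argument in d = 2: a finite 1/2-cover N of the unit circle
# lets us bound the norm by finitely many 1-d projections, since
# max_{u in N} <u, x> >= ||x||/2 for every x.
rng = np.random.default_rng(1)

m = 16  # adjacent directions are 2*sin(pi/16) ~ 0.39 apart, i.e. <= 1/2
angles = 2 * np.pi * np.arange(m) / m
cover = np.stack([np.cos(angles), np.sin(angles)], axis=1)

xs = rng.normal(size=(1000, 2))
best = (xs @ cover.T).max(axis=1)  # max_{u in N} <u, x> per sample
assert np.all(best >= np.linalg.norm(xs, axis=1) / 2)
print("max_u <u, x> >= ||x||/2 holds for all 1000 samples")
```

In fact, with 16 directions the nearest cover point is within angle $\pi/16$ of $x/\|x\|$, so $\max_u \langle u, x\rangle \ge \|x\|\cos(\pi/16) \approx 0.98\,\|x\|$, comfortably above the $\|x\|/2$ the proof needs.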
Lemma 2 (Properties of norm-subGaussian).
For a random vector $X \in \mathbb{R}^d$, the following statements are equivalent to $X$ being $\mathrm{nSG}(\sigma)$, up to an absolute-constant difference in the parameter $\sigma$.
1. Moments: $\bigl(\mathbb{E}\|X - \mathbb{E}X\|^p\bigr)^{1/p} \le \sigma\sqrt{p}$ for any $p \ge 1$.
2. Super-exponential moment: $\mathbb{E}\, e^{\|X - \mathbb{E}X\|^2/\sigma^2} \le 2$.
Note that $\|X - \mathbb{E}X\|$ is a 1-dimensional random variable. This lemma directly follows from the equivalent properties of 1-dimensional subGaussian random variables; see, for instance, Lemma 5.5 in (Vershynin, 2010). ∎
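As a numerical sanity check of the super-exponential moment property, the sketch below uses the standard Gaussian $X \sim N(0, I_d)$, whose norm concentrates at $\sqrt{d}$; with the illustrative choice $\sigma = 2\sqrt{d}$ (not a constant taken from the note), $\|X\|^2$ is chi-square and the moment has a closed form that can be compared to a Monte Carlo estimate.

```python
import numpy as np

# For X ~ N(0, I_d), ||X||^2 ~ chi-square(d), so
# E exp(||X||^2 / sigma^2) = (1 - 2/sigma^2)^(-d/2) when sigma^2 > 2.
# With sigma^2 = 4d this stays <= 2, consistent with X being nSG(c*sqrt(d)).
rng = np.random.default_rng(4)
d = 20
sigma2 = 4 * d
exact = (1 - 2 / sigma2) ** (-d / 2)

samples = rng.normal(size=(200_000, d))
mc = np.mean(np.exp((samples**2).sum(axis=1) / sigma2))
print(f"exact = {exact:.4f}, Monte Carlo = {mc:.4f}")
assert exact <= 2 and abs(mc - exact) < 0.02
```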
The following lemma says that if a random vector is $\mathrm{nSG}(\sigma)$, then its norm squared is subExponential and its projection onto any fixed direction is a subGaussian random variable.
Lemma 3. There is an absolute constant $c$ so that if a random vector $X \in \mathbb{R}^d$ is zero-mean $\mathrm{nSG}(\sigma)$, then $\|X\|^2$ is $c\sigma^2$-subExponential, and for any fixed unit vector $v \in \mathbb{R}^d$, $\langle v, X\rangle$ is $c\sigma$-subGaussian.
The undesirable thing about the MGF characterization in Lemma 2 is that even if $X$ is a zero-mean random vector, $\|X\|$ is not zero-mean, so it is difficult to directly work with the MGF of $\|X\|$. Instead, we first convert the random vector $X$ into a symmetric matrix $A$ and characterize the MGF of $A$.
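The point of the matrix lift is spectral: for $x \in \mathbb{R}^d$, the symmetric dilation $A = \begin{pmatrix} 0 & x^\top \\ x & 0\end{pmatrix}$ has nonzero eigenvalues exactly $\pm\|x\|$, so norm bounds become bounds on $\lambda_{\max}(A)$, where matrix MGF machinery applies. A minimal numerical check:

```python
import numpy as np

# Symmetric dilation: A = [[0, x^T], [x, 0]] is (d+1) x (d+1), rank 2,
# with eigenvalues +||x||, -||x||, and 0 (multiplicity d-1).
def dilation(x):
    d = x.shape[0]
    A = np.zeros((d + 1, d + 1))
    A[0, 1:] = x
    A[1:, 0] = x
    return A

x = np.array([3.0, 0.0, 4.0])              # ||x|| = 5
eigs = np.linalg.eigvalsh(dilation(x))     # eigenvalues: -5, 0, 0, +5
assert np.isclose(eigs.max(), np.linalg.norm(x))
assert np.isclose(eigs.min(), -np.linalg.norm(x))
print("lambda_max(A) =", eigs.max())
```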
Lemma 4 (MGF Characterization).
There is an absolute constant $c$ such that if a random vector $X \in \mathbb{R}^d$ is zero-mean $\mathrm{nSG}(\sigma)$, then, letting
$$A := \begin{pmatrix} 0 & X^\top \\ X & 0 \end{pmatrix} \in \mathbb{R}^{(d+1)\times(d+1)},$$
we have $\mathbb{E}\, e^{\theta A} \preceq e^{c\theta^2\sigma^2}\cdot I$ for any $\theta \in \mathbb{R}$.
3 Vector Martingales with SubGaussian Norm
Theorem 5 (Tropp, 2012).
Let $H$ be a fixed symmetric matrix, and let $\xi$ be a random symmetric matrix. Then,
$$\mathbb{E}\operatorname{tr}\exp(H + \xi) \le \operatorname{tr}\exp\bigl(H + \log \mathbb{E}\, e^{\xi}\bigr).$$
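The inequality of Theorem 5 can be spot-checked numerically on a single instance (one instance is of course not a proof). The sketch below takes $\xi$ uniform on two symmetric matrices, so both sides are exactly computable; since every matrix involved is symmetric, the matrix exponential and logarithm are implemented via eigendecomposition.

```python
import numpy as np

# exp/log of symmetric matrices via eigendecomposition.
def sym_expm(M):
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

def sym_logm(M):  # M must be symmetric positive definite
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

rng = np.random.default_rng(2)
def rand_sym(d):
    M = rng.normal(size=(d, d))
    return (M + M.T) / 2

d = 4
H = rand_sym(d)
Xi1, Xi2 = rand_sym(d), rand_sym(d)   # xi takes each value w.p. 1/2

lhs = 0.5 * (np.trace(sym_expm(H + Xi1)) + np.trace(sym_expm(H + Xi2)))
E_mgf = 0.5 * (sym_expm(Xi1) + sym_expm(Xi2))   # E exp(xi), positive definite
rhs = np.trace(sym_expm(H + sym_logm(E_mgf)))
print(f"E tr exp(H + xi) = {lhs:.4f} <= tr exp(H + log E exp(xi)) = {rhs:.4f}")
assert lhs <= rhs + 1e-8
```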
We will prove our concentration result for norm-subGaussian random vectors in a general setting where the norm-subGaussian parameter of each vector can itself be a random variable.
Condition: Let random vectors $X_1, \dots, X_n \in \mathbb{R}^d$, and corresponding filtrations $\mathcal{F}_i = \sigma(X_1, \dots, X_i)$ for $i \in [n]$, satisfy that $X_i \mid \mathcal{F}_{i-1}$ is zero-mean $\mathrm{nSG}(\sigma_i)$ with $\sigma_i \in \mathcal{F}_{i-1}$, i.e.,
$$\mathbb{E}[X_i \mid \mathcal{F}_{i-1}] = 0, \qquad \mathbb{P}\bigl(\|X_i\| \ge t \mid \mathcal{F}_{i-1}\bigr) \le 2e^{-\frac{t^2}{2\sigma_i^2}}, \quad \forall t \in \mathbb{R},\ \forall i \in [n]. \tag{4}$$
Lemma 6. There exists an absolute constant $c$ such that if $X_1, \dots, X_n \in \mathbb{R}^d$ satisfy condition (4), then for any fixed $\delta > 0$, $\lambda > 0$, with probability at least $1 - \delta$:
$$\Bigl\|\sum_{i=1}^n X_i\Bigr\| \le c\Bigl(\lambda\sum_{i=1}^n \sigma_i^2 + \frac{1}{\lambda}\log\frac{2d}{\delta}\Bigr).$$
According to Lemma 4, there exists an absolute constant $c$ so that $\mathbb{E}\bigl[e^{\lambda A_i} \mid \mathcal{F}_{i-1}\bigr] \preceq e^{c\lambda^2\sigma_i^2}\cdot I$ holds for any $\lambda$, where $A_i := \begin{pmatrix} 0 & X_i^\top \\ X_i & 0\end{pmatrix}$. Therefore, we have:
$$\mathbb{E}\operatorname{tr}\exp\Bigl(\lambda\sum_{i=1}^n A_i - c\lambda^2\sum_{i=1}^n\sigma_i^2\cdot I\Bigr) \overset{(1)}{\le} \mathbb{E}\operatorname{tr}\exp\Bigl(\lambda\sum_{i=1}^{n-1} A_i - c\lambda^2\sum_{i=1}^{n}\sigma_i^2\cdot I + \log\mathbb{E}\bigl[e^{\lambda A_n}\mid\mathcal{F}_{n-1}\bigr]\Bigr) \overset{(2)}{\le} \mathbb{E}\operatorname{tr}\exp\Bigl(\lambda\sum_{i=1}^{n-1} A_i - c\lambda^2\sum_{i=1}^{n-1}\sigma_i^2\cdot I\Bigr) \le \cdots \le \operatorname{tr} e^{\mathbf{0}} = d + 1,$$
where step (1) is due to Theorem 5 applied conditionally on $\mathcal{F}_{n-1}$, and step (2) used the fact that if matrices satisfy $0 \prec B \preceq C$, then $\log B \preceq \log C$ (so $\log\mathbb{E}[e^{\lambda A_n}\mid\mathcal{F}_{n-1}] \preceq c\lambda^2\sigma_n^2\cdot I$), together with the monotonicity of $\operatorname{tr}\exp$ with respect to $\preceq$. On the other hand, since the identity matrix commutes with any matrix, we know:
$$\exp\Bigl(\lambda\sum_{i=1}^n A_i - c\lambda^2\sum_{i=1}^n\sigma_i^2\cdot I\Bigr) = e^{-c\lambda^2\sum_{i=1}^n\sigma_i^2}\exp\Bigl(\lambda\sum_{i=1}^n A_i\Bigr).$$
Therefore, for any $\lambda > 0$, $t > 0$, by Markov's inequality, we have:
$$\mathbb{P}\Bigl(\lambda\Bigl\|\sum_{i=1}^n X_i\Bigr\| - c\lambda^2\sum_{i=1}^n\sigma_i^2 \ge t\Bigr) \overset{(1)}{=} \mathbb{P}\Bigl(\lambda_{\max}\Bigl(\lambda\sum_{i=1}^n A_i\Bigr) - c\lambda^2\sum_{i=1}^n\sigma_i^2 \ge t\Bigr) \overset{(2)}{\le} \mathbb{P}\Bigl(\operatorname{tr}\exp\Bigl(\lambda\sum_{i=1}^n A_i - c\lambda^2\sum_{i=1}^n\sigma_i^2\cdot I\Bigr) \ge e^t\Bigr) \le (d+1)e^{-t},$$
where step (1) is because $\sum_i A_i$ is a rank-2 matrix whose nonzero eigenvalues are $\pm\|\sum_i X_i\|$; step (2) is because $e^{\lambda_{\max}(M)} \le \operatorname{tr} e^{M}$ for a symmetric matrix $M$. Finally, setting the RHS equal to $\delta$ gives $t = \log\frac{d+1}{\delta}$; dividing through by $\lambda$ and using $d + 1 \le 2d$, we finish the proof. ∎
Corollary 7 (Hoeffding type inequality for norm-subGaussian).
There exists an absolute constant $c$ such that if $X_1, \dots, X_n \in \mathbb{R}^d$ satisfy condition (4) with fixed $\sigma_1, \dots, \sigma_n$, then for any $\delta > 0$, with probability at least $1 - \delta$:
$$\Bigl\|\sum_{i=1}^n X_i\Bigr\| \le c\sqrt{\sum_{i=1}^n \sigma_i^2 \cdot \log\frac{2d}{\delta}}.$$
Since now $\sigma_1, \dots, \sigma_n$ are fixed rather than random, we can pick $\lambda$ in Lemma 6 as a function of $\sigma_1, \dots, \sigma_n$. Indeed, picking $\lambda = \sqrt{\log(2d/\delta)\big/\sum_{i=1}^n\sigma_i^2}$ finishes the proof. ∎
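The sketch below checks Corollary 7 by Monte Carlo for i.i.d. zero-mean vectors uniform on a sphere of radius $\sigma$ (bounded, hence $\mathrm{nSG}(c\sigma)$ by Lemma 1); the multiplier 3 stands in for the unspecified absolute constant and is an illustrative choice.

```python
import numpy as np

# Hoeffding-type check: for n i.i.d. zero-mean vectors with ||X_i|| = sigma,
# ||sum_i X_i|| should be O(sqrt(n sigma^2 log(2d/delta))) with prob >= 1-delta.
rng = np.random.default_rng(3)
d, n, sigma, trials, delta = 50, 100, 1.0, 500, 0.01

X = rng.normal(size=(trials, n, d))
X *= sigma / np.linalg.norm(X, axis=2, keepdims=True)  # uniform on the sphere
sums = np.linalg.norm(X.sum(axis=1), axis=1)

bound = 3 * np.sqrt(n * sigma**2 * np.log(2 * d / delta))
frac_exceed = np.mean(sums > bound)
print(f"fraction exceeding bound: {frac_exceed:.4f} (delta = {delta})")
assert frac_exceed <= delta
```

Note the typical value of $\|\sum_i X_i\|$ here is about $\sqrt{n}\sigma = 10$, i.e. dimension-free up to the logarithmic factor, exactly as the corollary predicts.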
Corollary 8. There exists an absolute constant $c$ such that if $X_1, \dots, X_n \in \mathbb{R}^d$ satisfy condition (4), then for any fixed $\delta > 0$, and $B > b > 0$, with probability at least $1 - \delta$:
$$\sum_{i=1}^n\sigma_i^2 \ge B \quad \text{or} \quad \Bigl\|\sum_{i=1}^n X_i\Bigr\| \le c\sqrt{\max\Bigl\{\sum_{i=1}^n\sigma_i^2,\, b\Bigr\}\cdot\Bigl(\log\frac{2d}{\delta} + \log\log_2\frac{B}{b}\Bigr)}.$$
For simplicity, denote the log factor $\iota := \log\frac{2d}{\delta} + \log\log_2\frac{B}{b}$, and let $m := \lceil\log_2(B/b)\rceil$. By Lemma 6 (applied with failure probability $\delta/(m+1)$, absorbing absolute constants into $c$), we know for any fixed $\lambda > 0$, with probability at least $1 - \delta/(m+1)$, we have:
$$\Bigl\|\sum_{i=1}^n X_i\Bigr\| \le c\Bigl(\lambda\sum_{i=1}^n\sigma_i^2 + \frac{\iota}{\lambda}\Bigr).$$
Construct the set of levels $\kappa_j := 2^j b$ and corresponding $\lambda_j := \sqrt{\iota/\kappa_j}$ for $j = 0, 1, \dots, m$, where the last element satisfies $\kappa_m \ge B$. It is easy to see this set has $m + 1 \le \log_2(B/b) + 2$ elements. By union bound, we have with probability at least $1 - \delta$: for all $j \in \{0, \dots, m\}$,
$$\Bigl\|\sum_{i=1}^n X_i\Bigr\| \le c\Bigl(\lambda_j\sum_{i=1}^n\sigma_i^2 + \frac{\iota}{\lambda_j}\Bigr).$$
On the event $\sum_i\sigma_i^2 < B$, consider the following two cases. (1) $\sum_i\sigma_i^2 \ge b$. Then there exists $j$ such that $\kappa_j \le \sum_i\sigma_i^2 \le 2\kappa_j$, and using $\lambda_j$:
$$\Bigl\|\sum_{i=1}^n X_i\Bigr\| \le c\Bigl(\sqrt{\frac{\iota}{\kappa_j}}\sum_i\sigma_i^2 + \sqrt{\iota\kappa_j}\Bigr) \le 3c\sqrt{\iota\sum_i\sigma_i^2}.$$
(2) $\sum_i\sigma_i^2 < b$. In this case we know, using $\lambda_0$:
$$\Bigl\|\sum_{i=1}^n X_i\Bigr\| \le c\Bigl(\sqrt{\frac{\iota}{b}}\sum_i\sigma_i^2 + \sqrt{\iota b}\Bigr) \le 2c\sqrt{\iota b}.$$
Combining the two cases, we have $\|\sum_i X_i\| \le 3c\sqrt{\iota\max\{\sum_i\sigma_i^2, b\}}$ unless $\sum_i\sigma_i^2 \ge B$, which finishes the proof. ∎
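The doubling grid at the heart of this proof can be sketched directly: every possible value of $\sum_i\sigma_i^2$ in $[b, B]$ is within a factor 2 of some grid level, while the grid itself has only about $\log_2(B/b)$ elements, which is all the union bound pays for.

```python
import math

# Doubling grid kappa_j = 2^j * b until B is exceeded: any s in [b, B]
# satisfies kappa_j <= s <= 2 * kappa_j for some j, and the grid size
# is ~log2(B/b), which enters the bound only inside a logarithm.
def doubling_grid(b, B):
    grid = [b]
    while grid[-1] < B:
        grid.append(2 * grid[-1])
    return grid

b, B = 1e-3, 1e3
grid = doubling_grid(b, B)
assert len(grid) <= math.ceil(math.log2(B / b)) + 1

for s in [1e-3, 0.017, 1.0, 42.0, 1e3]:   # arbitrary test values in [b, B]
    j = max(k for k, kappa in enumerate(grid) if kappa <= s)
    assert grid[j] <= s <= 2 * grid[j]
print(f"grid size {len(grid)} vs log2(B/b) = {math.log2(B / b):.1f}")
```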
In this short note, we introduced the notion of norm-subGaussian random vectors, which include subGaussian random vectors and bounded random vectors as special cases. While it is true that a zero-mean $\mathrm{nSG}(\sigma)$ random vector is also $c\sigma$-subGaussian (Lemma 3), applying concentration bounds for $c\sigma$-subGaussian random vectors would yield bounds which have at least linear dependence on the dimension $d$. In contrast, the bounds we develop (in Lemma 6 and Corollaries 7 and 8) have only logarithmic dependence on $d$. It is not clear if this logarithmic dependence is tight; eliminating this dependence entirely is an interesting open problem.
We thank Gabor Lugosi and Nilesh Tripuraneni for helpful discussions.
- Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
- Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.