
# Computational Lower Bounds for Sparse PCA

In the context of sparse principal component detection, we bring evidence towards the existence of a statistical price to pay for computational efficiency. We measure the performance of a test by the smallest signal strength that it can detect and we propose a computationally efficient method based on semidefinite programming. We also prove that the statistical performance of this test cannot be strictly improved by any computationally efficient method. Our results can be viewed as complexity theoretic lower bounds conditionally on the assumptions that some instances of the planted clique problem cannot be solved in randomized polynomial time.


## 1 Introduction

The modern scientific landscape has been significantly transformed over the past decade by the emergence of massive datasets. From the statistical learning point of view, this transformation has led to a paradigm shift. Indeed, most novel methods consist in searching for sparse structure in datasets, whereas estimating parameters over this structure is now a fairly well understood problem. It turns out that most interesting structures have a combinatorial nature, often leading to computationally hard problems. This has led researchers to consider various numerical tricks, chiefly convex relaxations, to overcome this issue. While these new questions have led to fascinating interactions between learning and optimization, they do not always come with satisfactory answers from a statistical point of view. The main purpose of this paper is to study one example, namely sparse principal component detection, for which current notions of statistical optimality should also be shifted, along with the paradigm.

Sparse detection problems, where one wants to detect the presence of a sparse structure in noisy data, fall in this line of work. There has been recent interest in detection problems of the form signal-plus-noise, where the signal is a vector with combinatorial structure [ABBDL10, ACCP11, ACV13] or even a matrix [BI13, SN13, KBRS11, BKR11]. The matrix detection problem was pushed beyond the signal-plus-noise model towards more complicated dependence structures [ACBL12, ACBL13, BR12]. One contribution of this paper is to extend these results to more general distributions.

For matrix problems, and in particular sparse principal component (PC) detection, some computationally efficient methods have been proposed, but they are not proven to achieve the optimal detection levels. [JL09, CMW12, Ma13] suggest heuristics for which detection levels are unknown, and [BR12] prove suboptimal detection levels for a natural semidefinite relaxation developed in [dGJL07] and for an even simpler, efficient dual method called Minimum Dual Perturbation (MDP). More recently, [dBG12] developed another semidefinite relaxation for sparse PC detection that performs well only outside of the high-dimensional, low-sparsity regime that we are interested in. Note that it follows from the results of [AW09] that the former semidefinite relaxation is optimal if it has a rank-one solution. Unfortunately, rank-one solutions can only be guaranteed at suboptimal detection levels. This literature hints at a potential cost for computational efficiency in the sparse PC detection problem.

Partial results were obtained in [BR12], which proved that the bounds for MDP and SDP are unlikely to be improved, as otherwise they would lead to randomized polynomial time algorithms for instances of the planted clique problem that are believed to be hard. This result only concerns two given testing methods, but it suggests the existence of an intrinsic gap between the optimal rates of detection and what is statistically achievable in polynomial time. Such phenomena are hinted at in [CJ13], but these results focus on the behavior of upper bounds. Closer to our goal is [SSST12], which exhibits a statistical price to pay for computational efficiency. In particular, they derive a computational theoretic lower bound using a much weaker conjecture than the hidden clique conjecture that we employ here, namely the existence of one-way permutations. This conjecture is widely accepted and is the basis of many cryptographic protocols. Unfortunately, the lower bound holds only for a synthetic classification problem that is somewhat tailored to this conjecture. It still remains to fully describe a theory, and to develop lower bounds on the statistical accuracy achievable in reasonable computational time for natural problems. This article aims to do so for a general sparse PC detection problem.

This paper is organized in the following way. The sparse PC detection problem is formally described in Section 2. Then, we show in Section 3 that our general detection framework is a natural extension of the existing literature, and that all the usual results for classical detection of sparse PCs are still valid. Section 4 focuses on testing in polynomial time, where we study detection levels for the semidefinite relaxation developed in [dGJL07] (it trivially extends to the MDP statistic of [BR12]). These levels are shown to be unimprovable by computationally efficient methods in Section 5. This is achieved by introducing a new notion of optimality that takes computational efficiency into account. Practically, we reduce the planted clique problem, conjectured to be computationally hard already in an average-case sense (i.e., over most random instances), to the problem of obtaining better rates for sparse PC detection.

Notation. The space of $d \times d$ symmetric real matrices is denoted by $\mathbf{S}_d$. We write $Z \succeq 0$ whenever $Z$ is positive semidefinite. We denote by $\mathbb{N}$ the set of nonnegative integers and define $\mathbb{N}_1 = \mathbb{N} \setminus \{0\}$.

The elements of a vector $v \in \mathbb{R}^d$ are denoted by $v_1, \ldots, v_d$ and, similarly, a matrix $A$ has element $A_{ij}$ on its $i$th row and $j$th column. For any $q > 0$, $|v|_q$ denotes the $\ell_q$ “norm” of a vector $v$, defined by $|v|_q = (\sum_i |v_i|^q)^{1/q}$. Moreover, we denote by $|v|_0$ its so-called $\ell_0$ “norm”, that is its number of nonzero elements. Furthermore, by extension, for $A \in \mathbf{S}_d$, we denote by $|A|_q$ the $\ell_q$ norm of the vector formed by the entries of $A$. We also define, for $q \in [0, 2)$, the set of unit vectors within the $\ell_q$-ball of radius $R$:

$$\mathcal{B}_q(R) = \big\{v \in \mathbb{R}^d : |v|_2 = 1,\ |v|_q \le R\big\}.$$

For a finite set $S$, we denote by $|S|$ its cardinality. We also write $A_S$ for the $|S| \times |S|$ submatrix with elements $A_{ij}$, $i, j \in S$, and $v_S$ for the vector of $\mathbb{R}^{|S|}$ with elements $v_i$ for $i \in S$. The vector $\mathbf{1}$ denotes a vector with coordinates all equal to $1$. If a vector has an index such as $v_i$, then we use $v_{i,j}$ to denote its $j$th element.

The vectors $e_1, \ldots, e_d$ and matrices $E_{11}, \ldots, E_{dd}$ are the elements of the canonical bases of $\mathbb{R}^d$ and $\mathbb{R}^{d \times d}$. We also define $\mathcal{S}^{d-1}$ as the unit Euclidean sphere of $\mathbb{R}^d$ and, for a subset $S \subset \{1, \ldots, d\}$, $\mathcal{S}_S^{d-1}$ as the set of vectors in $\mathcal{S}^{d-1}$ with support $S$. The identity matrix in $\mathbb{R}^{d \times d}$ is denoted by $I_d$.

A Bernoulli random variable with parameter $p \in [0, 1]$ takes value $1$ or $0$ with probability $p$ and $1 - p$, respectively. A Rademacher random variable takes value $1$ or $-1$ with probability $1/2$ each. A binomial random variable, with distribution $\mathcal{B}(n, p)$, is the sum of $n$ independent Bernoulli random variables with identical parameter $p$. A hypergeometric random variable, with distribution $\mathcal{H}(N, k, n)$, is the random number of successes in $n$ draws from a population of size $N$ among which $k$ are successes, without replacement. The total variation norm, denoted by $\|\cdot\|_{TV}$, has the usual definition.

The trace and rank functionals are denoted by $\mathbf{Tr}$ and $\mathbf{rank}$, respectively, and have their usual definitions. We denote by $S^c$ the complement of a set $S$. Finally, for two real numbers $a$ and $b$, we write $a \wedge b = \min(a, b)$, $a \vee b = \max(a, b)$, and $a_+ = a \vee 0$.

## 2 Problem description

Let $X \in \mathbb{R}^d$ be a centered random vector with unknown distribution $\mathbb{P}$ that has finite second moment along every direction. The first principal component for $X$ is a direction $v \in \mathcal{S}^{d-1}$ such that the variance $V(v) = \mathbb{E}[(v^\top X)^2]$ along direction $v$ is larger than in any other direction. If no such $v$ exists, the distribution of $X$ is said to be isotropic. The goal of sparse principal component detection is to test whether $X$ follows an isotropic distribution $\mathbb{P}_0$ or a distribution $\mathbb{P}_v$ for which there exists a sparse $v \in \mathcal{B}_0(k)$, $k \ll d$, along which the variance is large. Without loss of generality, we assume that under the isotropic distribution $\mathbb{P}_0$, all directions have unit variance, and under $\mathbb{P}_v$, the variance along $v$ is equal to $1 + \theta$ for some positive $\theta$. Note that since $v$ has unit norm, $\theta$ captures the signal strength.

To perform our test, we observe $n$ independent copies $X_1, \ldots, X_n$ of $X$. For any direction $u \in \mathcal{S}^{d-1}$, define the empirical variance along $u$ by

$$\hat{V}_n(u) = \frac{1}{n} \sum_{i=1}^n (u^\top X_i)^2.$$

Clearly the concentration of $\hat{V}_n(u)$ around $V(u)$ will have a significant effect on the performance of our testing procedure. If, for any $u \in \mathcal{S}^{d-1}$, the centered random variable $(u^\top X)^2 - \mathbb{E}[(u^\top X)^2]$ satisfies the conditions for Bernstein’s inequality (see, e.g., [Mas07], eq. (2.18), p. 24) under both $\mathbb{P}_0$ and $\mathbb{P}_v$, then, up to numerical constants, we have

$$\mathbb{P}_0^{\otimes n}\Big(\hat{V}_n(u) - 1 > 2\sqrt{\frac{2\log(2/\nu)}{n}} + \frac{4\log(2/\nu)}{n}\Big) \le \nu, \quad \forall \nu > 0,\ u \in \mathcal{S}^{d-1}, \qquad (1)$$

$$\mathbb{P}_v^{\otimes n}\Big(\hat{V}_n(v) - (1+\theta) < -2\sqrt{\frac{2\theta k \log(2/\nu)}{n}} - \frac{4\log(2/\nu)}{n}\Big) \le \nu, \quad \forall \nu > 0,\ v \in \mathcal{B}_0(k). \qquad (2)$$
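As an illustration, the statistic $\hat{V}_n(u)$ is straightforward to compute; the following numpy sketch (function and variable names are ours, not the paper’s) evaluates it for data drawn from an isotropic null, where every direction has empirical variance close to $1$.

```python
import numpy as np

def empirical_variance(X, u):
    """V_hat_n(u) = (1/n) * sum_i (u^T X_i)^2, for sample rows X_i of X."""
    proj = X @ u                  # the n projections u^T X_i
    return np.mean(proj ** 2)

rng = np.random.default_rng(0)
n, d = 5000, 10
X = rng.standard_normal((n, d))   # isotropic null: unit variance in every direction
u = np.zeros(d)
u[0] = 1.0                        # any unit direction
v_hat = empirical_variance(X, u)  # concentrates around V(u) = 1
```

Under a spiked alternative, replacing a coordinate of $X$ by a scaled signal would shift $\hat{V}_n$ above $1$ along the planted direction, which is exactly what the tests below exploit.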

Such inequalities are satisfied if we assume, for example, that $\mathbb{P}_0$ and $\mathbb{P}_v$ are sub-Gaussian distributions. Rather than specifying such an ad hoc assumption, we define the following sets of distributions, under which the fluctuations of $\hat{V}_n(u)$ around $V(u)$ are of the same order as those of sub-Gaussian distributions. As a result, we formulate our testing problem on the unknown distribution $\mathbb{P}$ of $X$ as follows:

$$H_0 : \mathbb{P} \in \mathcal{D}_0 = \big\{\mathbb{P}_0 : \text{(1) holds}\big\}, \qquad H_1 : \mathbb{P} \in \mathcal{D}_1^k(\theta) = \bigcup_{v \in \mathcal{B}_0(k)} \big\{\mathbb{P}_v : \text{(2) holds}\big\}.$$

Note that distributions in $\mathcal{D}_0$ and $\mathcal{D}_1^k(\theta)$ are implicitly centered at zero.

We argue that interesting testing procedures should be robust and thus perform well uniformly over these distributions. In the rest of the paper, we focus on such procedures. The existing literature on sparse principal component testing, particularly [BR12] and [ACBL12], focuses on multivariate normal distributions, yet only relies on the sub-Gaussian properties of the empirical variance along unit directions. In fact, all the distributional assumptions made in [VL12, ACBL12] and [BR12] are particular cases of these hypotheses. We will show that concentration of the empirical variance as in (1) and (2) is sufficient to derive the results that were obtained under the sub-Gaussian assumption.

Recall that a test for this problem is a family $\psi = \{\psi_{d,n,k}\}$ of $\{0,1\}$-valued measurable functions of the data $X_1, \ldots, X_n$. Our goal is to quantify the smallest signal strength $\theta$ for which there exists a test $\psi$ with maximum test error bounded by $\delta$, i.e.,

$$\sup_{\substack{\mathbb{P}_0 \in \mathcal{D}_0 \\ \mathbb{P}_1 \in \mathcal{D}_1^k(\theta)}} \big\{\mathbb{P}_0^{\otimes n}(\psi = 1) \vee \mathbb{P}_1^{\otimes n}(\psi = 0)\big\} \le \delta.$$

To call our problem “sparse”, we need to assume that $k$ is rather small. Throughout the paper, we fix a tolerance $\delta \in (0,1)$ and focus on the case where the parameters are in the sparse regime $\mathcal{R}_0$ of positive integers defined by

$$\mathcal{R}_0 = \Big\{(d, n, k) \in \mathbb{N}_1^3 : 15\sqrt{\frac{k \log(6ed/\delta)}{n}} \le 1,\ k \le d^{0.49}\Big\}.$$

Note that the constant $0.49$ is arbitrary and can be replaced by any constant strictly smaller than $1/2$.
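For concreteness, the membership conditions defining $\mathcal{R}_0$ can be checked directly; this small sketch (our naming) hard-codes the displayed constants ($15$, $6e$, $0.49$) and treats $\delta$ as the fixed tolerance.

```python
import numpy as np

def in_sparse_regime(d, n, k, delta=0.05):
    """Check (d, n, k) against the two defining conditions of R0:
    15 * sqrt(k * log(6 e d / delta) / n) <= 1   and   k <= d**0.49."""
    cond1 = 15 * np.sqrt(k * np.log(6 * np.e * d / delta) / n) <= 1
    cond2 = k <= d ** 0.49
    return bool(cond1 and cond2)
```

The first condition forces the sample size to dominate the sparsity (times a log factor), the second keeps the problem genuinely high-dimensional.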

###### Definition 1.

Fix a set of parameters $(d, n, k)$ in the sparse regime $\mathcal{R}_0$. Let $\mathcal{C}$ be a set of tests. A function $\theta^*$ of $(d, n, k)$ is called optimal rate of detection over the class $\mathcal{C}$ if for any $(d, n, k) \in \mathcal{R}_0$, it holds:

• there exists a test $\psi \in \mathcal{C}$ that discriminates between $H_0$ and $H_1$ at level $A\theta^*$ for some constant $A > 0$, i.e., for any $\theta \ge A\theta^*$,

$$\sup_{\substack{\mathbb{P}_0 \in \mathcal{D}_0 \\ \mathbb{P}_1 \in \mathcal{D}_1^k(\theta)}} \big\{\mathbb{P}_0^{\otimes n}(\psi = 1) \vee \mathbb{P}_1^{\otimes n}(\psi = 0)\big\} \le \delta.$$

In this case we say that $\psi$ discriminates between $H_0$ and $H_1$ at rate $\theta^*$.

• for any test $\phi \in \mathcal{C}$, there exists a constant $a_\phi > 0$ such that $\theta \le a_\phi \theta^*$ implies

$$\sup_{\substack{\mathbb{P}_0 \in \mathcal{D}_0 \\ \mathbb{P}_1 \in \mathcal{D}_1^k(\theta)}} \big\{\mathbb{P}_0^{\otimes n}(\phi = 1) \vee \mathbb{P}_1^{\otimes n}(\phi = 0)\big\} \ge \delta.$$

Moreover, if both (i) and (ii) hold, we say that $\psi$ is an optimal test over the class $\mathcal{C}$.

This is an adaptation of the usual notion of statistical optimality obtained when $\mathcal{C}$ is the class of all measurable functions of the data, also known as minimax optimality [Tsy09]. In order to take into account the asymptotic nature of some classes of statistical tests (namely, those that are computationally efficient), we allow the constant $a_\phi$ in (ii) to depend on the test $\phi$.

## 3 Statistically optimal testing

We focus first on the traditional setting where $\mathcal{C}$ contains all sequences of tests.

Denote by $\Sigma = \mathbb{E}[XX^\top]$ the covariance matrix of $X$ and by $\hat{\Sigma}$ its empirical counterpart:

$$\hat{\Sigma} = \frac{1}{n} \sum_{i=1}^n X_i X_i^\top. \qquad (3)$$

Observe that $V(u) = u^\top \Sigma u$ and $\hat{V}_n(u) = u^\top \hat{\Sigma} u$ for any $u \in \mathcal{S}^{d-1}$. Maximizing $\hat{V}_n(u)$ over $\mathcal{B}_0(k)$ gives the largest empirical variance along any $k$-sparse direction. It is also known as the $k$-sparse eigenvalue of $\hat{\Sigma}$, defined by

$$\lambda^k_{\max}(\hat{\Sigma}) = \max_{u \in \mathcal{B}_0(k)} u^\top \hat{\Sigma} u. \qquad (4)$$

The following theorem describes the performance of the test

$$\psi_{d,n,k} = \mathbf{1}\big\{\lambda^k_{\max}(\hat{\Sigma}) > 1 + \tau\big\}, \quad \tau > 0. \qquad (5)$$
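The statistic $\lambda^k_{\max}(\hat{\Sigma})$ can be computed exactly by scanning all $\binom{d}{k}$ supports, which is exponential in $k$; this brute-force sketch (names are ours) makes the combinatorial bottleneck behind the test explicit.

```python
import numpy as np
from itertools import combinations

def sparse_eigmax(Sigma, k):
    """Exact k-sparse largest eigenvalue: maximize u^T Sigma u over k-sparse
    unit vectors u by taking, for each support S of size k, the largest
    eigenvalue of the principal submatrix Sigma[S, S]."""
    d = Sigma.shape[0]
    best = -np.inf
    for S in combinations(range(d), k):
        sub = Sigma[np.ix_(S, S)]
        best = max(best, np.linalg.eigvalsh(sub)[-1])  # eigvalsh sorts ascending
    return best
```

On a spiked matrix $I_d + \theta vv^\top$ with a $k$-sparse unit vector $v$, the maximum is attained on the support of $v$ and equals $1 + \theta$, so thresholding at $1 + \tau$ separates the two hypotheses when $\theta$ is large enough.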
###### Theorem 2.

Assume that $(d, n, k) \in \mathcal{R}_0$ and define

$$\bar{\theta} = 15\sqrt{\frac{k \log\big(\frac{6ed}{k\delta}\big)}{n}}.$$

Then, for $\theta \ge \bar{\theta}$, the test $\psi$ defined in (5) with a suitably chosen threshold $\tau > 0$ satisfies

$$\sup_{\substack{\mathbb{P}_0 \in \mathcal{D}_0 \\ \mathbb{P}_1 \in \mathcal{D}_1^k(\theta)}} \big\{\mathbb{P}_0^{\otimes n}(\psi = 1) \vee \mathbb{P}_1^{\otimes n}(\psi = 0)\big\} \le \delta.$$

Proof  By (2) and Lemma 10, we get

$$\mathbb{P}_0^{\otimes n}\big(\lambda^k_{\max}(\hat{\Sigma}) \ge 1 + \tau\big) \le \delta, \qquad \mathbb{P}_1^{\otimes n}\big(\lambda^k_{\max}(\hat{\Sigma}) \le 1 + \theta - \tau_1\big) \le \delta.$$

To conclude the proof, observe that $1 + \theta - \tau_1 \ge 1 + \tau$.
The following lower bound follows directly from [BR12], Theorem 5.1, and holds already for Gaussian distributions.

###### Theorem 3.

For all $\varepsilon > 0$, there exists a constant $C_\varepsilon > 0$ such that if

$$\theta < \underline{\theta}_\varepsilon = \sqrt{\frac{k \log(C_\varepsilon d/k^2 + 1)}{n}},$$

then any test $\phi$ satisfies

$$\sup_{\substack{\mathbb{P}_0 \in \mathcal{D}_0 \\ \mathbb{P}_1 \in \mathcal{D}_1^k(\theta)}} \big\{\mathbb{P}_0^{\otimes n}(\phi = 1) \vee \mathbb{P}_1^{\otimes n}(\phi = 0)\big\} \ge \frac{1}{2} - \varepsilon.$$

Theorems 2 and 3 imply the following result.

###### Corollary 4.

The sequence

$$\theta^* = \sqrt{\frac{k \log d}{n}}, \quad (d, n, k) \in \mathcal{R}_0,$$

is the optimal rate of detection over the class of all tests.

## 4 Polynomial time testing

It is not hard to prove that approximating $\lambda^k_{\max}(A)$, for an arbitrary symmetric matrix $A$ of size $d$ and arbitrary $k$, is NP-hard, by a trivial reduction to CLIQUE (see [Hås96, Hås99, Zuc06] for hardness of approximation of CLIQUE). Yet, our problem is not worst case and we need not consider arbitrary matrices: here, $\hat{\Sigma}$ is a random matrix and we cannot directly apply the above results.

In this section, we look for a test with good statistical properties that can be computed in polynomial time. Indeed, finding efficient statistical methods in high dimension is critical. Specifically, we study a test based on a natural convex (semidefinite) relaxation of $\lambda^k_{\max}$ developed in [dGJL07].

For any $A \in \mathbf{S}_d$, let $\mathrm{SDP}_k(A)$ be defined as the optimal value of the following semidefinite program:

$$\begin{aligned} \mathrm{SDP}_k(A) = \max. \quad & \mathbf{Tr}(AZ) \qquad\qquad (6)\\ \text{subject to} \quad & \mathbf{Tr}(Z) = 1,\ |Z|_1 \le k,\ Z \succeq 0. \end{aligned}$$

This optimization problem can be reformulated as a semidefinite program in its canonical form, with a polynomial number of constraints, and can therefore be solved in polynomial time up to arbitrary precision, using interior point methods for example [BV04]. Indeed, we can write

$$\begin{aligned} \mathrm{SDP}_k(A) = \max. \quad & \sum_{i,j} A_{ij}(z^+_{ij} - z^-_{ij}) \\ \text{subject to} \quad & z^+_{ij} = z^+_{ji} \ge 0,\ z^-_{ij} = z^-_{ji} \ge 0, \\ & \sum_i (z^+_{ii} - z^-_{ii}) = 1,\ \sum_{i,j} (z^+_{ij} + z^-_{ij}) \le k, \\ & \sum_{i > j} (z^+_{ij} - z^-_{ij})(E_{ij} + E_{ji}) + \sum_\ell (z^+_{\ell\ell} - z^-_{\ell\ell}) E_{\ell\ell} \succeq 0. \end{aligned}$$
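A useful sanity check on the relaxation (6), in numpy only and with no SDP solver (the helper name is ours): for any $k$-sparse unit vector $v$, the rank-one matrix $Z = vv^\top$ is feasible, since $\mathbf{Tr}(vv^\top) = |v|_2^2 = 1$ and $|vv^\top|_1 = |v|_1^2 \le k$ by Cauchy–Schwarz; hence $\mathrm{SDP}_k(A) \ge \lambda^k_{\max}(A)$.

```python
import numpy as np

def sdp_feasible(Z, k, tol=1e-9):
    """Check the three constraints of SDP_k: Tr(Z) = 1, |Z|_1 <= k, Z PSD."""
    trace_ok = abs(np.trace(Z) - 1.0) <= tol
    l1_ok = np.abs(Z).sum() <= k + tol
    psd_ok = np.linalg.eigvalsh(Z).min() >= -tol
    return trace_ok and l1_ok and psd_ok

v = np.zeros(6)
v[:3] = 1 / np.sqrt(3)    # a 3-sparse unit vector
Z = np.outer(v, v)        # rank-one candidate with objective Tr(AZ) = v^T A v
```

This is exactly the mechanism behind the remark of [AW09] quoted in the introduction: when the optimizer happens to be rank one, the relaxation is tight.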

Consider the following test:

$$\psi_{d,n,k} = \mathbf{1}\big\{\mathrm{SDP}^{(n)}_k(\hat{\Sigma}) > 1 + \tau\big\}, \quad \tau > 0, \qquad (7)$$

where $\mathrm{SDP}^{(n)}_k(\hat{\Sigma})$ is an approximation of $\mathrm{SDP}_k(\hat{\Sigma})$. [BAd10] show that it can be computed in polynomially many elementary operations, and thus in polynomial time.

###### Theorem 5.

Assume that $(d, n, k) \in \mathcal{R}_0$ are such that

$$\tilde{\theta} = 23\sqrt{\frac{k^2 \log(4d^2/\delta)}{n}} \le 1.$$

Then, for $\theta \ge \tilde{\theta}$, the test $\psi$ defined in (7) with a suitably chosen threshold $\tau > 0$ satisfies

$$\sup_{\substack{\mathbb{P}_0 \in \mathcal{D}_0 \\ \mathbb{P}_1 \in \mathcal{D}_1^k(\theta)}} \big\{\mathbb{P}_0^{\otimes n}(\psi = 1) \vee \mathbb{P}_1^{\otimes n}(\psi = 0)\big\} \le \delta.$$

Proof  Define

$$\tau_0 = 16\sqrt{\frac{k^2 \log(4d^2/\delta)}{n}}, \qquad \tau_1 = 7\sqrt{\frac{k \log(4/\delta)}{n}}.$$

By Lemma 11 and Lemma 10, since $(d, n, k) \in \mathcal{R}_0$, it holds

$$\mathbb{P}_0^{\otimes n}\big(\mathrm{SDP}_k(\hat{\Sigma}) \ge 1 + \tau_0\big) \le \delta, \qquad \mathbb{P}_1^{\otimes n}\big(\mathrm{SDP}_k(\hat{\Sigma}) \le 1 + \theta - \tau_1\big) \le \delta.$$

To conclude the proof, observe that $1 + \theta - \tau_1 \ge 1 + \tau_0$.
The size of this detection threshold is consistent with the results of [AW09, BR12] for Gaussian distributions.

Clearly, this theorem, together with Theorem 3, indicates that the test based on $\mathrm{SDP}_k$ may be suboptimal within the class of all tests. However, as we will see in the next section, it can be proved to be optimal within a restricted class of computationally efficient tests.

## 5 Complexity theoretic lower bounds

It is legitimate to wonder whether the upper bound in Theorem 5 is tight. Can faster rates be achieved by this method, or by other, possibly randomized, polynomial time testing methods? Or instead, is this gap intrinsic to the problem? A partial answer to this question is provided in [BR12], where it is proved that the test defined in (7) cannot discriminate at a level significantly lower than $\tilde{\theta}$. Indeed, such a test could otherwise be used to solve instances of the planted clique problem that are believed to be hard. This result is supported by some numerical evidence as well.

In this section, we show that this is true not only of the test based on SDP but of any test computable in randomized polynomial time.

### 5.1 Lower bounds and polynomial time reductions

The upper bound of Theorem 5, if tight, seems to indicate that there is a gap between the detection levels that can be achieved by arbitrary tests and those that can be achieved by methods that run in polynomial time. In other words, it indicates a potential statistical cost for computational efficiency. To study this phenomenon, we take the approach favored in theoretical computer science, where the primary goal is to classify problems, rather than algorithms, according to their computational hardness. Indeed, this approach is better aligned with our definition of optimal rate of detection, where lower bounds should hold for any test. Unfortunately, it is difficult to derive a lower bound on the performance of any candidate algorithm for a given problem. Rather, theoretical computer scientists have developed reductions from problem A to problem B with the following consequence: if problem B can be solved in polynomial time, then so can problem A. Therefore, if problem A is believed to be hard, then so is problem B. Note that our reduction requires extra bits of randomness and is therefore a randomized polynomial time reduction.

This question needs to be formulated from a statistical detection point of view. As mentioned above, $\lambda^k_{\max}$ can be proved to be NP-hard to approximate. Nevertheless, such worst-case results are not sufficient to prove negative results for our average-case problem. Indeed, the matrix $\hat{\Sigma}$ is random, and we only need to approximate $\lambda^k_{\max}(\hat{\Sigma})$ up to a constant factor on most realizations. In some cases, this small nuance can make a huge difference, as problems can be hard in the worst case but easy on average (see, e.g., [Bop87] for an illustration on Graph Bisection). In order to prove a complexity theoretic lower bound for the sparse principal component detection problem, we build a reduction from a notoriously hard detection problem: the planted clique problem.

### 5.2 The Planted Clique problem

Fix an integer $m \ge 2$ and let $\mathcal{G}_m$ denote the set of undirected graphs on $m$ vertices. Denote by $\mathcal{G}(m, 1/2)$ the distribution over $\mathcal{G}_m$ generated by choosing to connect every pair of vertices by an edge independently with probability $1/2$. For any $\kappa \in \{2, \ldots, m\}$, the distribution $\mathcal{G}(m, 1/2, \kappa)$ is constructed by picking $\kappa$ vertices arbitrarily and placing a clique (a clique is a subset of fully connected vertices) between them, then connecting every other pair of vertices by an edge independently with probability $1/2$. Note that $\mathcal{G}(m, 1/2)$ is simply the distribution of an Erdős–Rényi random graph. In the decision version of this problem, called Planted Clique, one is given a graph $G$ on $m$ vertices, and the goal is to detect the presence of a planted clique.
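Both distributions are easy to simulate; this numpy sketch (function and variable names are ours) samples an adjacency matrix from $\mathcal{G}(m, 1/2, \kappa)$, and setting $\kappa = 0$ recovers the null $\mathcal{G}(m, 1/2)$.

```python
import numpy as np

def sample_planted_clique(m, kappa, rng):
    """Adjacency matrix of G(m, 1/2, kappa): an Erdos-Renyi(1/2) graph with a
    clique planted on kappa vertices chosen at random; returns (A, S)."""
    upper = np.triu(rng.random((m, m)) < 0.5, 1)
    A = (upper | upper.T).astype(int)    # symmetric, zero diagonal
    S = rng.choice(m, size=kappa, replace=False)
    A[np.ix_(S, S)] = 1                  # plant the clique on S
    A[S, S] = 0                          # restore the zero diagonal
    return A, S
```

The detection problem is to decide, from $A$ alone (without knowing $S$), which of the two distributions generated it.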

###### Definition 6.

Fix $m \ge \kappa \ge 2$. Let Planted Clique denote the following statistical hypothesis testing problem:

$$H_0^{PC} : G \sim \mathcal{G}(m, 1/2) = \mathbb{P}_0^{(G)}, \qquad H_1^{PC} : G \sim \mathcal{G}(m, 1/2, \kappa) = \mathbb{P}_1^{(G)}.$$

A test for the planted clique problem is a family $\xi = \{\xi_{m,\kappa}\}$, where $\xi_{m,\kappa} : \mathcal{G}_m \to \{0, 1\}$.

The search version of this problem [Jer92, Kuč95] consists in finding the clique planted under $\mathbb{P}_1^{(G)}$. The decision version that we consider here is traditionally attributed to Saks [KV02, HK11]. It is known [Spe94] that if $\kappa > 2\log_2(m)$, the planted clique is the only clique of size $\kappa$ in the graph, asymptotically almost surely (a.a.s.). Therefore, a test based on the size of the largest clique of $G$ allows one to distinguish $H_0^{PC}$ and $H_1^{PC}$ for $\kappa > 2\log_2(m)$, a.a.s. This is clearly not a computationally efficient test.

For $\kappa = o(\sqrt{m})$, there is no known polynomial time algorithm that solves this problem. Polynomial time algorithms for the case $\kappa$ of order $\sqrt{m}$ were first proposed in [AKS98], and subsequently in [McS01, AV11, DGGP10, FR10, FK00]. It is widely believed that there is no polynomial time algorithm that solves Planted Clique when $\kappa$ is of order $m^c$ for some fixed positive $c < 1/2$. Recent research has focused on proving that certain algorithmic techniques, such as the Metropolis process [Jer92] and the Lovász–Schrijver hierarchy of relaxations [FK03], fail at this task. The confidence in the difficulty of this problem is so strong that it has led researchers to prove impossibility results assuming that Planted Clique is indeed hard. Examples include cryptographic applications in [JP00], testing for $k$-wise dependence in [AAK07], approximating Nash equilibria in [HK11], and approximating solutions to the densest $k$-subgraph problem in [AAM11].

We therefore make the following assumption on the planted clique problem. Recall that $\delta$ is a confidence level fixed throughout the paper.

• For any $a > 0$ and all randomized polynomial time tests $\xi = \{\xi_{m,\kappa}\}$, there exists a positive constant $\Gamma$ that may depend on $\xi$ and $a$ such that

$$\mathbb{P}_0^{(G)}\big(\xi_{m,\kappa}(G) = 1\big) \vee \mathbb{P}_1^{(G)}\big(\xi_{m,\kappa}(G) = 0\big) \ge 1.2\delta, \quad \forall\ m^{a/2} < \Gamma\kappa.$$

Note that $1.2\delta$ can be replaced by any constant arbitrarily close to $\delta$. Since the input size is polynomial in $m$, a randomized polynomial time test here is a test that can be computed in time at most polynomial in $m$ and has access to extra bits of randomness. The fact that $\Gamma$ may depend on $\xi$ is due to the asymptotic nature of polynomial time algorithms. Below is an equivalent formulation of Hypothesis 5.2.

• For any $a > 0$ and all randomized polynomial time tests $\xi = \{\xi_{m,\kappa}\}$, there exists a constant that may depend on $\xi$ and $a$ such that

$$\mathbb{P}_0^{(G)}\big(\xi_{m,\kappa}(G) = 1\big) \vee \mathbb{P}_1^{(G)}\big(\xi_{m,\kappa}(G) = 0\big) \ge 1.2\delta, \quad \forall\ m^{a/2} < \kappa.$$

Note that we intentionally do not specify a computational model. Indeed, for some restricted computational models, Hypothesis 5.2 can be proved to be true for all $a > 0$ [Ros10, FGR13]. Moreover, for more powerful computational models such as Turing machines, this hypothesis is conjectured to be true. It was shown in [BR12] that improving the detection level of the test based on SDP would lead to a contradiction of Hypothesis 5.2 for some $a > 0$. Hereafter, we extend this result to all randomized polynomial time algorithms, not only those based on SDP.

### 5.3 Randomized polynomial time reduction

Our main result is based on a randomized polynomial time reduction of an instance of the planted clique problem to an instance of the sparse PC detection problem. In this section, we describe this reduction, which we call the bottom-left transformation. For any $\mu > 0$, define

$$\mathcal{R}_\mu = \mathcal{R}_0 \cap \{k \ge n^\mu\} \cap \{n \,\ldots\}$$

The condition $k \ge n^\mu$ is necessary since “polynomial time” is an intrinsically asymptotic notion, and for fixed $k$, computing $\lambda^k_{\max}(\hat{\Sigma})$ takes polynomial time. The second condition is an artifact of our reduction and could potentially be improved. Nevertheless, it characterizes the high-dimensional setup we are interested in and allows us to shorten the presentation.

Given $(d, n, k) \in \mathcal{R}_\mu$, fix integers $m, \kappa$, and let $G$ be an instance of the planted clique problem on $m$ vertices with a potential clique of size $\kappa$. We begin by extracting a bipartite graph as follows. Choose $n$ right vertices at random among the $m$ possible vertices, and choose left vertices among the vertices that are not right vertices. The edges of this bipartite graph (the “bottom-left” terminology comes from the fact that its adjacency matrix can be obtained as the bottom-left corner of the original adjacency matrix after a random permutation of the rows/columns) are the edges of $G$ connecting a left vertex to a right vertex. Next, if more left vertices are needed, add new left vertices and place an edge between each new left vertex and every old right vertex independently with probability $1/2$. Label the left (resp. right) vertices using a random permutation and denote by $G'$ the resulting bipartite graph. Note that if $G$ has a planted clique of size $\kappa$, then $G'$ has a planted biclique of random size.

Let $B$ denote the $d \times n$ adjacency matrix of $G'$ and let $\eta_1, \ldots, \eta_n$ be i.i.d. Rademacher random variables that are independent of all previous random variables. Define

$$X^{(G)}_i = \eta_i\big(2B_i - \mathbf{1}\big) \in \{-1, 1\}^d,$$

where $B_i$ denotes the $i$th column of $B$. Put together, these steps define the bottom-left transformation of a graph $G$ by

$$\mathrm{bl}(G) = \big(X^{(G)}_1, \ldots, X^{(G)}_n\big) \in \mathbb{R}^{d \times n}. \qquad (8)$$

Note that $\mathrm{bl}(G)$ can be constructed in randomized polynomial time in the size of $G$.
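Under the simplifying assumption that $d$ left vertices are available without padding (so no new left vertices are needed), the transformation can be sketched as follows; the names are ours, and the full reduction additionally handles the padding step described above.

```python
import numpy as np

def bottom_left(A, n, d, rng):
    """Sketch of bl(G) when d <= m - n: pick n right vertices and d left
    vertices disjointly at random, take the d x n bipartite block B of the
    adjacency matrix A, map {0,1} -> {-1,+1}, and flip each column by an
    independent Rademacher sign eta_i."""
    m = A.shape[0]
    perm = rng.permutation(m)
    right, left = perm[:n], perm[n:n + d]
    B = A[np.ix_(left, right)]          # d x n block, entries in {0,1}
    eta = rng.choice([-1, 1], size=n)
    return eta * (2 * B - 1)            # column i is X_i = eta_i (2 B_i - 1)
```

When $A$ is drawn from $\mathcal{G}(m, 1/2)$, every entry of the output is an independent Rademacher sign, matching the null case used in the proof of Theorem 7.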

### 5.4 Optimal detection over randomized polynomial time tests

For any $\mu > 0$, define the detection level $\theta^*_\mu$. Up to logarithmic terms, it interpolates polynomially between the statistically optimal detection level $\theta^*$ and the detection level $\tilde{\theta}$ that is achievable by the polynomial time test based on $\mathrm{SDP}_k$.

###### Theorem 7.

Fix $\mu > 0$ and $\alpha$, and define

$$a = 2\mu, \qquad b = 1 - (2 - \alpha)\mu. \qquad (9)$$

For any $(d, n, k) \in \mathcal{R}_\mu$, there exist integers $m, \kappa$, a random transformation $\mathrm{bl}$ that can be computed in polynomial time, and distributions $\mathbb{P}_0, \mathbb{P}_1$ such that for any test $\psi$, we have

$$\mathbb{P}_0^{\otimes n}(\psi_{d,n,k} = 1) \vee \mathbb{P}_1^{\otimes n}(\psi_{d,n,k} = 0) \ge \mathbb{P}_0^{(G)}(\xi_{m,\kappa}(G) = 1) \vee \mathbb{P}_1^{(G)}(\xi_{m,\kappa}(G) = 0) - \frac{\delta}{5},$$

where $\xi_{m,\kappa}(G) = \psi_{d,n,k}(\mathrm{bl}(G))$.

Proof  Fix $(d, n, k) \in \mathcal{R}_\mu$. First, if $G$ is an Erdős–Rényi graph, $\mathrm{bl}(G)$ is an array of i.i.d. vectors of independent Rademacher random variables, and therefore is distributed as $\mathbb{P}_0^{\otimes n}$.

Second, if $G$ has a planted clique of size $\kappa$, let $\mathbb{P}_1^{\mathrm{bl}(G)}$ denote the joint distribution of $\mathrm{bl}(G)$. The choices of $m$ and $\kappa$ depend on the relative sizes of $d$, $n$, and $k$. Our proof relies on the following lemma.

###### Lemma 8.

Fix $(d, n, k)$ and integers $m, \kappa$ such that

$$(a)\ \frac{m}{n} \ge \frac{8}{\beta\delta}, \qquad (b)\ \frac{n\kappa}{m} \ge 16\log(mn), \qquad (c)\ \frac{n\kappa}{m} \ge 8k. \qquad (10)$$

Moreover, define

$$\bar{\theta} = \frac{(k-1)\kappa}{2m},$$

let $G \sim \mathcal{G}(m, 1/2, \kappa)$ and let $\mathrm{bl}$ be defined in (8). Denote by $\mathbb{P}_1^{\mathrm{bl}(G)}$ the distribution of $\mathrm{bl}(G)$. Then, there exists a distribution $\mathbb{P}_1$ such that

$$\big\|\mathbb{P}_1^{\mathrm{bl}(G)} - \mathbb{P}_1^{\otimes n}\big\|_{TV} \le \beta\delta.$$

Proof  Let $S$ (resp. $T$) denote the (random) right (resp. left) vertices of $G'$ that are in the planted biclique. Define the random variables

$$\varepsilon'_i = \mathbf{1}\{i \in S\},\ i = 1, \ldots, n, \qquad \gamma'_j = \mathbf{1}\{j \in T\},\ j = 1, \ldots, d.$$

On the one hand, if $\varepsilon'_i = 0$, i.e., if $i \notin S$, then $X^{(G)}_i$ is a vector of independent Rademacher random variables. On the other hand, if $\varepsilon'_i = 1$, i.e., if $i \in S$, then, for any $j$,

$$X^{(G)}_{i,j} = Y'_{i,j} = \begin{cases} \eta_i & \text{if } \gamma'_j = 1, \\ r_{ij} & \text{otherwise,} \end{cases}$$

where $\{r_{ij}\}$ is an $n \times d$ matrix of i.i.d. Rademacher random variables.

We can therefore write

$$X^{(G)}_i = (1 - \varepsilon'_i)\, r_i + \varepsilon'_i\, Y'_i, \quad i = 1, \ldots, n,$$

where $Y'_i = (Y'_{i,1}, \ldots, Y'_{i,d})$ and $r_i$ is the $i$th row of $\{r_{ij}\}$.

Note that the $\varepsilon'_i$’s are not independent. Indeed, they correspond to draws without replacement from an urn that contains $m$ balls (vertices) among which $\kappa$ are of type 1 (in the planted clique) and the rest are of type 0 (outside of the planted clique). Denote by $p_{\varepsilon'}$ the joint distribution of $\varepsilon' = (\varepsilon'_1, \ldots, \varepsilon'_n)$, and define their “with replacement” counterparts as follows. Let $\varepsilon_1, \ldots, \varepsilon_n$ be i.i.d. Bernoulli random variables with the matching parameter, and denote by $p_\varepsilon$ the joint distribution of $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)$.
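The substitution of draws without replacement by independent draws is quantitatively mild when $n \ll m$, in the spirit of the [DF80] bound used below; this small simulation (the parameters are purely illustrative) compares the two count distributions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, kappa, n = 2000, 100, 40     # population size, successes, draws (n << m)

# Without replacement: hypergeometric counts; with replacement: binomial counts.
without = rng.hypergeometric(kappa, m - kappa, n, size=100_000)
with_rep = rng.binomial(n, kappa / m, size=100_000)
```

Both counts have mean $n\kappa/m$; the total variation distance between the two laws shrinks at rate $O(n/m)$, which is what makes the coupling in the proof affordable.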

We also replace the distribution of the $\gamma'_j$’s as follows. Let $\gamma = (\gamma_1, \ldots, \gamma_d)$ have conditional distribution given $\varepsilon$ defined by

$$p_{\gamma|\varepsilon}(A) = \mathbb{P}\Big(\gamma' \in A\ \Big|\ \sum_{j=1}^d \gamma'_j \ge k,\ \varepsilon' = \varepsilon\Big).$$

Define $X_1, \ldots, X_n$ by

$$X_i = (1 - \varepsilon_i)\, r_i + \varepsilon_i\, Y_i, \quad i = 1, \ldots, n,$$

where $Y_i$ has coordinates given by

$$Y_{i,j} = \begin{cases} \eta_i & \text{if } \gamma_j = 1, \\ r_{ij} & \text{otherwise.} \end{cases}$$

With this construction, the $X_i$’s are i.i.d. Moreover, as we will see, the joint distribution of $(X_1, \ldots, X_n)$ is close in total variation to the joint distribution of $(X^{(G)}_1, \ldots, X^{(G)}_n)$.

Note first that Markov’s inequality yields

 (11)

Moreover, given $\varepsilon'$ with $\sum_i \varepsilon'_i = s$, the relevant count follows the distribution $\mathcal{H}(2m - n, \kappa - s, n)$. It follows from a theorem of [DF80] that

$$\Big\|\mathcal{H}(2m - n, \kappa - s, n) - \mathcal{B}\Big(n, \frac{\kappa - s}{2m - n}\Big)\Big\|_{TV} \le \frac{4n}{2m - n} \le \frac{4n}{m}.$$

Together with the Chernoff–Okamoto inequality [Dud99], Equation (1.3.10), it yields

$$\mathbb{P}\Big(U < \frac{n\kappa}{4m} - \sqrt{\frac{n\kappa}{4m}\log(mn)}\Big) \le \cdots$$

Combined with (11), and in view of (10), it implies that with high probability it holds

$$\sum_{j=1}^d \gamma_j \ge U \ge \frac{n\kappa}{4m} - \sqrt{\frac{n\kappa}{4m}\log(mn)} \ge \frac{n\kappa}{8m} \ge k. \qquad (12)$$

Denote by $p'$ the joint distribution of $(\varepsilon', \gamma')$ and by $p$ that of $(\varepsilon, \gamma)$. Using again [DF80] and (10)(a), we get

$$\|p' - p\|_{TV} \le \frac{6n}{m} + \|p_{\varepsilon'} - p_\varepsilon\|_{TV} \le \frac{6n}{m} + \frac{4n}{2m} = \frac{8n}{m} \le \beta\delta.$$

Since the conditional distribution of $(X^{(G)}_1, \ldots, X^{(G)}_n)$ given $(\varepsilon', \gamma')$ is the same as that of $(X_1, \ldots, X_n)$ given $(\varepsilon, \gamma)$, the same total variation bound holds for the two joint distributions.

It remains to prove that $\mathbb{P}_1$ satisfies (2). Fix a realization of $\gamma$ and define $Z \in \mathbb{R}^d$ by

$$Z_j = \begin{cases} \gamma_j/\sqrt{k} & \text{if } \sum_{i=1}^j \gamma_i \le k, \\ 0 & \text{otherwise.} \end{cases}$$

Denote by $S_Z$ the support of $Z$. Next, observe that for any $x > 0$, it holds

$$\inf_{v \in \mathcal{B}_0(k)} \mathbb{P}_1^{\otimes n}\big(\hat{V}_n(v) - (1 + \theta) < -x\big) \le \mathbb{P}_1^{\otimes n}\big(\hat{V}_n(Z) - (1 + \theta) < -x\big). \qquad (13)$$

Moreover, for any $i$,

$$(Z^\top X_i)^2 = \frac{1}{k}\Big(k\varepsilon_i\eta_i + (1 - \varepsilon_i)\sum_{j \in S_Z} r_{ij}\Big)^2 = \varepsilon_i k + (1 - \varepsilon_i)\frac{1}{k}\Big(\sum_{j \in S_Z} r_{ij}\Big)^2.$$

Therefore, since $\varepsilon$ is independent of the $r_{ij}$’s, the following equality holds in distribution:

$$(Z^\top X_i)^2 \overset{\text{dist.}}{=} 1 + \varepsilon_i(k - 1) + \frac{2(1 - \varepsilon_i)}{k} \sum_{\ell=1}^{\binom{k}{2}} \omega_{i,\ell},$$

where $\{\omega_{i,\ell}\}$ is a sequence of i.i.d. Rademacher random variables that are independent of the $\varepsilon_i$’s. Note that by Hoeffding’s inequality, it holds with probability at least $1 - \nu/2$ that

$$\frac{2}{nk} \sum_{i=1}^n \sum_{\ell=1}^{\binom{k}{2}} \omega_{i,\ell} \ge -\frac{4}{nk}\sqrt{2n\binom{k}{2}\log(2/\nu)} \ge -4\sqrt{\frac{\log(2/\nu)}{n}}.$$

Moreover, it follows from the Chernoff–Okamoto inequality [Dud99], Equation (1.3.10), that with probability at least $1 - \nu/2$, it holds

$$\frac{k-1}{n} \sum_{i=1}^n \cdots$$