Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization

In distributed second order optimization, a standard strategy is to average many local estimates, each of which is based on a small sketch or batch of the data. However, the local estimates on each machine are typically biased, relative to the full solution on all of the data, and this can limit the effectiveness of averaging. Here, we introduce a new technique for debiasing the local estimates, which leads to both theoretical and empirical improvements in the convergence rate of distributed second order methods. Our technique has two novel components: (1) modifying standard sketching techniques to obtain what we call a surrogate sketch; and (2) carefully scaling the global regularization parameter for local computations. Our surrogate sketches are based on determinantal point processes, a family of distributions for which the bias of an estimate of the inverse Hessian can be computed exactly. Based on this computation, we show that when the objective being minimized is ℓ2-regularized with parameter λ and individual machines are each given a sketch of size m, then to eliminate the bias, local estimates should be computed using a shrunk regularization parameter given by λ' = λ·(1 − d_λ/m), where d_λ is the λ-effective dimension of the Hessian (or, for quadratic problems, the data matrix).


1 Introduction

We consider the task of second order optimization in a distributed or parallel setting. Suppose that q workers are each given a small sketch of the data (e.g., a random sample or a random projection) and a parameter vector x_t. The goal of the i-th worker is to construct a local estimate p̂_i of the Newton step relative to a convex loss on the full dataset. The estimates are then averaged and the parameter vector is updated using this averaged step, obtaining x_{t+1}. This basic strategy has been extensively studied and it has proven effective for a variety of optimization tasks because of its communication-efficiency [38]. However, a key problem that limits the scalability of this approach is that the local estimates of second order steps are typically biased, which means that, beyond a certain point, adding more workers will not lead to any improvement in the convergence rate. Furthermore, for most types of sketched estimates this bias is difficult to compute, or even approximate, which makes it difficult to correct.

In this paper, we propose a new class of sketching methods, called surrogate sketches, which allow us to debias local estimates of the Newton step, thereby making distributed second order optimization more scalable. In our analysis of the surrogate sketches, we exploit recent developments in determinantal point processes (DPPs) to give exact formulas for the bias of the estimates produced with those sketches, enabling us to correct that bias. Due to algorithmic advances in DPP sampling, surrogate sketches can be implemented in time nearly linear in the size of the data, when the number of data points is much larger than their dimensionality, so our results lead to direct improvements in the time complexity of distributed second order optimization. Remarkably, our analysis of the bias of surrogate sketches leads to a simple technique for debiasing the local Newton estimates for ℓ2-regularized problems, which we call scaled regularization. We show that the regularizer used on the sketched data should be scaled down compared to the global regularizer, and we give an explicit formula for that scaling. Our empirical results demonstrate that scaled regularization significantly reduces the bias of local Newton estimates not only for surrogate sketches, but also for a range of other sketching techniques.

1.1 Debiasing via Surrogate Sketches and Scaled Regularization

Our scaled regularization technique applies to sketching the Newton step over a convex loss, as described in Section 3; however, for concreteness, we describe it here in the context of regularized least squares. Suppose that the data is given in the form of an n × d matrix A and an n-dimensional vector b, where n ≫ d. For a given regularization parameter λ > 0, our goal is to approximately solve the following problem:

(1)   x* = argmin_x  (1/2)·‖Ax − b‖² + (λ/2)·‖x‖².

Following the classical sketch-and-solve paradigm, we use a random sketching matrix S of size m × n, where m ≪ n, to replace this large regularized least squares problem with a smaller problem of the same form. We do this by sketching both the matrix A and the vector b, obtaining the problem given by:

(2)   x̂ = argmin_x  (1/2)·‖SAx − Sb‖² + (λ'/2)·‖x‖²,

where we deliberately allow the local regularizer λ' to be different than λ. The question we pose is: What is the right choice of λ' so as to minimize ‖E[x̂] − x*‖, i.e., the bias of x̂, which will dominate the estimation error in the case of massively parallel averaging? We show that the choice of λ' is controlled by a classical notion of effective dimension for regularized least squares [1].

Definition 1

Given a matrix A and a regularization parameter λ > 0, the λ-effective dimension of A is defined as d_λ = d_λ(A) = tr(A^T A (A^T A + λI)^{-1}).
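Since d_λ appears throughout the paper, it is worth noting that it is cheap to compute for moderate d; the snippet below is a minimal NumPy sketch of this computation (the function name effective_dimension is ours, not the paper's).

```python
import numpy as np

def effective_dimension(A, lam):
    """lambda-effective dimension: d_lam = tr(A^T A (A^T A + lam*I)^{-1})."""
    d = A.shape[1]
    gram = A.T @ A
    # Solve (A^T A + lam*I) X = A^T A and take the trace of the solution.
    return np.trace(np.linalg.solve(gram + lam * np.eye(d), gram))

# Example: d_lam decreases from d toward 0 as lam grows.
A = np.random.randn(1000, 20)
print(effective_dimension(A, 1e-3), effective_dimension(A, 1e3))
```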

For surrogate sketches, which we define in Section 2, it is in fact possible to bring the bias down to zero, and we give an exact formula for the correct λ' that achieves this (see Theorem 6 in Section 3 for a statement which applies more generally to Newton's method).

Theorem 1

If x̂ is constructed using a size m surrogate sketch from Definition 3, then E[x̂] = x* for λ' = λ·(1 − d_λ/m).

Figure 1: Estimation error against the number of averaged outputs for the Boston housing prices dataset (see Section 5). The dotted curves show the error when the regularization parameter is rescaled as in Theorem 1.

Thus, the regularization parameter λ' used to compute the local estimates should be smaller than the global regularizer λ. While somewhat surprising, this observation does align with some prior empirical [37] and theoretical [13] results which suggest that random sketching or sampling introduces some amount of implicit regularization. From this point of view, it makes sense that we should compensate for this implicit effect by reducing the amount of explicit regularization being used.

One might assume that the above formula for λ' is a unique property of surrogate sketches. However, we empirically show that our scaled regularization applies much more broadly, by testing it with the standard Gaussian sketch (S has i.i.d. N(0, 1/m) entries), a Rademacher sketch (S has i.i.d. entries equal to +1/√m or −1/√m with probability 0.5), and uniform row sampling. In Figure 1, we plot normalized estimates of the bias, obtained by averaging a growing number q of i.i.d. copies of x̂, showing the results with both scaled (dotted curves) and un-scaled (solid curves) regularization. Remarkably, the scaled regularization seems to correct the bias of x̂ very effectively for Gaussian and Rademacher sketches as well as for the surrogate sketch, resulting in the estimation error decaying to zero as q grows. For uniform sampling, scaled regularization also noticeably reduces the bias. In Section 5 we present experiments on more datasets which further verify these claims.
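To make this experiment concrete, here is a small self-contained illustration (ours, not the authors' code, and on synthetic data rather than the Boston housing dataset): it averages q independent Gaussian sketch-and-solve estimates of (2), once with the global λ and once with the scaled λ' = λ·(1 − d_λ/m), and prints the normalized distance to x*.

```python
import numpy as np

def ridge(A, b, lam):
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

def effective_dimension(A, lam):
    gram = A.T @ A
    return np.trace(np.linalg.solve(gram + lam * np.eye(A.shape[1]), gram))

rng = np.random.default_rng(0)
n, d, m, lam, q = 1000, 20, 100, 50.0, 300
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
x_star = ridge(A, b, lam)                                    # exact solution of (1)

lam_scaled = lam * (1.0 - effective_dimension(A, lam) / m)   # scaled regularizer from Theorem 1

for lam_local, label in [(lam, "unscaled"), (lam_scaled, "scaled")]:
    avg = np.zeros(d)
    for _ in range(q):
        S = rng.standard_normal((m, n)) / np.sqrt(m)         # Gaussian sketch
        avg += ridge(S @ A, S @ b, lam_local)                # local estimate, as in (2)
    avg /= q
    print(label, np.linalg.norm(avg - x_star) / np.linalg.norm(x_star))
```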

[38]: i.i.d. row sample, uniform averaging, regularizer λ
[14]: i.i.d. row sample, determinantal averaging, regularizer λ
Thm. 2: surrogate sketch, uniform averaging, regularizer λ·(1 − d_λ/m)
Table 1: Comparison of convergence guarantees for the Distributed Iterative Hessian Sketch on regularized least squares (see Theorem 2), with q workers and sketch size m; the convergence rates and the assumptions behind each guarantee are discussed in Section 1.3. Note that both of the references [38, 14] state their results for uniform sampling sketches. These can be easily adapted to leverage score sampling, in which case the cost of constructing each sketch changes accordingly.

1.2 Convergence Guarantees for Distributed Newton Method

We use the debiasing technique introduced in Section 1.1 to obtain the main technical result of this paper, which gives a convergence and time complexity guarantee for distributed Newton's method with surrogate sketching. Once again, for concreteness, we present the result here for the regularized least squares problem (1), but a general version for convex losses is given in Section 4 (see Theorem 10). Our goal is to perform a distributed and sketched version of the classical Newton step: x_{t+1} = x_t − H^{-1} g(x_t), where H = A^T A + λI is the Hessian of the quadratic loss and g(x_t) = A^T(Ax_t − b) + λx_t is the gradient. To efficiently approximate this step, while avoiding the cost of computing the exact Hessian, we use a distributed version of the so-called Iterative Hessian Sketch (IHS), which replaces the Hessian with a sketched version H̃, but keeps the exact gradient, resulting in the update direction H̃^{-1} g(x_t) [31, 28, 25, 23]. Our goal is that H̃ should be cheap to construct and that it should lead to an unbiased estimate of the exact Newton step H^{-1} g(x_t). When the matrix A is sparse, it is desirable for the algorithm to run in time that depends on the input sparsity, i.e., the number of non-zeros of A, denoted nnz(A).
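The following is a minimal sketch (ours) of the distributed IHS communication pattern just described, with a Gaussian sketch standing in for the surrogate sketch and with each worker forming the simple plug-in local Hessian (SA)^T(SA) + λ'I; the paper's surrogate Newton Sketch enters λ' slightly differently (see Section 3.2), so treat this purely as an illustration of the averaged update x_{t+1} = x_t − (1/q)·Σ_i H̃_i^{-1} g(x_t).

```python
import numpy as np

def distributed_ihs(A, b, lam, lam_local, m, q, iters, rng):
    """Illustrative distributed IHS loop for problem (1): each worker holds one
    sketched Hessian, the exact gradient is computed centrally, and the averaged
    sketched Newton directions are used to update the iterate."""
    n, d = A.shape
    local_hessians = []
    for _ in range(q):
        S = rng.standard_normal((m, n)) / np.sqrt(m)      # stand-in Gaussian sketch
        A_sk = S @ A
        local_hessians.append(A_sk.T @ A_sk + lam_local * np.eye(d))
    x = np.zeros(d)
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam * x                # exact gradient of (1)
        steps = [np.linalg.solve(H, grad) for H in local_hessians]
        x = x - np.mean(steps, axis=0)                    # averaged update
    return x

# Example usage on synthetic data; lam_local would be set to the scaled
# regularizer lambda' in the paper's method.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((1000, 20)), rng.standard_normal(1000)
x = distributed_ihs(A, b, lam=50.0, lam_local=40.0, m=100, q=10, iters=10, rng=rng)
```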

Theorem 2

Let κ denote the condition number of the Hessian H, let x_0 be the initial parameter vector, and let m denote the sketch size. There is an algorithm which returns a Hessian sketch H̃ in time nearly linear in nnz(A) (plus lower-order terms polynomial in d), such that if H̃_1, …, H̃_q are i.i.d. copies of H̃, then the averaged update x_{t+1} = x_t − (1/q)·Σ_{i=1}^q H̃_i^{-1} g(x_t), with high probability, enjoys a linear convergence rate whose contraction factor decays to zero as the number of workers q grows.

Remark 3

To reach an ε-approximation of x*, a number of iterations that grows only logarithmically in 1/ε suffices. See Theorem 10 in Section 4 for a general result on convex losses of the form described in Section 3.

Crucially, the linear convergence rate decays to zero as q goes to infinity, which is possible because the local estimates of the Newton step produced by the surrogate sketch are unbiased. Just like commonly used sketching techniques, our surrogate sketch can be interpreted as replacing the matrix A with a smaller matrix SA, where S is an m × n sketching matrix, with m denoting the sketch size. Unlike the Gaussian and Rademacher sketches, the sketch we use is very sparse, since it is designed to only sample and rescale a subset of rows from A, which makes the multiplication SA very fast. Our surrogate sketch has two components: (1) standard i.i.d. row sampling according to the so-called λ-ridge leverage scores [18, 1]; and (2) non-i.i.d. row sampling according to a determinantal point process (DPP) [20]. While leverage score sampling has been used extensively as a sketching technique for second order methods, it typically leads to biased estimates, so combining it with a DPP is crucial to obtain strong convergence guarantees in the distributed setting. The primary computational costs in constructing the sketch come from estimating the leverage scores and sampling from the DPP.

1.3 Related Work

While there is extensive literature on distributed second order methods, it is useful to first compare to the most directly related approaches. In Table 1, we contrast Theorem 2 with two other results which also analyze variants of the Distributed IHS, with all sketch sizes fixed to m. The algorithm of [38] simply uses an i.i.d. row sampling sketch to approximate the Hessian, and then uniformly averages the estimates. This leads to a bias term in the convergence rate, which can only be reduced by increasing the sketch size. In [14], this is avoided by performing weighted averaging instead of uniform averaging, so that the rate decays to zero with increasing q. Similarly as in our work, determinants play a crucial role in correcting the bias, however with significantly different trade-offs. While [14] avoids having to alter the sketching method, the weighted average introduces a significant amount of variance, which manifests itself through an additional factor in its convergence rate. Our surrogate sketch avoids this additional variance factor while maintaining the scalability in q. The only trade-off is that the time complexity of the surrogate sketch has a slightly worse polynomial dependence on d, and as a result we require a lower bound on the sketch size m. Finally, unlike the other approaches, our method uses a scaled regularization parameter to debias the Newton estimates.

Distributed second order optimization has been considered by many other works in the literature, and many methods have been proposed, such as DANE [34], AIDE [33], DiSCO [40], and others [27, 2]. Distributed averaging has been discussed in the context of linear regression problems in works such as [4], and studied for ridge regression in [37]. However, unlike our approach, all of these methods suffer from biased local estimates for regularized problems. Our work deals with distributed versions of the Iterative Hessian Sketch and the Newton Sketch; convergence guarantees for the non-distributed versions are given in [31] and [32]. Sketching for constrained and regularized convex programs and minimax optimality has been studied in [30, 39, 35]. Optimal iterative sketching algorithms for least squares problems were investigated in [22, 25, 23, 24, 26]. Bias in distributed averaging has been recently considered in [3], which provides expressions for regularization parameters for Gaussian sketches. The theoretical analysis of [3] assumes identical singular values for the data matrix, whereas our results make no such assumption. Finally, our analysis of surrogate sketches builds upon a recent line of works which derive expectation formulas for determinantal point processes in the context of least squares regression [15, 16, 12, 13].

2 Surrogate Sketches

In this section, to motivate our surrogate sketches, we consider several standard sketching techniques and discuss their shortcomings. Our purpose in introducing surrogate sketches is to enable exact analysis of the sketching bias in second order optimization, thereby permitting us to find the optimal hyper-parameters for distributed averaging.

Given an n × d data matrix A, we define a standard sketch of A as the matrix SA, where S is a random m × n matrix with i.i.d. rows distributed according to a measure μ with identity covariance, rescaled by 1/√m so that E[S^T S] = I. This includes such standard sketches as:

  1. Gaussian sketch: each row of S is distributed as (1/√m)·N(0, I).

  2. Rademacher sketch: each entry of S is +1/√m with probability 0.5 and −1/√m otherwise.

  3. Row sampling: each row of S is (1/√(m·p_i))·e_i^T, where e_i is the i-th standard basis vector, the index i is drawn with probability p_i, and Σ_i p_i = 1.

Here, the row sampling sketch can be uniform (which is common in practice), and it also includes row norm squared sampling and leverage score sampling (which lead to better results), where the sampling distribution (p_1, …, p_n) depends on the data matrix A.
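For reference, the three standard constructions above can be written down in a few lines each; the helpers below are a minimal sketch under the stated 1/√m rescaling convention (so that E[S^T S] = I), with the function names being ours.

```python
import numpy as np

def gaussian_sketch(m, n, rng):
    return rng.standard_normal((m, n)) / np.sqrt(m)

def rademacher_sketch(m, n, rng):
    return rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

def row_sampling_sketch(m, n, p, rng):
    """Row sampling with probabilities p (uniform, row-norm, or leverage scores).
    Index i is selected with probability p[i] and the row is rescaled by 1/sqrt(m*p[i])."""
    p = np.asarray(p)
    idx = rng.choice(n, size=m, p=p)
    S = np.zeros((m, n))
    S[np.arange(m), idx] = 1.0 / np.sqrt(m * p[idx])
    return S
```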

Standard sketches are generally chosen so that the sketched covariance matrix (SA)^T(SA) is an unbiased estimator of the full data covariance matrix A^T A. This is ensured by the fact that E[S^T S] = I. However, in certain applications, it is not the data covariance matrix itself that is of primary interest, but rather its inverse. In this case, standard sketching techniques no longer yield unbiased estimators. Our surrogate sketches aim to correct this bias, so that, for example, we can construct an unbiased estimator for the regularized inverse covariance matrix (A^T A + λI)^{-1} (given some λ > 0). This is important for regularized least squares and second order optimization.

We now give the definition of a surrogate sketch. Consider some n-variate measure μ, and let the i.i.d. random design of size k for μ be the k × n random matrix with i.i.d. rows drawn from μ. Without loss of generality, assume that μ has identity covariance; then, after rescaling by 1/√k, the i.i.d. design is a random sketching matrix in the sense defined above.

Before we introduce the surrogate sketch, we define a so-called determinantal design (an extension of the definitions proposed by [13, 16]), which uses determinantal rescaling to transform the distribution of the i.i.d. design into a non-i.i.d. random matrix. The transformation is parameterized by the matrix A, the regularization parameter λ, and a size parameter which controls the (random) number of rows of the resulting matrix.

Definition 2

Given positive scalars (a size parameter and a regularization parameter λ) and a matrix A, we define the determinantal design as a random matrix with randomized row-size, obtained by reweighting each realization of the i.i.d. design proportionally to a determinant involving its regularized sketched covariance matrix.

We next give the key properties of determinantal designs that make them useful for sketching and second-order optimization. The following lemma is an extension of the results shown for determinantal point processes by [13].

Lemma 4

Let the random matrix be distributed as the determinantal design of Definition 2. Then two exact expectation formulas hold for functions of its sketched covariance matrix; they are proved in Appendix A and form the basis for Theorems 6 and 7.

The row-size of the determinantal design is a random variable, and it is not distributed in the same way as the size of the i.i.d. design, even though the size parameter can be used to control its expectation. As a result of the determinantal rescaling, the distribution of the row-size is shifted towards larger values relative to the i.i.d. design, so that its expectation exceeds that of the i.i.d. design by the λ-effective dimension d_λ.

We can now define the surrogate sketching matrix by rescaling the determinantal design, similarly to how we defined the standard sketching matrix by rescaling the i.i.d. design.

Definition 3

Fix a sketch size m. Moreover, let the size parameter of the determinantal design from Definition 2 be the unique positive scalar for which the expected row-size of the design equals m. Then, the correspondingly rescaled determinantal design is a surrogate sketching matrix of size m for μ.

Note that many different surrogate sketches can be defined for a single sketching distribution μ, depending on the choice of the regularization parameter and the sketch size. In particular, this means that a surrogate sketching distribution (even when the pre-surrogate i.i.d. distribution is Gaussian or uniform) always depends on the data matrix A, whereas many standard sketches (such as Gaussian and uniform) are oblivious to the data matrix.

Of particular interest to us is the class of surrogate row sampling sketches, i.e., those where the probability measure μ is supported on appropriately rescaled standard basis vectors e_1, …, e_n, with sampling probabilities p_1, …, p_n. In this case, we can straightforwardly leverage the algorithmic results on sampling from determinantal point processes [10, 11] to obtain efficient algorithms for constructing surrogate sketches.

Theorem 5

Given any n × d matrix A, a regularization parameter λ > 0 and row sampling probabilities p_1, …, p_n, we can construct the surrogate row sampling sketch with respect to these probabilities (of any size m) in time that is nearly linear in nnz(A) for tall matrices, plus lower-order terms polynomial in d.

3 Unbiased Estimates for the Newton Step

Consider a convex minimization problem defined by the following loss function:

f(x) = Σ_{i=1}^n ℓ_i(x^T a_i) + (λ/2)·‖x‖²,

where each ℓ_i is a twice differentiable convex function and a_1, …, a_n are the input feature vectors in R^d. For example, if ℓ_i(z) = (1/2)·(z − b_i)², then we recover the regularized least squares task (1); and if ℓ_i(z) = log(1 + exp(−b_i z)) with labels b_i ∈ {−1, +1}, then we recover logistic regression. The Newton's update for this minimization task can be written as follows:

x_{t+1} = x_t − (∇²f(x_t))^{-1} ∇f(x_t).

Newton's method can be interpreted as solving a regularized least squares problem which is the local approximation of f at the current iterate x_t. Thus, with an appropriate choice of a matrix A (consisting of the scaled row vectors √(ℓ_i''(x_t^T a_i))·a_i^T) and a vector b, the Hessian and gradient at x_t can be written as H = A^T A + λI and g = A^T b + λx_t. We now consider two general strategies for sketching the Newton step, both of which we discussed in Section 1 for regularized least squares.

3.1 Sketch-and-Solve

We first analyze the classic sketch-and-solve paradigm, which has been popularized in the context of least squares, but also applies directly to Newton's method. This approach involves constructing sketched versions of both the Hessian and the gradient, by sketching with a random m × n matrix S. Crucially, we modify this classic technique by allowing the local regularization parameter λ' to be different than in the global problem, obtaining a sketched version of the Newton step in which both the sketched Hessian and the sketched gradient are formed from SA and use the modified regularizer λ'. Our goal is to obtain an unbiased estimate p̂ of the full Newton step, i.e., such that E[p̂] = H^{-1} g, by combining a surrogate sketch with an appropriately scaled regularization λ'.

We now establish the correct choice of surrogate sketch and scaled regularization to achieve unbiasedness. The following result is a more formal and generalized version of Theorem 1. We let μ be any distribution that satisfies the assumptions of Definition 3, so that the corresponding standard sketch can be any one of the sketches discussed in Section 2.

Theorem 6

If p̂ is constructed using a surrogate sketch of size m with the scaled regularizer λ' = λ·(1 − d_λ/m), then E[p̂] = H^{-1} g.

3.2 Newton Sketch

We now consider the method referred to as the Newton Sketch [32, 29], which differs from the sketch-and-solve paradigm in that it only sketches the Hessian, whereas the gradient is computed exactly. Note that in the case of least squares, this algorithm reduces exactly to the Iterative Hessian Sketch, which we discussed in Section 1.2. This approach generally leads to more accurate estimates than sketch-and-solve, however it requires exact gradient computation, which in distributed settings often involves an additional communication round. Our Newton Sketch estimate uses the same λ' as the sketch-and-solve estimate; however, λ' enters the sketched Hessian somewhat differently than before. The additional factor comes as a result of using the exact gradient. One way to interpret it is that we are scaling the data matrix instead of the regularization. The following result shows that, with λ' chosen as before, the surrogate Newton Sketch is unbiased.

Theorem 7

If the Newton Sketch estimate p̂ is constructed using a surrogate sketch of size m with λ' = λ·(1 − d_λ/m), then E[p̂] = H^{-1} g.

4 Convergence Analysis

Here, we study the convergence guarantees of the surrogate Newton Sketch with distributed averaging. Consider q i.i.d. copies of the Hessian sketch defined in Section 3.2. We start by finding an upper bound on the distance between the optimal Newton update and the averaged Newton Sketch update at the t-th iteration. We use the Mahalanobis norm as the distance metric: for a positive definite matrix M, let ‖v‖_M = √(v^T M v). The distance between the updates is then equal to the distance between the corresponding next iterates.

We can bound this quantity by the spectral norm approximation error of the averaged sketched Hessian (relative to H), multiplied by the Mahalanobis norm of the exact Newton step. To upper bound the approximation error, we now focus our discussion on a particular variant of the surrogate sketch that we call surrogate leverage score sampling. Leverage score sampling is an i.i.d. row sampling method, i.e., the probability measure μ is supported on rescaled standard basis vectors with probabilities p_1, …, p_n. Specifically, we consider the so-called λ-ridge leverage scores, which have been used in the context of regularized least squares [1]: the probabilities p_i must be proportional to (or dominate, up to a constant factor) the ridge leverage scores τ_i = a_i^T (A^T A + λI)^{-1} a_i, where a_i^T denotes the i-th row of A. Such p_i's can be found efficiently using standard random projection techniques [17, 9].
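For small problems the λ-ridge leverage scores can be computed exactly, as in the minimal sketch below (in practice they are approximated via random projections, as noted above); the normalization into sampling probabilities is our illustrative choice, and the function names are ours.

```python
import numpy as np

def ridge_leverage_scores(A, lam):
    """tau_i = a_i^T (A^T A + lam*I)^{-1} a_i for each row a_i of A."""
    d = A.shape[1]
    # Each row of A (A^T A + lam*I)^{-1} dotted with the corresponding row of A.
    M = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T).T   # = A (A^T A + lam*I)^{-1}
    return np.sum(M * A, axis=1)

def sampling_probabilities(A, lam):
    tau = ridge_leverage_scores(A, lam)
    return tau / tau.sum()   # tau.sum() equals the effective dimension d_lambda
```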

Lemma 8

If the sketch size m is sufficiently large and we use the surrogate leverage score sampling sketch of size m, then the i.i.d. copies of the Hessian sketch satisfy, with high probability, a spectral-norm approximation guarantee whose error decays with both the sketch size m and the number of workers q.

Note that, crucially, we can invoke the unbiasedness of the Hessian sketch estimate (Theorem 7), so we obtain, with probability at least 1 − δ, a bound on the distance between the averaged update and the exact Newton step; we refer to this bound as (3), and its right-hand side decays as the sketch size m and the number of workers q grow.

We now move on to measuring how close the next Newton Sketch iterate is to the global optimizer of the loss function f. For this part of the analysis, we assume that the Hessian matrix is Lipschitz continuous.

Assumption 9

The Hessian matrix ∇²f(x) is L-Lipschitz continuous, that is, ‖∇²f(x) − ∇²f(x')‖ ≤ L·‖x − x'‖ for all x and x'.

Combining (3) with Lemma 14 from [14], we obtain the following convergence result for the distributed Newton Sketch using the surrogate leverage score sampling sketch.

Theorem 10

Let κ and λ_min be the condition number and smallest eigenvalue of the Hessian ∇²f(x_t), respectively. The distributed Newton Sketch update constructed using a surrogate leverage score sampling sketch of size m and averaged over q workers satisfies, with high probability, an error recursion that combines a linear convergence term (decaying with the sketch size m and the number of workers q) with a second-order term governed by the Lipschitz constant L and λ_min.

Remark 11

The convergence rate for the distributed Iterative Hessian Sketch algorithm, as given in Theorem 2, is obtained by specializing (3) to the quadratic case, where the Hessian is constant. The additional assumption in Theorem 2 is only needed for the time complexity (see Theorem 5); the convergence rate itself holds more generally.

5 Numerical Results

In this section we present numerical results, with further details provided in Appendix D. Figures 2 and 4 show the estimation error as a function of the number of averaged outputs for the regularized least squares problem discussed in Section 1.1, on Cifar-10 and Boston housing prices datasets, respectively.

(a) Gaussian (b) Uniform (c) Surrogate sketch
Figure 2: Estimation error against the number of averaged outputs for regularized least squares on the first two classes of the Cifar-10 dataset, for different regularization parameter values λ. The dotted lines show the error for the debiased versions (obtained using the scaled regularizer λ' = λ·(1 − d_λ/m)) for each solid line with the same color and marker.
(a) statlog-australian-credit (b) breast-cancer-wisc (c) ionosphere
Figure 3: Distributed Newton Sketch algorithm for logistic regression with ℓ2-regularization on different UCI datasets. The dotted curves show the error when the regularization parameter is rescaled using the provided expression for λ'. In all of the experiments, the number of workers and the regularization parameter are fixed; the dataset dimensions and sketch sizes differ across plots (a), (b), (c). The step size for the distributed Newton Sketch updates has been determined via backtracking line search.

Figure 4: Estimation error of the surrogate sketch, against uniform sampling with unweighted averaging [38] and determinantal averaging [14].

Figure 2 illustrates that, when the number of averaged outputs is large, rescaling the regularization parameter as in Theorem 1 improves the estimation error for a range of different λ values. We observe that this is true not only for the surrogate sketch but also for the Gaussian sketch (we also tested the Rademacher sketch, which performed exactly as the Gaussian sketch did). For uniform sampling, rescaling the regularization parameter does not lead to an unbiased estimator, but it significantly reduces the bias in most instances. Figure 4 compares the surrogate row sampling sketch to standard i.i.d. row sampling used in conjunction with the averaging methods suggested by [38] (unweighted averaging) and [14] (determinantal averaging), on the Boston housing dataset, for a fixed regularization parameter and sketch size. We show an average over 100 trials, along with the standard error. We observe that the better theoretical guarantees achieved by the surrogate sketch, as shown in Table 1, translate into improved empirical performance.

Figure 3 shows the estimation error against the iteration number for the distributed Newton Sketch algorithm running on a logistic regression problem with ℓ2-regularization on three different binary classification UCI datasets. We observe that the rescaled regularization technique leads to significant speedups in convergence, particularly for Gaussian and surrogate sketches.

6 Conclusion

We introduced two techniques for debiasing distributed second order methods. First, we defined a family of sketching methods called surrogate sketches, which admit exact bias expressions for local Newton estimates. Second, we proposed scaled regularization, a method for correcting that bias.

Acknowledgements

This work was partially supported by the National Science Foundation under grant IIS-1838179. Also, MD and MWM acknowledge DARPA, NSF, and ONR for providing partial support of this work.

References

Appendix A Expectation Formulas for Surrogate Sketches

In this section, we prove the expectation formulas given in Lemma 4. First, we derive the normalization constant of the determinantal design introduced in Definition 2. For this, we rely on the framework of determinant preserving random matrices recently introduced by [13]. The proofs here roughly follow the techniques from [13], the main difference being that we consider regularized matrices, whereas they focus on the unregularized case.

A square random matrix is determinant preserving (d.p.) if taking the expectation commutes with computing the determinant for that matrix and all of its submatrices. Consider the sketched covariance matrix associated with the i.i.d. random design of an isotropic measure μ, as in Definition 2. In Lemma 5 of [13], it is shown that this matrix is determinant preserving. Thus, using closure under addition (Lemma 4 in [13]), its regularized version (obtained by adding a multiple of the identity) is also d.p., so the normalization constant for the probability defined in Definition 2 can be computed in closed form.

Proof of Lemma 4  By definition, any d.p. matrix M satisfies E[adj(M)] = adj(E[M]), where adj(·) denotes the adjugate of a square matrix, which for any positive definite matrix is given by adj(M) = det(M)·M^{-1}. This allows us to show the second expectation formula from Lemma 4. Note that the proof is analogous to the proof of Lemma 11 in [13].

We next prove the first expectation formula from Lemma 4, by following the steps outlined by [13] in the proof of their Lemma 13. Let v denote any fixed vector. The i-th entry of the vector under consideration can be obtained by left-multiplying it by the i-th standard basis vector e_i^T. We will also make use of an auxiliary determinantal identity (Fact 2.14.2 from [5]).

Combining this with the fact that both of the relevant matrices are determinant preserving (for the measure defined as before), we obtain the desired identity entrywise. Since this holds for all indices i and all vectors v, the proof is complete.

We next use Lemma 4 to prove Theorems 6 and 7.

Proof of Theorem 6  Suppose that, in Lemma 4, the parameters are chosen as in Definition 3, so that the surrogate sketch in Theorem 6 is given by the correspondingly rescaled determinantal design. Expressing the sketched Newton estimate in terms of this design and applying both expectation formulas from Lemma 4, we conclude that its expectation equals the exact Newton step. This concludes the proof.

The proof of Theorem 7 follows analogously.

Proof of Theorem 7  Suppose again that, in Lemma 4, the parameters are chosen as in Definition 3. Since the Newton Sketch estimate uses the exact gradient, only the sketched Hessian needs to be controlled; applying the second formula from Lemma 4, we conclude that the expectation of the estimate equals the exact Newton step. This concludes the proof.

Appendix B Efficient Algorithms for Surrogate Sketches

In this section, we provide a framework for implementing surrogate sketches by relying on the algorithmic techniques from the DPP sampling literature. We then use these results to give the input-sparsity time implementation of the surrogate leverage score sampling sketch.

Definition 4

Given a probability measure μ over a domain and a kernel function defined on pairs of domain elements, we define a determinantal point process as a distribution over finite subsets of the domain, in which the probability of any event is expressed through determinants of the kernel evaluated on the sampled points, weighted by the base measure μ.

Remark 12

If the domain is the set of row vectors of an n × d matrix A and the kernel is L(a_i, a_j) = (1/λ)·a_i^T a_j, then the above process, with μ being the uniform measure over the rows, reduces to a standard L-ensemble DPP [20]. In particular, let S denote a random subset of row indices sampled so that Pr(S) is proportional to det(L_S), where L_S is the submatrix of the kernel matrix L = (1/λ)·A·A^T indexed by S; then the set of rows of A indexed by S is distributed identically to this DPP.

A key property of L-ensembles, which relates them to the λ-effective dimension, is that for the kernel L = (1/λ)·A·A^T described above, the expected subset size satisfies E[|S|] = tr(L(I + L)^{-1}) = d_λ.
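This identity is easy to check numerically; the snippet below (assuming the kernel L = (1/λ)·A·A^T discussed above, on synthetic data of our choosing) compares the two traces.

```python
import numpy as np

rng = np.random.default_rng(0)
A, lam = rng.standard_normal((300, 10)), 2.0

L = (A @ A.T) / lam                                           # L-ensemble kernel
expected_size = np.trace(np.linalg.solve(np.eye(300) + L, L)) # tr(L (I + L)^{-1})
d_lam = np.trace(np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ A))
print(expected_size, d_lam)   # the two traces coincide
```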

We next show that our determinantal design (Definition 2) can be decomposed into a DPP portion and an i.i.d. portion, which enables efficient sampling for surrogate sketches. A similar result was previously shown by [13] for their determinantal design (which differs from ours in that it is not regularized). The result below also immediately leads to the formula for the expected size of a surrogate sketch given in Section 2, namely that the expected size exceeds that of the corresponding i.i.d. design by d_λ.

Lemma 13

Let μ be a probability measure. Given the size and regularization parameters and a matrix A, consider a DPP sample together with an independent i.i.d. random design drawn from μ. Then the matrix formed by adding the elements of the DPP sample as rows into the i.i.d. design and then randomly permuting the rows of the obtained matrix is distributed as the determinantal design of Definition 2.

Proof  Let E be an event measurable with respect to the resulting random matrix. Computing its probability under the above two-step construction and comparing with Definition 2 yields the claim, which concludes the proof.
We now give an algorithm for sampling a surrogate of the i.i.d. row sampling sketch, where the importance sampling distribution is given by the probabilities p_1, …, p_n; here, the probability measure μ places mass p_i on the (rescaled) i-th standard basis vector. The surrogate sketch can be constructed as follows (a schematic code sketch is given after the list):

  1. Sample a set of row indices according to the DPP described in Remark 12 (adapted to the sampling probabilities p_i).

  2. Draw the (random) size of the i.i.d. portion, as specified by Lemma 13, and sample that many row indices i.i.d. from the distribution (p_1, …, p_n).

  3. Form a sequence consisting of the DPP indices and the i.i.d. indices, randomly permuted together.

  4. Then, the rows of the surrogate sketching matrix are the correspondingly rescaled standard basis vectors indexed by this sequence.
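Below is a schematic of the steps above. The DPP-sampled index set dpp_rows (step 1) and the size n_iid of the i.i.d. portion (step 2) are supplied by the caller, as placeholders for the exact samplers of [10, 11, 7] and the size distribution of Lemma 13; the 1/√(k·p_i) rescaling in step 4 mirrors the standard row sampling sketch and is our assumption.

```python
import numpy as np

def surrogate_row_sampling_sketch(A, p, dpp_rows, n_iid, rng):
    """Schematic construction: combine a DPP-sampled index set with i.i.d. index
    samples from p, randomly permute them, and form a sparse sketching matrix."""
    n = A.shape[0]
    p = np.asarray(p)
    iid_rows = rng.choice(n, size=n_iid, p=p)              # step 2: i.i.d. portion
    rows = np.concatenate([np.asarray(dpp_rows), iid_rows])
    rng.shuffle(rows)                                       # step 3: random interleaving
    k = len(rows)
    S = np.zeros((k, n))
    S[np.arange(k), rows] = 1.0 / np.sqrt(k * p[rows])      # step 4: rescaled basis rows
    return S
```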

We next present an implementation of the surrogate row sampling sketch which runs in input-sparsity time for tall matrices (i.e., when n ≫ d). The algorithm samples exactly from the surrogate sketching distribution, which is crucial for the analysis. Our algorithm is based on two recent papers on DPP sampling [10, 11]; however, we use a slight modification due to [7], which ensures exact sampling in input-sparsity time.
