A popular way to deal with massive modern data sets is to sample a representative data set that is much smaller than the original and then carry out the analysis on the sample. The sampling scheme usually depends on the analysis method employed. As long as the sampling process is not too computationally expensive and the sample is much smaller than the original data set, we can achieve considerable savings in total computation time.
Algorithmic leveraging refers to the special case where the probability that an observation is sampled is positively correlated with its influence on the results of the data analysis. The definition of influence varies with the data analysis problem considered.
We are concerned with algorithmic leveraging for the problem of least squares approximation (linear regression), but algorithmic leveraging has been applied to many other data analysis problems with large data sets, including low-rank matrix approximation and least absolute deviations regression.
Previous literature has primarily been concerned with estimation, providing high probability bounds on errors. However, in many practical regression settings, we would also like to perform uncertainty quantification. Therefore, in this paper we explore how to efficiently construct confidence intervals and tests of significance for the estimated coefficients. We utilize the framework of Ma et al., which computes the expectation and variance of algorithmic leveraging estimates for linear regression.
Assume that the data come from the following model:

$$y_i = x_i^T \beta_0 + \epsilon_i, \qquad \epsilon_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2), \qquad i = 1, \ldots, n, \tag{1}$$

where $x_i \in \mathbb{R}^p$ are the predictors and $y_i \in \mathbb{R}$ are the responses.
The statistically efficient estimator of the regression coefficient vector $\beta_0$ is the ordinary least squares (OLS) estimator, which requires $O(np^2)$ time to compute (see Section 2.1). When both $n$ and $p$ are large, that is expensive. This is why a data reduction method like algorithmic leveraging is desirable.
In Section 2, we introduce ordinary least squares and algorithmic leveraging for linear regression, and present some useful lemmas. Section 3 shows how to efficiently construct confidence intervals and tests of significance using those estimated coefficients. The confidence intervals are exact when $\sigma^2$ is known and asymptotically exact when it is not. In Section 4, using simulated data, we confirm that our proposed confidence intervals have at least the nominal coverage probability and that our tests of significance control the type 1 error rate and have low type 2 error rates. In contrast, the bootstrap, a popular approach to obtaining confidence intervals for complex estimators, is more computationally expensive and may produce confidence intervals whose coverage probabilities fall below the nominal level. Section 5 concludes and discusses possible future work.
2 Background and Preliminary Work
In this section, we provide some background on least squares estimation of linear regression models and describe algorithmic leveraging as applied to that problem. We end with a characterization of the distribution of the resulting regression coefficient estimates.
Let $v_i$ indicate the $i$th entry of a vector $v$, and $A_{ij}$ indicate entry $(i, j)$ of a matrix $A$.
2.1 Ordinary Least Squares and Statistical Leverage Scores
Suppose that we have data from Model (1). Let $X$ be the $n \times p$ matrix with rows $x_i^T$ and $y$ be the $n$-dimensional vector of the $y_i$'s. The OLS problem is

$$\min_{\beta} \|y - X\beta\|_2^2,$$

and its solution is

$$\hat{\beta}_{OLS} = (X^T X)^{-1} X^T y.$$
By the Gauss-Markov Theorem, $\hat{\beta}_{OLS}$ is the minimum variance unbiased estimator of $\beta_0$. By using the singular value decomposition (SVD) of $X$, $\hat{\beta}_{OLS}$ and, when $\sigma^2$ is known, its variance can be computed in $O(np^2)$ time. However, when both $n$ and $p$ are large, doing so is expensive.
Let the OLS fitted values be

$$\hat{y} = X\hat{\beta}_{OLS} = X(X^T X)^{-1} X^T y = Hy,$$

where $H = X(X^T X)^{-1} X^T$ is called the hat matrix.
The statistical leverage score of the $i$th observation is defined to be the $i$th diagonal entry of $H$, $h_{ii} = x_i^T (X^T X)^{-1} x_i$. The statistical leverage scores may be directly computed from the SVD of $X$, which requires $O(np^2)$ computation time. $h_{ii}$ is considered to be the influence of the $i$th observation on the regression results. For linear regression, algorithmic leveraging samples observations so that those with higher statistical leverage scores are more likely to be sampled and solves a least squares problem on the sample.
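Concretely, writing the thin SVD as $X = U\Sigma V^T$, we have $H = UU^T$, so the leverage scores are the squared row norms of $U$. A minimal NumPy sketch (the design matrix here is synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.standard_normal((n, p))

# Thin SVD of X: H = X (X^T X)^{-1} X^T = U U^T, so the leverage
# score h_ii is the squared norm of the i-th row of U.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
leverage = np.sum(U**2, axis=1)
```

As a sanity check, each leverage score lies in $[0, 1]$ and they sum to $p$, the trace of $H$.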
2.2 Algorithmic Leveraging in Linear Regression
Note that in theory the sample size $r$ could be any natural number, but in practice we would choose $r \ll n$.
Step of Algorithm 1 has computational complexity , and step . Step , as in section 2.1, requires computation time. Therefore, as long as the computation time to obtain the distribution is , the total computation time of Algorithm 1 is .
The ideal sampling distribution sets $\pi_i$ proportional to $h_{ii}$; by doing so, we are more likely to sample observations that are influential. Because computing the statistical leverage scores exactly requires $O(np^2)$ time, in practice approximations such as those of Clarkson et al. and Drineas et al. are used, which can be computed more quickly.
However, for our theoretical results we do not assume that $\pi$ was computed using either of the above algorithms. We only assume that $\pi$ does not depend on the responses $y$ and that no entry of $\pi$ equals zero.
Let $S$ be the $r \times n$ sampling matrix whose entry $S_{ki}$ is $1$ if the $k$th sample is the $i$th observation in the original data set and $0$ otherwise. The sampled data can be written as $X^* = SX$ and $y^* = Sy$. Let $W$ be the $r \times r$ diagonal matrix with $k$th diagonal element $1/(r\pi_i)$ if the $k$th sample is the $i$th observation of the original data set.
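To make the leveraging procedure concrete, the following sketch samples $r$ rows with replacement according to $\pi$ and solves the reweighted least squares problem. The $1/(r\pi_i)$ weighting convention and all data here are illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

def leveraging_estimate(X, y, pi, r, rng):
    """Sample r rows with replacement with probabilities pi and
    solve the reweighted least squares problem on the sample."""
    n = X.shape[0]
    idx = rng.choice(n, size=r, replace=True, p=pi)   # sampling step
    w = 1.0 / (r * pi[idx])                           # 1/(r*pi_i) weights
    sw = np.sqrt(w)
    # Solve min_beta || W^{1/2} (y* - X* beta) ||^2 via lstsq.
    beta, *_ = np.linalg.lstsq(sw[:, None] * X[idx], sw * y[idx], rcond=None)
    return beta, idx, w

rng = np.random.default_rng(1)
n, p, r = 5000, 4, 200
X = rng.standard_normal((n, p))
beta0 = np.array([1.0, -2.0, 0.0, 0.5])
y = X @ beta0 + rng.standard_normal(n)

# Exact leverage-based sampling distribution (sums to 1 since tr(H) = p).
U, _, _ = np.linalg.svd(X, full_matrices=False)
pi = np.sum(U**2, axis=1) / p
beta_tilde, idx, w = leveraging_estimate(X, y, pi, r, rng)
```

Multiplying both sides of the sampled system by $W^{1/2}$ and calling an ordinary least squares solver is equivalent to solving the weighted problem directly.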
2.3 Asymptotic Normality of the Algorithmic Leveraging Estimates
In this section, we study the distribution of $\tilde{\beta}$. We first note that $S^T W S$, with $S$ and $W$ as defined in Section 2.2, is an $n \times n$ diagonal matrix. Therefore, we can write $S^T W S = \mathrm{diag}(w_1, \ldots, w_n)$, where $w_i = c_i / (r\pi_i)$ and $c_i$ is the number of times the $i$th observation appears in the sample.
For given $\pi$ and fixed $n$, as $r \to \infty$, $S^T W S$ converges almost surely to the $n \times n$ identity matrix $I_n$.
See Section 6.1. ∎
Using the multivariate Lindeberg-Feller central limit theorem and results from Ma et al. , we obtain
Suppose that as , converges to a finite positive definite matrix and . Then as , is approximately distributed as
See Section 6.2. ∎
This theorem can be used to construct confidence intervals for each element of that have the correct coverage probability as and approach infinity. However, doing so requires computing , which requires computation time, and thus we proceed in a different direction.
Because the main bottleneck in applying Theorem 1 to construct confidence intervals is computing the variance of $\tilde{\beta}$, we could consider using the bootstrap as follows. Repeatedly, the bootstrap samples $n$ observations with replacement from the data set and applies Algorithm 1 to the sample. Then, for each coordinate, the standard deviation of the algorithmic leveraging estimates is computed; the level bootstrap confidence interval for that coordinate's coefficient is constructed as
However, since in practice we must use a fairly large number of bootstrap replicates, the bootstrap procedure is computationally expensive. It is more expensive than our proposal presented in the next section, and we show experimentally that it may not lead to valid confidence intervals. Moreover, the asymptotic guarantees of the bootstrap would require both the number of replicates and $r$ to approach infinity, while our proposal's guarantees only require $r$ to approach infinity.
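A sketch of this bootstrap procedure follows. Resampling the sampling distribution $\pi$ alongside the data and renormalizing it is our assumption, and the helper names are hypothetical:

```python
import numpy as np

def leveraging_estimate(X, y, pi, r, rng):
    """One leveraging run: sample r rows with probabilities pi,
    then solve the reweighted least squares problem."""
    idx = rng.choice(X.shape[0], size=r, replace=True, p=pi)
    sw = np.sqrt(1.0 / (r * pi[idx]))      # sqrt of 1/(r*pi_i) weights
    beta, *_ = np.linalg.lstsq(sw[:, None] * X[idx], sw * y[idx], rcond=None)
    return beta

def bootstrap_se(X, y, pi, r, B, rng):
    """Resample the full data set B times with replacement, rerun the
    leveraging estimator on each resample, and return coordinatewise
    standard deviations of the B estimates."""
    n, p = X.shape
    betas = np.empty((B, p))
    for b in range(B):
        boot = rng.choice(n, size=n, replace=True)
        pi_b = pi[boot] / pi[boot].sum()   # renormalized pi (assumption)
        betas[b] = leveraging_estimate(X[boot], y[boot], pi_b, r, rng)
    return betas.std(axis=0, ddof=1)

rng = np.random.default_rng(2)
n, p, r, B = 2000, 3, 150, 25
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.0, -1.0]) + rng.standard_normal(n)
U, _, _ = np.linalg.svd(X, full_matrices=False)
pi = np.sum(U**2, axis=1) / p
se = bootstrap_se(X, y, pi, r, B, rng)
```

The nested loop over replicates, each of which repeats the full sampling-and-solving procedure, is exactly the source of the extra computational cost discussed above.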
3 Inference on
This section presents our proposed approach to uncertainty quantification for the algorithmic leveraging estimator $\tilde{\beta}$, both when the error variance $\sigma^2$ is known and when it is unknown.
Consider the distribution of $\tilde{\beta}$ conditional on $S$, i.e. conditional on the sample. Conditional on $S$, $\tilde{\beta}$ is normally distributed with mean $\beta_0$ and variance

$$\mathrm{Var}(\tilde{\beta} \mid S) = \sigma^2 (X^{*T} W X^*)^{-1} X^{*T} W \, S S^T \, W X^* (X^{*T} W X^*)^{-1}.$$
Entry $(k, l)$ of $SS^T$ is $1$ if the $k$th and $l$th sample are the same observation, and $0$ otherwise. Thus, $SS^T$ can be computed in $O(r^2)$ time. Note that we already know $X^*$ and $W$ from computing $\tilde{\beta}$. Since $X^*$ is $r \times p$, $W$ is diagonal of size $r$, and $SS^T$ is $r \times r$, $X^{*T} W \, SS^T \, W X^*$ can be computed in $O(r^2 p + rp^2)$ time. Because $(X^{*T} W X^*)^{-1}$ can be found from the computation of $\tilde{\beta}$, $\mathrm{Var}(\tilde{\beta} \mid S)$ can be computed in $O(r^2 p + rp^2)$ time. If we choose $r$ to be a constant multiple of $p$, then that computation time is $O(p^3)$.
3.1 $\sigma^2$ is Known
By the argument in the previous paragraph, using the following theorem, we can get exact confidence intervals for each element of $\beta_0$ in $O(r^2 p + rp^2)$ time.
For $j = 1, \ldots, p$, an exact level $1 - \alpha$ confidence interval for $\beta_{0,j}$ based on $\tilde{\beta}$ is

$$\tilde{\beta}_j \pm z_{1-\alpha/2} \sqrt{\mathrm{Var}(\tilde{\beta} \mid S)_{jj}},$$

where $z_{1-\alpha/2}$ is the $1-\alpha/2$ quantile of the standard normal distribution.
See Section 6.3. ∎
By the correspondence between confidence intervals and hypothesis testing, in the same amount of time we can also test for the significance of each regression coefficient estimated by algorithmic leveraging using, for $j = 1, \ldots, p$, the hypothesis $H_0: \beta_{0,j} = 0$. Specifically, a test with significance level $\alpha$ rejects $H_0$ when

$$|\tilde{\beta}_j| > z_{1-\alpha/2} \sqrt{\mathrm{Var}(\tilde{\beta} \mid S)_{jj}}.$$
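The known-variance case can be sketched as follows. The $1/(r\pi_i)$ weights, the uniform $\pi$, the sandwich form of the conditional variance, and the synthetic data are all assumptions for illustration, not the paper's exact formulas:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
n, p, r, sigma = 5000, 3, 300, 1.0         # sigma^2 treated as known
X = rng.standard_normal((n, p))
beta0 = np.array([2.0, 0.0, -1.0])
y = X @ beta0 + sigma * rng.standard_normal(n)

pi = np.full(n, 1.0 / n)                   # uniform pi, for brevity
idx = rng.choice(n, size=r, replace=True, p=pi)
w = 1.0 / (r * pi[idx])                    # 1/(r*pi_i) weights
Xs, ys = X[idx], y[idx]

G = np.linalg.inv(Xs.T @ (w[:, None] * Xs))          # (X*^T W X*)^{-1}
beta = G @ (Xs.T @ (w * ys))                         # leveraging estimate

# SS^T: entry (k, l) is 1 when draws k and l hit the same observation.
SST = (idx[:, None] == idx[None, :]).astype(float)
M = Xs.T @ (w[:, None] * (SST @ (w[:, None] * Xs)))  # X*^T W SS^T W X*
var = sigma**2 * G @ M @ G                           # sandwich variance

z = NormalDist().inv_cdf(0.975)                      # level 0.95
se = np.sqrt(np.diag(var))
ci = np.column_stack([beta - z * se, beta + z * se])
reject = np.abs(beta) > z * se             # test of H0: beta_j = 0
```

All matrix products here involve only $r \times p$ and $r \times r$ objects, never the full $n \times p$ data, which is the point of the running-time discussion above.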
3.2 $\sigma^2$ is Unknown
In practice, the error variance $\sigma^2$ is rarely known. In this section, we discuss this more realistic situation.
We estimate $\sigma^2$ analogously to OLS. Letting $\tilde{y} = X\tilde{\beta}$ be the predicted values, define

$$\tilde{\sigma}^2 = \frac{\|y - \tilde{y}\|_2^2}{n - p}.$$

$\tilde{\sigma}^2$ can be computed in $O(np)$ time given $\tilde{\beta}$.
Classical statistical inference, which allows us to find exact confidence intervals for regression coefficients, computes the exact distribution of $\tilde{\sigma}^2$. However, in our case, that distribution depends on the singular values of $X$, which in general cannot be computed quickly enough. Instead, we utilize Lemma 1, which states that for $r$ large enough, with probability one $S^T W S \approx I_n$, where $I_n$ is the $n$-dimensional identity matrix.
Intuitively, we may make the approximation $S^T W S \approx I_n$. Then the hat matrix of the subsampled problem approximates $H$, the hat matrix from OLS, and our estimates $\tilde{\beta}$ and $\tilde{\sigma}$ approximate the regression coefficients and standard error estimate from OLS.
Therefore, inspired by the classic confidence interval for OLS regression coefficients, we propose the following approximate level $1-\alpha$ confidence interval for $\beta_{0,j}$ based on $\tilde{\beta}$:

$$\tilde{\beta}_j \pm t_{n-p,\,1-\alpha/2} \sqrt{\widetilde{\mathrm{Var}}(\tilde{\beta})_{jj}}, \tag{9}$$

where $t_{n-p,\,1-\alpha/2}$ is the $1-\alpha/2$ quantile of the $t_{n-p}$ distribution and $\widetilde{\mathrm{Var}}(\tilde{\beta})$ is the conditional variance of Section 3.1 with $\sigma^2$ replaced by $\tilde{\sigma}^2$.
These confidence intervals have the correct coverage probability asymptotically as $r$ approaches infinity.
If $X$ has full rank, as $r \to \infty$, the confidence interval (9) has coverage probability $1 - \alpha$.
See Section 6.4. ∎
By the argument in the previous section and the fact that $\tilde{\sigma}^2$ can be computed efficiently, the above confidence interval can also be computed efficiently. As before, we can also test for the significance of each regression coefficient estimated by algorithmic leveraging using, for $j = 1, \ldots, p$, the hypothesis $H_0: \beta_{0,j} = 0$. Specifically, we propose a test with approximate significance level $\alpha$ that rejects $H_0$ when

$$|\tilde{\beta}_j| > t_{n-p,\,1-\alpha/2} \sqrt{\widetilde{\mathrm{Var}}(\tilde{\beta})_{jj}}. \tag{10}$$
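An end-to-end sketch of the unknown-variance case follows. The residual-variance estimator with $n - p$ degrees of freedom, the plug-in sandwich variance, the uniform $\pi$, and the use of the normal quantile in place of the $t$ quantile (reasonable when $n - p$ is large) are all illustrative assumptions:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)
n, p, r = 5000, 4, 400
X = rng.standard_normal((n, p))
beta0 = np.array([1.0, 0.0, -0.5, 2.0])
y = X @ beta0 + rng.standard_normal(n)

# Leveraging sample and estimate (uniform pi, for brevity).
pi = np.full(n, 1.0 / n)
idx = rng.choice(n, size=r, replace=True, p=pi)
w = 1.0 / (r * pi[idx])
G = np.linalg.inv(X[idx].T @ (w[:, None] * X[idx]))
beta = G @ (X[idx].T @ (w * y[idx]))

# Error-variance estimate from full-data residuals, n - p degrees
# of freedom (an assumed form; the paper's estimator is in Section 3.2).
sigma2_hat = np.sum((y - X @ beta) ** 2) / (n - p)

# Plug-in conditional variance; SS^T has entry 1 where two draws
# hit the same observation.
SST = (idx[:, None] == idx[None, :]).astype(float)
M = X[idx].T @ (w[:, None] * (SST @ (w[:, None] * X[idx])))
var = sigma2_hat * G @ M @ G
se = np.sqrt(np.diag(var))

# For n - p this large the t quantile is essentially the normal one.
q = NormalDist().inv_cdf(0.975)
ci = np.column_stack([beta - q * se, beta + q * se])
```

The only step that touches all $n$ observations is the residual computation for $\tilde{\sigma}^2$; everything else works on the sample alone.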
4 Experimental Results
In this section, we illustrate the behavior of our proposed confidence intervals (9) and significance tests (10) and compare them to the bootstrap. Our experiments were carried out in R, using the package mvtnorm. They follow the same setup as in Ma et al.
We use data simulated from Model (1). We consider several data sizes $(n, p)$. For each tuple, we generate the rows of $X$ independently from the multivariate $t$-distribution with three degrees of freedom and a specified covariance matrix. This gives a matrix with some large statistical leverage scores and some small ones. The sampling distribution $\pi$ was chosen to be proportional to the approximate leverage scores computed using the algorithm in Drineas et al.
For each $p$, we randomly choose half the entries of $\beta_0$ to be zero, a quarter to be one nonzero value, and the rest to be another. Then, for each tuple $(n, p)$ we repeat the following procedure a fixed number of times:
Carry out Algorithm 1 to obtain $\tilde{\beta}$.
Compute the bootstrap standard deviations as described in Section 2.4, and record the computation time used.
For each $j$, compute a level $1-\alpha$ confidence interval for $\beta_{0,j}$ using (9). Calculate the actual coverage probability, i.e. the fraction of intervals that include the true value of $\beta_{0,j}$; the nominal coverage probability is $1-\alpha$.
For each $j$, test $H_0: \beta_{0,j} = 0$ at significance level $\alpha$ using (10). Compute the proportions of type 1 and type 2 errors.
Repeat the above two steps for the bootstrap, constructing the confidence intervals as described in Section 2.4 and using the corresponding tests of significance.
Finally, for each tuple $(n, p)$, for both our algorithm and the bootstrap, we compute the averages of the computation times, actual coverage probabilities, and proportions of type 1 and type 2 errors over the iterations.
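The data-generating step above can be sketched as follows. The AR(1)-type covariance and the coefficient magnitudes are hypothetical placeholders, since the exact values used in the paper are only partially specified here:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 1000, 10

# AR(1)-type covariance (a hypothetical concrete choice).
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))

# Multivariate t with 3 degrees of freedom: scale each multivariate
# normal row by sqrt(df / chi^2_df); heavy tails give uneven leverage.
df = 3
Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
X = Z / np.sqrt(rng.chisquare(df, size=(n, 1)) / df)

# Coefficients: half zero, a quarter at one value, the rest at another
# (the magnitudes are placeholders).
beta0 = np.zeros(p)
beta0[: p // 4] = 1.0
beta0[p // 4 : p // 2] = 0.1
y = X @ beta0 + rng.standard_normal(n)
```

The heavy-tailed rows are what produce the mix of large and small leverage scores that makes leverage-based sampling differ from uniform sampling.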
For each tuple $(n, p)$, we plot four graphs to evaluate and compare the quality of our proposed confidence intervals and the bootstrap confidence intervals. The first plots the computation time of our algorithm and that of the bootstrap versus $r$. The second plots, for both our algorithm and the bootstrap, the actual coverage probability versus the nominal coverage probability of the confidence intervals at three small values of $r$. The third and fourth plot, for our algorithm and the bootstrap, the average type 1 error rate and average type 2 error rate of the significance tests versus $r$.
We also compute ROC curves for both our algorithm and the bootstrap. We defer them to the appendix because they are usually used to evaluate classifiers and not methods for uncertainty quantification.
We see that our algorithm is indeed faster than the bootstrap, with the gap widening as $r$ increases. Indeed, the rate of increase in computation time is greater for the bootstrap.
For sufficiently large $r$, the actual coverage probability of the confidence intervals for both our algorithm and the bootstrap is approximately equal to the nominal coverage probability. The confidence intervals constructed by our algorithm are conservative, but the actual coverage probability approaches the nominal coverage probability as $r$ increases. The latter is expected, since for larger $r$ the approximation made in Section 3.2 should be more accurate. The conservativeness of the confidence intervals constructed by our algorithm seems to increase with $p$. The bootstrap confidence intervals are generally less conservative than those from our algorithm, but may have less than the nominal coverage probability, as shown in Figure 3(a).
The type 1 error of our algorithm is generally below that of the bootstrap, especially for smaller $r$. While the type 1 error of our algorithm is generally smaller than the nominal $\alpha$, the type 1 error of the bootstrap may be much larger, in particular for small $r$. This is consistent with the fact that the bootstrap coverage probability may be less than the nominal value. It also appears that for fixed $n$ and $p$, as $r$ increases the type 1 errors of our algorithm and of the bootstrap decrease.
Analogously, the type 2 error of our algorithm is usually above that of the bootstrap. However, for both approaches, the type 2 error decreases with $r$ and becomes small for moderate sample sizes. Therefore, as long as we do not use too small a sample, we can obtain an estimate of $\beta_0$ in less time without sacrificing much power in our significance tests. As $p$ increases, the type 2 errors of both approaches increase.
5 Conclusion and Future Work
Learning from and mining massive data sets pose great challenges given our limited storage and computational resources. Many data reduction approaches have been devised to overcome these challenges; algorithmic leveraging is one such approach. In this paper, for linear regression coefficients estimated using algorithmic leveraging, we described how to efficiently construct finite-sample confidence intervals and significance tests. Simulations show that our proposed confidence intervals have the desired coverage probability and that our proposed significance tests control the type 1 error rate and have low type 2 error rates. The simulations also show that bootstrap confidence intervals may have smaller than the desired coverage probability.
There are several avenues for future work investigating the statistical properties of algorithmic leveraging applied to data analyses beyond simple linear regression. For instance, we believe that determining how sampling affects feature selection in Lasso regression could have important practical implications. Finally, we may consider uncertainty quantification for estimates from other data reduction methods, such as sketching algorithms, which are another popular method to deal with massive data sets.
The author was supported by the US NSF under grants DGE-114747, DMS-1407397, and DMS-1521145.
-  P. Billingsley. Probability and Measure. Wiley Series in Probability and Statistics. Wiley, 2012.
-  Samprit Chatterjee and Ali S. Hadi. Influential observations, high leverage points, and outliers in linear regression: Rejoinder. Statistical Science, 1(3):415–416, 1986.
-  Kenneth L. Clarkson, Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, Xiangrui Meng, and David P. Woodruff. The fast Cauchy transform and faster robust linear regression. In Proceedings of the Twenty-fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 466–477. Society for Industrial and Applied Mathematics, 2013.
-  Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. Journal of Machine Learning Research, 13:3475–3506, 2012.
-  Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Sampling algorithms for L2 regression and applications. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1127–1136. Society for Industrial and Applied Mathematics, 2006.
-  Alan Genz, Frank Bretz, Tetsuhisa Miwa, Xuefei Mi, Friedrich Leisch, Fabian Scheipl, and Torsten Hothorn. mvtnorm: Multivariate Normal and t Distributions, 2014. R package version 1.0-2.
-  Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 3rd edition, 1996.
-  William H. Greene. Econometric Analysis. Pearson, 6th edition, 2008.
-  Ping Ma, Michael W. Mahoney, and Bin Yu. A statistical perspective on algorithmic leveraging. Journal of Machine Learning Research, 16:861–911, 2015.
-  Michael W. Mahoney and Petros Drineas. CUR matrix decompositions for improved data analysis. Proceedings of National Academy of Sciences, 106:697–702, 2009.
-  Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1):267–288, 1996.
-  Sanford Weisberg. Applied Linear Regression. Wiley Series in Probability and Statistics. Wiley, 2005.
6.1 Proof of Lemma 1
6.2 Proof of Theorem 1
We first need the following lemma from Ma et al.
where is the -dimensional vector of ones and .
Since , it follows that converges in probability to as with fixed.
We rewrite . Let be the -dimensional vector of ’s.
By Slutsky’s Theorem  and Lemma 1, as for fixed , converges in probability to . We have assumed that as , converges to a finite, positive definite matrix .
Therefore, as , converges in probability to . In order to show that is asymptotically normal, we just have to show that is asymptotically normal.
With fixed, as , converges in probability to .
where has mean zero and variance . We have assumed that as , converges to . Assuming that for each ,
by the multivariate Lindeberg-Feller central limit theorem , is asymptotically normal.
Hence, as , is normally distributed. Then, is asymptotically Gaussian. Its mean and variance are computed from approximate formulas given in Ma et al. , using the fact that as and is fixed converges in probability to .
6.3 Proof of Theorem 2
6.4 Proof of Theorem 3
From Lemma 1, as , almost surely. Then, with probability , . Therefore, as ,
Considering only the th predictor and scaling appropriately, we have
Applying the continuous mapping theorem, we have
where the last line follows from  assuming that is full rank.
Therefore, as , has coverage probability .
6.5 ROC Curves
Here are the ROC curves for the simulations in the main paper.
As expected, as $n$ or $r$ increase, the tests of significance improve in terms of the area under the ROC curve. However, there does not seem to be an appreciable difference between the areas under the ROC curve for our algorithm and the bootstrap. Our algorithm seems to have more power when the false positive rate is small, and the bootstrap seems to have more power when the false positive rate is large. Since in practice we would like to limit the false positive rate in order to avoid costly actions, we believe that our algorithm is superior.