Fast and Accurate Least-Mean-Squares Solvers

by   Alaa Maalouf, et al.

Least-mean squares (LMS) solvers such as Linear / Ridge / Lasso-Regression, SVD and Elastic-Net not only solve fundamental machine learning problems, but are also the building blocks in a variety of other methods, such as decision trees and matrix factorizations. We suggest an algorithm that gets a finite set of n d-dimensional real vectors and returns a weighted subset of d+1 vectors whose sum is exactly the same. The proof in Caratheodory's Theorem (1907) computes such a subset in O(n^2d^2) time and is thus not used in practice. Our algorithm computes this subset in O(nd) time, using O(log n) calls to Caratheodory's construction on small but "smart" subsets. This is based on a novel paradigm of fusion between different data summarization techniques, known as sketches and coresets. As an example application, we show how it can be used to boost the performance of existing LMS solvers, such as those in the scikit-learn library, by up to x100. Generalization to streaming and distributed (big) data is trivial. Extensive experimental results and complete open source code are also provided.




1 Introduction and Motivation

Least-Mean-Squares (LMS) solvers are the family of fundamental optimization problems in machine learning and statistics that include linear regression, Principal Component Analysis (PCA), Singular Value Decomposition (SVD), Lasso and Ridge regression, Elastic-net, and many more Golub and Reinsch (1971); Jolliffe (2011); Hoerl and Kennard (1970); Seber and Lee (2012); Zou and Hastie (2005); Tibshirani (1996); Safavian and Landgrebe (1991); see the formal definition below. The first closed-form solutions for problems such as linear regression were published around 1900, e.g. by Pearson (1900), but were probably known before. Nevertheless, today they are still used extensively as building blocks in both academia and industry for normalization Liang et al. (2013); Kang et al. (2011); Afrabandpey et al. (2016), spectral clustering Peng et al. (2015), graph theory Zhang and Rohe (2018), prediction Copas (1983); Porco et al. (2015), dimensionality reduction Laparra et al. (2015), feature selection Gallagher et al. (2017), and many more; see further examples in Golub and Van Loan (2012).

A Least-Mean-Squares solver in this paper is an optimization problem that gets as input an n×d real matrix A, and another n-dimensional real vector b (possibly the zero vector). It aims to minimize the sum of squared distances from the rows (points) of A to some hyperplane that is represented by its normal, or vector of d coefficients x, that is constrained to be in a given set X ⊆ R^d:

min_{x∈X} f(‖Ax − b‖_2) + g(x).

Here, g is called a regularization term. For example: in linear regression f(y) = y^2, g(x) = 0 and X = R^d. In Lasso f(y) = y^2 and g(x) = α‖x‖_1 for a given α > 0 and X = R^d. Such LMS solvers can be computed via the d×d covariance matrix A^T A. For example, the solution to linear regression of minimizing ‖Ax − b‖_2^2 is x = (A^T A)^{-1} A^T b.
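As a concrete illustration of the closed-form solution above (a toy sketch of ours, not from the paper), the normal-equations solution via the covariance matrix can be checked against a standard least-squares routine:

```python
import numpy as np

# Solve min_x ||Ax - b||^2 via the normal equations x* = (A^T A)^{-1} A^T b.
# All names and data here are our own toy example.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(100)

x_star = np.linalg.solve(A.T @ A, A.T @ b)       # uses the covariance matrix A^T A
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # standard LAPACK-backed solver
assert np.allclose(x_star, x_lstsq, atol=1e-8)
```

Both routes agree on well-conditioned data; the paper's point is that they differ in speed and numerical stability.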

1.1 Related work

While there are many LMS solvers and implementations, there is always a trade-off between their accuracy and running time; see the comparison table in Bauckhage (2015) with references therein. The reason is related to the fact that computing the covariance matrix A^T A of A can be done in essentially two ways: (i) summing the outer products a_i a_i^T of the rows a_1, …, a_n of A, which works since A^T A = Σ_{i=1}^n a_i a_i^T, or (ii) factorization of A, e.g. using the SVD or the QR decomposition.
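Method (i) can be sketched in a few lines of NumPy (our own illustration): the streamed sum of outer products equals A^T A, up to floating-point rounding.

```python
import numpy as np

# Method (i): the covariance matrix A^T A equals the sum of outer products
# of the rows of A, which is easy to maintain over a stream of rows.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 4))

cov_stream = np.zeros((4, 4))
for a in A:                        # one O(d^2) update per streamed row
    cov_stream += np.outer(a, a)

assert np.allclose(cov_stream, A.T @ A)
```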

Numerical issues.

Method (i) is easy to implement for streaming rows of A by maintaining only the d^2 entries of the covariance matrix for the vectors seen so far, or maintaining its inverse as explained e.g. in Golub and Van Loan (2012). This takes O(d^2) time for each vector insertion. However, every such addition may introduce another numerical error which accumulates over time. This error increases significantly when running the algorithms using 32-bit floating point representation, which is common for GPU computations. This solution is similar to maintaining the set of d rows of the matrix DV^T, where A = UDV^T is the SVD of A, which is not a subset of the original input matrix but has the same covariance matrix A^T A. A common problem is that to compute the solution x = (A^T A)^{-1} A^T b, the matrix A^T A must be invertible. This may not be the case due to numerical issues. In algorithms such as Lasso, the input cannot be a covariance matrix, but can be its Cholesky decomposition Bjorck (1967). However, the Cholesky decomposition can be applied only to positive-definite matrices, which may fail to hold even for small numerical errors that are added to A^T A. See Section 4 for more details and empirical evidence.
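A rough, self-contained illustration of this accumulation effect (our own example; the magnitude of the drift depends on the data and hardware):

```python
import numpy as np

# Accumulating the covariance matrix in 32-bit floats drifts away from the
# exact (here: 64-bit) result as more chunks of rows are summed.
rng = np.random.default_rng(2)
A = rng.standard_normal((200_000, 3)).astype(np.float32)

cov32 = np.zeros((3, 3), dtype=np.float32)
for chunk in np.array_split(A, 100):   # streaming-style accumulation
    cov32 += chunk.T @ chunk

cov64 = A.astype(np.float64).T @ A.astype(np.float64)
err32 = np.abs(cov32.astype(np.float64) - cov64).max()
assert err32 > 0  # the float32 accumulation is not exact
```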

Running-time issues.

Method (ii) above utilizes factorizations such as the SVD A = UDV^T, i.e., to compute the covariance matrix via A^T A = VD^2V^T, or the QR decomposition A = QR to compute A^T A = R^T R. This approach is known to be much more stable. However, it is much more time consuming: while in theory the running time is O(nd^2) as in the first method, the constants that are hidden in the O notation are significantly larger. Moreover, unlike Method (i), it is impossible to compute such factorizations exactly for streaming data Clarkson and Woodruff (2009).

Caratheodory’s Theorem Carathéodory (1907)

states that every point contained in the convex hull of n points in R^d can be represented as a convex combination of a subset of at most d+1 of these points, which we call the Caratheodory set; see Section 2 and Fig. 1. This implies that we can maintain a weighted (scaled) set of d^2+1 points (rows) whose covariance matrix is the same as that of A, since (1/n)·A^T A is the mean of the n matrices a_1a_1^T, …, a_na_n^T and is thus in the convex hull of their corresponding points in R^{d^2}; see Algorithm 3. The fact that we maintain such a small subset of points, instead of updating linear combinations of all the points seen so far, significantly reduces the numerical errors, as shown in Fig. 10. Unfortunately, computing this set via Caratheodory's Theorem takes O(n^2d^2) or O(nd^3) time, via n calls to an LMS solver. This fact makes it impractical to use inside an LMS solver, as we aim to do in this paper, and may explain the lack of source code for this algorithm on the web.

Approximations via Coresets and Sketches.

In recent decades, numerous approximation and data summarization algorithms were suggested to approximate the covariance matrix or its singular values. This is done by computing a small matrix C whose covariance matrix C^T C approximates, in some sense, the covariance matrix A^T A of the input data A. The term coreset is usually used when C is a weighted (scaled) subset of the rows of the input matrix. The matrix C is sometimes called a sketch if each of its rows is a linear combination of a few or all of the rows of A, i.e., C = WA for some matrix W. However, those coresets and sketches usually yield only (1±ε)-multiplicative approximations of ‖Ax‖ by ‖Cx‖, where the number of rows of C grows with 1/ε and x may be any vector, or the smallest/largest singular vector of C or A; see lower bounds in Feldman et al. (2010). Moreover, a (1±ε)-approximation of ‖Ax‖ by ‖Cx‖ does not guarantee an approximation of the actual entries or eigenvectors of A by those of C, which may be very different; see Drineas et al. (2006).

Accurately handling big data.

The algorithms in this paper return accurate coresets (ε = 0), which is less common in the literature, including computations of the exact covariance matrix A^T A. Such coresets can easily support an infinite stream of input rows using memory that is linear in their size, and also support dynamic/distributed data in parallel. This is due to the useful merge-and-reduce property of coresets that allows them to handle big data; see details e.g. in Agarwal et al. (2004). Unlike traditional coresets, which pay additional logarithmic multiplicative factors due to the usage of merge-reduce trees and increasing error, the weighted subsets suggested in this paper introduce no additional error to the resulting compression, since they preserve the result accurately. The actual numerical errors are measured in the experimental results, with analysis that explains the differences.

A main advantage of a coreset over a sketch is that it preserves the sparsity of the input rows Feldman et al. (2016), which usually reduces theoretical running time. Our experiments show, as expected, that coresets can also be used to significantly improve the numerical stability of existing algorithms, even when the running time is the same. Another advantage is that the same coreset can be used for parameter tuning over a large set of candidates; among other reasons, this significantly reduced the running time of such algorithms in our experiments; see Section 4.

1.2 Our contribution

A natural question that follows from the previous section is: can we maintain the optimal solution for LMS problems both accurately and fast? We answer this question affirmatively by suggesting:

  1. the first algorithm that computes the Caratheodory set of n input points in O(nd) time, i.e., time that is linear in the input for asymptotically large n, using only O(log n) calls to an LMS solver. This is achieved by a novel approach of coreset/sketch fusion that is explained in the next section; see Algorithm 2 and Theorem 3.1.

  2. an algorithm that maintains a ("coreset") matrix S of O(d^2) rows such that: (a) its set of rows is a weighted subset of the rows of the input matrix A, and (b) the covariance matrices of A and S are the same, i.e., A^T A = S^T S; see Algorithm 3 and Theorem 3.2.

  3. example applications for boosting the performance of existing solvers by running them on the matrix S above or its variants, for Linear/Ridge/Lasso Regression and Elastic-net.

  4. extensive experimental results on synthetic and real-world data for common LMS solvers of Scikit-learn library with either CPython or Intel’s distribution. Either the running time or numerical stability is improved up to two orders of magnitude.

  5. open code Maalouf et al. (2019) for our algorithms, which we hope will be used for many other LMS solvers and future research, as suggested in our Conclusion section; see Section 5.

1.3 Novel approach: Coresets meet Sketches

As explained in Section 1.1, the covariance matrix A^T A of A can itself be considered as a sketch, which is relatively less numerically stable to maintain (especially its inverse, as desired e.g. by linear regression). The Caratheodory set that corresponds to the outer products of the rows of A is a coreset whose covariance matrix is A^T A and, as a weighted subset of the original rows, is more numerically stable, but takes much more time to compute; see Theorem 2.2.

We thus suggest a meta-algorithm that combines these two approaches: sketches and coresets. It may be generalized to other, not-necessarily accurate ε-coresets and sketches (ε > 0); see Section 5.

The input to our meta-algorithm is 1) a set P of n items, 2) an integer k (the number of clusters) from 1 to n, where a larger k means higher accuracy but a longer running time, and 3) a pair of coreset and sketch construction schemes for the problem at hand.
The output is a coreset for the problem whose construction time is faster; see Fig. 1.

Step I: Compute a balanced partition {P_1, …, P_k} of the input set P into k clusters of roughly the same size. While correctness holds for any such arbitrary partition (e.g. see Algorithm 2), to reduce numerical errors, a partition that minimizes the sum of loss with respect to the problem at hand would be optimal.

Step II: Compute a sketch S_i for each cluster P_i, where i ∈ {1, …, k}, using the input sketch scheme. This step does not return a subset of P as desired, and is usually numerically less stable.

Step III: Compute a coreset C for the union S_1 ∪ … ∪ S_k of the sketches from Step II, using the input coreset scheme. Note that C is not a subset (or coreset) of P.

Step IV: Compute the union B of the clusters in P that correspond to the sketches selected in Step III, i.e., B = ∪_{S_i ∩ C ≠ ∅} P_i. By definition, B is a coreset for the problem at hand.

Step V: Recursively compute a coreset for B until a sufficiently small coreset is obtained. This step is used to reduce the running time, without selecting a value of k that is too small.

We then run an existing solver on the coreset to obtain a faster, accurate solution for P. Algorithms 2 and 3 are special cases of this meta-algorithm, where the sketch is simply the sum of a set of points/matrices, and the coreset is the existing (slow) implementation of the Caratheodory set from Theorem 2.2.

Paper organization.

In Section 2 we give our notation, definitions, and the current state-of-the-art result. Section 3 presents our main algorithms for efficient computation of the Caratheodory (core-)set and of a subset that preserves the input's covariance matrix, together with their correctness theorems and proofs. Section 4 demonstrates the application of these algorithms to common LMS solvers, with extensive experimental results on both real-world and synthetic data via the Scikit-learn library with either the CPython or Intel Python distributions. We conclude the paper with open problems and future work in Section 5.

Figure 1: Overview of Algorithm 2 and the steps in Section 1.3. Images left to right: Steps I and II (partition and sketch steps): a partition of the input weighted set of n points (in blue) into k equal clusters (in circles), whose corresponding means are shown in red. The mean of the input set (and of these means) is shown in green. Step III (coreset step): a Caratheodory (sub)set of at most d+1 means (bold red), with corresponding weights (in green), is computed only for these k means. Step IV (recover step): the Caratheodory set is replaced by its corresponding original points (dark blue). The remaining points (bright blue) are deleted. Step V (recursive step): the previous steps are repeated until only d+1 points remain, which takes O(log n) iterations for a suitable k.

2 Notation and Preliminaries

For a pair of integers n, d ≥ 1, we denote by R^{n×d} the set of n×d real matrices, and [n] = {1, …, n}. To avoid abuse of notation, we use the big O notation where O(·) is a set Cormen et al. (2009). A weighted set is a pair (P, u) where P = {p_1, …, p_n} is an ordered finite set in R^d, and u : P → [0, ∞) is a positive weights function. A linear system solver is an algorithm that solves a system of n linear equations with d variables, i.e., returns x such that Ax = b for a given A ∈ R^{n×d} and b ∈ R^n, assuming such a solution exists.

Given a point x inside the convex hull of a set P of n points in R^d, Caratheodory's Theorem proves that there is a subset of at most d+1 points in P whose convex hull also contains x. This geometric definition can be formulated as follows.

Definition 2.1 (Caratheodory set).

Let (P, u) be a weighted set of points in R^d such that Σ_{p∈P} u(p) = 1. A weighted set (S, w) is called a Caratheodory set for (P, u) if: (i) S ⊆ P and its size is |S| ≤ d+1, (ii) its weighted mean is the same, i.e., Σ_{p∈S} w(p)·p = Σ_{p∈P} u(p)·p, and (iii) its sum of weights is Σ_{p∈S} w(p) = 1.

Caratheodory's Theorem suggests a constructive proof for computing this set in O(n^2d^2) time Carathéodory (1907); Cook and Webster (1972). This is implemented in Algorithm 1, which takes as input a weighted set (P, u) such that Σ_{p∈P} u(p) = 1, and computes a Caratheodory set for (P, u) in O(n^2d^2) time. However, as observed e.g. in Nasser et al. (2015), the set can be computed only for the first O(d) points, and then be updated point by point in O(d^3) time per point, to obtain O(nd^3) overall time. This still takes n calls to a linear system solver that solves Ax = b for a given matrix A and vector b of O(d) rows and columns, in O(d^3) time per call.

Theorem 2.2 (Carathéodory (1907)).

A Caratheodory set can be computed for any weighted set (P, u), where Σ_{p∈P} u(p) = 1, in t(n, d) = O(min{n^2d^2, nd^3}) time.

Input : A weighted set (P, u) of n points in R^d with Σ_{p∈P} u(p) = 1.
Output : A Caratheodory set for (P, u), in O(min{n^2d^2, nd^3}) time.
1 if n ≤ d+1 then
2      return (P, u)
3 Compute v = (v_1, …, v_n) ≠ 0 such that Σ_{i=2}^{n} v_i·(p_i − p_1) = 0 and Σ_{i=1}^{n} v_i = 0.
4 α := min{u(p_i)/v_i : i ∈ [n] and v_i > 0}.
5 w(p_i) := u(p_i) − α·v_i for every i ∈ [n].
6 S := {p_i : w(p_i) > 0}.
7 if |S| > d+1 then repeat from Line 3 on (S, w); otherwise return (S, w).
Algorithm 1
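For concreteness, the construction above can be sketched in Python/NumPy as follows (our own illustration, not the paper's published code; the null-space vector of Line 3 is taken from an SVD, and all names are ours):

```python
import numpy as np

def caratheodory(P, w):
    """Sketch of Algorithm 1: given n weighted points in R^d, return a
    weighted subset of at most d+1 points with the same weighted mean and
    the same total weight. One point is removed per round, so the total
    cost is O(n^2 d^2) in the worst case."""
    P, w = np.asarray(P, float), np.asarray(w, float)
    while len(w) > P.shape[1] + 1:
        # Find v != 0 with sum(v) = 0 and sum_i v_i * p_i = 0: take a
        # null-space vector of the d x (n-1) matrix of differences p_i - p_1.
        diffs = (P[1:] - P[0]).T
        v_tail = np.linalg.svd(diffs)[2][-1]
        v = np.concatenate(([-v_tail.sum()], v_tail))
        if not np.any(v > 1e-12):
            v = -v
        # Shift weights along v until one of them vanishes; the weighted
        # mean is unchanged since sum(v) = 0 and sum_i v_i * p_i = 0.
        pos = v > 1e-12
        alpha = np.min(w[pos] / v[pos])
        w = w - alpha * v
        keep = w > 1e-12
        P, w = P[keep], w[keep]
    return P, w
```

On a random weighted set, the returned subset has at most d+1 points, positive weights summing to the original total, and the same weighted mean up to floating-point error.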

3 Faster Caratheodory Set

In this section, we present our main algorithm, which reduces the running time for computing a Caratheodory set from O(min{n^2d^2, nd^3}) in Theorem 2.2 or Nasser et al. (2015) to O(nd) for sufficiently large n; see Theorem 3.1. A visual illustration of the corresponding Algorithm 2 is shown in Fig. 1. We also present a second algorithm, called Caratheodory-Matrix, which computes a small weighted subset of the given input that has the same covariance matrix as the input data; see Algorithm 3.

Theorem 3.1 (Caratheodory-Set Booster).

Let (P, u) be a weighted set of n points in R^d with Σ_{p∈P} u(p) = 1, and let k be an integer with d+2 ≤ k ≤ n. Let (C, w) be the output of a call to Fast-Caratheodory-Set(P, u, k); see Algorithm 2. Let t(k, d) be the time it takes to compute a Caratheodory set for k points in R^d, as in Theorem 2.2. Then (C, w) is a Caratheodory set of (P, u) that can be computed in O(nd + t(k, d) · log n) time.


We use the notation and variable names as defined in Algorithm 2.

For every i ∈ [k], let u'(μ_i) = Σ_{p∈P_i} u(p) denote the total weight of the i-th cluster P_i, and let μ_i = (1/u'(μ_i))·Σ_{p∈P_i} u(p)·p denote its weighted mean, as computed in Algorithm 2. We first prove that the weighted set (P', w') computed just before the recursive call at the current (some) iteration is a Caratheodory set for (P, u), i.e., Σ_{p∈P'} w'(p)·p = Σ_{p∈P} u(p)·p and Σ_{p∈P'} w'(p) = 1.

Let (μ̃, w̃) be the pair computed at the current iteration by the call to Algorithm 1. By Definition 2.1, (μ̃, w̃) is a Caratheodory set of the weighted set ({μ_1, …, μ_k}, u'), i.e., it satisfies

Σ_{μ∈μ̃} w̃(μ)·μ = Σ_{i=1}^{k} u'(μ_i)·μ_i   and   Σ_{μ∈μ̃} w̃(μ) = 1.    (1)

Observe that, by the definition of μ_i, for every i ∈ [k] we have

u'(μ_i)·μ_i = Σ_{p∈P_i} u(p)·p.    (2)

We now have that

Σ_{p∈P'} w'(p)·p = Σ_{μ_i∈μ̃} Σ_{p∈P_i} (w̃(μ_i)·u(p)/u'(μ_i))·p = Σ_{μ_i∈μ̃} (w̃(μ_i)/u'(μ_i))·Σ_{p∈P_i} u(p)·p = Σ_{μ_i∈μ̃} w̃(μ_i)·μ_i = Σ_{i=1}^{k} u'(μ_i)·μ_i = Σ_{p∈P} u(p)·p,    (3)

where the first equality holds by the definitions of P' and w', the third equality holds by (2), the fourth equality holds by (1), and the last equality holds by summing (2) over every i ∈ [k].

We also have that the new sum of weights is equal to

Σ_{p∈P'} w'(p) = Σ_{μ_i∈μ̃} w̃(μ_i)·Σ_{p∈P_i} u(p)/u'(μ_i) = Σ_{μ_i∈μ̃} w̃(μ_i) = 1.    (4)

Combining (3) and (4) yields that the weighted set (P', w') computed before the recursive call of the algorithm is a Caratheodory set for the weighted input set (P, u). Since at each iteration we either recurse on such a Caratheodory set or return the input weighted set itself (when n ≤ d+1), by induction we get that the output of a call to Fast-Caratheodory-Set(P, u, k) is a Caratheodory set for the original input.

By (1), μ̃ contains at most d+1 means, so P' contains at most (d+1)·⌈n/k⌉ points. Hence, for k ≥ 2(d+1) each iteration shrinks the current input by a constant factor, and there are at most O(log n) recursive calls before the stopping condition n ≤ d+1 is met. The time complexity of each iteration is O(n'd + t(k, d)), where n' is the number of points in the current iteration. Since the values n' decrease geometrically, the total time complexity is O(nd + t(k, d)·log n).

Input: A set P of n points in R^d, a (weight) function u : P → [0, ∞) such that Σ_{p∈P} u(p) = 1, and an integer k ∈ {d+2, …, n} (number of clusters) for the accuracy/speed trade-off.
Output: A Caratheodory set of (P, u); see Definition 2.1.
1 if n ≤ d+1 then
2      return (P, u)   // (P, u) is already small
3 {P_1, …, P_k} := a partition of P into k disjoint subsets (clusters), each containing at most ⌈n/k⌉ points.
4 for every i ∈ [k] do
5      u'(μ_i) := Σ_{p∈P_i} u(p)   // The weight of the i-th cluster.
6      μ_i := (1/u'(μ_i))·Σ_{p∈P_i} u(p)·p   // the weighted mean of P_i
7 (μ̃, w̃) := Caratheodory({μ_1, …, μ_k}, u')   // see Algorithm 1 and Theorem 2.2.
8 C := ∪_{μ_i∈μ̃} P_i
9 for every μ_i ∈ μ̃ and p ∈ P_i do
10      w(p) := w̃(μ_i)·u(p)/u'(μ_i)   // assign a weight for each point in C
11 return Fast-Caratheodory-Set(C, w, k)   // recursive call
Algorithm 2: Fast-Caratheodory-Set(P, u, k); see Theorem 3.1
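A minimal sketch of this recursion in Python/NumPy (our own illustration under the assumption of an arbitrary balanced partition; not the paper's published code):

```python
import numpy as np

def _caratheodory(P, w):
    """Plain Caratheodory step (Algorithm 1): reduce to <= d+1 points while
    preserving the weighted sum and total weight. Returns kept indices."""
    idx = np.arange(len(w))
    w = np.asarray(w, float).copy()
    while len(w) > P.shape[1] + 1:
        diffs = (P[1:] - P[0]).T
        v_tail = np.linalg.svd(diffs)[2][-1]      # null-space direction
        v = np.concatenate(([-v_tail.sum()], v_tail))
        if not np.any(v > 1e-12):
            v = -v
        pos = v > 1e-12
        w -= np.min(w[pos] / v[pos]) * v
        keep = w > 1e-12
        P, w, idx = P[keep], w[keep], idx[keep]
    return idx, w

def fast_caratheodory(P, w, k):
    """Sketch of Algorithm 2: partition into k clusters, run Caratheodory
    on the k weighted cluster means only, keep the clusters of the chosen
    means with rescaled weights, and repeat until <= d+1 points remain."""
    P, w = np.asarray(P, float), np.asarray(w, float)
    d = P.shape[1]
    while len(w) > d + 1:
        parts = [p for p in np.array_split(np.arange(len(w)), k) if len(p)]
        cw = np.array([w[p].sum() for p in parts])            # cluster weights
        means = np.array([w[p] @ P[p] for p in parts]) / cw[:, None]
        sel, omega = _caratheodory(means, cw)                 # on k means only
        idx = np.concatenate([parts[c] for c in sel])
        w = np.concatenate([w[parts[c]] * (omega[j] / cw[c])
                            for j, c in enumerate(sel)])
        P = P[idx]
    return P, w
```

Each round calls the slow construction on only k points, and the surviving set shrinks geometrically, which is the source of the speedup.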
Input : A matrix A ∈ R^{n×d}, and an integer k that denotes the accuracy/speed trade-off.
Output : A matrix S ∈ R^{(d^2+1)×d} whose union of rows is a weighted subset of the rows of A, and A^T A = S^T S.
1 for every i ∈ [n] do
2       p_i := the concatenation of the d^2 entries of a_i a_i^T.
        // The order of entries may be arbitrary but the same for all points.
3 P := {p_1, …, p_n}   // P is a set of n vectors in R^{d^2}.
4 u(p_i) := 1/n for every i ∈ [n]
5 (C, w) := Fast-Caratheodory-Set(P, u, k)   // |C| ≤ d^2+1 and Σ_{p∈C} w(p)·p = (1/n)·Σ_{i=1}^{n} p_i by Theorem 3.1
6 S := a matrix whose i-th row is √(n·w(p_i))·a_i for every p_i ∈ C.
7 return S
Algorithm 3: Caratheodory-Matrix(A, k); see Theorem 3.2
Theorem 3.2.

Let A ∈ R^{n×d} be a matrix, and let k be an integer with d^2+2 ≤ k ≤ n. Let S be the output of a call to Caratheodory-Matrix(A, k); see Algorithm 3. Let t(k, d^2) be the computation time of a Caratheodory set for k points in R^{d^2}. Then S satisfies A^T A = S^T S. Furthermore, S can be computed in O(nd^2 + t(k, d^2) · log n) time.


We use the notation and variable names as defined in Algorithm 3.

Since (C, w) in Algorithm 3 is the output of a call to Fast-Caratheodory-Set(P, u, k), by Theorem 3.1 we have that: (i) the weighted means of (P, u) and (C, w) are equal, i.e.,

Σ_{p∈C} w(p)·p = Σ_{i=1}^{n} (1/n)·p_i,    (5)

(ii) |C| ≤ d^2+1 since P ⊆ R^{d^2}, and (iii) (C, w) is computed in O(nd^2 + t(k, d^2)·log n) time.

Combining (5) with the fact that p_i is simply the concatenation of the entries of a_i a_i^T, we get that

Σ_{p_i∈C} w(p_i)·a_i a_i^T = (1/n)·Σ_{i=1}^{n} a_i a_i^T = (1/n)·A^T A.    (6)

By the definition of S in Algorithm 3, we have that

S^T S = Σ_{p_i∈C} (√(n·w(p_i))·a_i)(√(n·w(p_i))·a_i)^T = Σ_{p_i∈C} n·w(p_i)·a_i a_i^T.    (7)

We also have that

Σ_{p_i∈C} n·w(p_i)·a_i a_i^T = n·(1/n)·A^T A = A^T A,    (8)

where the second equality holds by (6).

Theorem 3.2 now holds by combining (6), (7) and (8). ∎
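Algorithm 3 can be sketched as follows (our own illustration; for brevity it uses the plain O(n^2d^2) Caratheodory step rather than the fast recursive version, but the reduction from covariance to a Caratheodory set is the same):

```python
import numpy as np

def _caratheodory(P, w):
    """Plain Caratheodory step: keep <= dim+1 points, same weighted sum."""
    idx = np.arange(len(w))
    w = np.asarray(w, float).copy()
    while len(w) > P.shape[1] + 1:
        diffs = (P[1:] - P[0]).T
        v_tail = np.linalg.svd(diffs)[2][-1]
        v = np.concatenate(([-v_tail.sum()], v_tail))
        if not np.any(v > 1e-12):
            v = -v
        pos = v > 1e-12
        w -= np.min(w[pos] / v[pos]) * v
        keep = w > 1e-12
        P, w, idx = P[keep], w[keep], idx[keep]
    return idx, w

def caratheodory_matrix(A):
    """Sketch of Algorithm 3: return a matrix S of at most d^2+1 scaled rows
    of A with S^T S = A^T A exactly. Each outer product a_i a_i^T is
    flattened into a vector in R^{d^2}; Caratheodory then picks d^2+1 of
    them whose weighted mean equals (1/n) A^T A."""
    n, d = A.shape
    P = np.einsum('ni,nj->nij', A, A).reshape(n, d * d)
    idx, u = _caratheodory(P, np.full(n, 1.0 / n))
    return np.sqrt(n * u)[:, None] * A[idx]   # rescale so S^T S = A^T A
```

On random data the output has at most d^2+1 rows and reproduces the covariance matrix up to floating-point error.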

4 Experimental Results

Solver | Objective function | Python package | Example Python solver
Linear Regression | ‖Ax − b‖_2^2 | scipy.linalg | lstsq
Ridge Regression | ‖Ax − b‖_2^2 + α‖x‖_2^2 | sklearn.linear_model | RidgeCV()
Lasso Regression | (1/2n)·‖Ax − b‖_2^2 + α‖x‖_1 | sklearn.linear_model | LassoCV()
Elastic-Net Regression | (1/2n)·‖Ax − b‖_2^2 + αρ‖x‖_1 + (α(1−ρ)/2)·‖x‖_2^2 | sklearn.linear_model | ElasticNetCV()

Table 1: The four example solvers that were applied on the LMS-coreset in Algorithm 4. Each gets a matrix A and a vector b, and aims to compute the x that minimizes its objective function. Additional regularization parameters include α > 0 and ρ ∈ [0, 1]. The Python solvers use m-fold cross validation over every α in a given candidate set.

In this section we apply our fast construction of the Caratheodory set from the previous section to boost the running time of the common LMS solvers in Table 1 by a factor of ten to hundreds, or to improve their numerical accuracy by a similar factor to support, e.g., 32-bit floating point representation. This is done by running the given solver as a black box on the small matrix that is returned by Algorithms 5–8, which are based on Algorithm 4. That is, our algorithm does not compete with existing solvers but relies on them, which is why we call it a "booster". Open code for our algorithms is provided Maalouf et al. (2019).

From Caratheodory Matrix to LMS solvers.

As stated in Theorem 3.2, Algorithm 3 gets an input matrix and an integer k, and returns a matrix of the same covariance, where k is a parameter for setting the desired accuracy. To "learn" a given label vector b ∈ R^n, Algorithm 4 forms A' = (A | b), partitions it into m partitions, computes, using Algorithm 3, a subset for each partition that preserves its covariance matrix, and returns the union of these subsets as a pair (C, y), where C consists of the first d columns and y of the last column of the result. For m = 1, it is easy to see that for every x ∈ R^d,

‖Ax − b‖^2 = ‖A'·(x; −1)‖^2 = (x; −1)^T A'^T A' (x; −1) = (x; −1)^T S^T S (x; −1) = ‖Cx − y‖^2,    (9)

where the third and fourth equalities are by Theorem 3.2 and the construction of (C, y), respectively. This enables us to replace the original pair (A, b) by the smaller pair (C, y) for the solvers in Table 1, as in Algorithms 5–8. A scaling factor is also needed in Algorithms 7–8.
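Identity (9) can be checked numerically: any matrix with the same covariance as A' = (A | b) preserves ‖Ax − b‖^2 for every x. In this sketch of ours we build such a matrix from a Cholesky factor purely to verify the identity; the paper instead builds it as an actual weighted subset of rows.

```python
import numpy as np

# Verify that C^T C = A'^T A' implies ||Ax - b||^2 = ||C_left x - c_right||^2,
# where A' = (A | b). Names and data are our own toy example.
rng = np.random.default_rng(5)
A = rng.standard_normal((500, 4))
b = A @ rng.standard_normal(4) + 0.1 * rng.standard_normal(500)

Ab = np.column_stack([A, b])
C = np.linalg.cholesky(Ab.T @ Ab).T        # 5x5 matrix with C^T C = Ab^T Ab
C_left, c_right = C[:, :-1], C[:, -1]

x = rng.standard_normal(4)
lhs = np.linalg.norm(A @ x - b) ** 2
rhs = np.linalg.norm(C_left @ x - c_right) ** 2
assert np.isclose(lhs, rhs, rtol=1e-8)
```

This is exactly why an LMS solver can be run on the small pair (C, y) instead of (A, b) without any loss in accuracy.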

Cross validation.

To select the value of the regularization term α, the existing Python solvers we used partition the rows of A into m folds (subsets) and run the solver once for every fold and candidate value of α to select the desired α; see Kohavi et al. (1995) for details. For consistency, Algorithm 4 computes a coreset for each of these m folds and concatenates them. Thus, (9) holds similarly for every fold.

The experiments.

We evaluated the algorithms in Table 1 using the common Python solvers listed in its right two columns. Most of these experiments were repeated twice: using the default CPython distribution Wikipedia contributors (2019a) and Intel's distribution LTD (2019) of Python. All the experiments were conducted on a standard Lenovo Z70 laptop with an Intel i7-5500U CPU @ 2.40GHz and 16GB RAM. We used the following real-world datasets in our experiments:

  1. 3D Road Network (North Jutland, Denmark) Data Set Kaul et al. (2013). It contains records. We used the attributes (Web Mercator (Google format) longitude [real], Web Mercator (Google format) latitude [real]) to predict the attribute (height in meters [real]).

  2. Individual household electric power consumption Data Set dataset:power (2012). It contains records. We used the attributes: (global active power [kilowatt - real], global reactive power [kilowatt - real]) to predict the attribute (voltage [volt - real]).

  3. House Sales in King County, USA dataset:sales (2015). It contains records. We used the following attributes: (bedrooms [integer], sqft living [integer], sqft lot [integer], floors [integer], waterfront [boolean], sqft above [integer], sqft basement [integer], year built [integer]) to predict the (house price [integer]) attribute.

The synthetic data is an n×d matrix A and a vector b of length n, both with random entries. As expected from the analysis, since our compression introduces no error to the computation accuracy, the actual values of the data had no effect on the results, unlike the size of the input, which affects the computation time. Table 2 summarizes the experimental results.

4.1 Discussion

Why is the running time faster?

The number of rows in the reduced matrix C is independent of the number n of rows in the original matrix A, and is usually much smaller. This also explains why some coresets (dashed red line) failed for small values of n in Fig. 1(b), 1(c), 3(b) and 3(c). The construction of C takes O(nd^2) time. Solving Linear Regression takes the same asymptotic time, with or without the coreset. However, the constants are much smaller, since the time for computing C becomes negligible for large values of n, as shown in Fig. 11. We emphasize that, unlike common coresets, there is no accuracy loss due to the use of our coreset, ignoring additive numerical errors/improvements. The improvement factor in running time due to our booster is of the order of up to x10. The contribution of the coreset is much more significant, already for smaller values of n, when it boosts solvers that use cross validation for parameter tuning, as explained above. In this case, the time complexity is reduced since the coreset is computed only once for each of the m folds, regardless of the number of candidate values of α, and the running time improvement is between x10–x100.
As shown in the graphs, the computations via Intel’s Python distribution reduced the running times by 15-40%, with or without the booster, probably due to its tailored implementation for our hardware.

Why is the numerical stability better?

A sketch that simply sums the rank-one matrices a_i a_i^T (outer products of the rows of the input matrix) yields its covariance matrix A^T A. The Cholesky decomposition then returns a small matrix that can be plugged into the solvers, similarly to our coreset. This algorithm, which we call SKETCH + CHOLESKY, is so simple that there is no hope of improving its running time via our much more involved booster. Nevertheless, it is numerically unstable for the reasons explained in Section 1.1. In fact, in most of our experiments we could not apply this technique at all using 32-bit floating point representation. This is because the resulting approximation to A^T A was not a positive definite matrix, as required by the Cholesky decomposition, and we could not compute the factorization at all. In cases of success, the running time of the booster was slower by at most a small factor, but even in these cases the numerical accuracy is improved by up to orders of magnitude; see Fig. 10 for histograms of errors using such 32-bit float representation, which is especially common on GPUs for saving memory, running time and power Wikipedia contributors (2019b). For the special case of Linear Regression, we can avoid the Cholesky decomposition and compute the solution directly after maintaining such a sketch for A^T A and A^T b. However, this sketch, which we call SKETCH + INVERSE, still has large numerical issues compared to our coreset computation, as shown in Fig. 10.

Input: A matrix A ∈ R^{n×d}, a vector b ∈ R^n, a number m (integer) of cross-validation folds, and an integer k that denotes the accuracy/speed trade-off.
Output: A matrix C whose rows are a weighted subset of the rows of A, and a vector y.
1 A' := (A | b)   // an n×(d+1) matrix
2 Split the matrix A' into m block matrices A'_1, …, A'_m, each of ⌈n/m⌉ rows.
3 for every i ∈ [m] do
4      S_i := Caratheodory-Matrix(A'_i, k)   // see Algorithm 3
5 S := the concatenation of the matrices S_1, …, S_m into a single matrix of at most m·((d+1)^2+1) rows and d+1 columns.
6 C := the first d columns of S.
7 y := the last column of S.
8 return (C, y)
Algorithm 4: LMS-Coreset(A, b, m, k)
Algorithms 5–8 (boosters for the solvers in Table 1): each computes (C, y) := LMS-Coreset(A, b, m, k) via Algorithm 4 and returns the output of the corresponding solver from Table 1 (lstsq, RidgeCV, LassoCV, or ElasticNetCV, respectively) applied to the pair (C, y) instead of (A, b); a scaling factor is also applied in Algorithms 7–8, as noted above.

Figure | Algorithms | x/y axes labels | Python distribution | Dataset
2 | 6–8 | Size / Time, for various α | CPython | Synthetic
3 | 6–8 | Size / Time, for various α | CPython | Synthetic
4 | 6–8 | Size / Time, for various α | Intel's | Synthetic
5 | 6–8 | Size / Time, for various α | Intel's | Synthetic
6 | 6–8 | α / Time | CPython | Datasets (i)–(ii)
7 | 6–8 | α / Time | Intel's | Datasets (i)–(ii)
8 | 6–8 | Time / maximal number of α's that is feasible | CPython | Datasets (i)–(ii)
9 | 6–8 | Time / maximal number of α's that is feasible | Intel's | Datasets (i)–(ii)
10 | 5 | Error/Count histogram + Size/Error | CPython | Datasets (i), (iii)
11 | 5 | Size / Time, for various distributions | CPython, Intel's | Synthetic

Table 2: Experiments. Our booster was applied on the common CPython Wikipedia contributors (2019a) and Intel's LTD (2019) distributions (implementations). The input matrix is A with label vector b, where n is the "Data size". Cross validation uses m folds for evaluating each regularization term α in a given candidate set. The number k of clusters is chosen so that Algorithm 2 performs O(log n) iterations. Computation time includes the computation of the reduced input (C, y); see Section 3. The histogram graphs consist of bins that count the number of occurrences of a range of errors.
Figure 2: Time comparison on synthetic data using CPython distribution.
Figure 3: Time comparison on synthetic data using CPython distribution.
Figure 4: Time comparison on synthetic data using Intel’s python distribution.
Figure 5: Time comparison on synthetic data using Intel’s python distribution.
(a) Dataset (i).
(b) Dataset (ii).
Figure 6: Time comparison on real world data using CPython distribution.
(a) Dataset (i).
(b) Dataset (ii).
Figure 7: Time comparison on real world data using Intel’s python distribution.
(a) Dataset (i).
(b) Dataset (ii).
Figure 8: Number of alphas that can be tested in a predefined amount of time using CPython distribution.
(a) Dataset (i).
(b) Dataset (ii).
Figure 9: Number of alphas that can be tested in a predefined amount of time using Intel’s Python distribution.
(a) Dataset (i).
(b) Dataset (iii).
Figure 10: Accuracy comparison using real-world data. The solutions were computed using the methods specified in the legend; see Section 4.1.
Figure 11: Time comparison using synthetic data.

5 Conclusion and Future Work

We presented a novel framework that combines sketches and coresets. As an example application, we proved that the set from Caratheodory's Theorem can be computed in O(nd) overall time for sufficiently large n (for a set of n points in R^d), instead of the O(min{n^2d^2, nd^3}) time of the original theorem. We then generalized the result to a matrix S whose rows are a weighted subset of the rows of the input matrix A and whose covariance matrix is the same, i.e., S^T S = A^T A. Our experimental results section shows how to significantly boost the numerical stability or running time of existing LMS solvers by applying them on S. Future work includes: (a) applications of our framework to combine other sketch-coreset pairs, e.g. as listed in Phillips (2016), (b) experiments on streaming/distributed/GPU data and other potential applications, such as deep learning, e.g. as part of the stochastic gradient descent methods that use the LMS adaptive filter Widrow et al. (1977); Mandic (2004), and (c) experiments with higher-dimensional data: we may compute each of the entries of the covariance matrix by calling our algorithm on the corresponding pair of columns of the input matrix.


  • dataset:power (2012): Individual household electric power consumption Data Set. 2012
  • dataset:sales (2015): House Sales in King County, USA. 2015
  • Afrabandpey et al. (2016) Afrabandpey, Homayun; Peltola, Tomi; Kaski, Samuel: Regression Analysis in Small-n-Large-p Using Interactive Prior Elicitation of Pairwise Similarities. In: FILM 2016, NIPS Workshop on Future of Interactive Learning Machines, 2016
  • Agarwal et al. (2004) Agarwal, Pankaj K.; Har-Peled, Sariel; Varadarajan, Kasturi R.: Approximating extent measures of points. In: Journal of the ACM (JACM) 51 (2004), no. 4, pp. 606–635
  • Bauckhage (2015) Bauckhage, Christian: NumPy/SciPy Recipes for Data Science: Ordinary Least Squares Optimization. In: researchgate.net, Mar (2015)
  • Bjorck (1967) Bjorck, Ake: Solving linear least squares problems by Gram-Schmidt orthogonalization. In: BIT Numerical Mathematics 7 (1967), no. 1, pp. 1–21
  • Carathéodory (1907) Carathéodory, Constantin: Über den Variabilitätsbereich der Koeffizienten von Potenzreihen, die gegebene Werte nicht annehmen. In: Mathematische Annalen 64 (1907), no. 1, pp. 95–115
  • Clarkson and Woodruff (2009) Clarkson, Kenneth L.; Woodruff, David P.: Numerical linear algebra in the streaming model. In: Proceedings of the forty-first annual ACM symposium on Theory of computing, ACM, 2009, pp. 205–214
  • Cook and Webster (1972) Cook, W.D.; Webster, R.J.: Caratheodory's theorem. In: Canadian Mathematical Bulletin 15 (1972), no. 2, pp. 293–293
  • Copas (1983) Copas, John B.: Regression, prediction and shrinkage. In: Journal of the Royal Statistical Society: Series B (Methodological) 45 (1983), no. 3, pp. 311–335
  • Cormen et al. (2009) Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford: Introduction to Algorithms. MIT Press, 2009
  • Drineas et al. (2006) Drineas, Petros; Mahoney, Michael W.; Muthukrishnan, Shan: Sampling algorithms for l2 regression and applications. In: Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithms, SIAM, 2006, pp. 1127–1136
  • Feldman et al. (2010) Feldman, Dan; Monemizadeh, Morteza; Sohler, Christian; Woodruff, David P.: Coresets and sketches for high dimensional subspace approximation problems. In: Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms, SIAM, 2010, pp. 630–649
  • Feldman et al. (2016) Feldman, Dan; Volkov, Mikhail; Rus, Daniela: Dimensionality Reduction of Massive Sparse Datasets Using Coresets. In: Advances in Neural Information Processing Systems (NIPS), 2016
  • Gallagher et al. (2017) Gallagher, Neil; Ulrich, Kyle R.; Talbot, Austin; Dzirasa, Kafui; Carin, Lawrence; Carlson, David E.: Cross-spectral factor analysis. In: Advances in Neural Information Processing Systems, 2017, pp. 6842–6852
  • Golub and Reinsch (1971) Golub, Gene H.; Reinsch, Christian: Singular value decomposition and least squares solutions. In: Linear Algebra. Springer, 1971, pp. 134–151
  • Golub und Van Loan (2012) Golub und Van Loan 2012 Golub, Gene H. ; Van Loan, Charles F.: Matrix computations. Bd. 3. JHU press, 2012
  • Hoerl und Kennard (1970) Hoerl und Kennard 1970 Hoerl, Arthur E. ; Kennard, Robert W.:

    Ridge regression: Biased estimation for nonorthogonal problems.

    In: Technometrics 12 (1970), Nr. 1, S. 55–67
  • Jolliffe (2011) Jolliffe 2011 Jolliffe, Ian: Principal component analysis. Springer, 2011
  • Kang u. a. (2011) Kang u. a. 2011 Kang, Byung ; Lim, Woosang ; Jung, Kyomin:

    Scalable kernel K-means via centroid approximation.

    In: Proc. NIPS, 2011
  • Kaul u. a. (2013) Kaul u. a. 2013 Kaul, Manohar ; Yang, Bin ; Jensen, Christian S.: Building accurate 3d spatial networks to enable next generation intelligent transportation systems. In: 2013 IEEE 14th International Conference on Mobile Data Management Bd. 1 IEEE (Veranst.), 2013, S. 137–146
  • Kohavi u. a. (1995) Kohavi u. a. 1995 Kohavi, Ron u. a.: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Ijcai Bd. 14 Montreal, Canada (Veranst.), 1995, S. 1137–1145
  • Laparra u. a. (2015) Laparra u. a. 2015 Laparra, Valero ; Malo, Jesús ; Camps-Valls, Gustau: Dimensionality reduction via regression in hyperspectral imagery. In: IEEE Journal of Selected Topics in Signal Processing 9 (2015), Nr. 6, S. 1026–1036
  • Liang u. a. (2013) Liang u. a. 2013 Liang, Yingyu ; Balcan, Maria-Florina ; Kanchanapally, Vandana: Distributed PCA and k-means clustering. In: The Big Learning Workshop at NIPS Bd. 2013 Citeseer (Veranst.), 2013
  • LTD (2019) LTD 2019 LTD, Intel: Accelerate Python* Performance. 2019
  • Maalouf u. a. (2019) Maalouf u. a. 2019 Maalouf, Alaa ; Jubran, Ibrahim ; Feldman, Dan: Open code for the algorithms in this paper. 2019. – Open code will be provided upon the publication of this paper.
  • Mandic (2004) Mandic 2004 Mandic, Danilo P.: A generalized normalized gradient descent algorithm. In: IEEE signal processing letters 11 (2004), Nr. 2, S. 115–118
  • Nasser u. a. (2015) Nasser u. a. 2015 Nasser, Soliman ; Jubran, Ibrahim ; Feldman, Dan: Coresets for Kinematic Data: From Theorems to Real-Time Systems. In: arXiv preprint arXiv:1511.09120 (2015)
  • Pearson (1900) Pearson 1900 Pearson, Karl: X. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. In: The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 50 (1900), Nr. 302, S. 157–175
  • Peng u. a. (2015) Peng u. a. 2015 Peng, Xi ; Yi, Zhang ; Tang, Huajin: Robust subspace clustering via thresholding ridge regression. In:

    Twenty-Ninth AAAI Conference on Artificial Intelligence

    , 2015
  • Phillips (2016) Phillips 2016 Phillips, Jeff M.: Coresets and sketches. In: arXiv preprint arXiv:1601.00617 (2016)
  • Porco u. a. (2015) Porco u. a. 2015 Porco, Aldo ; Kaltenbrunner, Andreas ; Gómez, Vicenç: Low-rank approximations for predicting voting behaviour. In: Workshop on Networks in the Social and Information Sciences, NIPS, 2015
  • Safavian und Landgrebe (1991) Safavian und Landgrebe 1991 Safavian, S R. ; Landgrebe, David:

    A survey of decision tree classifier methodology.

    In: IEEE transactions on systems, man, and cybernetics 21 (1991), Nr. 3, S. 660–674
  • Seber und Lee (2012) Seber und Lee 2012 Seber, George A. ; Lee, Alan J.: Linear regression analysis. Bd. 329. John Wiley & Sons, 2012
  • Tibshirani (1996) Tibshirani 1996 Tibshirani, Robert: Regression shrinkage and selection via the lasso. In: Journal of the Royal Statistical Society: Series B (Methodological) 58 (1996), Nr. 1, S. 267–288
  • Widrow u. a. (1977) Widrow u. a. 1977 Widrow, Bernard ; McCool, John ; Larimore, Michael G. ; Johnson, C R.: Stationary and nonstationary learning characteristics of the LMS adaptive filter. In: Aspects of Signal Processing. Springer, 1977, S. 355–393
  • Wikipedia contributors (2019a) Wikipedia contributors 2019a Wikipedia contributors: CPython — Wikipedia, The Free Encyclopedia. 2019
  • Wikipedia contributors (2019b) Wikipedia contributors 2019b Wikipedia contributors: List of Nvidia graphics processing units — Wikipedia, The Free Encyclopedia. 2019
  • Zhang und Rohe (2018) Zhang und Rohe 2018 Zhang, Yilin ; Rohe, Karl: Understanding Regularized Spectral Clustering via Graph Conductance. In: Advances in Neural Information Processing Systems, 2018, S. 10631–10640
  • Zou und Hastie (2005) Zou und Hastie 2005 Zou, Hui ; Hastie, Trevor: Regularization and variable selection via the elastic net. In: Journal of the royal statistical society: series B (statistical methodology) 67 (2005), Nr. 2, S. 301–320