oASIS: Adaptive Column Sampling for Kernel Matrix Approximation

05/19/2015 ∙ by Raajen Patel, et al.

Kernel matrices (e.g. Gram or similarity matrices) are essential for many state-of-the-art approaches to classification, clustering, and dimensionality reduction. For large datasets, the cost of forming and factoring such kernel matrices becomes intractable. To address this challenge, we introduce a new adaptive sampling algorithm called Accelerated Sequential Incoherence Selection (oASIS) that samples columns without explicitly computing the entire kernel matrix. We provide conditions under which oASIS is guaranteed to exactly recover the kernel matrix with an optimal number of columns selected. Numerical experiments on both synthetic and real-world datasets demonstrate that oASIS achieves performance comparable to state-of-the-art adaptive sampling methods at a fraction of the computational cost. The low runtime complexity of oASIS and its low memory footprint enable the solution of large problems that are simply intractable using other adaptive methods.




I Introduction

I-A Kernel Matrix Approximation

Many machine learning and data analysis frameworks for classification, clustering, and dimensionality reduction require the formation of kernel matrices that contain the pairwise “similarities” or distances between signals in a collection of data. For instance, kernel methods for classification [1], nonlinear dimensionality reduction [2, 3], and spectral clustering [4] all require computing and storing an n × n kernel matrix, where n is the number of examples (points) in the dataset.

As the size of the dataset grows, the computation and storage of kernel matrices becomes increasingly difficult. For instance, explicitly storing an n × n kernel matrix using IEEE standard binary64 requires 8n^2 bytes of memory (already 80 gigabytes at n = 100,000) [5]. To extend kernel-based methods to extremely large n, a host of methods have focused on sampling a small subset of columns from the kernel matrix and then computing a low-rank approximation of the matrix using some flavor of the Nyström method [6].

An accurate low-rank approximation determines and keeps only the most important dimensions in the column space of the matrix. In doing so, it captures the majority of the information in the matrix using a much smaller ambient dimension. In the kernel matrix setting, the size of the dataset is larger than its dimensionality. This results in a kernel matrix with low-dimensional structure lying in a large ambient space. A low-rank approximation, then, can capture all of the information in the kernel matrix without requiring all n^2 entries.

Nyström methods are one example of a general approach to computing low-rank matrix approximation from a subset of rows and/or columns of the matrix [7]. Choosing a relevant subset is broadly referred to as column subset selection (CSS). CSS methods have been applied successfully in applications ranging from image segmentation [8] to genomic analysis [9] and matrix factorization [10].

The success of CSS-based approaches for matrix approximation depends strongly on the columns chosen to approximate the range space of the matrix. Intuitively, uniform random sampling of the columns will provide an accurate approximation when the columns of the kernel matrix are independently and identically distributed. However, when the underlying data are non-uniformly distributed or even clustered, uniform sampling requires extra column draws to ensure an accurate approximation. In these settings, it has been shown in both theory [11] and practice [12] that adaptive sampling methods provide accurate approximations of low-rank kernel matrices with far fewer samples than uniform random sampling. In this way, we say that adaptive methods are more efficient than uniform random sampling.

Current adaptive sampling methods take advantage of the structure of the kernel matrix. Random adaptive sampling methods use the entries of the kernel matrix to compute a weighted probability distribution over the column indices. The distribution improves the chances of sampling relevant columns. Deterministic adaptive sampling methods use the already-sampled columns to compute a residual over the kernel matrix, from which a new column index is selected.

The downside of current adaptive methods is their computational burden. Adaptive methods do not scale well to large problem sizes for two reasons. First, existing adaptive methods inspect the entire kernel matrix to determine which columns to select. For extremely large datasets, both forming and storing an explicit representation of the kernel matrix is intractable. Second, existing adaptive methods require dense matrix computations, even for sparse matrices that are otherwise easy to store because they have relatively few non-zero elements. For these reasons, current adaptive methods cannot be applied to extremely large collections of data [13].

I-B Contributions

For adaptive sampling methods, it is not generally possible to determine the best columns to select from a kernel matrix without explicitly forming all of the candidate columns. However, as the kernel matrix is symmetric, a small sample of columns provides partial information about the remaining un-sampled columns. Based upon this observation, we introduce a principled adaptive sampling scheme called Accelerated Sequential Incoherence Selection (oASIS) that predicts the most informative columns to sample from a kernel matrix without forming the entire matrix. oASIS has several advantages over existing adaptive sampling schemes.

  • oASIS does not require a fully precomputed kernel matrix. It can operate solely on the data, using the kernel function. This is because oASIS selects the column to be included in the approximation before explicitly computing it. For this reason, only the submatrix of sampled columns must be computed/stored.

  • oASIS’s total runtime scales linearly with the matrix dimension n, making it practical for large datasets. For this reason, oASIS is orders of magnitude faster than other adaptive methods that have O(n^2) or higher runtime.

  • oASIS can exactly recover a rank-d kernel matrix in d steps.

  • oASIS preserves zero entries in sampled columns of sparse kernel matrices, enabling greater efficiency in these cases. This is in contrast to conventional greedy methods that require dense matrix computations that “fill in” the zero entries in a matrix [12].

oASIS provides a tractable solution for approximating extremely large kernel matrices where other adaptive methods simply cannot be applied [14, 12, 15]. In a range of numerical experiments below, we demonstrate that oASIS provides accuracy comparable to the best existing adaptive methods [14, 12, 15, 16] with dramatically reduced complexity and memory requirements.

While oASIS is useful for kernel matrices, its usefulness becomes more pronounced when the dataset is so large that it can no longer be held entirely in memory. This is because we can parallelize oASIS by splitting up the dataset and the working matrices among various processors. We introduce an algorithm called oASIS-P that efficiently distributes the data and the submatrices used in approximation, as well as the computation and selection of a new column to add to the approximation. oASIS-P is highly scalable, as it incurs minimal communication overhead between the parallel computing nodes. We implemented oASIS-P using a standard message passing interface (MPI) [17]. With oASIS-P, we can perform the Nyström approximation in a data size regime where even the simplest algorithms become difficult to run.

In addition, we study oASIS from a theoretical perspective and propose conditions under which oASIS will exactly recover a rank-d kernel matrix in d steps. Other greedy methods do not have this guarantee. oASIS can perform this recovery because it chooses linearly independent columns at each step, which enables efficient sampling. Random selection methods do not necessarily choose independent columns, and can choose redundant columns, resulting in inefficient sampling.

This paper is organized as follows. In Section II, we introduce the Nyström method, survey existing sampling methods, and describe important applications of kernel matrices in classification, clustering, and dimensionality reduction. In Section III, we describe the motivation behind our initial column sampling method, called Sequential Incoherence Selection or SIS. We then describe the accelerated version of SIS, or oASIS. We then describe a parallel version of oASIS, which we call oASIS-P. In Section IV, we provide theory determining the conditions under which oASIS will exactly recover the kernel matrix. And in Section V, we use multiple synthetic and real datasets to demonstrate the efficacy of oASIS for approximating kernel matrices and diffusion distances for nonlinear dimensionality reduction [2].

II Background

To set the stage, we will quickly describe a few common kernel and distance matrices used in machine learning. Following this, we introduce the Nyström method and describe its variants and applications.

II-A Notation

We write matrices (A) and vectors (a) in upper- and lowercase script, respectively. We use A^+ to denote the Moore-Penrose pseudo-inverse of A. We represent the element-wise product of matrices A and B as A ∘ B. sum(A) denotes a row vector whose i-th entry contains the sum of the i-th column of A. When describing algorithms, we use “Matlab”-style indexing, in which A(i,j) denotes the (i,j) entry of the matrix A and A(:,i) denotes its i-th column. We use Λ to denote a collection of chosen indices of the columns of a matrix A; Λ^c is the set of indices not chosen. For example, A(:,Λ) are all of the columns of A indexed by the set Λ.

II-B Kernel Matrices and their Applications

Kernel methods are widely used in classification and clustering to “lift” datasets into a high-dimensional space where the classes of data are linearly separable. This is done with the help of a kernel function k(x_i, x_j) which measures pairwise similarities between points in the lifted space. An n × n kernel matrix G is then formed from n data points, with G(i,j) = k(x_i, x_j), where high-magnitude entries of G correspond to pairs of similar data points. The singular vectors of G are then computed and used to map the data back into a low-dimensional space where the data is still linearly separable. The kernel trick is widely used in classification and clustering [1, 8, 4, 18, 19, 20, 21, 22].

Manifold learning methods, including diffusion maps [2] and Laplacian eigenmaps [23], map high-dimensional data that lie on nonlinear but low-dimensional manifolds onto linear low-dimensional spaces. These methods use a kernel matrix G that encodes the geodesic distance between pairs of points, i.e., the shortest path between two points along the surface of the underlying manifold. High-dimensional data are mapped into a low-dimensional space using the left singular vectors of G, and thus a singular value decomposition (SVD) of G is required. For a review of dimensionality reduction using geodesic and diffusion distance kernel matrices, see [24, 13].

II-C The Nyström Method

Williams and Seeger [1] first presented the Nyström method to improve the speed of kernel-based methods for classification. The method approximates a low-rank symmetric positive semidefinite (PSD) matrix using a subset of its columns.

Consider an n × n PSD matrix G of rank d. For all PSD matrices G, there exists a matrix Y such that G = Y^T Y. Suppose we choose k column indices, collect them into a set Λ, and sample those columns from G as C = G(:,Λ). The sampling forms a partition of G. We can then express G as

    G = [ W    B^T  ]
        [ B    G_22 ],    (1)

where C = [ W ; B ] consists of the sampled columns of G, and W = G(Λ,Λ) is the k × k symmetric matrix containing the row entries of the sampled columns at the indices of the sampled columns. Note that without loss of generality we can permute the rows and columns of G so that the columns in C are the first k columns of G. The Nyström approximation of G is defined as

    G̃ = C W^+ C^T.    (2)

Note that neither Y nor any partition of G is found explicitly; rather, the approximation G̃ is found through the set of sampled columns C and the corresponding rows W.

An approximate SVD of G can be obtained from the SVD of W, which is written as W = U_W Σ_W U_W^T. The singular values of the approximation are given by Σ̃ = (n/k) Σ_W [25], and the singular vectors are given by Ũ = √(k/n) C U_W Σ_W^+.

Since W is k × k, this computation is much faster than computing the full SVD of G. The complexity of the SVD step reduces from O(n^3) to O(k^3), with k ≪ n.

Note that the Nyström method enables the singular vectors of G, and thus a low-dimensional embedding of the data, to be computed from only a subset of its columns. When n is large, it is desirable to form only a subset of the columns of G rather than calculate and store all pairwise kernel distances.
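The construction in (1)-(2) and the approximate SVD above can be sketched in a few lines of NumPy. The toy Gram matrix, the column choice, and all variable names below are illustrative; the formulas follow the standard Nyström construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 5
Y = rng.standard_normal((d, n))
G = Y.T @ Y                                 # rank-d PSD Gram matrix

idx = rng.choice(n, size=k, replace=False)  # sampled column indices (Lambda)
C = G[:, idx]                               # n x k sampled columns
W = G[np.ix_(idx, idx)]                     # k x k submatrix W = G(Lambda, Lambda)

G_tilde = C @ np.linalg.pinv(W) @ C.T       # Nystrom approximation, eq. (2)

# Approximate SVD of G from the (much smaller) SVD of W.
# W is invertible here, so Sigma_W^+ is just 1/S_w.
U_w, S_w, _ = np.linalg.svd(W)
S_tilde = (n / k) * S_w                     # approximate singular values
U_tilde = np.sqrt(k / n) * C @ U_w @ np.diag(1.0 / S_w)
```

Because k equals the rank of this toy matrix and the random columns are linearly independent with probability one, the reconstruction is exact up to floating-point error.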

II-D Column Sampling Methods

We now describe the four main categories of column selection methods. We compare oASIS with the methods listed below, as together they cover all of the types of sampling used currently in Nyström approximation.

II-D1 Uniform Random Sampling

Early work on the Nyström method focused on random column sampling methods [1]. Theoretical bounds on the accuracy of a rank-k approximation to G after sampling a sufficient number of columns have been developed for various norms [15].

Uniform random sampling is an appealing method as it is computationally fast. However, the accuracy of a matrix approximation depends directly on the specific columns sampled. This method does not take advantage of the structure inherent in the kernel matrix, leading to redundant sampling and larger approximation error. Improvements on this sampling method can be made by weighting the probability distribution used for the column draw, to increase the chance of selecting relevant columns.
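As a sketch, one common weighting of this kind (in the style of diagonal-weighted Nyström sampling; not a scheme proposed in this paper) draws column i with probability proportional to G(i,i)^2:

```python
import numpy as np

def weighted_column_draw(g_diag, k, rng):
    """Draw k distinct column indices with probability proportional
    to the squared diagonal entries of the kernel matrix."""
    p = g_diag**2 / np.sum(g_diag**2)
    return rng.choice(len(g_diag), size=k, replace=False, p=p)

rng = np.random.default_rng(0)
idx = weighted_column_draw(np.array([1.0, 2.0, 3.0, 4.0]), 2, rng)
```

Only the diagonal of G is needed, so the weighting adds essentially no cost over uniform sampling.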

II-D2 Non-deterministic Adaptive Sampling

Leverage scores are a recent method for computing the distribution over the column draw [15]. Given the rank-k SVD of G, G ≈ U_k Σ_k U_k^T, the scores are computed as ℓ_i = ‖U_k(i,:)‖_2^2, and each column is selected with probability proportional to its score. This method provides accurate approximations by sampling more relevant columns. However, leverage scores require the low-rank approximate SVD of G to be precomputed at considerable cost. There are fast approximations available for finding the first few singular vectors and values of G [26]. Regardless, G must be completely formed and stored before it is approximated.
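A minimal sketch of the score computation follows, using an exact eigendecomposition for clarity (the fast approximations of [26] would replace it in practice):

```python
import numpy as np

def leverage_scores(G, k):
    """Rank-k leverage scores of a symmetric PSD matrix G:
    l_i = ||U_k(i,:)||_2^2, where U_k holds the top-k eigenvectors."""
    _, vecs = np.linalg.eigh(G)        # eigenvalues in ascending order
    U_k = vecs[:, -k:]                 # eigenvectors of the k largest eigenvalues
    return np.sum(U_k**2, axis=1)
```

A useful sanity check is that the scores of a rank-k projection sum to exactly k, since they are the diagonal of U_k U_k^T.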

II-D3 Deterministic Adaptive Sampling

Early deterministic adaptive methods [14] use an exhaustive search to find columns that minimize the Frobenius-norm error ‖G − G̃‖_F. While accurate, this method requires a precomputed G and has combinatorial complexity. A more efficient adaptive scheme by Farahat [12] builds a Nyström approximation by selecting columns sequentially using a matrix of “residuals.” At each stage of the method, the column with the largest residual is selected, and the residual matrix is updated to reflect its contribution. While accurate, the method also requires a precomputed G, and the cost of updating the residual matrix is O(n^2) per iteration.

The residual criterion is related to the adaptive probability distribution calculated by Deshpande [11]. After a sufficient number of columns are chosen, an orthogonalization step obtains a rank-k approximation of G.
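A simplified sketch of this residual idea follows. It deflates a PSD residual matrix by projecting out the direction of each chosen column; this is an illustration of the O(n^2)-per-iteration pattern, not Farahat's exact update:

```python
import numpy as np

def greedy_residual_selection(G, k):
    """Greedily pick the column with the largest residual norm from a
    precomputed PSD matrix, then project its direction out of the
    residual. Each step costs O(n^2) time and storage."""
    R = G.astype(float).copy()         # residual matrix
    n = R.shape[0]
    chosen = []
    for _ in range(k):
        i = int(np.argmax(np.linalg.norm(R, axis=0)))
        chosen.append(i)
        r = R[:, i]
        P = np.eye(n) - np.outer(r, r) / (r @ r)  # project out chosen direction
        R = P @ R @ P                  # deflated residual stays symmetric PSD
    return chosen, R
```

On a rank-d matrix, d such deflations drive the residual to zero, since each chosen direction lies in the residual's column space.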

II-D4 k-means Nyström

Instead of approximating G with direct column sampling, an approximation can be made from representative data points found with a k-means algorithm. A dataset consisting of clouds of points clustering around k centroids can be described by finding the locations of the centroids. Each datapoint is then remapped into the eigenspace of the centroids. This method was first described by Zhang [16]. Since the computed centroids do not exist in the dataset, the method does not directly sample columns, but remaps them onto a rank-k space. Once the solution to the k-means problem is found, the remapping requires only the kernel similarities between the data points and the centroids. While finding an exact solution to k-means is NP-hard, in practice k-means generally converges in time linear in n. The resulting G̃ cannot be formed from the columns of G, and so has no space-saving representation.
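A minimal sketch of this approach follows, with plain Lloyd iterations and an RBF kernel (both illustrative choices; [16] and [25] describe the exact variant and parameters used in our experiments):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel between the columns of A (m x na) and B (m x nb)."""
    d2 = ((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-gamma * d2)

def kmeans_nystrom(X, k, kernel, iters=25, seed=0):
    """k-means Nystrom sketch: run Lloyd's algorithm, then use the
    centroids as landmark points. Note the result is NOT built from
    columns of G, matching the discussion above."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    centroids = X[:, rng.choice(n, size=k, replace=False)].copy()
    for _ in range(iters):                              # plain Lloyd iterations
        d2 = ((X[:, None, :] - centroids[:, :, None]) ** 2).sum(axis=0)  # k x n
        labels = d2.argmin(axis=0)
        for j in range(k):
            if np.any(labels == j):
                centroids[:, j] = X[:, labels == j].mean(axis=1)
    E = kernel(X, centroids)                            # n x k data-to-landmark kernel
    W = kernel(centroids, centroids)                    # k x k landmark kernel
    return E @ np.linalg.pinv(W) @ E.T
```

For well-clustered data, a handful of centroids already yields a very accurate approximation, which matches the survey findings cited below.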

In a survey of methods by Kumar [25], k-means was found to be the state-of-the-art approximation method compared to previous sampling methods such as Incomplete Cholesky Decomposition [27], Sparse Matrix Greedy Approximation [14], and Kumar’s Adaptive Partial method derived from Deshpande’s Adaptive Sampling method [11]. For this reason, in lieu of comparisons with many different adaptive sampling techniques, we compare our results directly with k-means. For our experiments, we used the code provided in [16], with the parameters used in both [16] and [25].

II-E Finding Low-Dimensional Structure

In addition to using CSS for low-rank kernel approximation, column selection approaches have also been used to find important data points and, in turn, reveal low-dimensional structure in data [28, 29]. Recently, it was shown in [30] that oASIS can be used to select representative examples from large datasets in order to enable clustering and dimensionality reduction. This method, called Sparse Self-Expressive Decomposition (SEED), consists of two main steps. First, oASIS is used to select data points for a dictionary. Second, all of the data is then represented by a sparse combination of the points in the dictionary using Orthogonal Matching Pursuit [31, 32]. The sparsity patterns found in the representations can be used for clustering, denoising, or classification. SEED’s ability to properly describe data hinges on the selection of data points used for the dictionary. oASIS is able to efficiently sample good points for this task compared to other adaptive sampling methods. A full treatment of SEED can be found in [30].

III Accelerated Sequential Incoherence Selection (oASIS)

In this section, we introduce oASIS, a new adaptive sampling method for column subset selection. We also introduce a parallel version called oASIS-P, and we analyze the complexity of both.

III-A Sequential Incoherence Selection (SIS)

We now address the question of how to build a Nyström approximation by sequentially choosing columns from G. Suppose we have already selected k columns from G and computed a Nyström approximation G̃_k. Our goal is to select a new column that improves the approximation. If a candidate column lies in the span of the columns that we have already sampled, then adding this column has no effect on G̃_k. Ideally, we would like to quantify how much each candidate column will change the existing approximation and select the column that will produce the most significant impact. Since an ideal column does not lie in the span of the columns that have been selected, we say that this column should be incoherent with those already selected.

We can develop a criterion for finding this new, incoherent column of G as follows. Consider a PSD matrix G. Recall that any such G can be written as G = Y^T Y, where Y contains n points in d dimensions. Given an index set Λ of k selected columns, we can form a partition of G and collect the selected columns of Y into a matrix Y_Λ. To improve the approximation G̃_k, we select the best new column from G, append it to C_k, and compute a new G̃_{k+1}. The best new column index directly corresponds to the column of Y that lies farthest from the span of Y_Λ. This column satisfies

    i* = argmax_{i ∈ Λ^c} ‖(I − P_Λ) y_i‖_2^2,    (3)

where I is the identity matrix and P_Λ = Y_Λ (Y_Λ^T Y_Λ)^{−1} Y_Λ^T is an orthogonal projection onto the span of the columns of Y that have already been selected. Provided that the columns in Y_Λ are linearly independent, we can expand (3) as

    i* = argmax_{i ∈ Λ^c} [ G(i,i) − b_i^T W_k^{−1} b_i ].    (4)

Even though Y is not known explicitly, (4) can be evaluated based upon knowledge of C_k and W_k. The first term of the expression in (4) is the i-th diagonal entry of G, which we denote as d_i. The second term can be written as b_i^T W_k^{−1} b_i, where b_i^T is the i-th row of C_k, and W_k is comprised of the rows of C_k indexed by Λ. When W_k contains linearly dependent columns, we replace W_k^{−1} with W_k^+. Therefore, we can iteratively select columns to sample for the approximation without precomputing the entire kernel matrix G, as shown in Figure 1. This sets oASIS apart from all other adaptive methods, as discussed in Sections II-D2 and II-D4.

With the evaluation of our criterion now possible, we develop the following sampling method for Sequential Incoherence Selection (SIS). We assume that the process has been seeded with a small set of k_0 column indices chosen at random. Columns are then selected to be included in the approximation as follows:

  1. Let k = |Λ|. Collect the columns of G indexed by Λ as C_k. Form W_k from the rows of C_k indexed by Λ.

  2. Let b_i^T denote row i of C_k, and let d_i denote element i of the diagonal of G. For each unselected column i ∈ Λ^c, calculate Δ_i = d_i − b_i^T W_k^{−1} b_i.

  3. Select the column i* that maximizes Δ_i and set Λ ← Λ ∪ {i*}.

  4. If the selected value of Δ_{i*} is smaller than a user-set threshold ε, then terminate. Otherwise, return to Step 1.

Fig. 1: All of the terms in the criterion (3) for the next column index to select can be found in the structure of the columns that have already been sampled and the diagonal of G. This allows for deterministic sampling without precomputing all of G. See Section III-A for details.
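The four steps above can be sketched directly in NumPy. This is the naive variant: it recomputes W_k^{−1} from scratch each round and, for clarity only, indexes into a precomputed G (oASIS instead computes each selected column via the kernel function). Names and the random seeding are illustrative:

```python
import numpy as np

def sis_select(G, k0, k_max, tol=1e-8, seed=0):
    """Naive Sequential Incoherence Selection (SIS) sketch."""
    rng = np.random.default_rng(seed)
    n = G.shape[0]
    d = np.diag(G).copy()
    Lam = list(rng.choice(n, size=k0, replace=False))
    while len(Lam) < k_max:
        C = G[:, Lam]                                   # n x k sampled columns
        W_inv = np.linalg.pinv(G[np.ix_(Lam, Lam)])     # recomputed every step
        # Delta_i = d_i - b_i^T W^{-1} b_i for every column at once
        Delta = d - np.einsum('ij,jk,ik->i', C, W_inv, C)
        Delta[Lam] = -np.inf                            # exclude selected columns
        i = int(np.argmax(Delta))
        if Delta[i] < tol:                              # criterion exhausted
            break
        Lam.append(i)
    return Lam
```

On a rank-d Gram matrix, the loop terminates once d linearly independent columns have been gathered, at which point the Nyström reconstruction from Λ is exact.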


III-B oASIS

A naive implementation of SIS in Section III-A is inefficient, because each step requires a k × k matrix inversion to form W_k^{−1}, in addition to calculating the errors Δ_i. Fortunately, both of these calculations can be performed efficiently by updating the results from the previous step using block matrix inversion formulas. We dub this new method oASIS.

We first consider the calculation of W_{k+1}^{−1} after a new column is added to the approximation made from k columns. We assume throughout the rest of this section that W_k is invertible, and thus W_k^+ = W_k^{−1}. We show that our column selection rule guarantees the invertibility of W_k in Section IV-A. Let b denote the first k entries of the new column, d denote the corresponding diagonal element of G, and q = W_k^{−1} b. Using a block inversion formula, we obtain

    W_{k+1}^{−1} = [ W_k   b ]^{−1}  =  [ W_k^{−1} + s^{−1} q q^T    −s^{−1} q ]
                   [ b^T   d ]          [ −s^{−1} q^T                 s^{−1}  ],    (5)

where s = d − b^T W_k^{−1} b is the (scalar-valued) Schur complement and q is a column vector. This update formula enables W_{k+1}^{−1} to be formed by updating W_k^{−1} using only inexpensive matrix-vector and vector-vector multiplications. Note that W_{k+1} is invertible as long as s is non-zero, which is guaranteed because the algorithm terminates if s = 0, in which case our approximation is exact.
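The block update in (5) is easy to check numerically. The helper below (hypothetical names) grows an explicit inverse by one row and column via the Schur complement:

```python
import numpy as np

def block_inverse_update(W_inv, b, d):
    """Grow W_k^{-1} into W_{k+1}^{-1} for W_{k+1} = [[W, b], [b^T, d]],
    using the scalar Schur complement s = d - b^T W^{-1} b (eq. (5))."""
    q = W_inv @ b                      # one matrix-vector product
    s = d - b @ q                      # scalar Schur complement
    top_left = W_inv + np.outer(q, q) / s
    return np.block([
        [top_left,         -q[:, None] / s],
        [-q[None, :] / s,  np.array([[1.0 / s]])],
    ])
```

Comparing against a direct inverse of the enlarged matrix confirms the formula.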

Now consider the calculation of Δ_i for all candidate columns i ∈ Λ^c. Note that Δ_i = d_i − b_i^T W_k^{−1} b_i. We can evaluate all of the Δ_i simultaneously by computing the entry-wise product of C_k^T with the matrix R_k = W_k^{−1} C_k^T and then summing the resulting columns. If we have already formed R_k and C_k, then the matrix R_{k+1} needed on the next iteration is obtained by applying (5) to W_{k+1}^{−1} C_{k+1}^T to obtain

    R_{k+1} = [ R_k − q u^T ]
              [ u^T         ],    where u^T = s^{−1} (c^T − b^T R_k),    (6)

and c = G(:, i*) is the newly sampled column. Equation (6) forms R_{k+1} by updating the matrix R_k from the previous iteration. The update requires only matrix-vector and vector-vector products. The application of this fast update rule to the method described in Section III-A yields oASIS, detailed in Figure 2.

oASIS can be initialized with a small random subset of k_0 starting columns from G. Next, the starting matrices W_{k_0}^{−1} and R_{k_0} = W_{k_0}^{−1} C_{k_0}^T are formed. On each iteration of the algorithm, the vector of Schur complements Δ is formed by computing the column sums of C_k^T ∘ R_k and subtracting them from the diagonal d. Next, the largest entry in Δ is found, and its index is used to select the next column from G. The update formulas (5) and (6) are then used to form the matrices W_{k+1}^{−1} and R_{k+1} required for the next iteration.

  Algorithm 1: oASIS
  Inputs: Symmetric matrix G, accessed one column at a time,
             Diagonal elements of G, stored in d,
             Maximum number of sampled columns, k_max,
             Initial number of sampled columns, k_0,
             Non-negative stopping tolerance, ε
  Outputs: The sampled columns C_k,
               The inverse of the sampled rows, W_k^{−1}
  Initialize: Choose a vector Λ of k_0 random starting indices.
                 Form C_{k_0}, W_{k_0}^{−1}, and R_{k_0} = W_{k_0}^{−1} C_{k_0}^T.
  while k < k_max do
     Compute Δ_i for all i ∈ Λ^c and select i* = argmax Δ_i
     if Δ_{i*} < ε then
        terminate
     end if
     Sample the new column c = G(:, i*) and set Λ ← Λ ∪ {i*}
     Form W_{k+1}^{−1} using (5)
     Update R_{k+1} using (6)
  end while
Fig. 2: The oASIS algorithm. Note that G need not be precomputed.
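A compact NumPy sketch of Algorithm 1 follows. The column oracle kernel_col(i), the seed size, and the assumption that the seed columns are linearly independent are all illustrative choices, not fixed by the paper:

```python
import numpy as np

def oasis(kernel_col, diag, n, k_max, k0=2, tol=1e-8, seed=0):
    """oASIS sketch: kernel_col(i) returns column i of G on demand,
    so G is never formed in full; diag holds the diagonal of G."""
    rng = np.random.default_rng(seed)
    Lam = list(rng.choice(n, size=k0, replace=False))
    C = np.column_stack([kernel_col(i) for i in Lam])    # n x k sampled columns
    W_inv = np.linalg.inv(C[Lam, :])                     # assumes independent seeds
    R = W_inv @ C.T                                      # k x n
    while len(Lam) < k_max:
        Delta = diag - np.sum(C.T * R, axis=0)           # all Schur complements
        Delta[Lam] = -np.inf
        i = int(np.argmax(Delta))
        s = Delta[i]
        if s < tol:
            break                                        # approximation is exact
        c = kernel_col(i)                                # sample the new column
        b = c[Lam]
        q = W_inv @ b
        # eq. (5): grow W_inv via the scalar Schur complement s
        W_inv = np.block([[W_inv + np.outer(q, q) / s, -q[:, None] / s],
                          [-q[None, :] / s, np.array([[1.0 / s]])]])
        # eq. (6): grow R with one matrix-vector product
        u = (c - b @ R) / s
        R = np.vstack([R - np.outer(q, u), u[None, :]])
        C = np.column_stack([C, c])
        Lam.append(i)
    return Lam, C, W_inv
```

On a rank-d matrix the loop stops after d columns, and C W_k^{−1} C^T reproduces G, illustrating the exact-recovery property proved in Section IV-A2.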

III-C Parallel oASIS

For the case of kernel matrices generated from a dataset X, oASIS does not need to explicitly store G. Therefore, oASIS saves time, as computing the full G is expensive. oASIS also saves space, as the full G grows quadratically with n. However, the dataset itself may be very difficult to store due to its size. In addition, oASIS requires space to build C_k, W_k^{−1}, and R_k; in total, oASIS requires O(nk) memory. In cases where the dataset is too large to fit in memory, the matrix operations for oASIS can be distributed among P separate processor nodes. We begin by arranging the dataset columnwise into a matrix X. Each node p stores a submatrix X_p consisting of n/P columns of X, a copy of W_k^{−1}, and the column entries of C_k and R_k corresponding to the results of the kernel function over the entries in X_p with the entries in X. When a new column index i* is selected, the node storing the column vector x_{i*} is found, and that data point is broadcast to all of the nodes. Each node can then calculate the appropriate new entries as needed, as shown in Fig. 3. For each new column, the size of the communicated vector is the dimensionality of the data point, which is much smaller than the kernel matrix. Low communication overhead is an essential property of oASIS-P since, in distributed settings, the cost of internode communication can be larger than intranode computation. The memory requirement on each node becomes O(nk/P), which makes performing oASIS over datasets with millions of points tractable.

We call this method Parallel oASIS, or oASIS-P; it is detailed in Figure 4. The implementation makes use of the standard MPI commands Bcast (to send data from one node to every node) and Gather (to concatenate variables in each node into a single variable on one central node).

Fig. 3: Diagram of a single node in oASIS-P. Each node retains a subset of the data. The node receives the selected data point x_{i*} and updates C_p and R_p using the kernel function, as in Fig. 4. It then computes its local Δ entries, which are gathered to determine the next point to sample from the entire dataset. See Section III-C for details.
  Algorithm 2: oASIS-P
  Inputs: Dataset arranged as a matrix X,
             Kernel function f,
             Number of nodes P, indexed by p,
             Maximum number of sampled columns, k_max,
             Initial number of sampled columns, k_0,
             Number of columns in X_p, n_p,
             Non-negative stopping tolerance, ε
  Outputs: The sampled columns C_k,
               The inverse of the sampled rows, W_k^{−1}
  Initialize: Choose a vector Λ of k_0 random
                  starting indices.
                  Load separate column blocks of X into
                  each node as X_p.
                  Broadcast the selected data points as X_Λ.
  On each node p:
     d_p = f(x_i, x_i) over all x_i in X_p
     C_p = f(x_i, x_j) over all x_i in X_p
        and all x_j in X_Λ
     W = f(x_i, x_j) over all x_i in X_Λ
        and all x_j in X_Λ; form W^{−1} and R_p = W^{−1} C_p^T
  while k < k_max do
     On each node p: compute local Δ entries; gather to find i* = argmax Δ_i
     if Δ_{i*} < ε then
        terminate
     end if
     Broadcast the selected data point x_{i*}
     On each node p:
        c_p = f(x_j, x_{i*}) over all x_j in X_p
        Append c_p to C_p
     Update W^{−1} using (5)
     Update R_p using (6)
  end while
Fig. 4: oASIS-P, for datasets too large for a single node.

IV Properties and Applications of oASIS

IV-A Theoretical Guarantees of oASIS

For general PSD matrices G of rank d, we can guarantee that oASIS will finish in d steps. We develop the theory needed to prove this guarantee in Sections IV-A1 and IV-A2. When G is a Gram matrix, formed as G = X^T X, we can use the sampled index set Λ found with oASIS to make additional guarantees on approximating the dataset itself. We mention these guarantees briefly in Section IV-A3; they are described fully in [30].

In Section IV-A1, we show that oASIS selects linearly independent columns of G at each step. This is very useful in practice, as G̃ is computed from W_k^+, where W_k = C_k(Λ,:). If the selected columns of G are not independent, then W_k is a singular matrix. By selecting linearly independent columns of G, oASIS can guarantee that W_k^+ = W_k^{−1}, enabling time- and space-saving calculation of this element when computing G̃. We discuss this further in Section IV-A4.

IV-A1 Independent Selection Property of oASIS

Given a PSD matrix G, we can write G = Y^T Y. In Lemma 1 below, we provide a sufficient condition that describes when oASIS will return a set of linearly independent columns. This condition is similar to that provided in [30], although we provide an alternate proof.

Lemma 1.

At each step of oASIS, the selected column of the matrix G is linearly independent from the previously selected columns provided that Δ_{i*} > 0.


We prove this by construction. Consider adding column i to the selection when Δ_i = ‖(I − P_Λ) y_i‖_2^2 is nonzero. Then y_i does not lie in the span of Y_Λ, and so y_i is linearly independent of each column in Y_Λ. Therefore, as long as Δ_{i*} > 0 at each step, the column selected will be linearly independent from the previously selected columns. ∎

Remark. This result guarantees that oASIS will return a set of d linearly independent columns in d steps, as long as the selection criterion Δ_{i*} > 0 holds before exact reconstruction occurs. While the algorithm may terminate early if Δ_{i*} = 0 before d columns have been selected, we have not observed this early termination in practice.

IV-A2 Exact Matrix Recovery

We now prove that when oASIS selects d columns from G, then G̃ = G.

Theorem 1.

If oASIS chooses d columns from a PSD matrix G with rank(G) = d, then the Nyström approximation satisfies G̃ = G.


As G has rank d, and oASIS has chosen d linearly independent columns, at the next step every Δ_i = 0, as ‖(I − P_Λ) y_i‖_2^2 = 0 for every remaining column. Therefore every column y_i lies in the span of Y_Λ; that is, Y(:,Λ^c) = Y_Λ M for some matrix M, and therefore B = M^T W and G_22 = M^T W M. Expanding G̃ in terms of C and W,

    G̃ = C W^{−1} C^T = [ W    B^T          ]
                        [ B    B W^{−1} B^T ].

We examine the lower-right block of this expansion, as the others exactly match that of G in (1). By Lemma 1, W is full rank, and so

    B W^{−1} B^T = M^T W W^{−1} W M = M^T W M = G_22.

Thus the expansion of G̃ is equal to the expansion of G in (1). ∎
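Theorem 1 can be checked numerically on a toy rank-d matrix. The dataset and index choice below are illustrative; the selected columns are linearly independent with probability one here, which is what oASIS guarantees by construction:

```python
import numpy as np

# For a rank-d PSD matrix, d linearly independent columns give an
# exact Nystrom reconstruction (Theorem 1).
rng = np.random.default_rng(1)
n, d = 50, 4
Y = rng.standard_normal((d, n))
G = Y.T @ Y                                # rank-d PSD matrix

idx = [0, 1, 2, 3]                         # d columns, independent w.p. 1 here
C = G[:, idx]
W = G[np.ix_(idx, idx)]
G_tilde = C @ np.linalg.inv(W) @ C.T       # exact up to floating-point error
```

Replacing idx with d redundant columns (e.g., repeated draws from one cluster) breaks this exactness, which is precisely the failure mode of uniform random sampling illustrated in Figure 8.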

Fig. 8: For a PSD Gram matrix G formed from the dataset in (a), we compare approximation errors for G̃ formed using oASIS vs. 5 separate trials of uniform random sampling in (b). We terminate trials at exact recovery. oASIS guarantees exact recovery after sampling 3 columns. Random sampling chooses redundant columns, and this inefficient sampling results in less accurate approximations. For uniform random sampling, many of the error curves lie directly on top of each other, as the method repeatedly chooses redundant columns from the bottom cluster of data. The variety of columns sampled is illustrated by plotting the rank of G̃ against the number of sampled columns in (c). oASIS chooses columns that increase the rank of G̃ at each step. In contrast, choosing redundant columns results in a rank-deficient approximation. See Section IV-A4 for details.

IV-A3 Guarantees for oASIS when G is a Gram Matrix

When G is a Gram matrix, we can arrange the points in the dataset columnwise into a matrix X. Then we can write G = X^T X, with X of rank d. oASIS selects a set Λ such that X(:,Λ) X(:,Λ)^+ X = X exactly. This property is useful in solving a more general CSS problem than Nyström, precisely formulated as

    Λ* = argmin_{|Λ| = k} ‖ X − X(:,Λ) X(:,Λ)^+ X ‖_F,

where X is an m × n matrix. This problem has combinatorial complexity, and many of the selection schemes for Nyström arise from attempting to solve this more general problem. Indeed, the adaptive selection methods in [6, 11], among others, also apply to this problem. We have developed guarantees on oASIS’s ability to exactly recover X so that we can use the columns in X(:,Λ) in developing a self-expressive decomposition of X. Although a full treatment of SEED is given in [30], we briefly described this extension in Section II-E.

IV-A4 Comparison with Other Theory

oASIS can guarantee exact matrix recovery in an information-theoretically efficient number of steps. We show a synthetic example in Figure 8. Using a dataset consisting of points drawn from two Gaussian distributions with different centers, we compute a PSD Gram matrix with a resulting rank of 3. oASIS selects columns linearly independent from previously selected columns at each step, increasing the rank of the approximation each time. This enables oASIS to use an iterative update of W_k^{−1} instead of computing W^+ after all columns have been selected. At 3 steps, oASIS terminates, with G̃ = G to within machine tolerance.

Random or adaptive random sampling techniques have theoretical guarantees that G̃ will be close to a rank-k approximation of G after a certain number of iterations [15, 6]. However, the lack of guarantees on column selection makes for redundant sampling. As an illustration, we include 5 separate trials of uniform random sampling in Figure 8. Uniform random sampling frequently selects columns within the span of previously selected columns, and as a result the approximation error is generally higher than that of oASIS. In cases of very large data, this becomes a practical concern in computing both C and W^+.

IV-B Complexity of oASIS

The rate-limiting step of oASIS in Fig. 2 is the computation of the residuals Δ. Updating them via Equation (6) enables this to be performed by sweeping over the entries of Δ, which has dimension n, so the cost of a single iteration is linear in n. If m columns are sampled in total, then mn entries must be updated in all. In practice, the number of sampled columns m is much less than n. This makes oASIS considerably more efficient than adaptive methods such as Farahat's [12], which requires the computation of n × n residual matrices at each stage. oASIS is also more efficient than Leverage scores [15], since the scores require an approximate SVD of K computed over dense matrices. oASIS is about as efficient as k-means Nyström. However, k-means does not select columns, and instead forms the full approximation from a low-dimensional remapping. As a result, while k-means is useful in Nyström approximation, it may not be as useful for more general CSS methods. oASIS, in contrast, can be used in more general CSS problems via the Gram matrix.
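The per-iteration bookkeeping described above can be illustrated with a pivoted partial-Cholesky view of the same recursion. This framing is ours, not the paper's exact Equation (6): after each pivot, the Schur-complement diagonal Δ is refreshed with a single length-n sweep rather than recomputed from scratch.

```python
import numpy as np

def pivoted_partial_cholesky(K, max_cols, tol=1e-10):
    """Select pivots greedily while keeping the Schur-complement
    diagonal `delta` up to date with one length-n sweep per column."""
    n = K.shape[0]
    delta = np.diag(K).astype(float).copy()
    L = np.zeros((n, 0))
    S = []
    for _ in range(max_cols):
        i = int(np.argmax(np.abs(delta)))
        if abs(delta[i]) <= tol:
            break                          # K is already recovered exactly
        c = K[:, i] - L @ L[i, :]          # residual column (c[i] == delta[i])
        l = c / np.sqrt(c[i])
        delta -= l * l                     # O(n) diagonal update
        L = np.column_stack([L, l])
        S.append(i)
    return S, L                            # K is approximated by L @ L.T

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 4))
K = X @ X.T                                # rank-4 PSD matrix
S, L = pivoted_partial_cholesky(K, max_cols=10)
```

Only the diagonal sweep `delta -= l * l` touches all n entries each iteration, which is the source of the per-iteration cost that is linear in n.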

If we compare only the speed of selecting the column index set, oASIS is much slower than uniform random sampling, whose selection cost is negligible. But oASIS also computes C and W^{-1} along the way, whereas under uniform random sampling these must still be computed after the columns are selected. In large data regimes, these practical considerations make an implementation of uniform random sampling less efficient, as we discuss in Section IV-C.

IV-C Complexity of oASIS-P

The low complexity of oASIS makes it practical for extremely large matrices where other adaptive sampling schemes are intractable. For oASIS-P, the computational complexity of oASIS is divided across the nodes, so the per-node cost drops proportionally. At first blush, oASIS-P is still slower than uniform random sampling with its negligible selection cost. However, in regimes where the dataset cannot be loaded entirely into memory, three practical considerations make uniform random sampling less competitive. First, forming the sampled columns from a very large dataset takes substantial time; for many applications, forming the columns as they are sampled is far more expensive than the process of adaptively selecting them. Second, communication of data vectors among nodes becomes the bottleneck in the parallel implementation, and oASIS-P and random sampling are competitive in terms of column selection/generation time, as shown in Table III. Third, random sampling may require substantially more columns than oASIS to achieve the same accuracy, in which case adaptive sampling is highly advantageous.

V Numerical Experiments

Fig. 13: Nyström Approximation Error Curves and Run Time Comparison. See Sections V-A and V-B for details. oASIS is accurate and scales well to large problem sizes due to its low runtime complexity (see Section IV-B).
Fig. 20: Nyström error curves and number of samples as a function of runtime. See Section V-B for details. Note that oASIS is accurate and scales well to large problem sizes due to its low runtime complexity (see Section IV-B).

V-A General Setup

To evaluate the performance of oASIS, we measure the accuracy of Nyström approximations for three size classes of kernel matrices. We first consider matrices where we can directly measure the approximation error of the Nyström method. Second, we consider larger problems for which forming the entire kernel matrix is impractical. Third, we consider problems so large that the dataset will not fit in memory. For each dataset in all classes, we consider Gaussian kernel matrices K with entries of the form exp(−||x_i − x_j||^2 / σ^2). For datasets in the first class we also consider diffusion distance matrices D^{−1/2} W D^{−1/2}, where D is a diagonal matrix containing the row sums of W, and W is a Gaussian kernel matrix [2]. For each dataset, we tune the bandwidth σ to provide good matrix approximation for every sampling method.
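For concreteness, the two kernel constructions can be sketched as follows. The exact Gaussian scaling convention (σ^2 versus 2σ^2 in the exponent) is our assumption; the paper specifies only a Gaussian kernel and the diffusion normalization.

```python
import numpy as np

def gaussian_kernel(X, sigma):
    """W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) for rows x_i of X."""
    sq = np.sum(X * X, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * sigma**2))

def diffusion_kernel(X, sigma):
    """Symmetric normalization D^{-1/2} W D^{-1/2}, with D the diagonal
    matrix of row sums of the Gaussian kernel W."""
    W = gaussian_kernel(X, sigma)
    d = W.sum(axis=1)
    return W / np.sqrt(np.outer(d, d))
```

A common heuristic, used for the datasets below, sets sigma to a fixed fraction of the maximum pairwise Euclidean distance.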

We compare oASIS to the following state-of-the-art Nyström approximation methods: (i) uniform random sampling, (ii) Leverage scores [15] (Section II-D2), (iii) k-means Nyström approximation [16] (Section II-D4), and (iv) Farahat's greedy update method [12] (Section II-D3). For methods (i), (ii), and (iii), we repeat experiments 10 times and average the results. For the second class of matrices we consider only oASIS, uniform random sampling, and k-means, since the other methods become intractable when the matrix is too large to store explicitly. For the largest class of matrices we consider only oASIS and uniform random sampling. Specific datasets and experiments are described below.

V-B Full Kernel Matrices

Here, we consider datasets for which the kernel matrices fit entirely in memory, making all the sampling methods tractable. Convergence curves are generated by forming K̃ for increasing numbers of sampled columns m and then calculating the normalized approximation error ||K − K̃||_F / ||K||_F. We consider the following datasets, run using MATLAB on an iMac with a 2.7 GHz processor and 16GB of memory. Results and column selection runtimes at the largest sample sizes are shown for full matrices in Table I. oASIS is competitive with the most accurate adaptive schemes, at a fraction of the runtime. Figure 13 shows selected convergence curves (normalized error vs. number of columns sampled), along with a plot of column selection runtime vs. matrix size for a variety of methods. Figure 20 shows convergence curves (normalized error vs. seconds of runtime) and column sampling rates (number of columns sampled vs. seconds of runtime) for all adaptive methods using the Gaussian kernel. These curves allow a fair assessment of the approximation error achieved after a set run time, as different methods select columns at different rates. See Section V-E for a full discussion of these results.
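The convergence curves above are produced by the following computation, shown here as a minimal sketch; the index set S comes from whichever sampling method is being evaluated.

```python
import numpy as np

def nystrom(K, S):
    """Nystrom approximation K_tilde = C W^+ C^T from column indices S."""
    C = K[:, S]
    W_pinv = np.linalg.pinv(K[np.ix_(S, S)])
    return C @ W_pinv @ C.T

def rel_fro_error(K, K_tilde):
    """Normalized approximation error ||K - K_tilde||_F / ||K||_F."""
    return np.linalg.norm(K - K_tilde) / np.linalg.norm(K)

# On an exactly low-rank matrix, rank(K) independent columns give zero error.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
K = X @ X.T                                # rank-3 Gram matrix
err = rel_fro_error(K, nystrom(K, [0, 1, 2]))
```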

Two Moons

We consider a common synthetic dataset for clustering algorithms that consists of 2-dimensional points arranged in two interlocking moons. This set is 2,000 points of dimension 2. We set the kernel bandwidth σ equal to 5% of the maximum Euclidean distance between points.


Abalone

The Abalone dataset is a collection of physical measurements of abalone [33], used to estimate the age of an abalone without direct measurement. This set is 4,177 points of dimension 8. We set the kernel bandwidth σ equal to 5% of the maximum Euclidean distance between points.

Binary Organization of Random Gaussians (BORG)

The BORG assimilates sets of points clustered tightly around each vertex of a multidimensional cube. The points around each vertex are drawn from an isotropic Gaussian with small variance. This dataset is constructed to be pathologically difficult, with many clusters and many points per cluster; many columns are needed from K to ensure sampling from each cluster. Using an 8-dimensional cube with 30 points at each vertex, the total dataset is 7,680 points of dimension 8. We set the kernel bandwidth σ equal to 12.5% of the maximum Euclidean distance between points.

V-C Implicit Kernel Matrices

Here, we consider datasets for which the resulting kernel matrix would be impractical to explicitly calculate and store. Rather, columns of K are generated “on the fly” at the time they are sampled. Since a full representation of K is no longer available, we estimate the approximation error as the Frobenius-norm discrepancy between 100,000 randomly sampled entries of K̃ and the corresponding entries of K. Because the Leverage scores [15] and Farahat [12] schemes require a full representation of K (which is intractable for these problem instances), we compare only with uniform random sampling and k-means. We consider the following datasets, run using MATLAB on an iMac with a 2.7 GHz processor and 16GB of memory. Results and column selection runtimes at the largest sample sizes are shown for implicit matrices in Table II.

Problem      n      m    oASIS   Random  Leverage scores  k-means  Farahat
Two Moons    2,000  450  (1.20)  (0.01)  (3.96)           (0.38)   (19.7)
                         (1.16)  (0.01)  (4.00)           (1.21)   (19.6)
Abalone      4,177  450  (2.60)  (0.01)  (35.8)           (0.84)   (64.8)
                         (2.51)  (0.01)  (35.9)           (8.26)   (64.7)
BORG         7,680  450  (4.71)  (0.01)  (252)            (2.53)   (176)
                         (4.73)  (0.01)  (244)            (48.3)   (174)
TABLE I: Column selection runtimes in seconds (parenthesized) for Explicit Gaussian (first line) and Diffusion (second line) Kernel Matrices.
Problem      n       m      oASIS    Random  k-means
MNIST        50,000  4,000  (8260)   (946)   (188)
Salinas      54,129  5,000  (13372)  (819)   (1120)
Light Field  82,265  4,000  (13600)  (989)   (1890)
TABLE II: Column selection runtimes in seconds (parenthesized) for Implicit Kernel Matrices.
Problem      n          m      oASIS-P  Random
Two Moons    1,000,000  1,000  (108)    (279)
Tiny Images  1,000,000  4,500  (10672)  (28225)
Tiny Images  4,000,000  4,500  (14013)  (26566)
TABLE III: Factorization runtimes in seconds (parenthesized) for Parallelized Implicit Kernel Matrices.

The MNIST dataset consists of handwritten digits used as a benchmark test for classification [34]. The MNIST training data contains 50,000 images of 28 × 28 pixels each. Similarity matrices formed from the digits are known to have low-rank structure, because there are only 10 distinct digits. We set the kernel bandwidth σ equal to 50% of the maximum Euclidean distance between points.


Salinas

The Salinas dataset is a hyperspectral image taken in 1998 by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over Salinas Valley, CA. The image is 512 × 217 pixels over 204 spectral bands, and can be used to classify various areas of crops. Each pixel is assigned to one of 16 classes according to a ground truth image; classes represent crops such as broccoli or lettuce. We consider all pixels assigned to a nonzero class, for a total of 54,129 data points. We set the kernel bandwidth σ equal to 10.

Light Field

Light fields are 4-dimensional datasets describing both the intensity and directionality of light as it travels through a plane. We consider patches taken from the “chessboard” dataset of the Stanford Multi-Camera Array [35]. The samples are 85,265 vectorized 4-dimensional patches, each with a fixed spatial and angular resolution giving a total dimensionality of 400. We set the kernel bandwidth σ equal to 50% of the maximum Euclidean distance between points.

V-D Implicit Kernel Matrices with oASIS-P

Finally, we consider datasets that cannot fit in memory. The dataset is split across multiple nodes and the kernel matrix is approximated using oASIS-P, as described in Figure 4. At this size, we compare only with uniform random sampling. We choose a Gaussian kernel for all datasets in this class.

Since a full representation of K is no longer available, we estimate the approximation error as the Frobenius-norm discrepancy between 100,000 randomly sampled entries of K̃ and the corresponding entries of K. We consider the following datasets, run using OpenMPI with the Eigen C++ library [36] over 16 nodes (192 cores) on the DaVinCi cluster at Rice University, with each 2.83 GHz processor core having 4GB of memory. Results at the largest sample sizes are shown for parallelized implicit matrices in Table III. These sample sizes were chosen at the limit of available run time on the cluster when using uniform random sampling to both sample and form columns.

Two Moons

This dataset is as described in Section V-B, but the number of data points has been increased to 1,000,000. Determining a good kernel bandwidth from the maximum Euclidean distance among all points becomes intractable, so we found a value of σ that provided good approximations for all sampling methods on smaller trial subsets of Two Moons. This bandwidth was then used for the full set. For this experiment, oASIS was run to a fixed error tolerance, and random sampling was performed for a fixed number of samples.

Millions of Tiny Images

To show the capability of oASIS to approximate kernel matrices over very large datasets, we select two random subsets of the 80 Million Tiny Images dataset [37], consisting of 1,000,000 and 4,000,000 images of size 32 × 32. For reference, storing the full kernel matrix for 4,000,000 tiny images in binary64 would take up 128,000 gigabytes of space.
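The storage figure quoted above is a direct computation: an n × n kernel matrix in binary64 uses 8 bytes per entry.

```python
n = 4_000_000
gigabytes = n * n * 8 / 1e9   # 1.28e14 bytes, i.e. 128,000 GB
```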

We compute the approximation over one color channel of the images, since the number of data points is the prime focus of this experiment. Determining a good kernel bandwidth from the maximum Euclidean distance among all points becomes intractable, so we found a value of σ that provided good approximations for all sampling methods on smaller trial subsets of Tiny Images. This bandwidth was then used for the full set.

V-E Discussion of Results

As shown in Table I, oASIS achieves lower approximation error than both uniform random sampling and Leverage scores on the full kernel matrix experiments when given a fixed number of columns. In addition, it is competitive with both k-means Nyström and Farahat's method in terms of accuracy while having substantially faster run times.

In addition, oASIS’s strength as a deterministic method enables oASIS to run for a set length of time, as opposed to a fixed . When running the experiments in Figure 20 for adaptive random schemes, it was not known a priori how many columns either -means or Leverage scores should sample. One must guess at the appropriate or to use given a certain amount of time. Our experiments found the appropriate parameters through exhaustive search, resetting the clock and increasing for each trial until the time limit was reached.

The primary advantage of oASIS over random selection schemes is its approximation accuracy. In the Abalone example in Figure 13, uniform random sampling does not improve in accuracy as more columns are sampled, while oASIS continues to find columns that add significantly to the accuracy of the approximation. In Figure 20, we observe that oASIS achieves exact matrix recovery on Two Moons at about 30 seconds, with around 1,000 columns sampled, an example of the efficiency of oASIS's column sampling. This efficiency is necessary for the subsequent dimensionality reduction critical to most kernel machine learning tasks. We further observe that the error of k-means flattens after 5 seconds. k-means Nyström first clusters the original data and computes centroids; it then remaps the data onto the eigenspace of the centroids and computes an approximate kernel matrix from this remapping. As such, there is a floor on the approximation set by the accuracy of the eigenspace calculation. This results in a best possible error for k-means that can be overcome by oASIS. We observe a similarly flat error for Leverage scores, since the SVD of the entire matrix K can be performed within the runtime limit; this does not occur when the datasets grow larger.

The primary advantage of oASIS over other adaptive schemes is its efficiency: oASIS saves both time and space. In the runtime results shown in Figure 20, we observe fast, accurate approximation of both Two Moons and Abalone with fewer columns sampled. For the BORG dataset, oASIS is second only to k-means, as expected: BORG consists of spherical clusters of equal variance and so exactly fits the data model that k-means assumes. When the data do not fit that model, as in Abalone or Two Moons, the efficiency and accuracy gains of oASIS are clear. We observe in Table I that while k-means Nyström runs faster for a single sample size m, it must be run multiple times for consistency. For example, while a single k-means Nyström approximation on Two Moons Gaussian with n = 2,000 and m = 450 takes 0.38 seconds, the 10 runs used for consistency take 3.8 seconds. Furthermore, a k-means approximation computed for m samples provides no remapping for any number of samples fewer than m, nor an index set of columns for CSS.

The advantages of oASIS-P over uniform random selection become clear in its application. First, oASIS-P can guarantee an invertible W. Uniform random sampling cannot guarantee that W will be invertible, and so we must calculate the pseudoinverse W^+ to compute K̃. Indeed, preliminary experiments frequently showed W to be rank-deficient. This is most likely due to the “birthday problem”: as more columns are selected, the chance that two columns point in the same direction grows surprisingly fast. Because oASIS computes W^{-1} iteratively, it does not need to invert an m × m matrix as a separate step. Note that 1% of a 1,000,000-point dataset still results in a 10,000 × 10,000 matrix. Each of DaVinCi's cores had 4GB of memory, and uniform random sampling became infeasible after approximately 4,500 columns, as computing W^+ became too memory-intensive for a single node, and distributing the computation of the pseudoinverse is not straightforward. Second, while the complexity of oASIS-P appears much higher than that of uniform random sampling, in practice oASIS-P is faster at sampling and forming columns. Column generation takes the same amount of time regardless of the selection scheme, and in large data regimes communication of data vectors between nodes becomes the computational bottleneck. Computing W^{-1} iteratively is faster than computing W^+, and so for very large data regimes oASIS-P becomes faster than uniform random sampling, as shown in Table III. For example, oASIS-P samples and forms columns over the Two Moons dataset in less than half the time it takes uniform random sampling to perform the same task.
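The iterative inverse mentioned above can be maintained with a standard bordering (block-inverse) update, sketched here under our own naming: each new column costs a few matrix-vector products instead of a fresh m × m inversion.

```python
import numpy as np

def grow_inverse(W_inv, b, d):
    """Given W_inv = W^{-1}, return the inverse of the bordered matrix
    [[W, b], [b^T, d]] via the Schur complement s = d - b^T W^{-1} b."""
    q = W_inv @ b
    s = d - b @ q                          # scalar Schur complement
    top = W_inv + np.outer(q, q) / s
    return np.block([[top,             -q[:, None] / s],
                     [-q[None, :] / s, np.array([[1.0 / s]])]])

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
W = A @ A.T + 6 * np.eye(6)                # SPD test matrix
W_inv5 = np.linalg.inv(W[:5, :5])
W_inv6 = grow_inverse(W_inv5, W[:5, 5], W[5, 5])
```

When W is positive definite the Schur complement s stays positive, which is exactly the invertibility guarantee that adaptive selection preserves and uniform random sampling does not.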

In addition to its low runtime complexity, oASIS can also benefit from sparse matrix structure. For such matrices, adaptive methods like the one in [12] require the computation of “residuals,” which may be dense even when K is extremely sparse. In contrast, oASIS requires only the storage of much smaller matrices. This benefit is highly relevant for extremely large datasets, where sparse approximations to similarity matrices are formed using k-nearest-neighbor algorithms that store only the most significant entries in each matrix column. Further analysis is warranted, however, as fast approximation methods have been developed specifically for sparse kernel matrices [38].

Vi Conclusion

In this paper, we have introduced oASIS, a novel adaptive sampling algorithm for Nyström-based low-rank approximation. oASIS combines the high accuracy of adaptive matrix approximation with the low runtime complexity and memory requirements of inexpensive random sampling, and achieves exact matrix recovery in an optimal number of columns. We have demonstrated the speed and efficacy of the method by accurately approximating large matrices using a single processor. In addition, we have parallelized oASIS so that it can run over datasets of arbitrary size, making oASIS the only adaptive greedy method available in large data regimes. Our numerical experiments show oASIS to be competitive with random schemes at this scale in both accuracy and speed: using a dataset of 1,000,000 examples, we achieve 1% of the approximation error of random sampling methods. oASIS has been applied to sparse subspace clustering [30], and future work will focus on applying oASIS to other machine learning tasks, such as manifold learning and spectral clustering.


  • [1] C. Williams and M. Seeger, “Using the Nyström method to speed up kernel machines,” in Proc. Adv. in Neural Processing Systems (NIPS).   Denver: MIT Press, 2001, pp. 682–688.
  • [2] R. R. Coifman and S. Lafon, “Diffusion maps,” Appl. Comput. Harmon. Anal., vol. 21, no. 1, 2006.
  • [3] J. B. Tenenbaum, V. de Silva, and J. C. Langford, “A global geometric framework for nonlinear dimensionality reduction,” Science, vol. 290, no. 5500, p. 2319, 2000.
  • [4] J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Trans. Pattern Anal. Machine Intell., vol. 22, no. 8, 2000.
  • [5] “IEEE standard for floating-point arithmetic,” IEEE Std 754-2008, pp. 1–70, Aug 2008.
  • [6] P. Drineas and M. W. Mahoney, “On the Nyström method for approximating a gram matrix for improved kernel-based learning.” J. Mach. Learn. Res., vol. 6, pp. 2153–2175, 2005.
  • [7] S. Wang and Z. Zhang, “Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling,” J. Mach. Learn. Res., vol. 14, pp. 2729–2769, 2013.
  • [8] C. Fowlkes, S. Belongie, F. Chung et al., “Spectral grouping using the Nyström method,” IEEE Trans. Pattern Anal. Machine Intell., vol. 26, no. 2, pp. 214–225, 2004.
  • [9] P. Paschou, M. W. Mahoney, A. Javed et al., “Intra- and interpopulation genotype reconstruction from tagging SNPs,” Genome Research, vol. 17, no. 1, pp. 96–107, Jan 2007.
  • [10] L. W. Mackey, A. Talwalkar, and M. I. Jordan, “Divide-and-conquer matrix factorization.” in Proc. Adv. in Neural Processing Systems (NIPS), J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett et al., Eds., 2011, pp. 1134–1142.
  • [11] A. Deshpande, L. Rademacher, S. Vempala et al., “Matrix approximation and projective clustering via volume sampling,” in Proc. of ACM-SIAM Symp. on Discrete Algorithms.   ACM, 2006, pp. 1117–1126.
  • [12] A. K. Farahat, A. Ghodsi, and M. S. Kamel, “A novel greedy algorithm for Nyström approximation,” in Proc. Int. Conf. Art. Intell. Stat. (AISTATS), G. J. Gordon, D. B. Dunson, and M. Dudík, Eds.   JMLR.org, 2011, pp. 269–277.
  • [13] A. Talwalkar, S. Kumar, M. Mohri et al., “Large-scale SVD and manifold learning,” J. Mach. Learn. Res., vol. 14, pp. 3129–3152, 2013.
  • [14] A. J. Smola and B. Schölkopf, “Sparse greedy matrix approximation for machine learning,” in Proc. Int. Conf. Machine Learning, P. Langley, Ed., San Francisco, 2000, pp. 911–918.
  • [15] A. Gittens and M. W. Mahoney, “Revisiting the Nyström method for improved large-scale machine learning.” in Proc. Int. Conf. Machine Learning.   JMLR.org, 2013, pp. 567–575.
  • [16] K. Zhang, I. W. Tsang, and J. T. Kwok, “Improved Nyström low-rank approximation and error analysis,” in Proc. Int. Conf. Machine Learning.   New York, NY, USA: ACM, 2008, pp. 1232–1239.
  • [17] E. Gabriel, G. E. Fagg, G. Bosilca et al., “Open MPI: Goals, concept, and design of a next generation MPI implementation,” in Proc., 11th European PVM/MPI Users’ Group Meeting, Budapest, Hungary, September 2004, pp. 97–104.
  • [18] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference and Prediction, 2nd ed.   Springer, 2009.
  • [19] P. Simard, Y. LeCun, and J. S. Denker, “Efficient pattern recognition using a new transformation distance,” in Proc. Adv. in Neural Processing Systems (NIPS).   San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1993, pp. 50–58.
  • [20] M. Filippone, F. Camastra, F. Masulli et al., “A survey of kernel and spectral methods for clustering,” Pattern Recogn., vol. 41, no. 1, pp. 176–190, Jan. 2008.
  • [21] U. H.-G. Kreßel, “Pairwise classification and support vector machines,” in Advances in Kernel Methods, B. Schölkopf, C. J. C. Burges, and A. J. Smola, Eds.   Cambridge, MA, USA: MIT Press, 1999, pp. 255–268.
  • [22] T. Hastie and R. Tibshirani, “Classification by pairwise coupling,” in Proc. Adv. in Neural Processing Systems (NIPS).   Cambridge, MA, USA: MIT Press, 1998, pp. 507–513.
  • [23] M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Computation, vol. 15, no. 6, pp. 1373–1396, 2003.
  • [24] R. Talmon, I. Cohen, S. Gannot et al., “Diffusion maps for signal processing: A deeper look at manifold-learning techniques based on kernels and graphs,” IEEE Signal Process. Mag., vol. 30, no. 4, pp. 75–86, 2013.
  • [25] S. Kumar, M. Mohri, and A. Talwalkar, “Sampling techniques for the Nyström method,” J. Mach. Learn. Res., vol. 13, pp. 981–1006, 2012.
  • [26] P. Drineas, M. Magdon-Ismail, M. W. Mahoney et al., “Fast approximation of matrix coherence and statistical leverage,” J. Mach. Learn. Res., vol. 13, pp. 3475–3506, 2012.
  • [27] S. Fine, K. Scheinberg, N. Cristianini et al., “Efficient SVM training using low-rank kernel representations,” J. Mach. Learn. Res., vol. 2, pp. 243–264, 2001.
  • [28] M. W. Mahoney and P. Drineas, “CUR matrix decompositions for improved data analysis,” Proc. Natl. Acad. Sci., vol. 106, no. 3, pp. 697–702, 2009.
  • [29] H. Cheng, Z. Gimbutas, P. G. Martinsson et al., “On the compression of low rank matrices.” SIAM J. Sci. Comput., vol. 26, no. 4, pp. 1389–1404, 2005.
  • [30] E. L. Dyer, T. A. Goldstein, R. Patel et al., “Self-Expressive Decompositions for Matrix Approximation and Clustering,” ArXiv e-prints, May 2015.
  • [31] G. Davis, S. Mallat, and Z. Zhang, “Adaptive time-frequency decompositions,” SPIE J. Opt. Engin., vol. 33, no. 7, pp. 2183–2191, 1994.
  • [32] R. Rubinstein, M. Zibulevsky, and M. Elad, “Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit,” CS Technion, Tech. Rep., 2008.
  • [33] K. Bache and M. Lichman, “UCI machine learning repository,” http://archive.ics.uci.edu/ml, 2013.
  • [34] Y. LeCun, L. D. Jackel, L. Bottou et al., “Comparison of learning algorithms for handwritten digit recognition,” in Int. Conf. on Artificial Neural Networks, F. Fogelman and P. Gallinari, Eds.   Paris: EC2 & Cie, 1995, pp. 53–60.
  • [35] B. Wilburn, N. Joshi, V. Vaish et al., “High performance imaging using large camera arrays,” ACM Trans. on Graphics, vol. 24, no. 3, pp. 765–776, Jul. 2005.
  • [36] G. Guennebaud, B. Jacob et al., “Eigen v3,” http://eigen.tuxfamily.org, 2010.
  • [37] A. Torralba, R. Fergus, and W. T. Freeman, “80 million tiny images: A large data set for nonparametric object and scene recognition,” IEEE Trans. Pattern Anal. Machine Intell., vol. 30, no. 11, 2008.
  • [38] N. Halko, P. G. Martinsson, and J. A. Tropp, “Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions,” SIAM Review, vol. 53, no. 2, pp. 217–288, 2011.