Kernel methods (Schölkopf and Smola, 2001; Hastie et al., 2009) constitute a principled framework that extends linear statistical techniques to nonparametric modeling and inference. Applications of kernel methods span the entire spectrum of statistical learning, including classification, regression, clustering, time-series analysis, sequence modeling (Song et al., 2013), dynamical systems (Boots et al., 2013), hypothesis testing (Harchaoui et al., 2013), and causal modeling (Zhang et al., 2011). Under a Bayesian treatment, kernel methods also admit a parallel view in Gaussian processes (GP) (Rasmussen and Williams, 2006; Stein, 1999) that find broad applications in statistics and computational sciences, including geostatistics (Chilès and Delfiner, 2012), design of experiments (Koehler and Owen, 1996), and uncertainty quantification (Smith, 2013).
The power and generality of kernel methods, however, are limited to moderately sized problems because of the high computational costs. The root cause of the bottleneck is the fact that kernel matrices generated by kernel functions are typically dense and unstructured. For $n$ training examples, storing the matrix costs $O(n^2)$ memory and performing matrix factorizations requires $O(n^3)$ arithmetic operations. One remedy is to resort to compactly supported kernels (e.g., splines (Monaghan and Lattanzio, 1985) and Wendland functions (Wendland, 2004)) that potentially lead to a sparse matrix. In practice, however, the support of the kernel may not be sufficiently narrow for sparse linear algebra computations to be competitive. Moreover, prior work (Anitescu et al., 2012)
revealed a subtle drawback of compactly supported kernels in the context of parameter estimation, where the likelihood surface is bumpy and the optimum is difficult to locate. For this reason, we focus on the dense setting in this paper and the goal is to exploit structures that can reduce the prohibitive cost of dense linear algebra computations.
Let $\mathcal{X}$ be a set and let $\kappa: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a symmetric and strictly positive-definite function. Denote by $X = \{x_1, \ldots, x_n\} \subset \mathcal{X}$ a set of $n$ points. We write $\kappa(X, X)$, or sometimes $K$ for short when the context is clear, to denote the kernel matrix of elements $\kappa(x_i, x_j)$. Because of the confusing terminology on functions and their counterparts on matrices, here, we follow the convention that a strictly positive-definite function $\kappa$ corresponds to a positive-definite matrix $K$, whereas a positive-definite function corresponds to a positive semi-definite matrix. For notational convenience, we write $\kappa(x, X)$ to denote the row vector of elements $\kappa(x, x_j)$, $j = 1, \ldots, n$, and similarly $\kappa(X, x)$ to denote the column vector. In the context of regression/classification, the set $\mathcal{X}$ is often the $d$-dimensional Euclidean space $\mathbb{R}^d$ or a domain $\Omega \subseteq \mathbb{R}^d$. Some of the methods discussed in this paper naturally generalize to a more abstract space. Associated to each point $x_i$ is a target value $y_i$. We write $y = [y_1, \ldots, y_n]^T$ for the vector of all target values.
The Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H}$ associated to a kernel $\kappa$ is the completion of the function space
$$\mathcal{F} = \mathrm{span}\{ \kappa(x, \cdot) \mid x \in \mathcal{X} \},$$
equipped with the inner product
$$\Big\langle \sum_i \alpha_i \kappa(x_i, \cdot),\; \sum_j \beta_j \kappa(x'_j, \cdot) \Big\rangle_{\mathcal{H}} = \sum_{i,j} \alpha_i \beta_j\, \kappa(x_i, x'_j).$$
Given training data $(x_i, y_i)$, $i = 1, \ldots, n$, a typical kernel method finds a function $f \in \mathcal{H}$ that minimizes the following risk functional
$$\min_{f \in \mathcal{H}} \; \sum_{i=1}^n L\big(y_i, f(x_i)\big) + \lambda \|f\|_{\mathcal{H}}^2, \qquad (1)$$
where $L$ is a loss function and $\lambda \|f\|_{\mathcal{H}}^2$ is a regularization. When $L$ is the squared loss $L(y, f(x)) = (y - f(x))^2$, the Representer Theorem (Schölkopf et al., 2001) implies that the minimizer is
$$f(x) = \kappa(x, X)\, (K + \lambda I)^{-1}\, y,$$
which is nothing but the well-known kernel ridge regression. Similarly, when $L$ is the hinge loss $L(y, f(x)) = \max(0,\, 1 - y f(x))$, the minimizer leads to support vector machines.
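For concreteness, the squared-loss case can be sketched numerically as follows. This is a minimal illustration, not the implementation used later in the paper; the function names, the Gaussian kernel, and the regularization constant are our own illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, Y, ell=1.0):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 ell^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def krr_fit(X, y, lam=1e-3, ell=1.0):
    # Minimizer of sum_i (y_i - f(x_i))^2 + lam ||f||_H^2 has the form
    # f(x) = kappa(x, X) c with c = (K + lam I)^{-1} y.
    K = gaussian_kernel(X, X, ell)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(Xnew, X, coef, ell=1.0):
    return gaussian_kernel(Xnew, X, ell) @ coef

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0])
coef = krr_fit(X, y)
yhat = krr_predict(X, X, coef)   # small training residual for smooth data
```

Note that the dense solve above costs $O(n^3)$, which is precisely the bottleneck this paper addresses.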
In the GP view, the kernel $\kappa$ serves as a covariance function. Assuming a zero-mean Gaussian prior with covariance $\kappa$, for any separate set of points $X' = \{x'_1, \ldots, x'_m\} \subset \mathcal{X}$ and the associated vector of target values $y'$, the joint distribution of $y$ and $y'$ is thus
$$\begin{bmatrix} y \\ y' \end{bmatrix} \sim \mathcal{N}\left( 0,\; \begin{bmatrix} \kappa(X, X) & \kappa(X, X') \\ \kappa(X', X) & \kappa(X', X') \end{bmatrix} \right). \qquad (2)$$
Because of the Gaussian assumption, the posterior is the conditional $y' \mid y \sim \mathcal{N}(\mu', \Sigma')$, where
$$\mu' = \kappa(X', X)\, \kappa(X, X)^{-1}\, y \qquad (3)$$
and
$$\Sigma' = \kappa(X', X') - \kappa(X', X)\, \kappa(X, X)^{-1}\, \kappa(X, X'). \qquad (4)$$
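The posterior mean (3) and covariance (4) translate directly into linear algebra. The sketch below is a bare-bones illustration under an assumed Gaussian covariance function; the jitter term is a standard numerical safeguard, not part of the model.

```python
import numpy as np

def gaussian_kernel(X, Y, ell=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def gp_posterior(Xtrain, y, Xtest, ell=0.5, jitter=1e-8):
    # mu'    = K(X', X) K(X, X)^{-1} y                      cf. (3)
    # Sigma' = K(X', X') - K(X', X) K(X, X)^{-1} K(X, X')   cf. (4)
    K = gaussian_kernel(Xtrain, Xtrain, ell) + jitter * np.eye(len(Xtrain))
    Kxs = gaussian_kernel(Xtest, Xtrain, ell)
    mu = Kxs @ np.linalg.solve(K, y)
    Sigma = gaussian_kernel(Xtest, Xtest, ell) - Kxs @ np.linalg.solve(K, Kxs.T)
    return mu, Sigma

Xtr = np.linspace(-1, 1, 30)[:, None]
ytr = np.sin(2 * Xtr[:, 0])
mu, Sigma = gp_posterior(Xtr, ytr, Xtr)
# At the training sites the posterior mean reproduces y and the
# posterior variance nearly vanishes.
```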
Equations (2)–(4) exemplify the demand for kernels that may simplify computations for a large $n$. We discuss a few popular approaches in the following. To motivate the discussion, we consider stationary kernels whose function value depends on only the difference of the two input arguments; that is, we can write by abuse of notation $\kappa(x, x') = \kappa(r)$, where $r = x - x'$; for example, the Gaussian kernel
$$\kappa(r) = \exp\left( -\frac{\|r\|^2}{2\ell^2} \right). \qquad (5)$$
The Fourier transform of $\kappa(r)$, coined the spectral density, in a sense characterizes the decay of eigenvalues of the finite-dimensional covariance matrix $K$ (Stein, 1999; Chen, 2013). For the Gaussian kernel, the decay is known to be the fastest among the Matérn class of kernels (Stein, 1999; Rasmussen and Williams, 2006; Chilès and Delfiner, 2012), of which the Gaussian is the smoothest special case. Additionally, the range parameter $\ell$ also affects the decay. When $\ell \to \infty$, $K$ tends to a rank-1 matrix; whereas when $\ell \to 0$, $K$ tends to the identity. The numerical rank of the matrix varies as $\ell$ moves between the two extremes. The decay of eigenvalues plays an important role in the effectiveness of the approximate kernels discussed below.
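The effect of the range parameter on the numerical rank is easy to observe empirically. The following sketch (our own illustration; the threshold and sample sizes are arbitrary) counts eigenvalues of a Gaussian kernel matrix above a relative tolerance for three range parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 1))

def numerical_rank(ell, tol=1e-8):
    # Number of eigenvalues above a relative threshold.
    d2 = (X - X.T) ** 2
    K = np.exp(-d2 / (2 * ell ** 2))
    w = np.linalg.eigvalsh(K)[::-1]
    return int((w > tol * w[0]).sum())

# Large ell: K approaches a rank-1 matrix; small ell: K approaches the identity.
ranks = [numerical_rank(ell) for ell in (100.0, 0.3, 1e-3)]
```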
1.2 Approximate Kernels
The first approach is low-rank kernels. Examples are the Nyström approximation (Williams and Seeger, 2000; Schölkopf and Smola, 2001; Drineas and Mahoney, 2005), random Fourier features (Rahimi and Recht, 2007), and variants (Yang et al., 2014). The Nyström approximation is based on a set of landmark points, $\bar X = \{\bar x_1, \ldots, \bar x_r\}$, randomly sampled from the training data $X$. Then, the kernel can be written as
$$\kappa_{\mathrm{nys}}(x, x') = \kappa(x, \bar X)\, \kappa(\bar X, \bar X)^{-1}\, \kappa(\bar X, x'). \qquad (6)$$
For the convenience of deriving approximation bounds, Drineas and Mahoney (2005) consider sampling with repetition, which makes $\bar X$ possibly a multiset and the matrix inverse in (6) necessarily replaced by a pseudo-inverse. Various approaches for choosing the landmark points were compared in Zhang and Kwok (2010). For random Fourier features, let $\hat\kappa(\omega)$ be the Fourier transform of $\kappa(r)$ and let $p(\omega)$ be $\hat\kappa(\omega)$ normalized such that it integrates to unity; that is, $p$ is the normalized spectral density of $\kappa$. Then, the kernel is
$$\kappa_{\mathrm{rff}}(x, x') = \frac{2}{r} \sum_{j=1}^{r} \cos(\omega_j^T x + b_j)\, \cos(\omega_j^T x' + b_j), \quad \omega_j \sim p, \;\; b_j \sim \mathrm{Uniform}[0, 2\pi]. \qquad (7)$$
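Both low-rank constructions are a few lines of linear algebra. The sketch below (an illustration under assumed parameter choices; the landmark count, feature count, and jitter are ours) forms the Nyström kernel matrix of (6) and a random-Fourier-feature matrix in the spirit of (7) for the Gaussian kernel, whose normalized spectral density is itself Gaussian.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(300, 2))
ell = 0.8

def kern(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

K = kern(X, X)

# Nystrom: kappa(x, Z) kappa(Z, Z)^{-1} kappa(Z, x'), with random landmarks Z.
Z = X[rng.choice(len(X), size=40, replace=False)]
KxZ, KZZ = kern(X, Z), kern(Z, Z)
K_nys = KxZ @ np.linalg.solve(KZZ + 1e-8 * np.eye(len(Z)), KxZ.T)

# Random Fourier features: omega sampled from the normalized spectral
# density of the Gaussian kernel (Gaussian with std 1/ell per dimension).
r = 2000
omega = rng.normal(scale=1.0 / ell, size=(r, 2))
b = rng.uniform(0, 2 * np.pi, size=r)
Phi = np.sqrt(2.0 / r) * np.cos(X @ omega.T + b)
K_rff = Phi @ Phi.T

err_nys = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
err_rff = np.linalg.norm(K - K_rff) / np.linalg.norm(K)
```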
The Nyström approximation admits a conditional interpretation in the context of GP. The covariance kernel (6) can be equivalently written as
$$\kappa_{\mathrm{nys}}(x, x') = \kappa(x, x') - c(x, x'), \quad \text{where} \quad c(x, x') = \kappa(x, x') - \kappa(x, \bar X)\, \kappa(\bar X, \bar X)^{-1}\, \kappa(\bar X, x')$$
is nothing but the covariance of $y(x)$ and $y(x')$ conditioned on $y(\bar X)$. In other words, the covariance kernel of the Nyström approximation comes from a deduction of the original covariance by a conditional covariance. The conditional covariance for any $x$ (or symmetrically, $x'$) within $\bar X$ vanishes and hence the approximation is lossless. Intuitively speaking, the closer the sites $\bar X$ are to $x$ and $x'$, the smaller the loss is. This explains frequent observations that when the size of the set $\bar X$ is small, using the centers of a k-means clustering as the landmark points often improves the approximation (Zhang et al., 2008; Yang et al., 2012). A caveat is that the time cost of performing the k-means clustering is often much higher than that of the Nyström calculation itself. Hence, the improvement gained from clustering may not be as significant as that from increasing the size of the conditioned set $\bar X$. When $\bar X$ is large, the improvement brought about by clustering is less significant (see, e.g., Rasmussen and Williams (2006, Section 8.3.7)).
A limitation of the low-rank kernels is that the size of the conditioned set, or equivalently the rank, needs to correlate with the decay of the spectrum of $K$ in order to yield a good approximation. For a slow decay, it is not rare to see in practice that the rank grows to thousands (Yen et al., 2014) or even several hundred thousand (Huang et al., 2014; Avron and Sindhwani, 2016) in order to yield results comparable with those of other methods, for a data set of size on the order of millions. See also the experimental results in Section 5.
The second approach is a cross-domain independent kernel. Simply speaking, the kernel matrix is approximated by keeping only the diagonal blocks of the matrix. In a GP language, we partition the domain $\Omega$ into sub-domains $\Omega_1, \ldots, \Omega_p$, and make an independence assumption across sub-domains. Then, the covariance between $x$ and $x'$ vanishes when the two sites come from different sub-domains. That is,
$$\kappa_{\mathrm{ind}}(x, x') = \begin{cases} \kappa(x, x'), & \text{if } x, x' \in \Omega_j \text{ for some } j, \\ 0, & \text{otherwise}. \end{cases} \qquad (8)$$
Whereas such a kernel appears ad hoc and the associated theory is possibly limited, the scenarios in which it exhibits superiority over a low-rank kernel of comparable size are not rare (see the likelihood comparison in Stein (2014) and also the experimental results in Section 5). An intuitive explanation exists in the context of classification. The cross-domain independent kernel works well when geographically nearby points possess a majority of the signal for classifying points within the domain. This happens more often when the kernel has a reasonably centralized bandwidth (outside of which the kernel value becomes marginal). In such a case, nearby points are the most influential.
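In matrix terms, the construction (8) simply masks out the off-diagonal blocks. A minimal sketch, assuming a two-way partition of the domain at the origin (the partition and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 1))
labels = (X[:, 0] > 0).astype(int)   # two sub-domains: left and right halves

def kern(A, B, ell=0.4):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

K = kern(X, X)
# Cross-domain independent kernel: zero covariance across sub-domains.
same = labels[:, None] == labels[None, :]
K_ind = np.where(same, K, 0.0)
```

Because each retained diagonal block is itself a kernel matrix, the masked matrix remains positive semi-definite.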
The third approach is a tapered kernel
$$\kappa_{\mathrm{tap}}(x, x') = \kappa(x, x')\, \kappa_{\mathrm{taper}}(x, x'),$$
where the taper $\kappa_{\mathrm{taper}}$ is a compactly supported positive-definite kernel. The tapered kernel matrix is an elementwise product of two positive-definite matrices; hence, it is positive-definite, too (Horn and Johnson, 1994, Theorem 5.2.1). The primary motivation of this kernel is to introduce sparsity to the matrix. The supporting theory is drawn on the confidence interval (cf. (4)) rather than on the prediction (3). It is cast in the setting of fixed-domain asymptotics, which is similar to a usual practice in machine learning: a prescaling of each attribute to within a finite interval. The theory hints that if the spectral density of $\kappa_{\mathrm{taper}}$ has a lighter tail (i.e., the spectrum of the corresponding kernel matrix decays faster) than that of $\kappa$, then the ratio between the prediction variance by using the tapered kernel and that by using the original kernel tends to a finite limit, as the number of training data increases to infinity in the domain. The theory thus holds a guarantee on the prediction confidence if we choose $\kappa_{\mathrm{taper}}$ judiciously. Tapering is generally applicable to heavy-tailed kernels (e.g., Matérn kernels with low smoothness) rather than light-tailed kernels such as the Gaussian. Nevertheless, a drawback of this approach is similar to the one we stated earlier for using a compactly supported kernel alone: the range of the support must be sufficiently small for sparse linear algebra to be efficient.
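A small numerical sketch of tapering (our own illustration; the kernel choices and the support radius are assumptions): a heavy-tailed exponential kernel, i.e., a Matérn kernel with smoothness 1/2, multiplied elementwise by a compactly supported Wendland taper, which is positive-definite in dimensions up to three.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, size=300))
d = np.abs(x[:, None] - x[None, :])

# Heavy-tailed base kernel: Matern with nu = 1/2 (exponential kernel).
K = np.exp(-d / 0.2)

# Wendland taper (1 - r)^4 (4 r + 1) with support radius gamma: compactly
# supported and positive-definite, so the elementwise (Hadamard) product
# with K remains positive-definite.
gamma = 0.15
rr = d / gamma
T = np.maximum(1 - rr, 0.0) ** 4 * (4 * rr + 1)
K_tap = K * T

sparsity = np.mean(K_tap == 0)   # most entries vanish exactly
```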
1.3 Proposed Kernel
In this paper, we propose a novel approach for constructing approximate kernels motivated by low-rank kernels and cross-domain independent kernels. The construction aims at deriving a kernel that (a) maintains the (strict) positive-definiteness, (b) leverages the advantages of the low-rank and independent approaches, (c) facilitates the evaluation of the kernel matrix and the out-of-sample extension, and (d) admits fast algorithms for a variety of matrix operations. The premise of the idea is a hierarchical partitioning of the data domain and a recursive approximation across the hierarchy. Space partitioning is a frequently encountered idea for kernel matrix approximations (Si et al., 2014; March et al., 2014; Yu et al., 2017; Si et al., 2017), but maintaining positive definiteness is quite challenging. Moreover, when the approximation is performed in a hierarchical fashion and is focused on only the matrix, it is not always easy to generalize out of sample. A particularly intriguing property of the approach proposed in this article is that both positive definiteness and out-of-sample extensions are guaranteed, because the construction acts on the kernel function itself.
2 Hierarchically Compositional Kernel
The low-rank kernel (in particular, the Nyström approximation (6)) and the cross-domain independent kernel (8) are complementary to each other in the following sense: the former acts on the global space, where the covariance between every pair of points $x$ and $x'$ is deducted by a conditional covariance based on the conditioned set $\bar X$ chosen globally; whereas the latter preserves all the local information but completely ignores the interrelationship outside the local domain. We argue that an organic composition of the two will carry both advantages and alleviate the shortcomings. Further, a hierarchical composition may reduce the information loss in nearby local domains.
2.1 Composition of Low-Rank Kernel with Cross-Domain Independent Kernel
Let the domain $\Omega$ be partitioned into disjoint sub-domains $\Omega_1, \ldots, \Omega_p$. Let $\bar X$ be a set of landmark points in $\Omega$. For generality, $\bar X$ need not be a subset of the training data $X$. Consider the function
$$\kappa_1(x, x') = \begin{cases} \kappa(x, x'), & \text{if } x, x' \in \Omega_j \text{ for some } j, \\ \kappa(x, \bar X)\, \kappa(\bar X, \bar X)^{-1}\, \kappa(\bar X, x'), & \text{otherwise}. \end{cases}$$
Clearly, $\kappa_1$ leverages both (6) and (8). When two points $x$ and $x'$ are located in the same domain, they maintain the full covariance (8); whereas when they are located in separate domains, their covariance comes from the low-rank kernel (6). Such a composition complements the missing information across domains in (8) and also complements the information loss in local domains caused by the Nyström approximation. The following result is straightforward in light of the fact that if $x \in \bar X$, then $\kappa(\bar X, \bar X)^{-1}\, \kappa(\bar X, x)$ is a column of the identity matrix where the only nonzero element (i.e., 1) is located with respect to the location of $x$ inside $\bar X$.

We have $\kappa_1(x, x') = \kappa(x, x')$ if $x, x' \in \Omega_j$ for some $j$, or if either of $x, x'$ belongs to $\bar X$.
An alternative view of the kernel $\kappa_1$ is that it is an additive combination of a globally low-rank approximation and local Schur complements within each sub-domain. Hence, the kernel is (strictly) positive-definite. See Lemma 2 and Theorem 3 in the following.
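Both views of the composed kernel are easy to verify numerically. The sketch below (our own illustration with assumed parameters) builds the composed kernel matrix in the two equivalent ways, confirms they agree, and checks positive semi-definiteness as well as the improvement over the plain Nyström approximation.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(200, 1))
part = (X[:, 0] > 0).astype(int)          # two sub-domains
Z = rng.uniform(-1, 1, size=(15, 1))      # landmarks; need not lie in X

def kern(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

K = kern(X, X)
KxZ = kern(X, Z)
KZZ = kern(Z, Z) + 1e-8 * np.eye(len(Z))
K_nys = KxZ @ np.linalg.solve(KZZ, KxZ.T)

# View 1: full covariance within a sub-domain, Nystrom across sub-domains.
same = part[:, None] == part[None, :]
K1 = np.where(same, K, K_nys)

# View 2: global Nystrom plus block-diagonal Schur complements.
S = K - K_nys
K1_alt = K_nys + np.where(same, S, 0.0)

err1 = np.linalg.norm(K - K1)
err_nys = np.linalg.norm(K - K_nys)
```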
The Schur-complement function
$$s(x, x') = \kappa(x, x') - \kappa(x, \bar X)\, \kappa(\bar X, \bar X)^{-1}\, \kappa(\bar X, x')$$
is positive-definite, if $\kappa$ is strictly positive-definite, or if $\kappa$ is positive-definite and $\kappa(\bar X, \bar X)$ is invertible.
For any set $X$, let $S = s(X, X)$. It amounts to showing that the corresponding kernel matrix $s(X \cup \bar X, X \cup \bar X)$ is positive semi-definite; then, $S$ as a principal submatrix is also positive semi-definite.

Denote by $Y = X \setminus \bar X$, which could possibly be empty, and let the points in $\bar X$ be ordered before those in $Y$. Then,
$$s(\bar X \cup Y, \bar X \cup Y) = \begin{bmatrix} 0 & 0 \\ 0 & \kappa(Y, Y) - \kappa(Y, \bar X)\, \kappa(\bar X, \bar X)^{-1}\, \kappa(\bar X, Y) \end{bmatrix}.$$
By the law of inertia, the matrices
$$\begin{bmatrix} \kappa(\bar X, \bar X) & \kappa(\bar X, Y) \\ \kappa(Y, \bar X) & \kappa(Y, Y) \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} \kappa(\bar X, \bar X) & 0 \\ 0 & \kappa(Y, Y) - \kappa(Y, \bar X)\, \kappa(\bar X, \bar X)^{-1}\, \kappa(\bar X, Y) \end{bmatrix}$$
have the same number of positive, zero, and negative eigenvalues, respectively. If $\kappa$ is strictly positive-definite, then the eigenvalues of both matrices are all positive. If $\kappa$ is positive-definite, then the eigenvalues of both matrices are all nonnegative. In both cases, the Schur-complement matrix is positive semi-definite and thus so is $S$. ∎
The function $\kappa_1$ is positive-definite if $\kappa$ is positive-definite and $\kappa(\bar X, \bar X)$ is invertible. Moreover, $\kappa_1$ is strictly positive-definite if $\kappa$ is so.
Write $\kappa_1 = g + h$, where $g(x, x') = \kappa(x, \bar X)\, \kappa(\bar X, \bar X)^{-1}\, \kappa(\bar X, x')$ and
$$h(x, x') = \begin{cases} s(x, x'), & \text{if } x, x' \in \Omega_j \text{ for some } j, \\ 0, & \text{otherwise}. \end{cases}$$
Clearly, $g$ is positive-definite; by Lemma 2, $h$ is so, too. Thus, $\kappa_1$ is positive-definite.
We next show the strict definiteness when $\kappa$ is strictly positive-definite. That is, for any set of points $\{x_i\}$ and any set of coefficients $\{c_i\}$ that are not all zero, the bilinear form $\sum_{i,j} c_i c_j \kappa_1(x_i, x_j)$ cannot be zero. Note that $h(x_i, x_j) = 0$ whenever $x_i \in \bar X$ or $x_j \in \bar X$. Moreover, we have seen in the proof of Lemma 2 that the Schur-complement matrix is positive-definite when $\kappa$ is strictly positive-definite. Therefore, we have $\sum_{i,j} c_i c_j h(x_i, x_j) = 0$ only when $c_i = 0$ for all $i$ satisfying $x_i \notin \bar X$. In such a case,
$$\sum_{i,j} c_i c_j \kappa_1(x_i, x_j) = \sum_{x_i, x_j \in \bar X} c_i c_j\, \kappa(x_i, x_j).$$
Because of the strict positive-definiteness of $\kappa$, the above summation cannot be zero if any of the involved $c_i$ (that is, those satisfying $x_i \in \bar X$) is nonzero. Then, the bilinear form vanishes only when all $c_i$ are zero. ∎
Since the composition replaces the Nyström approximation in local domains by the full covariance, it bears no surprise that $\kappa_1$ improves over $\kappa_{\mathrm{nys}}$ in terms of matrix approximation.
Given a set of landmark points $\bar X$ and for any set $X$,
$$\left\| \kappa(X, X) - \kappa_1(X, X) \right\| \le \left\| \kappa(X, X) - \kappa_{\mathrm{nys}}(X, X) \right\|, \qquad (9)$$
where $\|\cdot\|$ is the 2-norm or the Frobenius norm.
Based on the split $\kappa_1 = g + h$ in the proof of Theorem 3, one easily sees that
$$\kappa_1(X, X) = \kappa_{\mathrm{nys}}(X, X) + \mathrm{block\text{-}diag}\big( \kappa(X, X) - \kappa_{\mathrm{nys}}(X, X) \big),$$
where block-diag means keeping only the diagonal blocks of a matrix. Denote by $E = \kappa(X, X) - \kappa_{\mathrm{nys}}(X, X)$ and $D = \mathrm{block\text{-}diag}(E)$. In what follows we show (9); that is, $\|E - D\| \le \|E\|$.

Because $E$ is positive semi-definite, if it is nonzero, its diagonal cannot be zero. Then, eliminating the block-diagonal part reduces the Frobenius norm. Thus, (9) holds for the Frobenius norm. To see that (9) also holds for the 2-norm, let $Y = X \cap \bar X$ and let the points in $X \setminus Y$ be ordered before those in $Y$. Then,
$$E = \begin{bmatrix} E' & 0 \\ 0 & 0 \end{bmatrix},$$
where the rows and columns of $E'$ correspond to the points in $X \setminus Y$. Because the zero rows and columns do not contribute to the 2-norm, and because the top-left block $E'$ is positive-definite, it suffices to prove (9) for any positive-definite matrix $E$.

Note the following two straightforward inequalities:
$$\lambda_{\max}(E - D) \le \lambda_{\max}(E) - \lambda_{\min}(D) \quad \text{and} \quad \lambda_{\min}(E - D) \ge \lambda_{\min}(E) - \lambda_{\max}(D).$$
Because $D$ consists of the diagonal blocks $D_j$ of $E$, the interlacing theorem of eigenvalues states that for each diagonal block $D_j$, we have
$$\lambda_{\min}(E) \le \lambda_{\min}(D_j) \le \lambda_{\max}(D_j) \le \lambda_{\max}(E).$$
Then, taking the max/min eigenvalues of all blocks, we obtain
$$\lambda_{\min}(D) \ge \lambda_{\min}(E) \ge 0 \quad \text{and} \quad \lambda_{\max}(D) \le \lambda_{\max}(E),$$
which immediately implies that $\|E - D\|_2 \le \lambda_{\max}(E) = \|E\|_2$. ∎
2.2 Hierarchical Composition
While $\kappa_1$ maintains the full information inside each domain $\Omega_j$, the information loss across domains caused by the low-rank approximation may still be dramatic. Consider the scenario of a large number of disjoint domains $\Omega_1, \ldots, \Omega_p$; such a scenario is typical when the purpose is to reduce the computational cost. If each domain is adjacent to only a few neighboring domains, it is possible to reduce the information loss in nearby domains.
The idea is to form a hierarchy. Let us first take a two-level hierarchy for example. A few of the neighboring domains form a super-domain $\dot\Omega_l$. These super-domains are formed such that they are disjoint and they collectively partition the whole domain; i.e., $\Omega = \cup_l \dot\Omega_l$. Under such a hierarchical formation, instead of using landmark points in $\Omega$ to define the covariance across the bottom-level domains $\Omega_j$, we may use landmark points chosen from the super-domain $\dot\Omega_l$ to define the covariance within $\dot\Omega_l$. The intuition is that the conditional covariance for $x, x' \in \dot\Omega_l$ conditioned on landmark points inside $\dot\Omega_l$ tends to be smaller than that conditioned on landmark points spread over the whole domain, because $x$ and $x'$ are geographically closer to the former landmark points than to the latter. Then, the information loss is reduced for points inside the same super-domain $\dot\Omega_l$.
To formalize this idea, we consider an arbitrary hierarchy, which is represented by a rooted tree $T$. See Figure 1 for an example. The root node is associated with the whole domain $\Omega$. Each nonleaf node $i$ possesses a set of children; correspondingly, the associated domain $\Omega_i$ is partitioned into disjoint sub-domains, one per child $c$, satisfying $\Omega_i = \cup_c \Omega_c$. The partitioning tree $T$ is almost the most general rooted tree, except that no node in $T$ has exactly one child.
Each nonleaf node $i$ is associated with a set of landmark points $\bar X_i$ located within the domain $\Omega_i$. We now recursively define the kernel on domains across levels. Continuing the example of Figure 1, node 4 has two children, 8 and 9. Since these two children are leaf nodes, the covariance within $\Omega_8$ (or $\Omega_9$) comes from the original kernel $\kappa$, whereas the covariance across $\Omega_8$ and $\Omega_9$ comes from the Nyström approximation by using the landmark points $\bar X_4$. That is, the kernel is equal to $\kappa(x, x')$ if $x$ and $x'$ are both in $\Omega_8$ (or in $\Omega_9$), and equal to $\kappa(x, \bar X_4)\, \kappa(\bar X_4, \bar X_4)^{-1}\, \kappa(\bar X_4, x')$ if they are in $\Omega_8$ and $\Omega_9$ separately. Such a covariance bears less information loss caused by the conditioned set, compared with the use of landmark points located within the whole domain $\Omega$.
Next, consider the covariance between the child domains of node 4 and those of its sibling node 5 (say, $\Omega_8$ and $\Omega_{10}$, respectively). At first glance, we could have used $\bar X_2$ to define the kernel, because $\bar X_2$ consists of landmark points located in the domain $\Omega_2$ that covers both $\Omega_8$ and $\Omega_{10}$. However, such a definition cannot guarantee the positive-definiteness of the overall kernel. Instead, we approximate $\kappa(x, \bar X_2)$ by using the Nyström approximation based on the landmark points $\bar X_4$. Then, the covariance for $x \in \Omega_8$ and $x' \in \Omega_{10}$ is defined as
$$\kappa(x, \bar X_4)\, \kappa(\bar X_4, \bar X_4)^{-1}\, \kappa(\bar X_4, \bar X_2)\; \kappa(\bar X_2, \bar X_2)^{-1}\; \kappa(\bar X_2, \bar X_5)\, \kappa(\bar X_5, \bar X_5)^{-1}\, \kappa(\bar X_5, x').$$
Formally, for a leaf node $i$ and $x, x' \in \Omega_i$, define $\kappa_i(x, x') = \kappa(x, x')$. For a nonleaf node $i$ and $x \in \Omega_c$, $x' \in \Omega_{c'}$, where $c$ and $c'$ are children of $i$, define
$$\kappa_i(x, x') = \begin{cases} \kappa_c(x, x'), & \text{if } c = c', \\ \phi_c(x)^T\, \kappa(\bar X_i, \bar X_i)^{-1}\, \phi_{c'}(x'), & \text{if } c \ne c', \end{cases}$$
where, if $c$ is a child of $i$ and $x \in \Omega_c$, then
$$\phi_c(x) = \begin{cases} \kappa(\bar X_i, x), & \text{if } c \text{ is a leaf node}, \\ \kappa(\bar X_i, \bar X_c)\, \kappa(\bar X_c, \bar X_c)^{-1}\, \phi_{c''}(x), & \text{otherwise, with } c'' \text{ being the child of } c \text{ such that } x \in \Omega_{c''} \text{ and } \phi_{c''} \text{ defined with respect to node } c. \end{cases}$$
The $\kappa_i$ at the root level gives the hierarchically compositional kernel of this paper:
$$\kappa_h(x, x') = \kappa_{\mathrm{root}}(x, x').$$
Clearly, the kernel $\kappa_1$ in Section 2.1 is a special case of $\kappa_h$ when the partitioning tree consists of only the root and the leaf nodes (which are children of the root).
Unfolding the recursive definition for two points $x$ and $x'$ that fall in different leaf domains, we have
$$\kappa_h(x, x') = \kappa(x, \bar X_{u_1})\, \kappa(\bar X_{u_1}, \bar X_{u_1})^{-1}\, \kappa(\bar X_{u_1}, \bar X_{u_2}) \cdots \kappa(\bar X_a, \bar X_a)^{-1} \cdots \kappa(\bar X_{w_2}, \bar X_{w_1})\, \kappa(\bar X_{w_1}, \bar X_{w_1})^{-1}\, \kappa(\bar X_{w_1}, x'), \qquad (16)$$
where $a$ is the least common ancestor of the leaf nodes containing $x$ and $x'$, and $u_1, u_2, \ldots$ and $w_1, w_2, \ldots$ are the nonleaf nodes (ordered from the bottom up) on the paths connecting $a$ and the two leaf nodes, respectively. Therefore, we have the following result.
Based on the notation in the preceding paragraph, for and , we have
whenever , , and either of belongs to .
On (16), recursively apply the fact that if $x \in \bar X_i$, then $\kappa(\bar X_i, \bar X_i)^{-1}\, \kappa(\bar X_i, x)$ is a column of the identity matrix where the only nonzero element (i.e., 1) is located with respect to the location of $x$ inside $\bar X_i$. ∎
The function $\kappa_h$ is positive-definite if $\kappa$ is positive-definite and $\kappa(\bar X_i, \bar X_i)$ is invertible for all sets of landmark points $\bar X_i$ associated with the nonleaf nodes $i$. Moreover, $\kappa_h$ is strictly positive-definite if $\kappa$ is so.
3 Matrix View
The kernel matrix $\kappa_h(X, X)$ for a set $X$ of training points exhibits a hierarchical block structure. Figure 2 pictorially shows such a structure for the example in Figure 1. To avoid degenerate empty blocks, we assume that $X \cap \Omega_i \ne \emptyset$ for all leaf nodes $i$ in the partitioning tree $T$.
Formally, for any node $i$, let $X_i = X \cap \Omega_i$; that is, $X_i$ consists of the training points that fall within the domain $\Omega_i$. Then, define a matrix $A$ with the following structure:

For every node $i$, $A_{ii}$ is a diagonal block whose rows and columns correspond to $X_i$; for every pair of sibling nodes $i$ and $j$, $A_{ij}$ is an off-diagonal block whose rows correspond to $X_i$ and columns to $X_j$.

For every leaf node $i$, $A_{ii} = \kappa(X_i, X_i)$.

For every pair of sibling nodes $i$ and $j$, $A_{ij} = U_i \Sigma_p U_j^T$, where $p$ is the parent of $i$ and $j$, with the factors $U$ and $\Sigma$ defined next.

For every nonleaf node $p$, $\Sigma_p = \kappa(\bar X_p, \bar X_p)$.

For every leaf node $i$, $U_i = \kappa(X_i, \bar X_p)\, \kappa(\bar X_p, \bar X_p)^{-1}$, where $p$ is the parent of $i$.

For every pair of child node $i$ and parent node $p$ not being the root, the block of $U_p$ corresponding to node $i$ is $U_i W_p$, where $W_p = \kappa(\bar X_p, \bar X_q)\, \kappa(\bar X_q, \bar X_q)^{-1}$ and $q$ is the parent of $p$. That is, in the matrix form,
$$U_p = \begin{bmatrix} U_{c_1} & & \\ & \ddots & \\ & & U_{c_m} \end{bmatrix} \begin{bmatrix} W_p \\ \vdots \\ W_p \end{bmatrix},$$
where $c_1, \ldots, c_m$ are the children of $p$.
Such a hierarchical block structure is a special case of the recursively low-rank compressed matrix studied in Chen (2014b). In this matrix, off-diagonal blocks are recursively compressed into low rank through change of basis, while the main-diagonal blocks at the leaf level remain intact. Its connections and distinctions with related matrices (e.g., FMM matrices (Sun and Pitsianis, 2001) and hierarchical matrices (Hackbusch, 1999)) were discussed in detail in Chen (2014b). The signature of a recursively low-rank compressed matrix is that many matrix operations can be performed with a cost, loosely speaking, linear in $n$. The matrix structure in this paper, which results from the kernel $\kappa_h$, specializes a general recursively low-rank compressed matrix in the following aspects: (a) the matrix is symmetric; (b) the middle factor $\Sigma_p$ in each $A_{ij} = U_i \Sigma_p U_j^T$ is the same for all child pairs of $p$; and (c) the change-of-basis factor $W_p$ is the same for all children of $p$. Hence, the storage and time costs of matrix operations are reduced by a constant factor compared with those of the algorithms in Chen (2014b). In what follows, we discuss the algorithmic aspects of the matrix operations needed to carry out kernel method computations, but defer the complexity analysis to Section 4. Discussions of additional matrix operations useful in a general machine learning context are made toward the end of the paper.
3.1 Matrix-Vector Multiplication
First, we consider computing the matrix-vector product $u = Av$. Let the blocks of a vector be labeled in the same manner as those of the matrix $A$. That is, for any node $i$, $v_i$ denotes the part of $v$ that corresponds to the point set $X_i$. Then, the vector $u$ is clearly an accumulation of the smaller matrix-vector products $A_{ij} v_j$ in the appropriate blocks, for all sibling pairs $(i, j)$ and for all leaf nodes (cf. Figure 2). When $i$ is a leaf node, the computation of $A_{ii} v_i$ is straightforward. On the other hand, when $i$ and $j$ constitute a pair of sibling nodes with parent $p$, we consider a descendant leaf node $d$ of $i$. The block of $A_{ij}$ corresponding to node $d$ admits an expanded expression
$$U_d \Big( W_{a_1} \big( \cdots \big( W_{a_s} \big( \Sigma_p\, U_j^T \big) \big) \cdots \big) \Big), \qquad (17)$$
where $a_1, \ldots, a_s$ (with $a_s = i$) is the path connecting the parent of $d$ and $i$. This expression assumes that all the leaf descendants of $i$ are on the same level so that the expression is not overly complex; but the subsequent reasoning applies to the general case. The terms inside the nested parentheses of (17) motivate the following recursive definition:
$$c_i = \begin{cases} U_i^T v_i, & \text{if } i \text{ is a leaf node}, \\ W_i^T \sum_{j \,\in\, \mathrm{child}(i)} c_j, & \text{if } i \text{ is a nonleaf, nonroot node}. \end{cases}$$
Clearly, all the $c_i$'s can be computed within one pass of a post-order tree traversal. Upon the completion of the traversal, we rewrite (17), applied to the vector block $v_j$, as $U_d W_{a_1} \cdots W_{a_s}\, \Sigma_p\, c_j$. Then we note that for any leaf node $d$, $u_d$ is a sum of these expressions for all sibling pairs $(i, j)$ with $i$ along the path connecting $d$ and the root, and additionally, of $A_{dd} v_d$. In other words, we have
$$u_d = A_{dd} v_d + U_d \sum_{i} \big( W_{a_1} \cdots W_{a_{s}} \big)\, \Sigma_{p(i)} \sum_{j \,\in\, \mathrm{sibling}(i)} c_j,$$
where the outer summation runs over the nonroot nodes $i$ on the path connecting $d$ and the root, $a_1, \ldots, a_s$ is the path connecting the parent of $d$ and $i$ as in (17), and $p(i)$ denotes the parent of $i$. Therefore, we recursively define another quantity
$$d_i = \begin{cases} \Sigma_{p(i)} \sum_{j \,\in\, \mathrm{sibling}(i)} c_j, & \text{if } p(i) \text{ is the root}, \\ \Sigma_{p(i)} \sum_{j \,\in\, \mathrm{sibling}(i)} c_j + W_{p(i)}\, d_{p(i)}, & \text{otherwise}, \end{cases}$$
for all nonroot nodes $i$ with parent $p(i)$. Clearly, all the $d_i$'s can also be computed within one pass of a pre-order tree traversal. Upon the completion of this second traversal, we have $u_d = A_{dd} v_d + U_d\, d_d$ for every leaf node $d$, which concludes the computation of the whole vector $u$.
As such, computing the matrix-vector product consists of one post-order tree traversal, followed by a pre-order one. We refer the reader to Chen (2014b) for a full account of the computational steps. For completeness, we summarize the pseudocode in Algorithm 1 as a reference for computer implementation.
3.2 Matrix Inversion
Next, we consider computing $A^{-1}$, for which we use the tilded notation $\tilde A = A^{-1}$. One can show (Chen, 2014b) that $\tilde A$ has exactly the same hierarchical block structure as does $A$. In other words, the structure of $\tilde A$ can be described by reusing verbatim the description at the beginning of Section 3, with the factors $A_{ii}$, $U_i$, $W_p$, $\Sigma_p$ replaced by the tilded versions $\tilde A_{ii}$, $\tilde U_i$, $\tilde W_p$, $\tilde \Sigma_p$, respectively. The proof is both inductive and constructive, providing explicit formulas to compute $\tilde A_{ii}$, $\tilde U_i$, $\tilde W_p$, $\tilde \Sigma_p$ level by level. The essential idea is that if the tilded factors have been computed for all children of a node $i$, and if $p$ is the parent of $i$, then
Hence, the inversion of can be done by applying the Sherman–Morrison–Woodbury formula, which results in a form , where is determined based on the change of basis from to and the middle factor is also determined. In addition, the factors <