Kernel Methods and their derivatives: Concept and perspectives for the Earth system sciences

07/29/2020, by J. Emmanuel Johnson, et al.

Abstract

Kernel methods are powerful machine learning techniques which implement generic non-linear functions to solve complex tasks in a simple way. They have a solid mathematical background and exhibit excellent performance in practice. However, kernel machines are still considered black-box models, as the feature mapping is not directly accessible and is difficult to interpret.

The aim of this work is to show that it is indeed possible to interpret the functions learned by various kernel methods, despite their complexity. Specifically, we show that derivatives of these functions have a simple mathematical formulation, are easy to compute, and can be applied to many different problems. We note that the model function derivatives in kernel machines are proportional to the kernel function derivatives. We provide the explicit analytic form of the first and second derivatives of the most common kernel functions with regard to the inputs, as well as generic formulas to compute higher order derivatives. We use them to analyze the most used supervised and unsupervised kernel learning methods: Gaussian Processes for regression, Support Vector Machines for classification, Kernel Entropy Component Analysis for density estimation, and the Hilbert-Schmidt Independence Criterion for estimating the dependency between random variables. In all cases we express the derivative of the learned function as a linear combination of the kernel function derivatives. Moreover, we provide intuitive explanations through illustrative toy examples and show how to improve the interpretation of real applications in the context of spatiotemporal Earth system data cubes. This work reflects on the observation that function derivatives may play a crucial role in the analysis and understanding of kernel methods.

Keywords:

Kernel methods, Derivatives, Multivariate Analysis, Gaussian Process Regression, Kernel Ridge Regression, Support Vector Machines, Sensitivity Maps, Density Estimation, Hilbert-Schmidt Independence Criterion

1 Introduction

Kernel methods (KMs) constitute a standard set of tools in machine learning and pattern analysis [Scholkopf02, ShaweTaylor04]. They are based on a mathematical framework to cope with nonlinear problems while still relying on well-known concepts of linear algebra. KMs are one of the preferred tools in applied sciences, from signal and image processing [Rojo17dspkm], to computer vision [CGV-027], chemometrics and geosciences [CampsValls09wiley]. Since their introduction in the 1990s through the popular support vector machines (SVMs), kernel methods have evolved into a large family of techniques that cope with many problems in addition to classification. Kernel machines have also excelled in regression, interpolation and function approximation problems [Rojo17dspkm], where Gaussian Processes (GPs) [Rasmussen06] and support vector regression [smola04] have provided good results in many applications. Furthermore, many kernel methods have been engineered to deal with other relevant learning problems; for example, density estimation via kernel decompositions using entropy components [Jenssen2009]. For dimensionality reduction and feature extraction, there is a wide family of multivariate data analysis kernel methods, such as kernel principal component analysis [Scholkopf98], kernel canonical correlation analysis [lai2000] or kernel partial least squares [Rosipal01]. Kernels have also been exploited to estimate dependence (nonlinear associations) between random variables, such as kernel mutual information [Gretton05] or the Hilbert-Schmidt Independence Criterion [GreBouSmoSch05]. Finally, in the literature we find kernel machines for data sorting [Quadrianto2008], manifold learning and alignment [Tuia16plosone], system identification [Martinez-Ramon2006], signal deconvolution and blind source separation [Rojo17dspkm].

However, understanding is more difficult than predicting, and kernel methods are still considered black-box models. Little can be said about the characteristics of the feature mapping, which is only implicit in the formulation. Several approaches have been presented in the literature to explore the kernel feature mapping, and thus to understand what the kernel machine is actually learning. One way to analyze kernel machines is by directly visualizing the empirical feature maps; however, this is very challenging and only feasible in low-dimensional problems [Scholkopf02, Rasmussen2006]. Another approach is to study the relative relevance of the covariates (input features) on the output. This is commonly referred to as feature ranking, and it typically reduces to evaluating how the function varies when an input is removed or perturbed. A second family of approaches performs the analysis through the construction of self-explanatory kernel functions. Automatic relevance determination (ARD) kernels [Rasmussen06] or multiple kernel learning [Rakotomamonjy08] allow one to study the relevance of the feature components indirectly. While this approach has been extensively used to improve the accuracy and understanding of supervised kernel classifiers and regression methods, it only provides a feature ranking, and nothing is said about the geometrical properties of the feature map. To resolve this, two main approaches are available in the kernel methods literature. For some particular kernels one can derive the metric induced by the kernel to gain insight into the surfaces and structures [Burges98b]. Alternatively, one can study the feature map (in physically meaningful units) by learning the inverse feature mapping, a group of techniques collectively known as kernel pre-imaging [BakWesSch03, KwoTsa04]. However, current methods are computationally expensive, involve critical parameters, and very often provide unstable results.

Function derivatives are a classical way to describe and visualize some characteristics of models. Derivatives of kernel functions have been introduced before, yet mostly used in supervised learning as a form of regularization that controls fast variations of the decision function [wahba]. However, derivatives of the model's function with regard to the input features for feature understanding and visualization have received less attention. A recent strategy is to derive sensitivity maps from a kernel feature map [Kjems02, gpnoise_nips2011]. The sensitivity map is related to the squared derivative of the function with respect to the input features. The idea was originally derived for SVMs in neuroimaging applications [rasmussen2011visualization], and later extended to GPs in geoscience problems [CampsValls15sens, Blix2015]. In both cases, the goal was to retrieve a feature ranking from a learned supervised model.

In this paper, we analyze the kernel function derivatives for supervised and unsupervised kernel methods with several kernel functions in different machine learning paradigms. We show the usefulness of the derivatives to study and visualize kernel models in regression, classification, density estimation, and dependence estimation with kernels. Since differentiation is a linear operator, most kernel methods have a derivative that is proportional to the derivative of the kernel function. We provide the analytic form of the first and second derivatives of the most common kernel functions with regard to the inputs, along with iterative formulas to compute the $n$-th order derivative of differentiable kernels, and of the radial basis function kernel in particular, where $n$ is the number of successive derivatives. In classification problems, the derivatives can be related to the margin, and allow us to gain some insight on sampling [martino17]. In regression problems, a model's function derivatives may give insight about the signal and noise characteristics, which allows one to design regularization functionals. In density estimation, the second derivative (the Hessian) allows us to follow the density ridge for manifold learning [OzertemE11], whereas in dependence estimation the squared derivatives (the sensitivity maps) allow one to study the most relevant points and features governing the association measure [SHSIC2017]. All in all, kernel derivatives allow us to identify both the examples and the features that affect the predictive function the most, and to interpret the kernel model behavior in different learning applications. We show that the solutions can be expressed in closed form for the most common kernel functions and kernel methods, that they are easy to compute, and we give examples of how they can be used in practice.

The remainder of the paper is organized as follows. Section 2 briefly reviews the fundamentals of kernel functions and feature maps, and concentrates on the kernel derivatives for feature map analysis, where we provide the first and second order derivatives of the most common kernel functions. We also review the main ideas to summarize the information contained in the derivatives. Section 3 and Section 4 study popular discriminative kernel methods, namely Gaussian Processes for regression and support vector machines for classification. Section 5 analyzes the interesting case of density estimation with kernels, in particular through the use of kernel entropy component analysis. Section 6 addresses the case of dependence estimation between random variables using the Hilbert-Schmidt independence criterion, with applications to dependence visualization maps and data unfolding. Section 7 illustrates the applicability of kernel derivatives in the previous kernel methods on real spatio-temporal Earth system data. Section 8 concludes with some final remarks.

2 Kernel functions and the derivatives

2.1 Kernel functions and feature maps

Kernel methods rely on the notion of similarity between points in a higher (possibly infinite) dimensional Hilbert space. Let us consider a set of empirical data $\{\mathbf{x}_i\}_{i=1}^{N}$, whose elements are defined in a $D$-dimensional input space, $\mathbf{x}_i \in \mathcal{X} \subseteq \mathbb{R}^D$, $i = 1, \ldots, N$. In supervised settings, each input feature vector is associated with a target value, which can be either discrete in the classification case, $y_i \in \{-1, +1\}$, or real in the regression case, $y_i \in \mathbb{R}$. Kernel methods assume the existence of a Hilbert space $\mathcal{H}$ equipped with an inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$, into which samples in $\mathcal{X}$ are mapped by means of a feature map $\boldsymbol{\phi}: \mathcal{X} \to \mathcal{H}$, $\mathbf{x} \mapsto \boldsymbol{\phi}(\mathbf{x})$. The mapping function can be defined explicitly (if some prior knowledge about the problem is available) or implicitly, which is often the case in kernel methods. The similarity between the elements in $\mathcal{H}$ can be estimated using its associated dot product via reproducing kernels in Hilbert spaces (RKHS), $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, such that pairs of points map as $k(\mathbf{x}, \mathbf{x}') = \langle \boldsymbol{\phi}(\mathbf{x}), \boldsymbol{\phi}(\mathbf{x}') \rangle_{\mathcal{H}}$. So we can estimate similarities in $\mathcal{H}$ without the explicit definition of the feature map $\boldsymbol{\phi}$, and hence without having access to the points in $\mathcal{H}$. This kernel function is required to satisfy Mercer's Theorem [Aizerman64].

Definition 1

Reproducing kernel Hilbert spaces (RKHS) [Aronszajn_1950_9268]. A Hilbert space $\mathcal{H}$ is said to be a RKHS if: (1) the elements of $\mathcal{H}$ are complex or real valued functions defined on any set of elements $\mathbf{x} \in \mathcal{X}$; and (2) for every element $\mathbf{x} \in \mathcal{X}$, the evaluation $f \mapsto f(\mathbf{x})$ is bounded on $\mathcal{H}$.

The name of these spaces comes from the so-called reproducing property. In a RKHS $\mathcal{H}$, there exists a function $k(\cdot, \mathbf{x}) \in \mathcal{H}$ such that

$f(\mathbf{x}) = \langle f, k(\cdot, \mathbf{x}) \rangle_{\mathcal{H}}, \quad \forall f \in \mathcal{H},$   (1)

by virtue of the Riesz Representation Theorem [RieNag55]. And in particular, for any $\mathbf{x}, \mathbf{x}' \in \mathcal{X}$,

$k(\mathbf{x}, \mathbf{x}') = \langle k(\cdot, \mathbf{x}), k(\cdot, \mathbf{x}') \rangle_{\mathcal{H}}.$   (2)

A large class of algorithms has originated from regularization schemes in RKHS. The representer theorem gives the general form of the solution to the common optimization problem formed by a cost (loss, energy) term and a regularization term.

Theorem 1

(Representer Theorem) [RieNag55, Kimeldorf1971] Let $\Omega: [0, \infty) \to \mathbb{R}$ be a strictly monotonically increasing function; let $V$ be an arbitrary loss function; and let $\mathcal{H}$ be a RKHS with reproducing kernel $k$. Then any minimizer of

$\min_{f \in \mathcal{H}} \; V\big((\mathbf{x}_1, y_1, f(\mathbf{x}_1)), \ldots, (\mathbf{x}_N, y_N, f(\mathbf{x}_N))\big) + \Omega\big(\|f\|_{\mathcal{H}}\big)$   (3)

admits a representation of the form

$f(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, k(\mathbf{x}, \mathbf{x}_i),$   (4)

which is expressed as a linear combination of kernel functions. Also note that the previous theorem states that solutions imply having access to an empirical risk term $V$ and an (often quadratic) regularizer $\Omega$. In the case of not having labels $y_i$, alternative representer theorems can be equally defined. A generalized representer theorem was introduced in [Scholkopf01generalizedrepresenter], which generalizes Wahba's theorem to a larger class of regularizers and empirical losses. Also, in [Gnecco09], a representer theorem for kernel principal component analysis (KPCA) was used: the theorem gives the solution as a linear combination of kernel functions centered at the input data points, and is called the representer theorem of learning theory [Cucker02], whereby the coefficients are determined by the eigendecomposition of the kernel matrix [Scholkopf98, Scholkopf01generalizedrepresenter].

2.2 Derivatives of linear expansions of kernel functions

Computing the derivatives of the function $f$ can give important insights about the learned model. Interestingly, in the majority of kernel methods, the function is linear in the parameters $\alpha_i$, cf. Eq. (4) derived from the representer theorem [Kimeldorf1971] [Th. 1]. For the sake of simplicity, we will denote the partial derivative of $f$ w.r.t. the feature $x_j$ as $\partial_j f$, where $j$ denotes the dimension. This allows us to write the partial derivative of $f$ as:

$\partial_j f(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, \partial_j k(\mathbf{x}, \mathbf{x}_i),$   (5)

where $\partial_j k(\mathbf{x}, \mathbf{x}_i) = \partial k(\mathbf{x}, \mathbf{x}_i) / \partial x_j$ and $j = 1, \ldots, D$. It is possible to take the second order derivative with respect to the feature $x_j$ twice, which remains linear as well with $\alpha_i$:

$\partial_j^2 f(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, \partial_j^2 k(\mathbf{x}, \mathbf{x}_i),$   (6)

where $\partial_j^2 k(\mathbf{x}, \mathbf{x}_i) = \partial^2 k(\mathbf{x}, \mathbf{x}_i) / \partial x_j^2$. Inductively, it is easy to show that the $n$-th partial derivative w.r.t. the $j$-th feature is also linear with $\alpha_i$, and it follows the equation:

$\partial_j^n f(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, \partial_j^n k(\mathbf{x}, \mathbf{x}_i).$   (7)

The gradient of $f$ gives information about the slope (increase rate) of the function and reduces to

$\nabla f(\mathbf{x}) = [\partial_1 f(\mathbf{x}), \ldots, \partial_D f(\mathbf{x})]^{\top} = \sum_{i=1}^{N} \alpha_i \, \nabla k(\mathbf{x}, \mathbf{x}_i),$   (8)

where $\nabla$ denotes the vector differential operator and $\nabla k(\mathbf{x}, \mathbf{x}_i) = [\partial_1 k(\mathbf{x}, \mathbf{x}_i), \ldots, \partial_D k(\mathbf{x}, \mathbf{x}_i)]^{\top}$. The Laplacian accounts for the curvature, roughness, or concavity of the function itself, and can be easily computed as the sum of all the unmixed second partial derivatives, which for kernels reduces to

$\Delta f(\mathbf{x}) = \sum_{j=1}^{D} \partial_j^2 f(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, \mathbf{1}^{\top} \boldsymbol{\partial}^2 k(\mathbf{x}, \mathbf{x}_i),$   (9)

where $\boldsymbol{\partial}^2 k(\mathbf{x}, \mathbf{x}_i) = [\partial_1^2 k(\mathbf{x}, \mathbf{x}_i), \ldots, \partial_D^2 k(\mathbf{x}, \mathbf{x}_i)]^{\top}$ and $\mathbf{1}$ is a column vector of ones of size $D$. Another useful descriptor is the Hessian matrix of $f$, which characterizes its local curvature. The Hessian is a $D \times D$ matrix of second-order partial derivatives with respect to the features $x_j$ and $x_l$:

$[\mathbf{H}_f(\mathbf{x})]_{jl} = \frac{\partial^2 f(\mathbf{x})}{\partial x_j \partial x_l} = \sum_{i=1}^{N} \alpha_i \, \frac{\partial^2 k(\mathbf{x}, \mathbf{x}_i)}{\partial x_j \partial x_l}.$   (10)

The equations listed above show that the derivative of the learned function is linear in the coefficients $\alpha_i$. Once $\boldsymbol{\alpha}$ is computed, the problem reduces to (1) computing the derivatives for a particular kernel function, and (2) summarizing the information contained within the derivatives.
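
As a concrete illustration of Eqs. (5)-(10), the following minimal Python/NumPy sketch (ours, not part of the original paper) evaluates the gradient, the unmixed second derivatives and the Laplacian of an expansion f(x) = Σ_i α_i k(x, x_i) for an RBF kernel of the form exp(-||x - x_i||²/(2σ²)); the data, the coefficients alpha and the bandwidth sigma are arbitrary placeholders.

```python
import numpy as np

def rbf_vector(x, X, sigma=1.0):
    """Kernel evaluations k(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * sigma ** 2))

def expansion_derivatives(x, X, alpha, sigma=1.0):
    """Gradient (Eq. 8), unmixed second derivatives (Eq. 6) and Laplacian (Eq. 9)
    of f(x) = sum_i alpha_i k(x, x_i) for the RBF kernel."""
    k = rbf_vector(x, X, sigma)                       # shape (N,)
    diff = X - x                                      # rows are x_i - x, shape (N, D)
    dk = diff / sigma ** 2 * k[:, None]               # d k / d x_j (cf. Table 1, RBF row)
    d2k = (diff ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * k[:, None]   # unmixed 2nd derivatives
    grad = alpha @ dk                                 # Eq. (8)
    diag_hess = alpha @ d2k                           # Eq. (6), one entry per feature
    return grad, diag_hess, diag_hess.sum()           # the sum is the Laplacian, Eq. (9)

# Toy usage with made-up data and coefficients (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))          # N = 20 samples, D = 2 features
alpha = rng.normal(size=20)           # expansion coefficients (hypothetical)
print(expansion_derivatives(np.zeros(2), X, alpha, sigma=0.8))
```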

  • Derivatives of common kernel functions. Kernel methods typically use a set of positive definite kernel functions, such as the linear, polynomial (Poly), hyperbolic tangent (Tanh), Gaussian (RBF), and automatic relevance determination (ARD) kernels. We give the partial derivatives for all of these kernels in Table 1, and the (mixed) second derivatives in Table 2. For the most widely used kernels (RBF and ARD), one can readily recognize a linear relation between the kernel derivative and the kernel function itself. It can be shown that the $n$-th derivative of the kernel can be computed recursively using Faà di Bruno's identity (see Appendix A for the theorem and a worked example for the RBF kernel).

  • Summarizing function derivatives. Summarizing the information contained in the derivatives is not an easy task, especially in high dimensional problems. The most obvious strategy is to use the norm of the partial derivative, $\|\partial_j f\|$, which summarizes the relevance of variable $x_j$. A small norm implies a small change in the discriminative function with respect to the $j$-th dimension, indicating the low importance of that feature. This approach was introduced as sensitivity maps (SMs) in [rasmussen2011visualization] for the visualization of SVM maps in neuroimaging, and later exploited in GPs for ranking spectral channels in geoscience applications [Blix2015]. The SM for the $j$-th feature is the expected value of the squared derivative of the function with respect to the input argument $x_j$:

    $s_j = \mathbb{E}_{\mathbf{x}}\!\left[ \left( \frac{\partial f(\mathbf{x})}{\partial x_j} \right)^{\!2} \right] = \int \left( \frac{\partial f(\mathbf{x})}{\partial x_j} \right)^{\!2} p(\mathbf{x}) \, d\mathbf{x},$   (11)

    where $p(\mathbf{x})$ is the probability density function (pdf) over the input space $\mathcal{X}$. In order to avoid the cancellation of terms due to their signs, the derivatives are squared. Other unambiguous transformations, like the absolute value, could be equally applied. The empirical sensitivity map approximation to Eq. (11) is obtained by replacing the expected value with a summation over the available samples,

    $s_j = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{\partial f(\mathbf{x}_i)}{\partial x_j} \right)^{\!2},$   (12)

    which can be grouped together to define the sensitivity vector $\mathbf{s} = [s_1, \ldots, s_D]^{\top}$.

    Alternatively, one could think of studying the relevance of the sample points. A trivial approach consists of averaging over the features to obtain a point sensitivity:

    $q_i = \frac{1}{D} \sum_{j=1}^{D} \left( \frac{\partial f(\mathbf{x}_i)}{\partial x_j} \right)^{\!2},$   (13)

    which can be grouped to define the point sensitivity vector $\mathbf{q} = [q_1, \ldots, q_N]^{\top}$. The information contained in $\mathbf{q}$ is related to the robustness of the decision to changes at each point of the space. A small numerical sketch of both estimators is given right after this list.
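
A minimal sketch of the two empirical summaries, Eq. (12) and Eq. (13), again for an RBF expansion (our own illustration; the coefficients and data are placeholders, and in practice alpha would come from the fitted kernel model):

```python
import numpy as np

def sensitivity_maps(X, alpha, sigma=1.0):
    """Feature sensitivities (Eq. 12) and point sensitivities (Eq. 13) of
    f(x) = sum_i alpha_i k(x, x_i) with an RBF kernel, evaluated on the samples X."""
    N, D = X.shape
    grads = np.zeros((N, D))
    for n in range(N):
        diff = X - X[n]                                            # x_i - x_n
        k = np.exp(-np.sum(diff ** 2, axis=1) / (2 * sigma ** 2))
        grads[n] = alpha @ (diff / sigma ** 2 * k[:, None])        # Eq. (5) at x_n
    s_features = np.mean(grads ** 2, axis=0)    # s_j, Eq. (12): average over samples
    s_points = np.mean(grads ** 2, axis=1)      # q_i, Eq. (13): average over features
    return s_features, s_points

# Hypothetical example: coefficients and data are placeholders
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
alpha = rng.normal(size=50)
s_feat, s_pts = sensitivity_maps(X, alpha, sigma=1.5)
print("feature ranking (most to least relevant):", np.argsort(s_feat)[::-1])
```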

Now we are equipped to use the derivatives and the corresponding sensitivity maps in arbitrary kernel machines that use standard kernel functions. In the following sections, we study their use in kernel methods for both supervised (regression and classification) and unsupervised (density estimation and dependence estimation) learning.

Kernel | Kernel function, $k(\mathbf{x}, \mathbf{y})$ | Partial derivative, $\partial k(\mathbf{x}, \mathbf{y}) / \partial x_j$
Linear | $\mathbf{x}^{\top}\mathbf{y}$ | $y_j$
Poly | $(\mathbf{x}^{\top}\mathbf{y} + c)^p$ | $p\,(\mathbf{x}^{\top}\mathbf{y} + c)^{p-1}\, y_j$
RBF | $\exp\!\big(-\|\mathbf{x}-\mathbf{y}\|^2 / (2\sigma^2)\big)$ | $-\frac{x_j - y_j}{\sigma^2}\, k(\mathbf{x}, \mathbf{y})$
Tanh | $\tanh(\gamma\, \mathbf{x}^{\top}\mathbf{y} + c)$ | $\gamma\,\big(1 - k(\mathbf{x}, \mathbf{y})^2\big)\, y_j$
ARD | $\exp\!\big(-\sum_{d} (x_d - y_d)^2 / (2\sigma_d^2)\big)$ | $-\frac{x_j - y_j}{\sigma_j^2}\, k(\mathbf{x}, \mathbf{y})$
Table 1: Partial derivatives for some common kernel functions: Linear, Polynomial (Poly), Radial Basis Function (RBF), Hyperbolic tangent (Tanh), and Automatic Relevance Determination (ARD).
Kernel | 2nd partial derivative, $\partial^2 k / \partial x_j^2$ | Mixed partial derivative, $\partial^2 k / \partial x_j \partial x_l$
Linear | $0$ | $0$
Poly | $p(p-1)(\mathbf{x}^{\top}\mathbf{y} + c)^{p-2}\, y_j^2$ | $p(p-1)(\mathbf{x}^{\top}\mathbf{y} + c)^{p-2}\, y_j y_l$
RBF | $\big(\frac{(x_j - y_j)^2}{\sigma^4} - \frac{1}{\sigma^2}\big)\, k(\mathbf{x}, \mathbf{y})$ | $\frac{(x_j - y_j)(x_l - y_l)}{\sigma^4}\, k(\mathbf{x}, \mathbf{y})$
Tanh | $-2\gamma^2 y_j^2\, k(\mathbf{x}, \mathbf{y})\big(1 - k(\mathbf{x}, \mathbf{y})^2\big)$ | $-2\gamma^2 y_j y_l\, k(\mathbf{x}, \mathbf{y})\big(1 - k(\mathbf{x}, \mathbf{y})^2\big)$
ARD | $\big(\frac{(x_j - y_j)^2}{\sigma_j^4} - \frac{1}{\sigma_j^2}\big)\, k(\mathbf{x}, \mathbf{y})$ | $\frac{(x_j - y_j)(x_l - y_l)}{\sigma_j^2 \sigma_l^2}\, k(\mathbf{x}, \mathbf{y})$
Table 2: Second derivatives for some common kernel functions.

3 Kernel regression

3.1 Gaussian Process Regression

Multiple proposals to use kernel methods in a regression framework have been made during the last few decades. Gaussian Processes (GPs) are perhaps the most successful kernel method for discriminative learning in general and regression in particular [Rasmussen06]. Standard GP regression approximates observations as the sum of some unknown latent function of the inputs plus constant-power Gaussian noise, $y_i = f(\mathbf{x}_i) + e_i$, where $e_i \sim \mathcal{N}(0, \sigma_n^2)$. A zero-mean GP prior is placed on the latent function $f$ and a Gaussian prior is used for each latent noise term $e_i$; in other words, $f(\mathbf{x}) \sim \mathcal{GP}\big(0, k_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{x}')\big)$, where $k_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{x}')$ is a covariance function parameterized by a set of hyperparameters $\boldsymbol{\theta}$ (e.g. the length-scales and noise variance for the ARD kernel function).

If we consider a test location $\mathbf{x}_*$ with corresponding output $y_*$, these priors induce a prior distribution between the observations $\mathbf{y}$ and $y_*$. Collecting all available data in $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$, it is possible to analytically compute the posterior distribution over the unknown output given the test input and the available training set, $p(y_* \,|\, \mathbf{x}_*, \mathcal{D})$, which is a Gaussian with the following mean and variance:

$\mu_{\mathrm{GP}}(\mathbf{x}_*) = \mathbf{k}_*^{\top} (\mathbf{K} + \sigma_n^2 \mathbf{I})^{-1} \mathbf{y},$   (14)

$\sigma_{\mathrm{GP}}^2(\mathbf{x}_*) = \sigma_n^2 + k(\mathbf{x}_*, \mathbf{x}_*) - \mathbf{k}_*^{\top} (\mathbf{K} + \sigma_n^2 \mathbf{I})^{-1} \mathbf{k}_*,$   (15)

where $\mathbf{k}_* = [k(\mathbf{x}_*, \mathbf{x}_1), \ldots, k(\mathbf{x}_*, \mathbf{x}_N)]^{\top}$ contains the kernel similarities of the test point $\mathbf{x}_*$ to all training points, $\mathbf{K}$ is the $N \times N$ kernel (covariance) matrix whose entries $K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$ contain the similarities between all training points, $k(\mathbf{x}_*, \mathbf{x}_*)$ is a scalar with the self-similarity of $\mathbf{x}_*$, and $\mathbf{I}$ is the identity matrix. The predictive mean of the GP model in (14) is expressed in the same way as equation (4), with $\boldsymbol{\alpha} = (\mathbf{K} + \sigma_n^2 \mathbf{I})^{-1} \mathbf{y}$; note that this expression is exactly the same as in other kernel regression methods, such as Kernel Ridge Regression (KRR) [ShaweTaylor04] or the Relevance Vector Machine (RVM) [ShaweTaylor04]. The derivative of the mean function can be computed through equation (5) and the derivatives in Table 1.
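
A compact worked example of Eqs. (14) and (5) together (a NumPy sketch of ours, not the paper's code; the toy data, bandwidth sigma and noise level sigma_n are arbitrary choices): the predictive mean is mu(x) = k_*^T alpha with alpha = (K + sigma_n^2 I)^{-1} y, and its gradient follows from the RBF derivative.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy 1D training data: a noisy sine (illustrative only)
rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-3, 3, size=(40, 1)), axis=0)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

sigma, sigma_n = 0.7, 0.1                      # hyperparameters fixed by hand
K = rbf_kernel(X, X, sigma)
alpha = np.linalg.solve(K + sigma_n ** 2 * np.eye(len(X)), y)    # weights of Eq. (14)

def gp_mean_and_grad(x_star):
    """Predictive mean (Eq. 14) and its gradient (Eq. 5 with the RBF derivative)."""
    k_star = rbf_kernel(x_star[None, :], X, sigma)[0]            # k(x_*, x_i)
    mean = k_star @ alpha
    grad = alpha @ ((X - x_star) / sigma ** 2 * k_star[:, None])
    return mean, grad

m, g = gp_mean_and_grad(np.array([0.5]))
print(m, g)      # for this toy signal the gradient roughly tracks cos(x_*)
```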

3.2 Derivatives and sensitivity maps

Let us start by visualizing derivatives in simple 1D examples. We used GP modeling with a standard RBF kernel function to fit five regression data sets. We show in Figure 1 the first and second derivatives of the fitted GP model, as well as the point-wise sensitivities. In all cases, the first derivatives are related to positive or negative slopes, while the second derivatives are related to the curvature of the function. Since differentiation is a linear operator, the derivative of a function built as a combination of simpler functions is the same combination of their derivatives, as can be seen in the last two functions. This could be useful for analyzing more complex composite kernels. See Appendix B for a 2D regression example.

Figure 1: Different examples of functions, derivatives and sensitivity maps. Original data (red), the GP predictive function (black), high derivative values are in yellow, close-to-zero derivative values are in gray and negative derivative values are in blue.

3.3 Derivatives and regularization

We show an example of using the derivatives of the kernel function as regularizers against noise. We modeled a function corrupted with additive white Gaussian noise (AWGN) using a GP model with RBF kernel. Different amounts of noise power $\sigma_n^2$ were used, giving rise to different values of the signal-to-noise ratio (SNR) in dB. Two different settings were explored to analyze the impact of the standard regularizer, $\sigma_n^2$, and of the derivatives in GP modeling: (1) using the optimal amount of regularization in (14), or (2) assuming no regularization was needed, i.e. setting $\sigma_n^2 = 0$.

Four regularization functionals were compared in this experiment: the norm of the model weights, the norm of the gradient, the norm of the Laplacian, and the norm of the Hessian matrix, whose entries are the second-order partial derivatives (for definitions see equations (8)-(10)). The resulting norm curves were then normalized so that they are comparable: the norm at the highest SNR was subtracted from all values, so any norm greater (lower) than zero signifies the need to regularize more (less). We explore the two scenarios, regularized and unregularized.

Figure 2 shows the effect of the noise on the norms of the different regularization terms. In the unregularized case, all four regularization functionals inform the user about how noisy the signal is; in the regularized case, the three regularizers not used in the fit still give information about the noise level of the signal. In the regularized case the norms of the functions stay below zero, except when the SNR is extremely low. Since the norms increase with the SNR, this indicates that less regularization is needed. The weight-norm curve is flat because the 'optimal' parameter for that regularizer has already been chosen; however, the norms of the first and second derivatives still indicate that the problem needs less regularization. Both cases therefore showcase the first and second derivatives as viable regularizers (a small numerical sketch of this idea is given after Figure 2).

Figure 2: Signal-to-Noise Ratio (SNR) versus the expected normalized value of different norms acting as regularizers. An unregularized (left) and a regularized (right) GP model were fitted. The top row shows a few examples of these fitted GP models with different amounts of added noise (0.0 dB, 10 dB, 20 dB): the red points are the noisy data, the true function is in black and the fitted GP model is in blue. The second row shows the norm for the different regularizers. All lines were normalized so that they are comparable: the norm of the true signal (SNR = 50 dB) is subtracted from all points, so any curve with values below zero requires less regularization and any points above zero require more regularization.
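
A small numerical sketch of the idea (not the paper's exact protocol; the test function, bandwidth and noise levels are arbitrary): fit an un-regularized 1D kernel model to increasingly noisy data and monitor the average magnitude of the first and second derivatives of the fitted function; rougher, under-regularized fits yield larger derivative norms.

```python
import numpy as np

def derivative_norms(x, y, sigma=0.3, sigma_n=0.0):
    """Average |f'| and |f''| of a 1D kernel fit f(x) = sum_i alpha_i k(x, x_i)."""
    diff = x[:, None] - x[None, :]                                 # x_a - x_i
    K = np.exp(-diff ** 2 / (2 * sigma ** 2))
    alpha = np.linalg.solve(K + (sigma_n ** 2 + 1e-8) * np.eye(len(x)), y)
    d1 = (-diff / sigma ** 2 * K) @ alpha                          # Eq. (5), first derivative
    d2 = ((diff ** 2 / sigma ** 4 - 1 / sigma ** 2) * K) @ alpha   # Eq. (6), second derivative
    return np.mean(np.abs(d1)), np.mean(np.abs(d2))

rng = np.random.default_rng(3)
x = np.linspace(0, 2 * np.pi, 100)
f_true = np.sin(x)
for noise in [0.05, 0.2, 0.8]:                 # decreasing SNR
    y = f_true + noise * rng.normal(size=x.size)
    print(noise, derivative_norms(x, y))       # both norms grow as the data get noisier
```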

4 Kernel classification

4.1 Support Vector Machine Classification

The first, and arguably the most effective and influential, kernel method introduced was the Support Vector Machine (SVM) classifier [Boser92, Cortes95b, Vapnik98, Scholkopf02]. Researchers and practitioners have used it to solve problems in, for instance, speech recognition [Xing17], computer vision and image processing [Cremers03, Kim05, Xu17], or channel equalization [Chen00decisionfeedback]. The binary SVM classification algorithm minimizes a weighted sum of a loss and a regularizer,

$\min_{f \in \mathcal{H}} \; \frac{1}{N} \sum_{i=1}^{N} V\big(y_i, f(\mathbf{x}_i)\big) + \lambda \, \|f\|_{\mathcal{H}}^2,$

where the cost function $V$ is called the 'hinge loss' and is defined as $V(y, f(\mathbf{x})) = \max\big(0, 1 - y f(\mathbf{x})\big)$, $\mathcal{H}$ is the RKHS of functions generated by the kernel $k$, and $\lambda$ is a parameter that trades off accuracy for smoothness. The norm $\|f\|_{\mathcal{H}}^2$ is generally interpreted as a roughness penalty, and can be expressed as a function of kernels, $\|f\|_{\mathcal{H}}^2 = \boldsymbol{\alpha}^{\top} \mathbf{K} \boldsymbol{\alpha}$. It can be shown that the decision function for any test point $\mathbf{x}_*$ is given by

$f(\mathbf{x}_*) = \operatorname{sign}\Big( \sum_{i=1}^{N} y_i \alpha_i \, k(\mathbf{x}_i, \mathbf{x}_*) + b \Big),$   (16)

where the $\alpha_i$ are Lagrange multipliers obtained from solving a particular quadratic programming (QP) problem, the support vectors (SVs) being those training samples $\mathbf{x}_i$ with non-zero Lagrange multipliers $\alpha_i$ [Scholkopf02].

4.2 Function Derivatives and margin

Note that the SVM decision function in (16) uses a link function, the sign, to decide between the two classes, which is inherited from the hinge loss used. Since the sign is not differentiable at zero, and for the sake of analytic tractability, we replace it with the hyperbolic tangent, $\tanh(\cdot)$, so that one can simply compute the derivative of the model by applying the chain rule:

$\partial_j \tanh\big(g(\mathbf{x})\big) = \big(1 - \tanh^2(g(\mathbf{x}))\big) \sum_{i=1}^{N} y_i \alpha_i \, \partial_j k(\mathbf{x}, \mathbf{x}_i), \quad \text{with } g(\mathbf{x}) = \sum_{i=1}^{N} y_i \alpha_i \, k(\mathbf{x}, \mathbf{x}_i) + b,$   (17)

where the leftmost term can be seen as a mask function on top of the derivative of the regression function, and allows us to study the model in terms of decision and estimation separately.
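
A hedged sketch of Eq. (17) built on scikit-learn's SVC (a convenient choice of ours, not something prescribed in the paper; the attributes support_vectors_, dual_coef_ and intercept_ are that library's API, and the toy data and hyperparameters are arbitrary): the derivative of tanh(f(x)) is the mask (1 - tanh²(f)) times the weighted sum of RBF kernel derivatives.

```python
import numpy as np
from sklearn.svm import SVC

# Two noisy blobs as toy classes (entirely illustrative)
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-1, 0.7, size=(60, 2)), rng.normal(1, 0.7, size=(60, 2))])
t = np.hstack([-np.ones(60), np.ones(60)])

gamma = 0.5           # sklearn RBF convention: k(x, z) = exp(-gamma * ||x - z||^2)
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, t)

def masked_gradient(x):
    """Derivative of tanh(f(x)) for the SVM decision function f, as in Eq. (17)."""
    sv = clf.support_vectors_                   # support vectors x_i
    coef = clf.dual_coef_[0]                    # y_i * alpha_i
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))
    f = k @ coef + clf.intercept_[0]            # same value as clf.decision_function([x])
    dk = -2.0 * gamma * (x - sv) * k[:, None]   # d k(x, x_i) / d x for this RBF convention
    mask = 1.0 - np.tanh(f) ** 2                # derivative of the tanh link
    return mask * (coef @ dk)

print(masked_gradient(np.zeros(2)))             # largest magnitude near the decision boundary
```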

Figure 3: Visualizing three examples of sensitivity maps in SVM classification. Red and green points denote the two classes, and the contour map uses the same colors. In the subsequent plots, high derivative values are in yellow and negative derivative values are in gray.

Three datasets were used to illustrate the effect of the derivative in the SVM classifier. We used an SVM with RBF kernel in all cases, the hyperparameters were tuned by 3-fold cross-validation, and the results are displayed in Fig. 3. The masking function only focuses on regions along the decision boundary. However, the derivative of the kernel function displays a few regions along the decision boundary along with other regions outside of it. The composite of the derivative of the masking function and the kernel function showcases a combination of the two components: the high derivative regions along the decision boundary. The two half moons and two circles examples have a clear decision boundary, and the derivative of the composite function is able to capture this. However, the two ellipsoids example is less clear, as the decision boundary passes through two overlapping classes. This is related to the density within the margin: regions with fewer samples have a smaller slope and regions with more samples have a higher slope, which results in a wider and a thinner margin, respectively. This fact could be used to define more efficient sampling procedures.

5 Kernel density estimation

The problem of density estimation is ubiquitous in machine learning and statistics, and it has been widely studied via kernels [Silverman86, Girolami02, Duin76]. Actually, kernel density estimation (KDE) is a classical method for estimating a probability density function (pdf) in a non-parametric way [Parzen62]. In KDE, the choice of the kernel function is key to properly approximating the underlying pdf from a finite sample. The KDE kernel must be a non-negative function that integrates to one (i.e. a proper pdf), yet it does not need to be positive semi-definite (PSD). KDE is versatile in that sense. However, if the kernel is PSD, there are close relations between density estimation and RKHS learning via the kernel eigendecomposition. In fact, many KDE kernels are PSD, and some well-known examples include the Gaussian, Student and Laplacian kernel functions [KimScott2012].

5.1 Density estimation with kernels

Following the Parzen window expression, KDE defines the pdf as a sum of kernel functions centered on the training samples,

$\hat{p}(\mathbf{x}) = \frac{1}{N} \sum_{i=1}^{N} k(\mathbf{x}, \mathbf{x}_i) = \frac{1}{N} \mathbf{1}^{\top} \mathbf{k}_{\mathbf{x}},$   (18)

where $\mathbf{k}_{\mathbf{x}} = [k(\mathbf{x}, \mathbf{x}_1), \ldots, k(\mathbf{x}, \mathbf{x}_N)]^{\top}$ is the vector of kernel evaluations between the point of interest $\mathbf{x}$ and all training samples (see Section 3.1). KDE kernel functions have to be non-negative and integrate to one to ensure that $\hat{p}$ is a valid pdf. When a point-dependent weighting, $\alpha_i$, is employed, the above expression can be modified as $\hat{p}(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, k(\mathbf{x}, \mathbf{x}_i)$, where the $\alpha_i$ have to be positive and sum to one, i.e. $\alpha_i \geq 0$ and $\sum_i \alpha_i = 1$. In [Girolami02], a solution to find a suitable vector $\boldsymbol{\alpha}$ based on kernel principal component analysis was proposed. If the decomposition of the un-centered kernel matrix follows the form $\mathbf{K} = \mathbf{E} \mathbf{D} \mathbf{E}^{\top}$, where $\mathbf{E}$ is orthonormal and $\mathbf{D}$ is a diagonal matrix, then the kernel-based density estimate can be expressed as

$\hat{p}_r(\mathbf{x}) = \frac{1}{N} \, \mathbf{k}_{\mathbf{x}}^{\top} \mathbf{E}_r \mathbf{E}_r^{\top} \mathbf{1},$   (19)

where $\mathbf{E}_r$ is the reduced version of $\mathbf{E}$ obtained by keeping the $r$ top eigenvectors. If we keep all the dimensions, i.e. $r = N$, the solution reduces to (18). By reducing the number of components we restrict the capacity of the density estimator, and hence obtain a smoother approximation of the pdf as $r$ decreases.

The retained kernel components should be selected by keeping the dimensions that maximize a sensible pdf characteristic, e.g. the variance. However, other criteria can be used to select the retained components. For instance, the kernel entropy component analysis (KECA) method uses the information potential as a criterion to select the components from the eigenvector decomposition [Jenssen2009]. Note that, in this case, the decomposition method is already optimized to maximize the variance, and therefore the selection will be sub-optimal in terms of retained information. A more accurate way of finding a decomposition was presented in [Izquierdo2016], where the features are directly optimized to maximize the amount of retained information. This method was named optimized KECA (OKECA), and showed excellent performance using very few extracted components.

The relevant aspect for this paper is that, by setting $\boldsymbol{\alpha} = \frac{1}{N}\mathbf{1}$ in (18) or $\boldsymbol{\alpha} = \frac{1}{N}\mathbf{E}_r \mathbf{E}_r^{\top}\mathbf{1}$ in (19), both expressions can be cast in the general framework of kernel methods we proposed in equation (4). Through this equality, the first and second derivatives (and therefore the Hessian) can be obtained in a straightforward manner using equations (5) and (6). This information can be used for different problems, such as computing the Fisher information matrix, optimizing vector quantization systems, or, as in the following section, finding the points that belong to the principal curve of the distribution.
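
To connect (18)-(19) with the expansion in Eq. (4), here is a small sketch of ours (the normalized Gaussian kernel, the bandwidth and the truncation level r are illustrative choices): the truncated eigendecomposition simply produces a new weight vector alpha, after which the density and its derivatives follow from Eqs. (5)-(6).

```python
import numpy as np

def gauss_kernel(A, B, sigma):
    """Gaussian kernel normalized so that each k(., x_i) integrates to one (valid pdf)."""
    D = A.shape[1]
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** (D / 2)

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 2))              # toy training samples
sigma, r = 0.5, 10                         # bandwidth and number of kept components

K = gauss_kernel(X, X, sigma)              # un-centered kernel matrix
w, E = np.linalg.eigh(K)                   # K = E diag(w) E^T, ascending eigenvalues
E_r = E[:, -r:]                            # keep the r top eigenvectors
alpha = E_r @ (E_r.T @ np.ones(len(X))) / len(X)   # weights of Eq. (19); r = N recovers Eq. (18)

def density_and_grad(x):
    """p(x) = sum_i alpha_i k(x, x_i) and its gradient, via Eqs. (4)-(5)."""
    k = gauss_kernel(x[None, :], X, sigma)[0]
    return k @ alpha, alpha @ ((X - x) / sigma ** 2 * k[:, None])

print(density_and_grad(np.zeros(2)))
```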

5.2 Derivatives and principal curves

This example illustrates the use of kernel derivatives in the KDE framework. In particular, we use the gradient and the Hessian of the pdf to find points that belong to the principal curve along the data manifold [Hastie89]. A principal curve is defined as the curve that passes through the middle of the data. How to find this curve in practice is an important problem, since multiple data description methods are based on drawing principal curves [LaparraDRR, Laparra2014, Laparra2012, OzertemE11, sasaki17a]. In [OzertemE11], the principal curve is characterized as the set of points that belong to the ridge of the density function. These points can be determined by using the gradient and the Hessian of the pdf: a point $\mathbf{x}$ is an element of the $d$-dimensional principal curve iff the inner product of the gradient, $\nabla \hat{p}(\mathbf{x})$, and at least $D - d$ eigenvectors of the Hessian, $\mathbf{v}_l$, is zero, that is:

$\nabla \hat{p}(\mathbf{x})^{\top} \mathbf{v}_l = 0, \quad l = 1, \ldots, D - d,$   (20)

where the $\mathbf{v}_l$ are the eigenvectors of the Hessian matrix $\mathbf{H}_{\hat{p}}(\mathbf{x})$ associated with its smallest eigenvalues. Note that applying this definition using our framework is straightforward, as we can use the KDE to describe the probability density function, and equations (5) and (6), as well as the formulas in Tables 1 and 2, to find the gradient and the Hessian of the defined pdf with respect to the points. In Fig. 4, we show an illustrative example of this application on three different toy datasets. The pdf can be easily obtained from the data points by using the OKECA method. Note how the derivative lines describe the direction in which the density changes the most. The last row shows the points of the dataset with the smallest dot product between the gradient and the last eigenvector of the Hessian, see Eq. (20). These points belong to the ridge of the distribution, and thus to the principal curve.

Figure 4: First row: original data points. Second, third and fourth rows: probability density in gray scale (brighter means denser). Second row: the derivative direction of the pdf for some data points is represented using red lines. Third row: Hessian eigenvectors for some points represented with blue lines (first eigenvector) and green lines (second eigenvector). Fourth row: points on the ridge computed using the formula proposed in [OzertemE11]; different brightnesses of green have been computed using the Dijkstra distance over the curve dots (see text for details).
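
A minimal sketch of the ridge test in Eq. (20) for a 2-D Gaussian KDE (our own simplified rendering of the idea in [OzertemE11]; the bandwidth and the 20% threshold are arbitrary): for each candidate point we build the gradient and Hessian analytically and keep the points whose gradient is nearly orthogonal to the Hessian eigenvector with the smallest eigenvalue.

```python
import numpy as np

def kde_grad_hess(x, X, sigma):
    """Gradient and Hessian of p(x) = (1/N) sum_i k(x, x_i), Gaussian kernel, D = 2."""
    diff = X - x                                              # rows are x_i - x
    k = np.exp(-np.sum(diff ** 2, axis=1) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    grad = (diff / sigma ** 2 * k[:, None]).mean(axis=0)
    hess = (np.einsum("n,ni,nj->ij", k, diff, diff) / sigma ** 4
            - k.sum() * np.eye(2) / sigma ** 2) / len(X)
    return grad, hess

def ridge_score(x, X, sigma=0.3):
    """|<grad p, v_min>| from Eq. (20); small values flag principal-curve points."""
    g, H = kde_grad_hess(x, X, sigma)
    vals, vecs = np.linalg.eigh(H)
    return abs(g @ vecs[:, 0])          # eigenvector of the smallest eigenvalue

# Noisy ring, similar in spirit to the second toy example of Fig. 4
rng = np.random.default_rng(6)
theta = rng.uniform(0, 2 * np.pi, 400)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.1 * rng.normal(size=(400, 2))
scores = np.array([ridge_score(x, X) for x in X])
ridge_points = X[scores < np.quantile(scores, 0.2)]   # keep the 20% best candidates
print(ridge_points.shape)
```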

6 Kernel dependence estimation

6.1 Dependence estimation with kernel methods

Measuring dependencies (nonlinear associations) between random variables is an active field of research. The principle underlying kernel-based dependence estimation is that it allows one to define covariance and cross-covariance operators in RKHS, and derive statistics from these operators capable of measuring dependence between functions therein.

Let us consider two spaces $\mathcal{X} \subseteq \mathbb{R}^{D_x}$ and $\mathcal{Y} \subseteq \mathbb{R}^{D_y}$, on which we jointly sample observation pairs $(\mathbf{x}, \mathbf{y})$ from the distribution $P_{\mathbf{x}\mathbf{y}}$. The covariance matrix is $\mathbf{C}_{\mathbf{x}\mathbf{y}} = \mathbb{E}_{\mathbf{x}\mathbf{y}}[\mathbf{x}\mathbf{y}^{\top}] - \mathbb{E}_{\mathbf{x}}[\mathbf{x}]\,\mathbb{E}_{\mathbf{y}}[\mathbf{y}]^{\top}$, where $\mathbb{E}_{\mathbf{x}\mathbf{y}}$ is the expectation with respect to $P_{\mathbf{x}\mathbf{y}}$. A statistic that efficiently summarizes the content of the covariance matrix is its Hilbert-Schmidt norm. This quantity is zero if and only if there exists no second-order dependence between $\mathbf{x}$ and $\mathbf{y}$.

The nonlinear extension of the notion of covariance was proposed in [GreBouSmoSch05] to account for higher order statistics. Essentially, let us define a (possibly non-linear) mapping $\boldsymbol{\phi}: \mathcal{X} \to \mathcal{F}$ such that the inner product between features is given by a PSD kernel function $K(\mathbf{x}, \mathbf{x}') = \langle \boldsymbol{\phi}(\mathbf{x}), \boldsymbol{\phi}(\mathbf{x}') \rangle$. The feature space $\mathcal{F}$ has the structure of a RKHS. Similarly, we define $\boldsymbol{\psi}: \mathcal{Y} \to \mathcal{G}$ with associated kernel function $L(\mathbf{y}, \mathbf{y}') = \langle \boldsymbol{\psi}(\mathbf{y}), \boldsymbol{\psi}(\mathbf{y}') \rangle$. Then, it is possible to define a cross-covariance operator between these feature maps, and to compute the squared Hilbert-Schmidt norm of this operator, which is called the Hilbert-Schmidt Independence Criterion (HSIC) and can be expressed in terms of kernels [Baker73, Fukumizu04]. Given a sample dataset $\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{n}$ of size $n$ drawn from $P_{\mathbf{x}\mathbf{y}}$, an empirical estimator of HSIC is [GreBouSmoSch05]:

$\mathrm{HSIC}(\mathcal{F}, \mathcal{G}, P_{\mathbf{x}\mathbf{y}}) = \frac{1}{(n-1)^2} \, \mathrm{Tr}(\mathbf{K} \mathbf{H} \mathbf{L} \mathbf{H}),$   (21)

where $\mathrm{Tr}(\cdot)$ is the trace operation, $\mathbf{K}$ and $\mathbf{L}$ are the kernel matrices for the input random variables $\mathbf{x}$ and $\mathbf{y}$ (i.e. $K_{ij} = K(\mathbf{x}_i, \mathbf{x}_j)$ and $L_{ij} = L(\mathbf{y}_i, \mathbf{y}_j)$), respectively, and $\mathbf{H} = \mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}^{\top}$ centers the data in the feature spaces $\mathcal{F}$ and $\mathcal{G}$. HSIC has demonstrated excellent capabilities to detect dependence between random variables but, as with any kernel method, the learned relations are hidden behind the kernel feature mapping. To address this issue, we consider the derivatives of HSIC.
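
For reference, the empirical estimator (21) takes only a few lines of NumPy (our sketch; the RBF bandwidth is set with the common median-distance heuristic, which is our choice and not prescribed by the paper):

```python
import numpy as np

def rbf_gram(Z):
    """RBF Gram matrix with the median-distance heuristic for the bandwidth."""
    d2 = np.sum(Z ** 2, 1)[:, None] + np.sum(Z ** 2, 1)[None, :] - 2 * Z @ Z.T
    sigma2 = np.median(d2[d2 > 0])
    return np.exp(-d2 / (2 * sigma2))

def hsic(X, Y):
    """Empirical HSIC, Eq. (21): Tr(K H L H) / (n - 1)^2."""
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    return np.trace(rbf_gram(X) @ H @ rbf_gram(Y) @ H) / (n - 1) ** 2

rng = np.random.default_rng(7)
x = rng.normal(size=(300, 1))
y_dep = x + 0.3 * rng.normal(size=(300, 1))     # dependent pair
y_ind = rng.normal(size=(300, 1))               # independent pair
print(hsic(x, y_dep), hsic(x, y_ind))           # the first value is clearly larger
```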

6.2 Derivatives of HSIC

The HSIC empirical estimate is parameterized as a function of two random variables, so the function derivatives given in Section 2 are not directly applicable. Since HSIC is a symmetric measure, the solution for the derivative of HSIC w.r.t. $\mathbf{X}$ has the same form as the derivative w.r.t. $\mathbf{Y}$. For convenience, we can group all terms that do not explicitly depend on $\mathbf{K}$ as $\mathbf{M} = \mathbf{H}\mathbf{L}\mathbf{H}$, which allows us to express (21) simply as:

$\mathrm{HSIC} = \frac{1}{(n-1)^2} \, \mathrm{Tr}(\mathbf{K}\mathbf{M}).$   (22)

Note that the core of the solution is the same as in the previous sections: a weighted combination of kernel similarities. However, it is important to take into account that now both arguments of the kernel function need to be differentiated with respect to the entry $\mathbf{x}_i$, which appears twice. By taking derivatives with regard to a particular dimension $j$ of sample $\mathbf{x}_i$, i.e. $x_{ij}$, and noting that the derivative of a kernel function is a symmetric operation, i.e. $\partial K(\mathbf{x}_i, \mathbf{x}_b)/\partial x_{ij} = \partial K(\mathbf{x}_b, \mathbf{x}_i)/\partial x_{ij}$, one obtains

$\frac{\partial\, \mathrm{HSIC}}{\partial x_{ij}} = \frac{2}{(n-1)^2} \sum_{b=1}^{n} M_{ib} \, \frac{\partial K(\mathbf{x}_i, \mathbf{x}_b)}{\partial x_{ij}},$   (23)

where $M_{ib}$ are the entries of the $i$-th row of the matrix $\mathbf{M}$. In particular, for the RBF kernel we obtain [SHSIC2017]:

$\frac{\partial\, \mathrm{HSIC}}{\partial x_{ij}} = \frac{2}{(n-1)^2 \sigma^2} \sum_{b=1}^{n} (\mathbf{M} \circ \mathbf{K})_{ib} \, (x_{bj} - x_{ij}),$   (24)

where we used the symbol $\circ$ for the Hadamard product between matrices.

Recently, [MahoneyNIPS2015] extended the notion of leverage scores to the ridge regression problem. Leverage is a measure of how points in low-density neighbourhoods force the model to pass through them. By definition, the leverage (of a regressor) is the sensitivity of the predictive function w.r.t. the outputs. There is no definition of leverage in the case of HSIC, as it is not a regression model but a dependence measure. However, HSIC could be interpreted in a similar way by fixing one of the variables and taking the derivative w.r.t. the other. Under this interpretation, one can think of the HSIC sensitivity as a measure of how individual points affect the dependence measurement, i.e. how sensitive HSIC is to perturbations of each particular point. This interpretation allows us to link the concepts of leverage and sensitivity in kernel dependence measures.

In this case, the derivatives of HSIC report information about the directions most impacting the dependence estimate, and allow a quantitative evaluation of the measure, which can be represented in the form of a vector field of two components. As in the previously analyzed kernel methods, the derivatives here are also analytic, involving just simple matrix multiplications and a trace operation.
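
A sketch of Eqs. (23)-(24) for the RBF kernel (our implementation; a finite-difference check is included to validate the analytic expression against the estimator above):

```python
import numpy as np

def hsic_and_grad(X, Y, sx=1.0, sy=1.0):
    """Empirical HSIC (Eq. 22) and its gradient w.r.t. X (Eqs. 23-24), RBF kernels."""
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    dX = np.sum(X ** 2, 1)[:, None] + np.sum(X ** 2, 1)[None, :] - 2 * X @ X.T
    dY = np.sum(Y ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :] - 2 * Y @ Y.T
    K, L = np.exp(-dX / (2 * sx ** 2)), np.exp(-dY / (2 * sy ** 2))
    M = H @ L @ H                               # terms that do not depend on K
    value = np.trace(K @ M) / (n - 1) ** 2      # Eq. (22)
    W = M * K                                   # Hadamard product of Eq. (24)
    # grad[i, j] = 2 / ((n-1)^2 sx^2) * sum_b W[i, b] * (X[b, j] - X[i, j])
    grad = 2.0 / ((n - 1) ** 2 * sx ** 2) * (W @ X - W.sum(1)[:, None] * X)
    return value, grad

rng = np.random.default_rng(8)
X, Y = rng.normal(size=(50, 2)), rng.normal(size=(50, 2))
val, grad = hsic_and_grad(X, Y)

# Finite-difference check on a single coordinate (should closely agree)
eps, (i, j) = 1e-5, (3, 1)
Xp = X.copy()
Xp[i, j] += eps
print(grad[i, j], (hsic_and_grad(Xp, Y)[0] - val) / eps)
```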

6.3 Visualizing kernel dependence measures

HSIC derivatives give information about the contribution of each point and feature to the dependence estimate. Figure 5 shows the directional derivative maps for three different bi-dimensional problems of variable association. We show the different components of the (signed) vector field as well as its magnitude. In all problems, the arrows indicate the strength of the distortion to be applied to the points (either along one direction, the other, or jointly) such that the dependence is maximized. For the first example (top row), the map pushes the points onto the 1-1 line and tries to collapse the data into two different clusters along this line. In the second example (middle row), the distribution is a noisy ring: here the sensitivity map tries to collapse the data into clusters in order to maximize the dependence between the variables. In the third experiment (bottom row), both variables are almost independent, and the sensitivity map points towards some regions of the space where the dependence is maximized. In all cases, the two directional derivatives are orthogonal in direction and form a vector field whose intensity can be summarized by its norm (columns in the figure).

Figure 5: Visualizing the derivatives and the modulus of the directional derivative for HSIC in three toy examples.

6.4 Unfolding and Independization

We have seen that the derivatives of the HSIC function can be useful to learn about the data distribution and the variable associations. The derivatives of HSIC give information about the directions most affecting the dependence or independence measure.

Figure 6 shows an example of how the derivatives of the HSIC can be used to modify the data and achieve either maximum dependence or independence. We embedded the derivatives in a simple gradient descent scheme, in which we move samples iteratively to maximize or minimize data dependence. Departing from a sinusoid, one can easily attain dependent or independent domains.

Note that HSIC can be understood as a maximum mean discrepancy (MMD) [Gretton2012] between the joint probability measure of the involved variables and the product of their marginals, and MMD derivatives are very similar to those of HSIC provided here. The explicit use of the kernel derivatives would allow us to use gradient-descent approaches in methods that take advantage of HSIC or MMD, such as in algorithms for domain adaptation and generative modeling.

Figure 6: Modification of the input samples to maximize or minimize the HSIC dependence between their dimensions (see text for details).
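
The gradient above can be plugged into a plain gradient loop to mimic the experiment of Figure 6 (a schematic sketch of ours; the HSIC gradient routine from the previous snippet is repeated for self-containment, and the step size and iteration count are arbitrary):

```python
import numpy as np

def hsic_grad_x(X, Y, sx=1.0, sy=1.0):
    """Empirical HSIC (Eq. 22) and its gradient w.r.t. X (Eq. 24), RBF kernels."""
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    dX = np.sum(X ** 2, 1)[:, None] + np.sum(X ** 2, 1)[None, :] - 2 * X @ X.T
    dY = np.sum(Y ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :] - 2 * Y @ Y.T
    K, L = np.exp(-dX / (2 * sx ** 2)), np.exp(-dY / (2 * sy ** 2))
    M = H @ L @ H
    W = M * K
    grad = 2.0 / ((n - 1) ** 2 * sx ** 2) * (W @ X - W.sum(1)[:, None] * X)
    return np.trace(K @ M) / (n - 1) ** 2, grad

# Noisy sinusoid: the second coordinate depends nonlinearly on the first (cf. Fig. 6)
rng = np.random.default_rng(9)
t = np.linspace(0, 4 * np.pi, 200)
x1 = ((t - t.mean()) / t.std())[:, None]
x2 = np.sin(t)[:, None] + 0.1 * rng.normal(size=(t.size, 1))

step = 0.5
for _ in range(200):
    _, g = hsic_grad_x(x1, x2)
    x1 = x1 + step * g    # gradient ascent maximizes dependence; flip the sign to independize
print("final HSIC:", hsic_grad_x(x1, x2)[0])
```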

7 Analysis of spatio-temporal Earth data

This section introduces the use of the kernel derivatives on real datasets, where we focus on the analysis of Earth system spatio-temporal data. Today, data-driven research in Earth system dynamics has gained momentum and complements global modelling efforts. Much of the Earth data is generated by a wide range of satellite sensors, upscaled products from in-situ observations, and model simulations with constantly improved space and time resolutions. We will illustrate the use of the kernel derivatives in Gaussian processes, principal curve estimates and dependence estimation on three selected spatio-temporal Earth observation variables of interest. We will pay attention to how using kernel derivatives may help in (1) choosing the appropriate space and time scales to analyze phenomena, (2) visualizing the most informative areas of interest, and (3) detecting anomalies in spatio-temporal Earth data.

7.1 Data

We will work with products contained in the Earth System Data Lab (ESDL) platform, http://earthsystemdatalab.net/. The database contains and harmonizes more than 40 variables to monitor the processes occurring on our planet. They are grouped in three data streams: land surface, atmospheric forcings and socio-economic data. Here we focus on three climate variables which exhibit nonlinear relations in space and time. The three variables, gross primary productivity (GPP), root-zone soil moisture (SM), and land surface temperature (LST), are outlined below:

  • GPP is the rate of fixation of carbon through the process of photosynthesis and one of the key fluxes in the global carbon cycle. Hence, it is also key to understanding the link between climate and the carbon cycle. Specifically, quantitative estimates of the spatial and temporal dynamics of GPP at regional and global scales are essential for understanding the ecosystem response to climate extremes like droughts and heatwaves, and to other types of disturbances that may even influence the interannual variability of the globally integrated GPP [ZSCHEISCHLERetal2014]. Here, we consider the GPP FLUXCOM (http://www.fluxcom.org/) product, computed as described in [TRAMONTANAetal2016].

  • SM plays a fundamental role for the environment and climate, as it influences hydrological and agricultural processes, runoff generation and drought development, and impacts the climate through atmospheric feedbacks. SM is actually a source of water for evapotranspiration over the continents, and it is involved in both the water and the energy cycles. There are two relevant types of soil moisture products for our experiments. Standard SM products carry information limited to a few centimeters below the surface, and do not give access to the whole zone from where water can be absorbed by roots. This is why we used root-zone soil moisture (RSM) [Martens17, DORIGO2017, LIU20122] in the dependence estimation problem instead, a product from GLEAM contained in the ESDC, which is a more sensitive variable for monitoring water stress and droughts in vegetation.

  • LST is an essential variable within the Earth climate system, as it describes processes such as the exchange of energy and water between the land surface and the atmosphere, and influences the rate and timing of plant growth. The LST product contained in the ESDC is the result of an ESA project called GlobTemperature, which developed a merged LST data set from thermal infrared (geostationary and polar orbiters) and passive microwave satellite data to provide the best possible coverage.

The data is organized in 4-dimensional objects involving spatial coordinates (latitude, longitude), time sampling, and the variable. The data in the ESDC is available at two spatial resolutions (a high 0.083° resolution and a coarser grid aggregation at 0.25°) and a temporal resolution of 8 days, spanning the years 2001-2011. In our experiments, we focus on the lower resolution products, during 2009 and 2010, and over Europe only. In the year 2010, a severe spring and summer drought combined with a summer heat stress event affected large parts of Russia, which can be observed in the three variables under study here [Flach18], and we expect that their interrelations are also affected. We use this well-known event to provide a proof of concept for our suggested approaches to interpret regressions, principal curves, and dependence estimation.

7.2 Sensitivity analysis in GP modelling

Studying time-varying processes with GPs is customary. Designing a GP becomes more complicated when dealing with spatio-temporal datasets. This can be cumbersome when the final goal is to understand and visualize spatial dependencies as well as to study the relevance of the features and the samples. Sensitivity analysis can be useful in either scenario. In this experiment, we study the impact of features in the GP modeling of the GPP and LST variables during 2010. To do so, we developed GP regression models trained to predict a pixel from its neighbouring pixels. This is analogous to geographically weighted regression [Charlton2002SpatialR], which can be used to model the local spatial relationships between these features and the outputs. The major difference is that, under the GP framework, we can get sensitivity values for each of the contributing dimensions. We further split the data into subsets of spatial 'minicubes' of increasing spatial window size. We fit a GP model on a training subset of minicubes, whereby the neighbouring pixels were used as input dimensions to predict the center pixel, for both GPP and LST.

Figure 7: Visualizing the sensitivity and the model R error under different spatial sampling sizes for the Gross Primary Productivity (GPP) [top] and land surface temperature (LST) [bottom] for the summer of 2010 (Jun-Jul-Aug).

The figures show how the sensitivity changes according to the mean prediction of GPP and LST for two neighbourhood spatial window sizes. Figure 7 shows spatially-explicit sensitivity maps for both settings, as well as the R value and the average sensitivity for each GP model. For GPP, we see that sensitivities tend to become smoother as the spatial locality increases. These particular maps for GPP reach an R value of 0.93 and 0.95 for the respective window sizes. Unlike the small differences in goodness of fit (a dynamic range of +2% in R), the sensitivity curves show a wider variation and suggest that bigger windows are more appropriate to capture smoother areas; this is expected. Actually, although we get a better model with a larger spatial window size, the sensitivity of neighbouring points becomes more dispersed over larger areas of Europe instead of staying within small clusters. A similar pattern of dispersion of the sensitive points is observed for the LST maps w.r.t. the spatial window size. However, we notice that the sensitive regions actually change, e.g. in the Northern European region, suggesting an integrative (smoothing) effect as the dimensionality increases.

7.3 Principal Curves of the ESDC

In this experiment we analyze GPP spatio-temporal patterns for different seasons of the year 2010 using the principal curve (PC) framework of Section 5. Each sample consists of a vector with the variable values for a particular location over all the time steps in the season: (Jan 05 - May 05), (May 21 - Aug 08), and (Aug 17 - Dec 31). For each season we therefore have one sample per spatial location, whose dimension is the number of time steps in the season. Figure 8 shows the results. For each data set we plot the mean GPP value of the season at each point. The locations of the points that belong to the PC are plotted in green using the Dijkstra distance along the curve (as in the toy examples in Fig. 4). The points belonging to the PC can be interpreted as the landmarks (or representers) of the whole dataset, similar to a centroid of a cluster, but here they refer to the points on the probability ridge of the data manifold (i.e. similar to the points closest to the first eigenvector in PCA). These points could be used for multiple purposes, e.g. as a summary to analyze the behaviour of the whole manifold or for a temporal analysis of their evolution. Note that the location of the points is quite independent of the mean values, so they give different, alternative information. On the other hand, the location depends on the time of the year represented.

Figure 8: Principal curves on the ESDC for three time periods (Jan-May, May-Aug, Aug-Dec). Each panel represents the results for GPP at a different period during 2010. In each image, the mean value of the variable for each location is shown in the colormap (minimum blue, maximum red), and the points that belong to the principal curve are represented in green. Different brightnesses of green have been computed using the Dijkstra distance over the curve dots.

Most of the GPP 'representative' points are scattered around the manifold, and their locations depend on the season. For instance, during the colder season (Jan-May), dots are concentrated at middle and low latitudes. Note that during this period dots in northern Germany have a temperature and GPP similar to the north-west part of Europe, so there is no need to add extra representers in these regions. Points in Morocco represent the warmer part of the manifold, and the Balkans area and Turkey represent the central part of the manifold. During the warmest period (May-Aug) the distribution of the dots follows the opposite direction: southern regions lose weight while northern parts gain more representative activity. In the case of mild temperatures (Aug-Dec), representers are needed in a larger variety of regions.

7.4 Sensitivity analysis of kernel dependence measures

HSIC is a dependence measure which can capture differences in the higher-order relations between random variables. The derivatives of HSIC with respect to the input features are related to the change of the measure, and summarize the relevance of the input features in the dependence. Therefore, these derivative maps can be related to the sensitivity of the involved covariates.

In this experiment, we chose to study the relation between GPP and RSM for Europe and Russia during the years 2008, 2009 and 2010. We apply HSIC with a linear kernel and compute the sensitivity maps, which estimate how much the dependency criterion changes. We take spatial segments of GPP and RSM at each time stamp, and compute the HSIC value independently for Russia and Europe. We also computed the derivative of HSIC for the same time stamps, again independently for Russia and Europe. We computed the modulus to summarize the impact of each dimension, acting as a proxy for the total average sensitivity. The final step involved computing the expected value of the modulus of the HSIC derivative for Russia and for Europe: Europe acts as a proxy for a stable environment, and Russia is the region we would like to compare against it. We estimated the expected value for three time periods (before: 05-01 to 20-05; during: 28-05 to 01-09; after: 09-09 to 30-12) for each year individually. We then compared the values to see how the expectation changes between Europe and Russia for each period across the years. The expected value of the HSIC derivatives summarizes the change of association between the variables differently than the HSIC measure itself.

The experiment focuses on studying the coupling/association between RSM and GPP during the Russian drought in 2010. The HSIC algorithm captures an increased difference in the dependencies of GPP and RSM for Russia relative to Europe in 2010 when we compare this relationship to the years 2008 and 2009, see Figure 9a. However, HSIC only captures instantaneous dependencies and not how fast these changes occur. The derivatives of HSIC (Figure 9b) allow us to quantify and capture when these changes actually occur. The gradients of HSIC do not show obvious differences in magnitude or shape across years between Russia and Europe. By taking the expected value over specific time periods of interest (before, during and after the drought), we can highlight the contrast in the dependency trends between different periods with respect to the previous years, both in terms of HSIC and of its derivatives. We observe a change in the mean value of the HSIC difference in Figure 9c, and of the difference of the HSIC derivative in Figure 9d, which reveals a noticeable change in the trend for the spring and summer of 2010 compared to 2008 and 2009.

Figure 9: Each panel represents a different summary of how HSIC can be used to capture the differences in dependencies between Europe and Russia for GPP and RSM. (a) shows the HSIC value for Europe and Russia at each time stamp, (b) shows the derivative of HSIC for Europe and Russia at each time stamp, and (c) and (d) show the mean value of the difference between Europe and Russia in the HSIC and in the derivative of HSIC, respectively, for different periods (Jan-May, Jun-Aug, and Sept-Dec).

8 Conclusions

Kernel methods are standard tools in pattern analysis and machine learning and have been widely adopted because of their excellent performance in many applications. However, they are still considered black-box models as the feature map is not directly accessible and predictions are difficult to interpret.

In this note, we took a modest step back to understand different kernel methods by exploiting partial derivatives of the learned function with respect to the input covariates. We build on the fact that the model functions implemented in most kernel methods rely on a linear combination of kernels, and that their derivatives are therefore linear combinations of the kernel function derivatives. First, we provided the explicit form of the first and second derivatives of the most common kernel functions, as well as a generic formula to compute higher order derivatives. Then we used them to analyze the most used kernel methods in different machine learning problems: GPs for regression, SVMs for classification, OKECA for density estimation, and HSIC for dependence estimation between random variables. We showed that the derivatives admit closed-form solutions and are easy to implement. Furthermore, we provided intuitive explanations of the meaning of the derivatives for each kernel method through illustrative toy examples, and we also gave examples of how to exploit them in alternative applications. This note suggests that kernel method derivatives have a simple mathematical formulation, allow us to understand the learned function, and may provide many opportunities to design kernel machines in general.

Appendix A Higher order derivatives of kernel functions

It can be shown that the $n$-th derivative of some kernel functions can be computed recursively using Faà di Bruno's identity [Arbogast1800]. For a composition $k(\mathbf{x}, \mathbf{y}) = f(g(\mathbf{x}))$, the repeated derivative with respect to a single variable reads

$\frac{\partial^n}{\partial x_j^n} f\big(g(\mathbf{x})\big) = \sum \frac{n!}{m_1! \, m_2! \cdots m_n!} \, f^{(m_1 + \cdots + m_n)}\big(g(\mathbf{x})\big) \prod_{l=1}^{n} \left( \frac{\partial^l g(\mathbf{x}) / \partial x_j^l}{l!} \right)^{m_l},$

where the sum is over all $n$-tuples $(m_1, \ldots, m_n)$ of non-negative integers satisfying $\sum_{l=1}^{n} l \, m_l = n$. The expression for mixed derivatives is also useful:

$\frac{\partial^n}{\partial x_{j_1} \cdots \partial x_{j_n}} f\big(g(\mathbf{x})\big) = \sum_{\pi \in \Pi} f^{(|\pi|)}\big(g(\mathbf{x})\big) \prod_{B \in \pi} \frac{\partial^{|B|} g(\mathbf{x})}{\prod_{l \in B} \partial x_{j_l}},$

where $\Pi$ is the ensemble of all the partition sets of $\{1, \ldots, n\}$, $\pi$ is a particular partition set, $B$ runs over the blocks of the partition set $\pi$, and $|\pi|$ is the cardinality of $\pi$.

For the RBF kernel we can identify $f(u) = \exp(u)$ and $g(\mathbf{x}) = -\|\mathbf{x} - \mathbf{y}\|^2 / (2\sigma^2)$. The derivatives for $f$ are always the same, $f^{(m)}(g(\mathbf{x})) = \exp(g(\mathbf{x})) = k(\mathbf{x}, \mathbf{y})$, and the derivatives for $g$ are: $\partial g / \partial x_j = -(x_j - y_j)/\sigma^2$, $\partial^2 g / \partial x_j^2 = -1/\sigma^2$, $\partial^2 g / \partial x_j \partial x_l = 0$ for $j \neq l$, and all derivatives of order higher than two vanish.

Applying the previous formula, the first derivative is:

$\frac{\partial k(\mathbf{x}, \mathbf{y})}{\partial x_j} = -\frac{x_j - y_j}{\sigma^2} \, k(\mathbf{x}, \mathbf{y}).$

The second derivative is:

$\frac{\partial^2 k(\mathbf{x}, \mathbf{y})}{\partial x_j^2} = \left( \frac{(x_j - y_j)^2}{\sigma^4} - \frac{1}{\sigma^2} \right) k(\mathbf{x}, \mathbf{y}).$

The mixed derivative is:

$\frac{\partial^2 k(\mathbf{x}, \mathbf{y})}{\partial x_j \partial x_l} = \frac{(x_j - y_j)(x_l - y_l)}{\sigma^4} \, k(\mathbf{x}, \mathbf{y}).$

Appendix B Custom Regression Function

In this example we show the behaviour of the first and second derivatives for a multivariate input. A GP model is fitted over the dataset using the RBF kernel function. The experiment uses a custom piece-wise linear multivariate function of two inputs, $x_1$ and $x_2$:

(25)

where the coefficients take varying values (see Fig. 10).

Figure 10: First row: the original toy data as well as the predicted GP model, which presents a smoother fit. Second row: the first derivative in the $x_1$ direction, the $x_2$ direction, and the combined direction (the sensitivity), respectively. Third row: the second derivative in the $x_1$ direction, the $x_2$ direction, and the combined direction (the sensitivity), respectively. The yellow colored points represent regions with positive values, the blue colored points represent regions with negative values, and the gray colored points represent regions where the values are zero.

The GP model smooths the piece-wise continuous function, which results in additional slopes compared with the original formulation. This is clearly visible from the derivatives of the kernel model, as the first derivatives for the $x_1$ and $x_2$ components show positive values at the slopes introduced in each respective direction. The second derivatives for both $x_1$ and $x_2$ show the same effect, but for the curvature. This experiment highlights the derivatives of the individual components as well as their combined sensitivity.

Appendix C Acknowledgements

This research was funded by the European Research Council (ERC) under the ERC Consolidator Grant 2014 project "Statistical Learning for Earth Observation Data Analysis" (grant agreement 647423).

References