Fast Estimation of Information Theoretic Learning Descriptors using Explicit Inner Product Spaces

by Kan Li, et al.
University of Florida

Kernel methods form a theoretically-grounded, powerful and versatile framework to solve nonlinear problems in signal processing and machine learning. The standard approach relies on the kernel trick to perform pairwise evaluations of a kernel function, leading to scalability issues for large datasets due to its linear and superlinear growth with respect to the training data. Recently, we proposed no-trick (NT) kernel adaptive filtering (KAF) that leverages explicit feature space mappings using data-independent bases with constant complexity. The inner product defined by the feature mapping corresponds to a positive-definite finite-rank kernel that induces a finite-dimensional reproducing kernel Hilbert space (RKHS). Information theoretic learning (ITL) is a framework where information theory descriptors based on non-parametric estimators of Rényi entropy replace conventional second-order statistics for the design of adaptive systems. An RKHS for ITL defined on a space of probability density functions simplifies statistical inference for supervised or unsupervised learning. ITL criteria take into account the higher-order statistical behavior of the systems and signals as desired. However, this comes at a cost of increased computational complexity. In this paper, we extend the NT kernel concept to ITL for improved information extraction from the signal without compromising scalability. Specifically, we focus on a family of fast, scalable, and accurate estimators for ITL using explicit inner product space (EIPS) kernels. We demonstrate the superior performance of EIPS-ITL estimators and combined NT-KAF using EIPS-ITL cost functions through experiments.



I Introduction

Information theoretic learning (ITL) is a framework where information theory descriptors based on non-parametric estimators of Rényi entropy replace conventional second-order statistics for the design of adaptive systems [1]. A reproducing kernel Hilbert space (RKHS) for ITL defined on a space of probability density functions (pdf's) simplifies statistical inference for supervised or unsupervised learning. ITL criteria take into consideration the higher-order statistical behavior of the systems and signals as desired. ITL is conceptually different from other kernel methods as it is based on kernel density estimation (KDE), and thus its kernel function need not be positive definite, instead satisfying a different set of properties as detailed in [2]. Nevertheless, the estimators in both learning schemes share many similarities [3], including several positive-definite kernels such as the Gaussian kernel and the Laplacian kernel [2]. In fact, positive definiteness is preferred in ITL due to numerical stability in computation.

In the standard kernel method approach, points in the input space are mapped, using an implicit nonlinear function φ(·), into a potentially infinite-dimensional inner product space or RKHS, denoted by H. The explicit representation is of secondary nature. The Mercer condition guarantees the existence of the mapping. A real-valued similarity function is defined as

k(x, x′) = ⟨φ(x), φ(x′)⟩,

which is referred to as a reproducing kernel. This presents an elegant solution for classification, clustering, regression, and principal component analysis, since the mapped data points are linearly separable in the potentially infinite-dimensional RKHS, allowing classical linear methods to be applied directly on the data. However, because the actual points (functions) in the function space are inaccessible, kernel methods scale poorly to large datasets. Naive kernel methods operate on the kernel or Gram matrix, whose entries are K_ij = k(x_i, x_j), requiring O(N²) space and computational complexity for many standard operations. For online kernel adaptive filtering (KAF) algorithms [4, 5, 6, 7], this represents a rolling sum with linear or superlinear growth. There has been a continual effort to sparsify and reduce the computational load, especially for online KAF [8, 9, 10].

The two most important concepts in ITL are the information potential (IP), which is associated with Rényi’s quadratic entropy (QE), and the cross information potential (CIP) that measures dissimilarity between two density functions [3]. The estimator of IP requires summing all the elements of the kernel or Gram matrix. A straightforward computation is expensive in both storage and time, especially when the number of samples is large. Different methods have been proposed to reduce this computational burden by extracting relevant information with sufficient accuracy without processing all elements of the Gram matrix [11, 12, 13, 14].
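For reference, the direct IP estimator is just the mean of all N² Gaussian Gram entries; a minimal sketch (the function name is illustrative, and the Parzen normalization constant is folded into the kernel size):

```python
import numpy as np

def information_potential_direct(x, sigma=1.0):
    # Direct IP estimate: mean of all N^2 entries of the Gaussian Gram matrix.
    # Cost is O(N^2) in both time and memory (normalization constants omitted).
    x = np.asarray(x, dtype=float)
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.mean(np.exp(-d2 / (2.0 * sigma**2)))
```

For large N, this quadratic cost is exactly what the EIPS estimators later in the paper avoid.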

Recently, we proposed a no-trick (NT) framework for kernel adaptive filtering (KAF) using explicit feature mappings that define a positive definite kernel for a finite-dimensional RKHS [15]. The same concept can be integrated seamlessly into ITL using a family of estimators based on separable finite-rank or degenerate kernels whose bases are sampled or constructed independently of the training data. Instead of manipulating the data through pruning or sparsification, we design a family of finite-rank explicit inner product space (EIPS) Mercer kernels, specifically their explicit feature mappings, for fast, scalable, and accurate ITL estimators. The Mercer theorem states:

Theorem 1 (Mercer kernel).

Let μ be a probability measure on the input space X, and L²_μ(X) the associated Hilbert space. Given a non-negative sequence {λ_i} with Σ_i λ_i < ∞, and an orthogonal family of unit-norm functions {ψ_i(·)} in L²_μ(X), the associated Mercer kernel is

k(x, x′) = Σ_{i=1}^∞ λ_i ψ_i(x) ψ_i(x′), (2)

where {λ_i} are the eigenvalues of the kernel and {ψ_i} its eigenfunctions, and the series' convergence is absolute and uniform.

In practice, for simplicity, a Mercer kernel where the infinite sum in (2) can be expressed in closed form is often used, e.g., the Gaussian kernel function, and the expansion itself is either unknown or ignored. In this paper, we take an alternative approach and focus on the family of EIPS kernel functions (specifically, data-independent finite-rank kernels) shown in Fig. 1, which accelerate the computation of ITL quantities with the utmost versatility and convenience. Compared to (2), which consists of a continuous orthonormal basis of eigenfunctions, finite-rank or degenerate Mercer kernels of rank D are expressed using the finite series

k(x, x′) = Σ_{i=1}^D λ_i ψ_i(x) ψ_i(x′).

Fig. 1: A taxonomy of kernels. Kernel functions for kernel density estimation (KDE) need not be positive definite. Here, we focus on the family of positive-definite explicit inner product space (EIPS) kernels.

Defining an EIPS allows a weighted sum of finite-rank kernel evaluations to be factorized and collapsed for later use as a consolidated feature vector. This is especially efficient when coupled with KAF using ITL cost functions such as the maximum correntropy criterion (MCC) and the minimum error entropy (MEE). Other ITL estimators such as that of Cauchy-Schwartz quadratic mutual information (QMI-CS) and Euclidean distance based quadratic mutual information (QMI-ED) also benefit from the reduced computational complexity offered by this family of fast, scalable, and accurate ITL estimators.

I-A Related Work

A related concept is the fast multipole method (FMM) [16], developed for the rapid summation of potential fields generated by a large number of sources (the N-body problem in mathematical physics), in which the potential function is expanded in multipole (singular) series and local (regular) series at the expansion centers. This typically combines a far-field expansion of the kernel, in which the influence of sources and targets separates, with a hierarchical subdivision of the space of sources into panels or clusters. For the Gaussian field, various factorization and space subdivision schemes include the fast Gauss transform (FGT) and the improved Gauss transform [17]. The improved FGT for KDE uses the greedy farthest-point clustering algorithm to model the space subdivision task as a k-center problem [18]. Unfortunately, their effectiveness diminishes for higher dimensions and large datasets, since a Hermite expansion is used for FGT, resulting in p^d terms for a p-term truncation in d dimensions, i.e., exponential growth in the accumulation of expansion products along each data dimension. The improved FGT uses a multivariate Taylor series (TS) expansion to reduce the number of expansion terms to polynomial order.

Here, we take the no-trick (NT) kernel method interpretation in [15] by defining an EIPS Mercer kernel equal to the scalar or inner product of the transformed points in a higher finite-dimensional RKHS using an explicit mapping φ, i.e., k(x, x′) = ⟨φ(x), φ(x′)⟩. The Mercer condition guarantees the existence of the underlying mapping and universal approximation. From the inner product perspective, an EIPS kernel naturally factorizes the pairwise interaction between two feature vectors, yielding fast, scalable, and accurate solutions without the computational overhead of clustering the sources. Compared to FMM, the EIPS approach goes further in the abstraction by defining an equivalent positive-definite kernel (where the inner product between two points is computed using the explicitly mapped feature vectors); therefore, it is not merely an approximation method, but rather a new, exact kernel formulation within the unifying framework of the RKHS. In this paradigm, the linear combination (sum) of the training data (source points) feature vectors is a linear function represented by a weight vector in this space. Furthermore, in applications such as KAF, we are always interested in following the embedded trajectory of the input signal (local approximation to the trajectory), so we do not need to seek expansions in other parts of the space, unlike FMM. The EIPS kernel method is both efficient and effective for low-dimensional KDE, e.g., an EIPS-ITL estimator for information quantities based on the prediction error, which is typically one dimensional for time series prediction, extracts more information than second-order statistical models such as [19]. Without loss of generality, we will use the simple TS expansion EIPS kernel as the ITL estimator in low dimensions. For higher dimensions, we will instead use Gaussian quadrature (GQ) with subsampled grids to directly control the number of features used in the feature mapping or EIPS kernel, which has been shown to be effective for high-dimensional and large data [20].

Random Fourier features (RFF) [21] have been successfully applied for efficient kernel learning using finite-rank kernels. While RFF belong to the EIPS family (their bases are sampled randomly and independently of the training data), for small dimensions, deterministic maps yield significantly lower error and performance variance. For higher dimensions, they can also produce inferior results compared to deterministic polynomial-exact sampling methods, e.g., for online kernel adaptive filtering [15]. Nonetheless, they represent a simple and efficient way to construct EIPS kernels.

Low-rank approximation methods such as the Nyström method [22] (basis functions are randomly sampled from the training examples) are data dependent, making them less appealing than the data-independent EIPS method. The incomplete Cholesky decomposition (ICD) is another data-dependent approximation method that has been shown to speed up the computation of information theoretic quantities with state-of-the-art ITL performance, by leveraging the fact that the eigenvalues of the Gram matrix diminish rapidly, so it can be replaced by a lower-rank approximation [23, 14]. The symmetric positive definite Gram matrix can be expressed as the product of a lower triangular matrix with positive diagonal entries and its transpose. Using a greedy approach, the ICD minimizes the trace (sum of eigenvalues) of the residual with a reduced-rank lower triangular factor to arbitrary accuracy, where the desired precision is a small positive number of choice. The reduced rank, which determines the space and time complexity, is indirectly set by the desired precision, depending on the density of the samples. Furthermore, the ICD is not only a data-dependent batch method, but also comes with considerable computational overhead: it still requires computing entries of the kernel matrix. The EIPS-ITL estimators, on the other hand, are a full kernel approach that defines a positive-definite kernel using explicitly mapped features from data-independent bases. The feature space dimension is set directly, allowing greater control over resource allocation and simplified implementation, especially for online applications.

The rest of the paper is organized as follows. In Section II, explicit-inner-product-space kernel construction is discussed. Information theoretic learning is reviewed in Section III, and EIPS-ITL estimators are presented. Experimental results are shown in Section IV. Finally, Section V concludes this paper.

II EIPS Feature Mapping Construction

To accelerate ITL estimators, we propose to map the input data to a higher finite-dimensional feature space using EIPS features. Having data-independent bases improves versatility significantly, allowing the mapping to be predetermined and implemented online with greater efficiency. The explicit feature mapping can be constructed deterministically, randomly, or via a combination of the two approaches (hybrid). These mappings define a new, equivalent reproducing kernel with the universal approximation property [15]. Furthermore, the inner product in the finite-dimensional RKHS naturally factorizes the pairwise interactions and greatly simplifies the computation and storage of ITL quantities, e.g., it reduces the cost of computing all pairwise interactions for N points from O(N²) to O(N) and consolidates the collection of points into a single weight vector of dimension D.

The popular random Fourier features [21] belong to a class of randomly constructed EIPS kernels for scaling up kernel machines. The underlying principle states:

Theorem 2 (Bochner, 1932[24]).

A continuous shift-invariant properly-scaled kernel k(x, x′) = k(x − x′), with x, x′ ∈ ℝᵈ, is positive definite if and only if k is the Fourier transform of a proper probability distribution.

The corresponding kernel can then be expressed in terms of its Fourier transform p(ω) (a probability distribution) as

k(x − x′) = ∫ p(ω) e^{jωᵀ(x−x′)} dω = E_ω[⟨φ_ω(x), φ_ω(x′)⟩], (4)

where ⟨φ_ω(x), φ_ω(x′)⟩ = e^{jωᵀx} (e^{jωᵀx′})* is the Hermitian inner product, and ⟨φ_ω(x), φ_ω(x′)⟩ is an unbiased estimate of the properly scaled shift-invariant kernel k(x − x′) when ω is drawn from the probability distribution p(ω). We ignore the imaginary part of the complex exponentials to obtain a real-valued mapping.
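A minimal sketch of this construction for the Gaussian kernel (all names here are illustrative, and the feature dimension D is chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 3, 5000, 1.0

# Spectral samples for the Gaussian kernel: omega ~ N(0, I / sigma^2)
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    # Real-valued random Fourier feature map: z(x) @ z(y) approximates k(x - y)
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
k_exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma**2))
k_rff = z(x) @ z(y)
```

With enough spectral samples, the inner product z(x)ᵀz(y) concentrates around the exact kernel value at a rate of roughly O(1/√D).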

Alternatively, the RFF approach can be viewed as performing numerical integration using randomly selected sample points. In numerical analysis, there are many polynomial-exact ways to approximate the integral with a discrete sum of judiciously selected points. For small input dimensions, deterministic feature mappings, such as the Taylor series expansion, yield significantly lower error and performance variance than random maps. For data of higher dimensions, polynomial-exact deterministic features can be sampled from the distribution determined by their weights to combat the curse of dimensionality and gain direct control over the feature dimension. We have analyzed the performance of deterministic vs. random features for online kernel adaptive filtering in [15]. In this paper, we briefly summarize the class of deterministically constructed EIPS kernels for ITL estimators.

II-A Taylor Polynomial Features

This is the most straightforward deterministic feature map for EIPS based on the Gaussian kernel, where each term in the TS expansion is expressed as a sum of matching monomials in the data pair x and y, i.e.,

k(x, y) = exp(−‖x − y‖²/(2σ²)) = exp(−‖x‖²/(2σ²)) exp(−‖y‖²/(2σ²)) exp(xᵀy/σ²). (6)

We can easily factor out the product terms that depend on x and y independently. The joint term in (6), exp(xᵀy/σ²), can be expressed as a power series or infinite sum using Taylor polynomials as

exp(xᵀy/σ²) = Σ_{n=0}^∞ (xᵀy)ⁿ / (σ²ⁿ n!). (7)

Using shorthand, we can factor the inner-product exponentiation as

(xᵀy)ⁿ = Σ_{j ∈ {1,…,d}ⁿ} (x_{j₁} ⋯ x_{jₙ})(y_{j₁} ⋯ y_{jₙ}), (8)

where j enumerates over all selections of n coordinates (including repetitions and different orderings of the same coordinates), thus avoiding collecting equivalent terms and writing down their corresponding multinomial coefficients, i.e., as an inner product between degree-n monomials of the coordinates of x and y. Substituting this into (7) and (6) yields the following explicit feature map:

φ(x) = exp(−‖x‖²/(2σ²)) ( x_{j₁} ⋯ x_{jₙ} / (σⁿ √(n!)) : n ≥ 0, j ∈ {1,…,d}ⁿ ),

where k(x, y) = ⟨φ(x), φ(y)⟩. For TS feature approximation or EIPS kernel construction, we truncate the infinite sum to the first r + 1 terms:

φᵣ(x) = exp(−‖x‖²/(2σ²)) ( x_{j₁} ⋯ x_{jₙ} / (σⁿ √(n!)) : 0 ≤ n ≤ r, j ∈ {1,…,d}ⁿ ),

where the TS approximation is exact up to polynomials of degree r.

In practice, the different permutations of j in each n-th term of the Taylor expansion, (8), can be grouped into a single feature corresponding to a distinct monomial, resulting in (d+n−1 choose n) features of degree n, and a total of (d+r choose r) features of degree at most r.
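In one dimension, the grouped map reduces to a single monomial per degree; a sketch under that simplification (function name and truncation order are illustrative):

```python
import numpy as np
from math import factorial

def taylor_features_1d(x, sigma=1.0, r=10):
    # Truncated Taylor feature map for the 1-D Gaussian kernel:
    # phi_n(x) = exp(-x^2/(2 sigma^2)) * (x/sigma)^n / sqrt(n!), n = 0..r,
    # so that phi(x) . phi(y) approximates exp(-(x - y)^2 / (2 sigma^2)).
    n = np.arange(r + 1)
    coef = np.array([1.0 / np.sqrt(factorial(int(k))) for k in n])
    return np.exp(-x**2 / (2.0 * sigma**2)) * (x / sigma) ** n * coef

x, y, sigma = 0.3, -0.5, 1.0
k_exact = np.exp(-(x - y) ** 2 / (2.0 * sigma**2))
k_ts = taylor_features_1d(x, sigma) @ taylor_features_1d(y, sigma)
```

Because the truncation error decays factorially in r, a handful of features already reproduces the Gaussian kernel to near machine precision for small arguments.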

II-A1 Precision of Taylor Series Expansion

For the Gaussian kernel, the precision of the TS expansion can be quantified exactly using the mean-value form of the approximation remainder.

Theorem 3 (Taylor’s Formula).

Let the function f be r + 1 times differentiable (with integer r ≥ 1) on an open interval, with f⁽ʳ⁾ continuous on the closed interval between a and u; then the remainder of the r-th order Taylor polynomial Tᵣ is

Rᵣ(u) = f⁽ʳ⁺¹⁾(ξ) (u − a)ʳ⁺¹ / (r + 1)!, (11)

for some real number ξ between a and u.

Suppose we want the desired accuracy to be within ε in absolute error, i.e., |f(u) − Tᵣ(u)| ≤ ε for all u in the interval of interest. Bounding the worst case of (11) yields the minimum truncation order r. In Section IV-A, we will illustrate this by comparing the performance of the Taylor polynomial EIPS formulation with a state-of-the-art ITL reduced-rank-approximation fast method using incomplete Cholesky decomposition.
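The worst-case bound can be inverted numerically to pick the truncation order; a small sketch, assuming the expanded function is the exponential of (6)-(7) with argument magnitude bounded by u_max (function and variable names are illustrative):

```python
from math import exp, factorial

def min_taylor_order(eps, u_max=1.0):
    # Smallest r such that the worst-case Lagrange remainder of exp(u),
    # exp(u_max) * u_max**(r + 1) / (r + 1)!, is at most eps for |u| <= u_max.
    r = 0
    while exp(u_max) * u_max ** (r + 1) / factorial(r + 1) > eps:
        r += 1
    return r
```

For example, with u_max = 1, a tolerance of 10⁻⁶ is met by a 9th-order expansion, since e/10! ≈ 7.5 × 10⁻⁷.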

II-B Gaussian Quadrature (GQ) Features with Subsampled Grids

A quadrature rule is a choice of points and weights chosen to minimize the maximum integration error over a function class. For a fixed diameter of the input domain, the sample complexity (SC) is defined as:

Definition 1.

For any ε > 0, a quadrature rule has sample complexity SC(ε), the smallest number of samples such that the rule yields a maximum error of at most ε.

There are many quadrature rules; without loss of generality, we focus on Gaussian quadrature (GQ), specifically the Gauss-Hermite quadrature using Hermite polynomials. In numerical analysis, GQ is an exact-polynomial approximation of a one-dimensional definite integral, ∫ f(t) e^{−t²} dt ≈ Σ_{i=1}^m wᵢ f(tᵢ), where the m-point construction yields an exact result for polynomials of degree up to 2m − 1. While the GQ points and corresponding weights are both distribution and parameter dependent, they can be computed efficiently using orthogonal polynomials. GQ approximations are accurate for integrating functions that are well-approximated by polynomials, including all sub-Gaussian densities. Compared to random Fourier features, GQ features have a much weaker dependence on the approximation error ε, at a constant additional cost independent of the error [20].
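A one-dimensional Gauss-Hermite sketch using NumPy's built-in rule (variable names illustrative); with m points, any polynomial integrand of degree up to 2m − 1 is integrated exactly:

```python
import numpy as np

# m-point Gauss-Hermite rule: integrates f(t) * exp(-t^2) exactly for
# polynomials f up to degree 2m - 1.
m = 5
t, w = np.polynomial.hermite.hermgauss(m)

def gauss_expectation(f):
    # E[f(X)] for X ~ N(0, 1), via the change of variables x = sqrt(2) t
    return (w @ f(np.sqrt(2.0) * t)) / np.sqrt(np.pi)
```

With m = 5, the second and fourth moments of a standard normal (1 and 3) are recovered to machine precision, since both integrands are polynomials of degree below 2m − 1 = 9.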

To extend one-dimensional GQ to higher dimensions, grid-based quadrature rules can be constructed efficiently. A dense grid or tensor-product construction factors the integral (4) along the dimensions e₁, …, e_d, where eᵢ are the standard basis vectors, and each factor can be approximated using a one-dimensional quadrature rule. However, since the sample complexity is doubly-exponential in the dimension d, a sparse grid or Smolyak quadrature is typically used [25]. Only points up to some fixed total level A are included, achieving a similar error with exponentially fewer points than a single larger quadrature rule.

The major drawback of the grid-based construction is the lack of fine tuning for the feature dimension. Since the number of samples extracted in the feature map is determined by the degree of polynomial exactness, even a small incremental change can produce a significant increase in the number of features. Subsampling according to the distribution determined by their weights is used to combat both the curse of dimensionality and the lack of detailed control over the exact feature number. There are also data-adaptive methods to choose a quadrature rule for a predefined number of samples [20], but we are focused on data-independent EIPS features.

II-C Universal Approximation

EIPS feature mappings such as random Fourier features, Gaussian quadrature, and Taylor polynomials are not only approximation methods, but also define equivalent kernels that induce a new reproducing kernel Hilbert space: a nonlinear mapping transforms the data from the original input space to a new, higher finite-dimensional RKHS. This RKHS is not necessarily contained in the RKHS corresponding to the original kernel function, e.g., the Gaussian kernel. It is easy to show that the EIPS mappings discussed in this paper induce positive-definite kernel functions satisfying Mercer's conditions.

Proposition 1 (Closure properties).

Let k₁ and k₂ be positive-definite kernels over X × X (where X ⊆ ℝᵈ), a > 0 a positive real number, and f(·) a real-valued function on X; then the following functions are positive-definite kernels:

  1. k(x, x′) = k₁(x, x′) + k₂(x, x′),

  2. k(x, x′) = a k₁(x, x′),

  3. k(x, x′) = k₁(x, x′) k₂(x, x′),

  4. k(x, x′) = f(x) f(x′).

Since exponentials and polynomials are positive-definite kernels, under the closure properties it is clear that the inner products of random Fourier features, Gaussian quadrature, and Taylor polynomials are all reproducing kernels. It follows that these kernels have the universal approximation property: their span approximates uniformly an arbitrary continuous target function to any degree of accuracy over any compact subset of the input space.
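The product-closure property (item 3) can be checked numerically via the Schur product theorem; a sketch with two Gram matrices on random points (sizes and kernel parameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))

# Gram matrices of two positive-definite kernels on the same points
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_gauss = np.exp(-d2 / 2.0)       # Gaussian kernel
K_poly = (1.0 + X @ X.T) ** 2     # degree-2 polynomial kernel

# Closure under products: the elementwise (Hadamard) product of two
# PSD Gram matrices is again PSD, so its eigenvalues are nonnegative.
eigs = np.linalg.eigvalsh(K_gauss * K_poly)
```

Up to floating-point round-off, the smallest eigenvalue of the Hadamard product stays nonnegative, as the proposition predicts.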

III EIPS Kernel for Information Theoretic Learning (EIPS-ITL)

ITL is a framework to adapt nonparametric systems using information quantities such as entropy and divergence [1]. ITL criteria are still directly estimated from data via the Parzen kernel estimator, but they extract more information from the data for adaptation and therefore yield solutions that are more accurate than mean squared error (MSE) in non-Gaussian and nonlinear signal processing. The fact that reproducing kernels are covariance functions explains their early role in inference problems [26, 27]. Rényi's quadratic entropy of a random variable X with pdf p(x) is defined as

H₂(X) = −log ∫ p²(x) dx.

The Parzen estimate of the pdf, given a set of independent and identically distributed (i.i.d.) data {x_i}_{i=1}^N drawn from the distribution, is

p̂(x) = (1/N) Σ_{i=1}^N G_σ(x − x_i), (13)

where N is the number of data samples, and G_σ is the Gaussian kernel with kernel size σ:

G_σ(x − x_i) = (1/√(2πσ²)) exp(−(x − x_i)²/(2σ²)).
Without loss of generality, we will only consider the Gaussian kernel and related EIPS kernels in this paper.

Using the no-trick or EIPS explicit mapping φ, the kernel function in (13) is replaced with the inner product of the explicitly mapped points (functions) in the finite-dimensional RKHS as

p̂(x) = (1/N) Σ_{i=1}^N ⟨φ(x), φ(x_i)⟩ = ⟨φ(x), w⟩, (15)

where w = (1/N) Σ_{i=1}^N φ(x_i) is the sample mean or centroid and is, in general, independent of the target or query point. Alternatively, from the RKHS paradigm, w can be viewed as a weight vector that represents or parametrizes a linear function in the EIPS, i.e., f(x) = ⟨φ(x), w⟩.

A nonparametric estimate of Rényi's quadratic entropy directly from samples is

Ĥ₂(X) = −log ÎP(X),

where the information potential (IP) is defined as

ÎP(X) = (1/N²) Σ_{j=1}^N Σ_{i=1}^N G_σ(x_j − x_i).

Using EIPS (15), the IP estimate becomes

ÎP(X) = (1/N²) Σ_{j=1}^N Σ_{i=1}^N ⟨φ(x_j), φ(x_i)⟩ = ⟨w, w⟩ = ‖w‖².

This drastically reduces the quadratic complexity from O(N²) to a linear O(N), which only requires computing the weight vector or centroid w once, then squaring it, i.e., the scalar product of w with itself. Online update of this term is embarrassingly simple, as new sources or sample points are simply added to the existing weight vector with the appropriate normalization, i.e., w ← (N w + φ(x_{N+1})) / (N + 1).
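The factorization is an exact algebraic identity, not an approximation, which a few lines verify (the feature map below is an arbitrary stand-in, not the paper's specific choice):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 200, 64

# Hypothetical explicit feature map: random Fourier features of dimension D
w_spec = rng.normal(size=D)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
def phi(x):
    return np.sqrt(2.0 / D) * np.cos(w_spec * x + b)

x = rng.normal(size=N)
Phi = np.stack([phi(xi) for xi in x])   # N x D feature matrix

ip_direct = np.mean(Phi @ Phi.T)        # double sum over all pairs, O(N^2 D)
centroid = Phi.mean(axis=0)             # weight vector w, O(N D)
ip_fast = centroid @ centroid           # ||w||^2
```

Both quantities agree to machine precision, but the second never materializes the N × N Gram matrix.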

Let {x_t, t ∈ T} be a stochastic process with T being an index set. The nonlinear mapping φ induced by the Gaussian kernel maps the data into the feature space, where the auto-correntropy function is defined from T × T into ℝ⁺, given by

V(t, s) = E[⟨φ(x_t), φ(x_s)⟩] = E[G_σ(x_t − x_s)],

where E[·] denotes the expectation. A sufficient condition for V(t, s) = V(t − s) is that the stochastic process must be strictly stationary on all the even moments, a stronger condition than wide-sense stationarity (limited to second-order moments). The IP is the mean of the squared projected data, or the expected value of correntropy over the lags. A more general form of correntropy (cross-correntropy) [28] between two random variables X and Y is defined as

V(X, Y) = E[G_σ(X − Y)].
The sample estimate of correntropy for a finite number of data points is

V̂(X, Y) = (1/N) Σ_{i=1}^N G_σ(x_i − y_i).
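A minimal sketch of this estimator (name illustrative; the Gaussian normalization constant is omitted, as is common when correntropy is used as a similarity or cost):

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    # Sample cross-correntropy: mean Gaussian kernel of the pointwise errors.
    e = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))
```

By construction the estimate is bounded in (0, 1], reaching 1 only when the two signals agree sample-by-sample.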
Using the Taylor series expansion of the Gaussian kernel, correntropy can be expressed as

V(X, Y) = (1/(√(2π) σ)) Σ_{n=0}^∞ ((−1)ⁿ / (2ⁿ σ²ⁿ n!)) E[(X − Y)²ⁿ],

which involves all the even-order moments of the random variable X − Y (where the kernel choice dictates the expansion, e.g., the sigmoidal kernel contains all the odd moments).
In fact, all learning algorithms that use nonparametric pdf estimates in the input space admit an alternative formulation as kernel methods expressed in terms of inner products. As shown above, the kernel techniques are able to extract higher order statistics of the data that should lead to performance improvements for non-Gaussian environments. Next, we show the explicit EIPS derivations of several commonly used ITL estimators.

III-1 EIPS Quadratic Mutual Information (QMI)

The Cauchy-Schwartz quadratic mutual information and Euclidean distance based QMI are defined, respectively, as

I_CS(X; Y) = −log ( V_C² / (V_J V_M) ),    I_ED(X; Y) = V_J − 2 V_C + V_M.

The above expressions consist of the following three distinct terms. The EIPS IP estimate of the joint pdf (V_J) is computed as

V̂_J = (1/N²) Σ_{i,j} ⟨φ(x_i), φ(x_j)⟩ ⟨φ(y_i), φ(y_j)⟩ = (1/N²) 1ᵀ(A ∘ A)1,

where the first equality uses the shift-invariant (separable) property of the joint Gaussian kernel, the second is due to the associative property with the square matrix A = Φ_xᵀΦ_y ∈ ℝ^{D×D}, Φ_x = [φ(x_1), …, φ(x_N)]ᵀ, and ∘ is the Hadamard product operator. The EIPS IP estimate of the factorized marginal pdf (V_M) becomes

V̂_M = ‖w_x‖² ‖w_y‖²,

where w_x = (1/N) Σ_i φ(x_i) and w_y = (1/N) Σ_i φ(y_i). And, the EIPS generalized-cross IP estimate (V_C) is

V̂_C = (1/N) Σ_i ⟨φ(x_i), w_x⟩ ⟨φ(y_i), w_y⟩ = (1/N) w_xᵀ A w_y,

where the last equality is due to the commutative property of summation and the fact that the transpose of a scalar is itself, i.e., φ(y_i)ᵀ w_y = w_yᵀ φ(y_i).

III-2 EIPS Divergence and Distance Measures

The CS divergence and ED divergence between two pdfs p and q are defined as

D_CS(p, q) = −log ( (∫ p(x) q(x) dx)² / (∫ p²(x) dx ∫ q²(x) dx) ),

D_ED(p, q) = ∫ p²(x) dx − 2 ∫ p(x) q(x) dx + ∫ q²(x) dx,

respectively, where the cross information potential (CIP) estimate can be computed as

ĈIP(X, Y) = (1/N²) Σ_{j=1}^N Σ_{i=1}^N ⟨φ(x_j), φ(y_i)⟩ = ⟨w_x, w_y⟩.

It follows that the correntropy coefficient estimate is

η̂ = ĈIP(X, Y) / √(ÎP(X) ÎP(Y)) = ⟨w_x, w_y⟩ / (‖w_x‖ ‖w_y‖).
III-A NT Kernel Adaptive Filtering using EIPS-ITL Criteria

EIPS not only facilitates the computation of ITL quantities, but also integrates seamlessly into online kernel adaptive information filters, as it did for no-trick KAF using conventional MSE criterion [15].

III-A1 NT Maximum Correntropy Criterion

The counterpart to the kernel least mean square (KLMS) [30] algorithm, which adopts the MSE as the cost, is the kernel maximum correntropy criterion (KMCC) filter [31]. Second-order statistics may not be suitable for all nonlinear, especially non-Gaussian, situations. The KMCC combines the simplicity of the KLMS with the higher-order statistics of the correntropy criterion. Using the NT formulation, the NT-KMCC is summarized in Alg. 1. Compared to the NT-KLMS [15], we can see that the NT-KMCC has a variable step size controlled by the prediction error.

Algorithm 1 NT-KMCC Algorithm

Input: φ(·): NT feature map; w = 0: feature space weight vector; η: learning rate
for n = 1, 2, … do
    e(n) = d(n) − ⟨w, φ(u(n))⟩
    w ← w + η exp(−e(n)²/(2σ²)) e(n) φ(u(n))
end for
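A hedged Python sketch of this update loop, assuming a generic explicit feature map phi and the Gaussian correntropy kernel (all names are illustrative, not the paper's reference implementation):

```python
import numpy as np

def nt_kmcc(u, d, phi, eta=0.5, sigma=2.0):
    # NT-KMCC sketch: LMS-style update on the explicit feature vector, with the
    # correntropy-induced variable step size exp(-e^2 / (2 sigma^2)) * e.
    w = np.zeros_like(phi(u[0]))
    errors = []
    for un, dn in zip(u, d):
        x = phi(un)
        e = dn - w @ x                                     # prediction error
        w = w + eta * np.exp(-e**2 / (2.0 * sigma**2)) * e * x
        errors.append(e)
    return w, np.array(errors)
```

The exp(−e²/(2σ²)) factor shrinks the step for large (outlier) errors, which is exactly the variable step size noted above relative to NT-KLMS.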

III-A2 EIPS Minimum Error Entropy

Given a batch of N error samples {e_i}, the information potential estimator using Rényi's quadratic entropy is

ÎP(e) = (1/N²) Σ_{j=1}^N Σ_{i=1}^N G_σ(e_j − e_i).
The cost function for the MEE criterion is given as

max_w J(w) = max_w ÎP(e).
The IP is smooth and differentiable; to maximize its value, one can simply move in the direction of its gradient

∇J(w) = (1/(σ² N²)) Σ_{j=1}^N Σ_{i=1}^N G_σ(e_j − e_i)(e_j − e_i)(φ(u_j) − φ(u_i)). (35)
For online methods, especially KAF where the kernel trick introduces (super)linear complexity, the Gaussian quadratic stochastic information gradient (SIG) is typically used, which drops the expectation and evaluates the gradient over only the most recent N samples:

∇Ĵ(w) = (1/(σ² N)) Σ_{i=n−N}^{n−1} G_σ(e_n − e_i)(e_n − e_i)(φ(u_n) − φ(u_i)).
Using the EIPS approach, the full (expected value or double sum) IP can be computed extremely efficiently. Expanding (e_j − e_i)(φ(u_j) − φ(u_i)) and applying the explicit feature mapping to the error kernel factorizes the double summation in the full IP gradient (35) into four groups of independent single sums, where the four scalar terms can be summed independently. Since the errors are typically small and one dimensional, without loss of generality, we elect to use the simple Taylor series expansion EIPS mapping for the errors.

The NT-KMEE is summarized in Alg. 2. The NT-KMEE-SIG formulation (single sum) follows trivially and can be used to further accelerate online adaptation. Similarly, the self-adjusting step-size formulation [32] can be easily applied, which scales the step size by a nonnegative factor.

Algorithm 2 NT-KMEE Algorithm

Input: φ(·): input NT feature map; ψ(·): error EIPS feature map; w = 0: feature space weight vector; η: learning rate
for n = 1, 2, … do
    e(n) = d(n) − ⟨w, φ(u(n))⟩
    w ← w + η ∇Ĵ(w), with the double sum in the IP gradient (35) factorized using ψ
end for

IV Simulation Results

Extensive comparisons between MSE and MEE techniques have already been performed in [33, 31, 34]; here, we focus on the speed of the EIPS kernel framework for ITL.

IV-A Accelerating ITL Quantities Computation

First, we evaluate the validity of the proposed method using five benchmark datasets from the UCI machine learning repository [35]. We normalized them individually (iris, cancer, wine, yeast, and abalone) before computing the estimators: z-score followed by scaling the global extrema to a fixed range. As all ITL quantities share similar forms, without loss of generality, we computed the Cauchy-Schwartz quadratic mutual information and correntropy coefficient estimates on all possible pairs of features for each dataset, using the direct method, incomplete Cholesky decomposition, and the simple Taylor polynomial EIPS kernel method. The Gaussian kernel size and the desired ICD precision are fixed, the latter corresponding to a minimum number of terms in the TS expansion using (11). Tables I and II summarize the results averaged over 10 independent trials. The experiments were performed using an Intel Core i7-7700 (at 3.60 GHz with 16 GB of RAM) and MATLAB. In each trial, the ITL descriptors' values and CPU times are accumulated over all feature pairs. Since ICD is data-dependent, the average reduced rank is listed in a separate column. For comparison, we show the performance of EIPS kernels using Taylor polynomials of two truncation orders, matching the two precision levels reported in the tables. As demonstrated in [14], ICD is able to match the same value as the direct evaluation using Gram matrices with at least 6-digit accuracy (there is a tiny rounding error for the cancer dataset in the least significant digit after the decimal point, compared to the direct method, as the correntropy coefficients are accumulated over all possible feature pairs in each trial), in a significantly lower computation time. Remarkably, the EIPS method further outperforms the ICD's speed by another order of magnitude (with no accumulated rounding error for the cancer dataset when using the higher-order TS expansion).

As discussed above, the ICD does not control the space and time complexities directly, i.e., the reduced dimension cannot be fixed a priori. The ICD is useful only when the eigenvalues of the matrix drop sufficiently fast and the original Gram matrix can be represented by a low-rank approximation with sufficient accuracy. However, if this ideal condition fails to exist, e.g., if the dimensionality increases with respect to the number of samples, the ICD performance will suffer. The EIPS approach, on the other hand, defines an equivalent kernel function; as such, it is not merely an approximation method, but rather a new, exact kernel formulation within the theoretically-grounded unifying framework of the RKHS.

Not only can we compute ITL quantities with ease and accuracy, but we can also integrate them seamlessly into online KAF algorithms using ITL cost functions, as demonstrated next.

Data (N, dim.)    | Direct: value / time (s) | ICD: value / time (s) / rank | EIPS lower-order TS: value / time (s) | EIPS higher-order TS: value / time (s)
iris (150, 4)     | 1.747235 / 0.0719        | 1.747235 / 0.0079 / 8.3      | 1.746707 / 0.0009                     | 1.747235 / 0.0009
wine (178, 13)    | 6.466733 / 1.2174        | 6.466733 / 0.0464 / 7.9      | 6.465304 / 0.0027                     | 6.466733 / 0.0029
cancer (198, 32)  | 112.470020 / 9.9328      | 112.470021 / 0.2189 / 6.4    | 112.463802 / 0.0124                   | 112.470020 / 0.0133
yeast (1484, 8)   | 0.296951 / 30.3389       | 0.296951 / 0.0661 / 7.4      | 0.297262 / 0.0033                     | 0.296951 / 0.0043
abalone (4177, 8) | 22.637017 / 328.6687     | 22.637017 / 0.0971 / 5.3     | 22.637014 / 0.0058                    | 22.637017 / 0.0076
TABLE I: Average Performance of the Direct and Fast Methods (Correntropy Coefficient).
Data (N, dim.)    | Direct: value / time (s) | ICD: value / time (s) / rank | EIPS lower-order TS: value / time (s) | EIPS higher-order TS: value / time (s)
iris (150, 4)     | 0.086585 / 0.0615        | 0.086585 / 0.0081 / 7.8      | 0.086538 / 0.0006                     | 0.086585 / 0.0006
wine (178, 13)    | 0.094259 / 0.9411        | 0.094259 / 0.0496 / 7.3      | 0.094239 / 0.0024                     | 0.094259 / 0.0028
cancer (198, 32)  | 0.059147 / 7.0841        | 0.059147 / 0.2353 / 6.0      | 0.059141 / 0.0106                     | 0.059147 / 0.0140
yeast (1484, 8)   | 0.000155 / 23.0459       | 0.000155 / 0.0709 / 5.5      | 0.000155 / 0.0044                     | 0.000155 / 0.0049
abalone (4177, 8) | 0.000237 / 217.0791      | 0.000237 / 0.1035 / 5.1      | 0.000237 / 0.0052                     | 0.000237 / 0.0082
TABLE II: Average Performance of the Direct and Fast Methods (Cauchy-Schwarz Quadratic Mutual Information).

IV-B NT Kernel Adaptive Information Filtering with Error Entropy and Error Correntropy Criteria

Here we perform one-step-ahead prediction on the Mackey-Glass (MG) chaotic time series [36], defined by the time-delay ordinary differential equation

\frac{dx(t)}{dt} = \frac{\beta\, x(t-\tau)}{1 + x^{n}(t-\tau)} - \gamma\, x(t)

where \beta, \gamma, n, and \tau are fixed parameters, discretized at a sampling period of 6 seconds using the fourth-order Runge-Kutta method with a fixed initial condition. Chaotic dynamics are extremely sensitive to initial conditions: small differences in initial conditions yield widely diverging outcomes, rendering long-term prediction intractable in general.
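The series generation can be sketched as follows (a hedged illustration: the parameter values β = 0.2, γ = 0.1, n = 10, τ = 30 and the constant initial history are common benchmark choices, assumed here since the exact values are not shown; the delayed term is held constant within each Runge-Kutta step, a standard simplification for this delay equation).

```python
import numpy as np

def mackey_glass(n_samples, beta=0.2, gamma=0.1, n=10, tau=30.0,
                 h=0.1, x0=1.2, sample_period=6.0):
    """Integrate dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t) with RK4,
    freezing the delayed term over each step, then subsample every
    `sample_period` seconds."""
    delay_steps = int(round(tau / h))
    stride = int(round(sample_period / h))
    total = n_samples * stride + delay_steps + 1
    x = np.empty(total)
    x[:delay_steps + 1] = x0                  # constant history on [-tau, 0]
    for t in range(delay_steps, total - 1):
        xd = x[t - delay_steps]               # delayed state x(t - tau)
        f = lambda v: beta * xd / (1.0 + xd ** n) - gamma * v
        k1 = f(x[t]); k2 = f(x[t] + 0.5 * h * k1)
        k3 = f(x[t] + 0.5 * h * k2); k4 = f(x[t] + h * k3)
        x[t + 1] = x[t] + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x[delay_steps::stride][:n_samples]

series = mackey_glass(500)
print(series.min(), series.max())
```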

The data are standardized by subtracting the mean and dividing by the standard deviation, then scaled by the resulting maximum absolute value, which guarantees that the sample values lie within the range [-1, 1]. A time-embedding or input dimension of is used. The results are averaged over 200 independent trials. In each trial, 2000 consecutive samples with a random starting point in the time series are used for training, and testing consists of 200 consecutive samples located in the future.
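The preprocessing and time-embedding steps can be sketched as follows (the embedding dimension `dim` is left as an argument, since its value is not stated here):

```python
import numpy as np

def preprocess(series, dim):
    """Standardize, scale into [-1, 1], then build time-embedded input/target
    pairs: X[t] = (s[t], ..., s[t+dim-1]) predicts y[t] = s[t+dim]."""
    s = (series - series.mean()) / series.std()
    s = s / np.abs(s).max()                   # now guaranteed within [-1, 1]
    X = np.stack([s[i:i + dim] for i in range(len(s) - dim)])
    y = s[dim:]
    return X, y

X, y = preprocess(np.sin(0.1 * np.arange(300)), 7)
print(X.shape, y.shape)
```

Dividing by the maximum absolute value after standardization is exactly what bounds the samples to [-1, 1], which in turn keeps the Taylor-series feature maps well-conditioned.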

In the first example, we compared the performance of KMCC variants, as shown in Fig. 2. We fixed the finite-dimensional RKHS dimension for the input features to , using a -th degree GQ rule with subsampled grids, RFFs (variants 1 and 2 in [15]), and TS expansion. For a comparable resource allocation, we also compared the CPU time with that of the popular vector-quantization sparsification method (QKMCC), with the vector quantization parameter set at (the final dictionary size is 315). The Gaussian kernel size is set at , and the learning rate is fixed at . As expected, compared to the KLMS, the KMCC requires additional overhead to compute correntropy. The information theoretic computation using EIPS kernels (GQ, RFF1, RFF2, and TS), on the other hand, significantly outperformed the conventional KAF formulations (KLMS and KMCC) and KAF with sparsification (QKMCC) in terms of speed. Again, as with the NT MSE formulations [15], the average EIPS CPU time is constant across all iterations versus the (super)linear growth of conventional kernel methods, making EIPS kernel methods ideal for large datasets and continuous online updating, e.g., on streaming data.
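A minimal sketch of an NT-KMCC-style update (our own illustration, with random Fourier features standing in for the explicit map; the kernel sizes, learning rate, and feature dimension are placeholder values, not those of the experiment): the maximum correntropy criterion simply gates an LMS-style update by a Gaussian of the prediction error, and each update costs O(D), independent of how many samples have been processed.

```python
import numpy as np

def nt_kmcc(X, d, D=200, sigma_k=1.0, sigma_c=1.0, eta=0.5, seed=1):
    """NT-KMCC sketch: an explicit random Fourier feature map replaces the
    kernel trick, and the linear weights in feature space are adapted by
    stochastic gradient ascent on the correntropy of the error."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((D, X.shape[1])) / sigma_k   # RFF frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, D)
    phi = lambda u: np.sqrt(2.0 / D) * np.cos(W @ u + b)
    w = np.zeros(D)
    errors = []
    for u, dt in zip(X, d):
        z = phi(u)
        e = dt - w @ z
        errors.append(e)
        # MCC gradient: a Gaussian of the error gates the LMS-style update,
        # down-weighting outliers automatically
        w += eta * np.exp(-e ** 2 / (2.0 * sigma_c ** 2)) * e * z
    return w, np.array(errors)

rng = np.random.default_rng(0)
Xtr = rng.uniform(-1, 1, (1000, 2))
dtr = np.sin(2.0 * Xtr[:, 0] + Xtr[:, 1])    # a simple nonlinear target
w, e = nt_kmcc(Xtr, dtr)
print(np.mean(e[:100] ** 2), np.mean(e[-100:] ** 2))
```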

Fig. 2: NT-Kernel Maximum Correntropy Criterion (NT-KMCC) algorithm vs. KMCC, and quantized KMCC.
Fig. 3: NT Kernel Minimum Error Entropy (KMEE) algorithms (direct vs. -th order TS expansion for IP) using full information potential (double sum) vs. linear MEE, KMEE, and quantized KMEE with single-sum stochastic information gradient (SIG).

Next, we evaluated the speed of various KMEE implementations in Fig. 3. The IP gradient was computed over the most recent samples of the error history. Since extensive comparisons between random and deterministic features for NT KAF have already been performed in [15], for clarity of presentation we focused on the GQ NT formulations. To further showcase the computational efficiency of the EIPS kernel method, we pitted the NT-KMEE algorithms using the full information potential (double sum) against the linear MEE, kernel MEE, and quantized kernel MEE using the much simpler, linear-complexity single-sum stochastic information gradient (SIG). As expected, the direct method computing the full, expected value of the IP, NT-KMEE-GQ-Gauss(error), yielded the slowest performance. Nonetheless, the NT kernel method maintains constant complexity; in contrast, the CPU time of a conventional kernel method such as KMEE-SIG will continue to increase as the number of training samples (update iterations) grows beyond the 2000 used in this experiment. If we combine NT kernel adaptive filtering with the EIPS-ITL estimate, e.g., the -th order Taylor polynomial used in NT-KMEE-GQ-TS(error), we obtain runtime comparable to the linear filter with SIG (LMEE-SIG) and substantially faster than the kernel SIG methods, as shown in the bottom plot of Fig. 3. On the other hand, LMEE-SIG performed the worst at maximizing the IP (equivalent to minimizing the error entropy), as shown in the top plot of Fig. 3. The NT-KMEE with EIPS-ITL estimate converged to the maximum IP at the same iteration step as the conventional kernel methods (the nonlinear rate is due to the normalization of the double sum vs. the single sum for SIG), but at significantly lower, constant CPU time.
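For concreteness, the full double-sum information potential and its gradient for a linear-in-features model can be sketched as follows (our own illustration with a Gaussian kernel of unit size; an NT-KMEE-style update would ascend this gradient, w ← w + η ∂V/∂w, over a sliding window of recent errors). The demo verifies the analytic gradient against central finite differences.

```python
import numpy as np

def information_potential(e, sigma=1.0):
    # Quadratic information potential of an error window: O(L^2) double sum
    diff = e[:, None] - e[None, :]
    return np.mean(np.exp(-diff ** 2 / (2.0 * sigma ** 2)))

def ip_gradient(w, Z, d, sigma=1.0):
    """Gradient of V(e) w.r.t. the weights of a linear-in-features model
    e_i = d_i - Z[i] @ w, using the full double sum:
    dV/dw = (1/(L^2 s^2)) * sum_ij G_ij (e_i - e_j)(z_i - z_j)."""
    e = d - Z @ w
    L = len(e)
    diff = e[:, None] - e[None, :]
    G = np.exp(-diff ** 2 / (2.0 * sigma ** 2))
    coeff = G * diff / (L ** 2 * sigma ** 2)
    return (coeff.sum(axis=1) - coeff.sum(axis=0)) @ Z

rng = np.random.default_rng(2)
Z = rng.standard_normal((30, 4))
d = rng.standard_normal(30)
w = rng.standard_normal(4)
g = ip_gradient(w, Z, d)
h = 1e-6
num = np.array([(information_potential(d - Z @ (w + h * ek))
                 - information_potential(d - Z @ (w - h * ek))) / (2 * h)
                for ek in np.eye(4)])
print(np.abs(g - num).max())
```

The single-sum SIG variant replaces the double sum with kernels between the current error and the most recent past errors, trading the O(L^2) cost for O(L) at the price of a noisier gradient.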

V Conclusion

In this paper, we proposed a family of fast, scalable, and accurate estimators for information theoretic learning using explicit inner product spaces. ITL replaces conventional second-order statistics with information theory descriptors based on non-parametric estimators of Rényi entropy. ITL is conceptually different from standard kernel methods, as it is based on kernel density estimation. Although ITL kernels need not satisfy Mercer's condition, positive definiteness is preferred for numerical stability in computation. An RKHS for ITL defined on a space of probability density functions simplifies statistical inference for supervised or unsupervised learning. ITL criteria take into account the higher-order statistical behavior of the systems and signals, as desired; however, this comes at the cost of increased computational complexity. By extending the no-trick kernel method to ITL using EIPS feature mappings with constant complexity for certain problems, information extraction from the signal is improved without compromising scalability. We outlined several methods (deterministic, random, and hybrid) to construct EIPS feature mappings, and demonstrated the superior performance of EIPS-ITL estimators and combined NT kernel adaptive filtering using EIPS-ITL cost functions through experiments.

In the future, we will extend the EIPS framework to more advanced ITL algorithms and cost functions, and to settings such as unsupervised learning and reinforcement learning.


  • [1] J. C. Príncipe, Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives.   New York, NY, USA: Springer, 2010.
  • [2] E. Parzen, "On estimation of a probability density function and mode," Ann. Math. Statist., vol. 33, no. 3, pp. 1065–1076, Sep. 1962.
  • [3] J.-W. Xu, A. R. C. Paiva, I. Park, and J. C. Príncipe, "A reproducing kernel Hilbert space framework for information-theoretic learning," IEEE Transactions on Signal Processing, vol. 56, no. 12, pp. 5891–5902, Dec. 2008.
  • [4] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering: A Comprehensive Introduction.   Hoboken, NJ, USA: Wiley, 2010.
  • [5] K. Li and J. C. Príncipe, "The kernel adaptive autoregressive-moving-average algorithm," IEEE Trans. Neural Netw. Learn. Syst., vol. 27, no. 2, pp. 334–346, Feb. 2016.
  • [6] ——, "Biologically-inspired spike-based automatic speech recognition of isolated digits over a reproducing kernel Hilbert space," Frontiers in Neuroscience, vol. 12, p. 194, 2018.
  • [7] K. Li and J. C. Príncipe, "Functional Bayesian filter," 2019.
  • [8] B. Chen, S. Zhao, P. Zhu, and J. C. Príncipe, "Quantized kernel least mean square algorithm," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 1, pp. 22–32, 2012.
  • [9] K. Li and J. C. Príncipe, "Transfer learning in adaptive filters: The nearest instance centroid-estimation kernel least-mean-square algorithm," IEEE Transactions on Signal Processing, vol. 65, no. 24, pp. 6520–6535, Dec. 2017.
  • [10] K. Li and J. C. Príncipe, "Surprise-novelty information processing for Gaussian online active learning (SNIP-GOAL)," in 2018 International Joint Conference on Neural Networks (IJCNN), July 2018, pp. 1–6.
  • [11] A. J. Smola and B. Schölkopf, "Sparse greedy matrix approximation for machine learning," in Proceedings of the Seventeenth International Conference on Machine Learning, ser. ICML '00.   San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2000, pp. 911–918.
  • [12] C. K. I. Williams and M. Seeger, "The effect of the input density distribution on kernel-based classifiers," in Proceedings of the Seventeenth International Conference on Machine Learning, ser. ICML '00.   San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2000, pp. 1159–1166.
  • [13] S. Fine and K. Scheinberg, "Efficient SVM training using low-rank kernel representations," J. Mach. Learn. Res., vol. 2, pp. 243–264, Mar. 2002.
  • [14] S. Seth and J. C. Príncipe, "On speeding up computation in information theoretic learning," in 2009 International Joint Conference on Neural Networks, June 2009, pp. 2883–2887.
  • [15] K. Li and J. C. Príncipe, "No-trick (treat) kernel adaptive filtering using deterministic features," 2019.
  • [16] L. Greengard and V. Rokhlin, "A fast algorithm for particle simulations," Journal of Computational Physics, vol. 73, no. 2, pp. 325–348, 1987.
  • [17] L. Greengard and J. Strain, "The fast Gauss transform," SIAM J. Sci. Stat. Comput., vol. 12, no. 1, pp. 79–94, Jan. 1991.
  • [18] C. Yang, R. Duraiswami, N. A. Gumerov, and L. Davis, "Improved fast Gauss transform and efficient kernel density estimation," in Proceedings Ninth IEEE International Conference on Computer Vision, Oct. 2003, pp. 664–671, vol. 1.
  • [19] K. Li, B. Chen, and J. C. Príncipe, "Kernel adaptive filtering with confidence intervals," in The 2013 International Joint Conference on Neural Networks (IJCNN), Aug. 2013, pp. 1–6.
  • [20] T. Dao, C. D. Sa, and C. Ré, "Gaussian quadrature for kernel features," in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS'17.   USA: Curran Associates Inc., 2017, pp. 6109–6119.
  • [21] A. Rahimi and B. Recht, "Random features for large-scale kernel machines," in Proceedings of the 20th International Conference on Neural Information Processing Systems, ser. NIPS'07.   USA: Curran Associates Inc., 2007, pp. 1177–1184.
  • [22] C. K. I. Williams and M. Seeger, "Using the Nyström method to speed up kernel machines," in Proceedings of the 13th International Conference on Neural Information Processing Systems, ser. NIPS'00.   Cambridge, MA, USA: MIT Press, 2000, pp. 661–667.
  • [23] F. R. Bach and M. I. Jordan, "Predictive low-rank decomposition for kernel methods," in Proceedings of the 22nd International Conference on Machine Learning, ser. ICML '05.   New York, NY, USA: ACM, 2005, pp. 33–40.
  • [24] S. Bochner, Lectures on Fourier Integrals (AM-42), trans. M. Tenenbaum and H. Pollard.   Princeton, NJ, USA: Princeton University Press, 1959.
  • [25] S. A. Smolyak, "Quadrature and interpolation formulas for tensor products of certain classes of functions," Dokl. Akad. Nauk SSSR, vol. 148, no. 5, pp. 1042–1045, 1963.
  • [26] N. Aronszajn, "Theory of reproducing kernels," Trans. Amer. Math. Soc., vol. 68, pp. 337–404, 1950.
  • [27] E. Parzen, "Statistical methods on time series by Hilbert space methods," Applied Mathematics and Statistics Laboratory, Stanford, CA, Tech. Rep. 23, 1959.
  • [28] W. Liu, P. P. Pokharel, and J. C. Príncipe, "Correntropy: Properties and applications in non-Gaussian signal processing," IEEE Transactions on Signal Processing, vol. 55, no. 11, pp. 5286–5298, Nov. 2007.
  • [29] I. Santamaria, P. P. Pokharel, and J. C. Príncipe, "Generalized correlation function: definition, properties, and application to blind equalization," IEEE Trans. Signal Process., vol. 54, no. 6, pp. 2187–2197, 2006.
  • [30] W. Liu, P. Pokharel, and J. C. Príncipe, "The kernel least-mean-square algorithm," IEEE Trans. Signal Process., vol. 56, no. 2, pp. 543–554, 2008.
  • [31] S. Zhao, B. Chen, and J. C. Príncipe, "Kernel adaptive filtering with maximum correntropy criterion," in The 2011 International Joint Conference on Neural Networks, July 2011, pp. 2012–2017.
  • [32] S. Han, S. Rao, D. Erdogmus, K.-H. Jeong, and J. C. Príncipe, "A minimum-error entropy criterion with self-adjusting step-size (MEE-SAS)," Signal Processing, vol. 87, no. 11, pp. 2733–2745, 2007.
  • [33] D. Erdogmus and J. C. Príncipe, "Generalized information potential criterion for adaptive system training," IEEE Trans. Neural Netw., vol. 13, no. 5, pp. 1035–1044, Sep. 2002.
  • [34] B. Chen, Z. Yuan, N. Zheng, and J. C. Príncipe, "Kernel minimum error entropy algorithm," Neurocomputing, vol. 121, pp. 160–169, Dec. 2013.
  • [35] "UCI machine learning repository." [Online].
  • [36] M. C. Mackey and L. Glass, "Oscillation and chaos in physiological control systems," Science, vol. 197, no. 4300, pp. 287–289, Jul. 1977.