I. Introduction
Information theoretic learning (ITL) is a framework where information theory descriptors based on nonparametric estimators of Rényi entropy replace conventional second-order statistics for the design of adaptive systems [1]. A reproducing kernel Hilbert space (RKHS) for ITL defined on a space of probability density functions (pdfs) simplifies statistical inference for supervised or unsupervised learning. ITL criteria take into consideration the higher-order statistical behavior of the systems and signals as desired. ITL is conceptually different from other kernel methods as it is based on kernel density estimation (KDE), and thus its kernel function need not be positive definite, instead satisfying a different set of properties as detailed in [2]. Nevertheless, the estimators in both learning schemes share many similarities [3], including several positive-definite kernels such as the Gaussian kernel and the Laplacian kernel [2]. In fact, positive definiteness is preferred in ITL due to numerical stability in computation.
In the standard kernel method approach, points in the input space are mapped, using an implicit nonlinear function φ(·), into a potentially infinite-dimensional inner product space or RKHS. The explicit representation is of secondary importance: the Mercer condition guarantees the existence of the mapping. A real-valued similarity function is defined as
(1) κ(x, x′) = ⟨φ(x), φ(x′)⟩,
which is referred to as a reproducing kernel. This presents an elegant solution for classification, clustering, regression, and principal component analysis, since the mapped data points are linearly separable in the potentially infinite-dimensional RKHS, allowing classical linear methods to be applied directly on the data. However, because the actual points (functions) in the function space are inaccessible, kernel methods scale poorly to large datasets. Naive kernel methods operate on the kernel or Gram matrix, whose entries are denoted
K_ij = κ(x_i, x_j), requiring O(N²) space complexity and at least O(N²) computational complexity for many standard operations. For online kernel adaptive filtering (KAF) algorithms [4, 5, 6, 7], this represents a rolling sum with linear or superlinear growth. There has been a continual effort to sparsify and reduce the computational load, especially for online KAF [8, 9, 10].

The two most important concepts in ITL are the information potential (IP), which is associated with Rényi's quadratic entropy (QE), and the cross information potential (CIP), which measures dissimilarity between two density functions [3]. The estimator of the IP requires summing all the elements of the kernel or Gram matrix. A straightforward computation is expensive in both storage and time, especially when the number of samples is large. Different methods have been proposed to reduce this computational burden by extracting relevant information with sufficient accuracy without processing all elements of the Gram matrix [11, 12, 13, 14].
Recently, we proposed a no-trick (NT) framework for kernel adaptive filtering (KAF) using explicit feature mappings that define a positive-definite kernel for a finite-dimensional RKHS [15]. The same concept can be integrated seamlessly into ITL using a family of estimators based on separable finite-rank or degenerate kernels whose bases are sampled or constructed independently of the training data. Instead of manipulating the data through pruning or sparsification, we design a family of finite-rank explicit inner product space (EIPS) Mercer kernels, specifically their explicit feature mappings, for fast, scalable, and accurate estimators for ITL. The Mercer theorem states:
Theorem 1 (Mercer kernel).
Let μ be a probability measure on the input space 𝒳, and L²_μ(𝒳) the associated Hilbert space. Given a nonnegative sequence (λ_i)_{i≥1} with Σᵢ λᵢ < ∞, and an orthogonal family of unit-norm functions (ψ_i)_{i≥1} with ψ_i ∈ L²_μ(𝒳), the associated Mercer kernel is
(2) κ(x, x′) = Σ_{i=1}^{∞} λᵢ ψᵢ(x) ψᵢ(x′),
where the λᵢ are the eigenvalues of the kernel and the ψᵢ its eigenfunctions, and the series' convergence is absolute and uniform.
In practice, for simplicity, a Mercer kernel where the infinite sum in (2) can be expressed in closed form is often used, e.g., the Gaussian kernel function, and the expansion itself is either unknown or ignored. In this paper, we take an alternative approach and focus on the family of EIPS kernel functions (specifically, data-independent finite-rank kernels) shown in Fig. 1, which accelerates the computation of ITL quantities with the utmost versatility and convenience. Compared to (2), which consists of a countable orthonormal basis of eigenfunctions, finite-rank or degenerate Mercer kernels of rank r are expressed using the finite series
(3) κ(x, x′) = Σ_{i=1}^{r} λᵢ ψᵢ(x) ψᵢ(x′) = ⟨φ(x), φ(x′)⟩, with φ(x) = (√λ₁ ψ₁(x), …, √λ_r ψ_r(x))ᵀ.
Defining an EIPS allows a weighted sum of finite-rank kernel evaluations to be factorized and collapsed for later use as a consolidated feature vector. This is especially efficient when coupled with KAF using ITL cost functions such as the maximum correntropy criterion (MCC) and the minimum error entropy (MEE). Other ITL estimators, such as those of the Cauchy-Schwarz quadratic mutual information (QMI_CS) and the Euclidean distance based quadratic mutual information (QMI_ED), also benefit from the reduced computational complexity offered by this family of fast, scalable, and accurate ITL estimators.
I-A. Related Work
A related concept is the fast multipole method (FMM) [16], developed for the rapid summation of potential fields generated by a large number of sources (the N-body problem in mathematical physics), in which the potential function is expanded in multipole (singular) series and local (regular) series at the expansion centers. This typically combines a far-field expansion of the kernel, in which the influence of sources and targets separates, with a hierarchical subdivision of the space of sources into panels or clusters. For the Gaussian field, various factorization and space subdivision schemes include the fast Gauss transform (FGT) and the improved Gauss transform [17]. The improved FGT for KDE uses the greedy farthest-point clustering algorithm to model the space subdivision task as a k-center problem [18]. Unfortunately, their effectiveness diminishes for higher dimensions and large datasets: since a Hermite expansion is used for the FGT, a truncation with p terms per dimension results in p^d terms in d dimensions, i.e., exponential growth in the accumulation of expansion products along each data dimension. The improved FGT uses a multivariate Taylor series (TS) expansion to reduce the number of expansion terms to polynomial order.
Here, we take the no-trick (NT) kernel method interpretation in [15] by defining an EIPS Mercer kernel equal to the scalar or inner product of the transformed points in a higher finite-dimensional RKHS using an explicit mapping φ, i.e., κ(x, x′) = ⟨φ(x), φ(x′)⟩. The Mercer condition guarantees the existence of the underlying mapping and universal approximation. From the inner product perspective, an EIPS kernel naturally factorizes the pairwise interaction between two feature vectors, yielding fast, scalable, and accurate solutions without the computational overhead of clustering the sources. Compared to the FMM, the EIPS approach goes further in the abstraction by defining an equivalent positive-definite kernel (where the inner product between two points is computed using the explicitly mapped feature vectors); therefore, it is not merely an approximation method, but rather a new, exact kernel formulation within the unifying framework of the RKHS. In this paradigm, the linear combination (sum) of the training data (source point) feature vectors is a linear function represented by a weight vector in this space. Furthermore, in applications such as KAF, we are always interested in following the embedded trajectory of the input signal (a local approximation to the trajectory), so we do not need to seek expansions in other parts of the space, unlike the FMM. The EIPS kernel method is both efficient and effective for low-dimensional KDE; e.g., an EIPS-ITL estimator for information quantities based on the prediction error, which is typically one-dimensional for time series prediction, extracts more information than order-statistical models such as [19]. Without loss of generality, we will use the simple TS expansion EIPS kernel as the ITL estimator in low dimensions. For higher dimensions, we will instead use Gaussian quadrature (GQ) with subsampled grids to directly control the number of features used in the feature mapping or EIPS kernel, which has been shown to be effective for high-dimensional and large data [20].
Random Fourier features (RFF) [21] have been successfully applied for efficient kernel learning using finite-rank kernels. While RFF belong to the EIPS family (their bases are sampled randomly and independently of the training data), for small dimensions, deterministic maps yield significantly lower error and performance variance. For higher dimensions, they can also produce inferior results compared to deterministic polynomial-exact sampling methods, e.g., for online kernel adaptive filtering [15]. Nonetheless, they represent a simple and efficient way to construct EIPS kernels.

Low-rank approximation methods such as the Nyström method [22] (basis functions are randomly sampled from the training examples) are data dependent, making them less appealing than the data-independent EIPS method. The incomplete Cholesky decomposition (ICD) is another data-dependent approximation method that has been shown to speed up the computation of information theoretic quantities with state-of-the-art ITL performance, by leveraging the fact that the eigenvalues of the Gram matrix diminish rapidly, so the matrix can be replaced by a lower-rank approximation [23, 14]. The symmetric positive definite matrix K can be expressed as K = LLᵀ, where L is an N×N lower triangular matrix with positive diagonal entries, a special case of the LU decomposition. Using a greedy approach, the ICD minimizes the trace (sum of eigenvalues) of the residual K − L̃L̃ᵀ with an N×D (where D ≪ N) lower triangular matrix L̃ to arbitrary accuracy tr(K − L̃L̃ᵀ) ≤ ε, where ε is a small positive number of choice. The value of D, which determines the space and time complexity, is only set indirectly by the desired precision ε, depending on the density of the samples. Furthermore, computing the decomposition is not only a data-dependent batch method, but also comes with considerable computational overhead: it still requires computing entries of the kernel matrix. The EIPS-ITL estimators, on the other hand, are a full kernel approach that defines a positive-definite kernel using explicitly mapped features from data-independent bases. The feature space dimension is set directly, allowing greater control in resource allocation and simplified implementation, especially for online applications.
The rest of the paper is organized as follows. In Section II, explicit inner product space kernel construction is discussed. Information theoretic learning is reviewed in Section III, and the EIPS-ITL estimators are presented. Experimental results are shown in Section IV. Finally, Section V concludes this paper.
II. EIPS Feature Mapping Construction
To accelerate ITL estimators, we propose to map the input data to a higher finite-dimensional feature space using EIPS features. Having data-independent bases improves versatility significantly, allowing the mapping to be predetermined and implemented online with greater efficiency. The explicit feature mapping can be constructed either deterministically, randomly, or via a combination of the two approaches (hybrid). These mappings define a new, equivalent reproducing kernel with the universal approximation property [15]. Furthermore, the inner product in the finite-dimensional RKHS naturally factorizes the pairwise interactions and greatly simplifies the computation and storage of ITL quantities: e.g., it reduces the cost of computing all pairwise interactions among N points from O(N²) to O(N) and consolidates the collection of points into a single weight vector whose dimension equals that of the feature space.
The popular random Fourier features [21] belong to a class of randomly constructed EIPS kernels for scaling up kernel machines. The underlying principle states:
Theorem 2 (Bochner, 1932 [24]).
A continuous shift-invariant properly-scaled kernel κ(x, x′) = κ(x − x′), with κ(0) = 1, is positive definite if and only if κ is the Fourier transform of a proper probability distribution.
The corresponding kernel can then be expressed in terms of its Fourier transform p(ω) (a probability distribution) as
(4) κ(x − x′) = ∫ p(ω) e^{jωᵀ(x − x′)} dω
(5)          = E_ω[e^{jωᵀx} (e^{jωᵀx′})*],
where ⟨a, b⟩ = a b* denotes the Hermitian inner product, and e^{jωᵀx}(e^{jωᵀx′})* is an unbiased estimate of the properly scaled shift-invariant kernel κ(x − x′) when ω is drawn from the probability distribution p(ω). We ignore the imaginary part of the complex exponentials to obtain a real-valued mapping.

Alternatively, the RFF approach can be viewed as performing numerical integration using randomly selected sample points. In numerical analysis, there are many polynomial-exact ways to approximate the integral with a discrete sum of judiciously selected points. For small input dimensions, deterministic feature mappings, such as the Taylor series expansion, yield significantly lower error and performance variance than random maps. For data of higher dimensions, polynomial-exact deterministic features can be sampled from the distribution determined by their weights to combat the curse of dimensionality and gain direct control over the feature dimension. We have analyzed the performance of deterministic vs. random features for online kernel adaptive filtering in
[15]. In this paper, we briefly summarize the class of deterministically constructed EIPS kernels for ITL estimators.

II-A. Taylor Polynomial Features
This is the most straightforward deterministic feature map for an EIPS based on the Gaussian kernel, where each term in the TS expansion is expressed as a sum of matching monomials in the data pair x and x′, i.e.,
(6) exp(−‖x − x′‖²/(2σ²)) = exp(−‖x‖²/(2σ²)) · exp(−‖x′‖²/(2σ²)) · exp(⟨x, x′⟩/σ²).
We can easily factor out the product terms that depend on x and x′ independently. The joint term in (6), exp(⟨x, x′⟩/σ²), can be expressed as a power series or infinite sum using Taylor polynomials as
(7) exp(⟨x, x′⟩/σ²) = Σ_{n=0}^{∞} ⟨x, x′⟩ⁿ/(σ^{2n} n!).
Using the shorthand [d] = {1, …, d}, we can factor the inner-product exponentiation as

(8) ⟨x, x′⟩ⁿ = (Σ_{i=1}^{d} x_i x′_i)ⁿ = Σ_{j ∈ [d]ⁿ} (Π_{k=1}^{n} x_{j_k})(Π_{k=1}^{n} x′_{j_k}),
where j enumerates over all selections of n coordinates (including repetitions and different orderings of the same coordinates), thus avoiding collecting equivalent terms and writing down their corresponding multinomial coefficients, i.e., (8) is an inner product between degree-n monomials of the coordinates of x and x′. Substituting this into (7) and (6) yields the following explicit feature map:
(9) φ(x) = exp(−‖x‖²/(2σ²)) · ( x_{j_1} ⋯ x_{j_n} / (σⁿ √(n!)) )_{n ≥ 0, j ∈ [d]ⁿ},
where j = (j₁, …, j_n) ∈ [d]ⁿ. For the TS feature approximation or EIPS kernel construction, we truncate the infinite sum to the terms of degree at most r:
(10) φ_r(x) = exp(−‖x‖²/(2σ²)) · ( x_{j_1} ⋯ x_{j_n} / (σⁿ √(n!)) )_{0 ≤ n ≤ r, j ∈ [d]ⁿ},
where the TS approximation is exact up to polynomials of degree r in the inner product ⟨x, x′⟩.
In practice, the different permutations of j in each nth term of the Taylor expansion (8) can be grouped into a single feature corresponding to a distinct monomial, resulting in C(d + n − 1, n) features of degree n, and a total of C(d + r, r) features of degree at most r.
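As a concrete illustration, the ungrouped map (9)-(10) can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: the function name, the kernel size σ = 1, and the truncation degree are illustrative choices.

```python
import itertools
import math
import numpy as np

def taylor_features(x, sigma=1.0, degree=6):
    """Explicit Taylor-series (TS) feature map for the Gaussian kernel.

    Enumerates all coordinate selections j in [d]^n for n = 0..degree, so
    <phi(x), phi(y)> reproduces exp(-||x - y||^2 / (2 sigma^2)) up to
    polynomials of degree `degree` in <x, y>.
    """
    x = np.asarray(x, dtype=float)
    d = x.shape[0]
    scale = math.exp(-np.dot(x, x) / (2.0 * sigma**2))
    feats = []
    for n in range(degree + 1):
        coef = 1.0 / (sigma**n * math.sqrt(math.factorial(n)))
        for j in itertools.product(range(d), repeat=n):
            feats.append(coef * np.prod(x[list(j)]))
    return scale * np.array(feats)

# Sanity check: the truncated explicit map approximates the Gaussian kernel.
x, y = np.array([0.2, -0.1]), np.array([0.3, 0.4])
exact = math.exp(-np.sum((x - y)**2) / 2.0)          # sigma = 1
approx = taylor_features(x) @ taylor_features(y)
assert abs(exact - approx) < 1e-6
```

Note that this ungrouped variant generates Σₙ dⁿ features; grouping equal monomials, as described above, reduces the count to C(d + r, r).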
II-A1. Precision of the Taylor Series Expansion
For the Gaussian kernel, the precision of the TS expansion can be quantified using the mean-value (Lagrange) form of the approximation remainder.
Theorem 3 (Taylor’s Formula).
Let the function f be n + 1 times differentiable (where the integer n ≥ 1) on the open interval, with f⁽ⁿ⁾ continuous on the closed interval between a and x. The remainder of the nth-order Taylor polynomial is

(11) R_n(x) = f⁽ⁿ⁺¹⁾(ξ) (x − a)ⁿ⁺¹ / (n + 1)!

for some real number ξ between a and x.
Suppose we want the desired accuracy to be within ε in absolute error, i.e., |R_r(x)| ≤ ε for all x in the region of interest, where the approximation is the rth-order Taylor polynomial. Solving the remainder bound (11) at the worst case gives the minimum order r required. In Section IV-A, we will illustrate this by comparing the performance of the Taylor polynomial EIPS formulation with a state-of-the-art ITL reduced-rank-approximation fast method using the incomplete Cholesky decomposition.
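As a sketch, the minimum order for a target precision can be computed from the Lagrange remainder bound for the exponential. This is a generic bound under the assumption that the relevant inner products are bounded in magnitude by M; it is not presented as the paper's exact precision rule.

```python
import math

def taylor_order(M, eps):
    """Smallest order r with worst-case Lagrange remainder bound
    e^M * M^(r+1) / (r+1)! <= eps for exp(u) on |u| <= M."""
    r = 0
    while math.exp(M) * M**(r + 1) / math.factorial(r + 1) > eps:
        r += 1
    return r

# e.g., inner products bounded by M = 1 (unit-normalized data, sigma = 1):
r = taylor_order(1.0, 1e-6)

# Verify the bound empirically at the worst case u = M = 1.
approx = sum(1.0**n / math.factorial(n) for n in range(r + 1))
assert abs(math.e - approx) <= 1e-6
```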
II-B. Gaussian Quadrature (GQ) Features with Subsampled Grids
A quadrature rule is a choice of sample points ω_i and weights a_i designed to minimize the maximum integration error over a class of test functions. For a fixed diameter of the integration region, the sample complexity (SC) is defined as:
Definition 1.
For any ε > 0, a quadrature rule has sample complexity SC(ε), where SC(ε) is the smallest number of samples D such that the rule yields a maximum error of at most ε.
There are many quadrature rules; without loss of generality, we focus on Gaussian quadrature (GQ), specifically the Gauss-Hermite quadrature using Hermite polynomials. In numerical analysis, GQ is a polynomial-exact approximation of a one-dimensional definite integral: ∫ f(x) w(x) dx ≈ Σ_{i=1}^{D} a_i f(x_i), where the D-point construction yields an exact result for polynomials of degree up to 2D − 1. While the GQ points and corresponding weights are both distribution and parameter dependent, they can be computed efficiently using orthogonal polynomials. GQ approximations are accurate for integrating functions that are well-approximated by polynomials, including all sub-Gaussian densities. Compared to random Fourier features, GQ features have a much weaker dependence on the approximation error ε, at the cost of a constant additional factor independent of ε [20].
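A minimal sketch of a Gauss-Hermite feature map for the one-dimensional Gaussian kernel follows; the function name and the node count are our illustrative choices. It exploits the identity E_{ω~N(0,1/σ²)}[cos(ω(x − y))] = exp(−(x − y)²/(2σ²)), which a change of variables turns into a Gauss-Hermite integral.

```python
import numpy as np

def gauss_hermite_features(x, sigma=1.0, n_nodes=30):
    """Deterministic GQ feature map for the 1-D Gaussian kernel.

    Places cos/sin features at the rescaled Gauss-Hermite nodes so that
    <z(x), z(y)> ~= exp(-(x - y)^2 / (2 sigma^2)).
    """
    t, a = np.polynomial.hermite.hermgauss(n_nodes)   # nodes, weights
    w = np.sqrt(2.0) * t / sigma                      # change of variables
    coef = np.sqrt(a / np.sqrt(np.pi))                # weight -> amplitude
    return np.concatenate([coef * np.cos(w * x), coef * np.sin(w * x)])

x, y = 0.3, -0.8
exact = np.exp(-(x - y)**2 / 2.0)                     # sigma = 1
approx = gauss_hermite_features(x) @ gauss_hermite_features(y)
assert abs(exact - approx) < 1e-8
```

With 30 nodes the rule is exact for polynomials of degree up to 59, so the cosine integrand is captured to near machine precision on moderate ranges of x − y.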
To extend one-dimensional GQ to higher dimensions, grid-based quadrature rules can be constructed efficiently. A dense grid or tensor-product construction factors the integral (4) along the d coordinate dimensions, each of which can be approximated using a one-dimensional quadrature rule. However, since the sample complexity is doubly exponential in d, a sparse grid or Smolyak quadrature is typically used [25]: only points up to some fixed total level A are included, achieving a similar error with exponentially fewer points than a single larger quadrature rule.

The major drawback of the grid-based construction is the lack of fine tuning for the feature dimension. Since the number of samples extracted in the feature map is determined by the degree of polynomial exactness, even a small incremental change can produce a significant increase in the number of features. Subsampling according to the distribution determined by their weights is used to combat both the curse of dimensionality and the lack of detailed control over the exact feature number. There are also data-adaptive methods to choose a quadrature rule for a predefined number of samples [20], but we are focused on data-independent EIPS features.
II-C. Universal Approximation
EIPS feature mappings such as random Fourier features, Gaussian quadrature, and Taylor polynomials are not only approximation methods, but also define an equivalent kernel that induces a new reproducing kernel Hilbert space: a nonlinear mapping that transforms the data from the original input space to a new higher finite-dimensional RKHS. This RKHS is not necessarily contained in the RKHS corresponding to the original kernel function, e.g., the Gaussian kernel. It is easy to show that the EIPS mappings discussed in this paper induce a positive-definite kernel function satisfying Mercer's conditions.
Proposition 1 (Closure properties).
Let κ₁ and κ₂ be positive-definite kernels over 𝒳 × 𝒳 (where 𝒳 ⊆ ℝ^d), a a positive real number, and f a real-valued function on 𝒳. Then the following functions are positive-definite kernels:

- κ(x, x′) = κ₁(x, x′) + κ₂(x, x′),
- κ(x, x′) = a κ₁(x, x′),
- κ(x, x′) = κ₁(x, x′) κ₂(x, x′),
- κ(x, x′) = f(x) f(x′).
Since exponentials and polynomials are positive-definite kernels, under the closure properties it is clear that the inner products of random Fourier features, Gaussian quadrature, and Taylor polynomials are all reproducing kernels. It follows that these kernels have the universal approximation property: they approximate uniformly an arbitrary continuous target function to any degree of accuracy over any compact subset of the input space.
III. EIPS Kernels for Information Theoretic Learning (EIPS-ITL)
ITL is a framework to adapt nonparametric systems using information quantities such as entropy and divergence [1]. ITL criteria are still estimated directly from data via the Parzen kernel estimator, but they extract more information from the data for adaptation and therefore yield solutions that are more accurate than mean squared error (MSE) in non-Gaussian and nonlinear signal processing. That reproducing kernels are covariance functions explains their early role in inference problems [26, 27]. Rényi's quadratic entropy of a random variable X with pdf p(x) is defined as

(12) H₂(X) = −log ∫ p²(x) dx.
The Parzen estimate of the pdf, given a set of N independent and identically distributed (i.i.d.) data {x_i}_{i=1}^{N} drawn from the distribution, is
(13) p̂(x) = (1/N) Σ_{i=1}^{N} G_σ(x − x_i),
where N is the number of data samples, and G_σ is the Gaussian kernel with kernel size σ:

(14) G_σ(x − x_i) = (1/(√(2π) σ)^d) exp(−‖x − x_i‖²/(2σ²)).
Without loss of generality, we will only consider the Gaussian kernel and related EIPS kernels in this paper.
Using the no-trick or EIPS explicit mapping φ, the kernel function in (13) is replaced with the inner product of the explicitly mapped points (functions) in the finite-dimensional RKHS as
(15) p̂(x) = (1/N) Σ_{i=1}^{N} ⟨φ(x), φ(x_i)⟩ = ⟨φ(x), (1/N) Σ_{i=1}^{N} φ(x_i)⟩ = ⟨φ(x), w⟩,
where w = (1/N) Σᵢ φ(x_i) is the sample mean or centroid and is, in general, independent of the target or evaluation point. Alternatively, from the RKHS paradigm, w can be viewed as a weight vector that represents or parametrizes the linear function p̂ in the EIPS, i.e., p̂(x) = ⟨φ(x), w⟩.
A nonparametric estimate of Rényi’s quadratic entropy directly from samples is
(16) Ĥ₂(X) = −log V̂(X),
where the information potential (IP) is defined as
(17) V̂(X) = ∫ p̂²(x) dx = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} G_{σ√2}(x_j − x_i).
Using EIPS (15), the IP estimate becomes
(18) V̂(X) = ⟨w, w⟩ = ‖(1/N) Σ_{i=1}^{N} φ(x_i)‖².
This drastically reduces the complexity from the quadratic O(N²) to a linear O(N): it only requires computing the weight vector or centroid w = (1/N) Σᵢ φ(x_i) once, then squaring it, i.e., taking its scalar product with itself. Online update of this term is embarrassingly simple, as new sources or sample points are simply added to the existing weight vector with the appropriate normalization, i.e., w_N = ((N − 1) w_{N−1} + φ(x_N))/N.
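The O(N) IP computation can be sketched as follows, using a one-dimensional Gauss-Hermite feature map as the EIPS. This is a minimal sketch: the Parzen normalization constant is dropped and the bandwidth bookkeeping of (17) is simplified, so the "IP" here is just the mean of all pairwise Gaussian kernel evaluations.

```python
import numpy as np

def features(x, sigma, n_nodes=40):
    # 1-D Gauss-Hermite cos/sin feature map for the Gaussian kernel
    t, a = np.polynomial.hermite.hermgauss(n_nodes)
    w = np.sqrt(2.0) * t / sigma
    coef = np.sqrt(a / np.sqrt(np.pi))
    return np.concatenate([coef * np.cos(w * x), coef * np.sin(w * x)])

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 500)
sigma = 1.0

# Direct O(N^2) information potential: mean of all pairwise kernels.
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
ip_direct = K.mean()

# EIPS O(N): average the mapped samples once, then take a squared norm.
w_vec = np.mean([features(xi, sigma) for xi in x], axis=0)
ip_eips = w_vec @ w_vec
assert abs(ip_direct - ip_eips) < 1e-8
```

The online update amounts to a running mean of the feature vectors, so streaming samples never require revisiting old data.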
Let {X_t, t ∈ T} be a stochastic process with T being an index set. The nonlinear mapping φ induced by the Gaussian kernel maps the data into the feature space, where the autocorrentropy function is defined from T × T into ℝ⁺, given by
(19) v(t, s) = E[G_σ(X_t − X_s)]
(20)        = E[⟨φ(X_t), φ(X_s)⟩],
where E[·] denotes the expectation. A sufficient condition for v(t, s) = v(t − s) is that the stochastic process be strictly stationary in all the even moments, a stronger condition than wide-sense stationarity (which is limited to second-order moments). The IP is the mean squared projected data, or the expected value of the autocorrentropy over the lags. A more general form of correntropy (cross-correntropy) [28] between two random variables X and Y is defined as

(21) v(X, Y) = E[G_σ(X − Y)].
The sample estimate of correntropy for a finite number of data samples {(x_i, y_i)}_{i=1}^{N} is
(22) v̂(X, Y) = (1/N) Σ_{i=1}^{N} G_σ(x_i − y_i).
Using the Taylor series expansion for the Gaussian kernel, correntropy can be expressed as

(23) v(X, Y) = (1/(√(2π) σ)) Σ_{n=0}^{∞} ((−1)ⁿ/(2ⁿ n! σ^{2n})) E[(X − Y)^{2n}],
which involves all the even-order moments of the random variable X − Y (where the kernel choice dictates the expansion, e.g., the sigmoidal kernel contains all the odd moments)
[29]. In fact, all learning algorithms that use nonparametric pdf estimates in the input space admit an alternative formulation as kernel methods expressed in terms of inner products. As shown above, the kernel techniques are able to extract higher-order statistics of the data, which should lead to performance improvements in non-Gaussian environments. Next, we show the explicit EIPS derivations of several commonly used ITL estimators.
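The sample correntropy estimate (22) and its even-moment expansion (23) can be checked against each other numerically. This is a sketch: the unnormalized Gaussian kernel is used (the (√(2π)σ)⁻¹ constant is dropped on both sides), the data are synthetic, and the moment expansion is truncated.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-0.5, 0.5, 1000)
y = x + 0.1 * rng.standard_normal(1000)   # noisy copy of x
sigma = 1.0

# Sample correntropy with the unnormalized Gaussian kernel, as in (22).
v_hat = np.mean(np.exp(-(x - y)**2 / (2 * sigma**2)))

# Even-moment expansion of correntropy, truncated at the 10th moment.
e = x - y
v_moments = sum((-1)**n * np.mean(e**(2 * n))
                / (2**n * sigma**(2 * n) * math.factorial(n))
                for n in range(6))
assert abs(v_hat - v_moments) < 1e-8
```

Because the errors here are small relative to the kernel size, a handful of even moments already reproduces the kernel estimate, which is the intuition behind using low-order TS maps on prediction errors.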
III-1. EIPS Quadratic Mutual Information (QMI)
The Cauchy-Schwarz quadratic mutual information and the Euclidean distance based QMI are defined, respectively, as

(24) I_CS(X, Y) = −log [ (∫∫ p_XY(x, y) p_X(x) p_Y(y) dx dy)² / (∫∫ p²_XY(x, y) dx dy · ∫∫ p²_X(x) p²_Y(y) dx dy) ],

(25) I_ED(X, Y) = ∫∫ p²_XY(x, y) dx dy + ∫∫ p²_X(x) p²_Y(y) dx dy − 2 ∫∫ p_XY(x, y) p_X(x) p_Y(y) dx dy.
The above expressions consist of the following three distinct terms. The EIPS IP estimate of the joint pdf, V̂_J, is computed as
(26) V̂_J = (1/N²) Σᵢ Σⱼ G_σ(x_j − x_i) G_σ(y_j − y_i) = (1/N²) Σᵢ Σⱼ ⟨φ(x_i), φ(x_j)⟩ ⟨φ(y_i), φ(y_j)⟩ = (1/N²) 1ᵀ(K_x ∘ K_y)1,
where the first equality uses the shift-invariant property, the last follows from the associative property with the square matrices (K_x)_{ij} = ⟨φ(x_i), φ(x_j)⟩ and (K_y)_{ij} = ⟨φ(y_i), φ(y_j)⟩, and ∘ is the Hadamard product operator. The EIPS IP estimate of the factorized marginal pdfs, V̂_M, becomes
(27) V̂_M = V̂(X) V̂(Y) = ⟨w_x, w_x⟩ ⟨w_y, w_y⟩ = ‖w_x‖² ‖w_y‖².
And the EIPS generalized-cross IP estimate, V̂_C, is
(28) V̂_C = (1/N) Σᵢ p̂_X(x_i) p̂_Y(y_i) = (1/N) Σᵢ ⟨φ(x_i), w_x⟩ ⟨φ(y_i), w_y⟩,
where the factorization is due to the commutative property of summation and the fact that the transpose of a scalar is itself, i.e., aᵀ = a for a ∈ ℝ.
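The payoff of the explicit map in the joint term can be illustrated directly: the double sum 1ᵀ(K_x ∘ K_y)1 equals the squared Frobenius norm of the D×D matrix Z_xᵀZ_y, so the N×N Gram matrices never need to be formed. A sketch with our own feature-map helper (names and parameters are illustrative):

```python
import numpy as np

def features(x, sigma=1.0, n_nodes=30):
    # 1-D Gauss-Hermite cos/sin feature map for the Gaussian kernel
    t, a = np.polynomial.hermite.hermgauss(n_nodes)
    w = np.sqrt(2.0) * t / sigma
    coef = np.sqrt(a / np.sqrt(np.pi))
    return np.concatenate([coef * np.cos(w * x), coef * np.sin(w * x)])

rng = np.random.default_rng(3)
N = 200
x = rng.standard_normal(N)
y = 0.5 * x + rng.standard_normal(N)

Zx = np.array([features(v) for v in x])    # N x D feature matrices
Zy = np.array([features(v) for v in y])

# Direct joint-pdf IP term: 1'(Kx o Ky)1 / N^2 via N x N Gram matrices.
Kx, Ky = Zx @ Zx.T, Zy @ Zy.T
vj_direct = np.sum(Kx * Ky) / N**2

# Factorized form: the same quantity from a D x D matrix only.
C = Zx.T @ Zy
vj_eips = np.sum(C * C) / N**2
assert abs(vj_direct - vj_eips) < 1e-10
```

The equality is an exact algebraic identity for any feature matrices, so the two computations agree to floating-point precision; the factorized form costs O(ND²) instead of O(N²D).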
III-2. EIPS Divergence and Distance Measures
The CS divergence and the ED divergence between two pdfs p and q are defined as

(29) D_CS(p, q) = −log [ (∫ p(x) q(x) dx)² / (∫ p²(x) dx · ∫ q²(x) dx) ],

(30) D_ED(p, q) = ∫ p²(x) dx + ∫ q²(x) dx − 2 ∫ p(x) q(x) dx,
respectively, where the cross information potential (CIP) estimate can be computed as
(31) V̂(p, q) = (1/(N_p N_q)) Σ_{i=1}^{N_p} Σ_{j=1}^{N_q} G_{σ√2}(x_i − y_j) = ⟨w_p, w_q⟩.
It follows that the correntropy coefficient estimate is
(32) η̂ = û(X, Y) / √(û(X, X) û(Y, Y)), where û(X, Y) = (1/N) Σᵢ G_σ(x_i − y_i) − (1/N²) Σᵢ Σⱼ G_σ(x_i − y_j) is the centered cross-correntropy estimate.
III-A. NT Kernel Adaptive Filtering Using EIPS-ITL Criteria
The EIPS not only facilitates the computation of ITL quantities, but also integrates seamlessly into online kernel adaptive information filters, as it did for no-trick KAF using the conventional MSE criterion [15].
III-A1. NT Maximum Correntropy Criterion
The counterpart of the kernel least mean square (KLMS) [30] algorithm, which adopts the MSE as its cost, is the kernel maximum correntropy criterion (KMCC) filter [31]. Second-order statistics may not be suitable for all nonlinear, especially non-Gaussian, situations. The KMCC combines the simplicity of the KLMS with the higher-order statistics of the correntropy criterion. Using the NT formulation, the NT-KMCC is summarized in Alg. 1. Compared to the NT-KLMS [15], we can see that the NT-KMCC has a variable step size controlled by the prediction error.
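Alg. 1 is not reproduced in this excerpt, but the update it describes — LMS in the explicit feature space, with a Gaussian-of-the-error factor acting as a variable step size — can be sketched as follows. All names, the test nonlinearity, and the parameter values are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def gh_features(x, sigma=1.0, n_nodes=16):
    # Gauss-Hermite cos/sin features for a 1-D input (NT feature map).
    t, a = np.polynomial.hermite.hermgauss(n_nodes)
    w = np.sqrt(2.0) * t / sigma
    coef = np.sqrt(a / np.sqrt(np.pi))
    return np.concatenate([coef * np.cos(w * x), coef * np.sin(w * x)])

def nt_kmcc(u, d, eta=0.5, sigma_c=1.0):
    """One pass of an NT-KMCC-style filter: LMS in the explicit feature
    space, with exp(-e^2 / (2 sigma_c^2)) scaling the step size."""
    w = np.zeros(len(gh_features(0.0)))
    errors = []
    for ui, di in zip(u, d):
        phi = gh_features(ui)
        e = di - w @ phi                           # prediction error
        w += eta * np.exp(-e**2 / (2 * sigma_c**2)) * e * phi
        errors.append(e)
    return w, np.array(errors)

rng = np.random.default_rng(4)
u = rng.uniform(-1, 1, 2000)
d = np.exp(-u**2)                 # static nonlinearity to learn
w, errors = nt_kmcc(u, d)
assert np.mean(errors[-200:]**2) < 0.25 * np.mean(errors[:200]**2)
```

Note that the weight vector has fixed dimension regardless of the number of samples processed, which is the source of the constant per-iteration cost discussed in Section IV.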
III-A2. EIPS Minimum Error Entropy
Given a batch of N error samples {e_i}_{i=1}^{N}, the information potential estimator using Rényi's quadratic entropy is
(33) V̂(e) = (1/N²) Σᵢ Σⱼ G_{σ√2}(e_j − e_i).
The cost function for the MEE criterion is given as
(34) J(e) = min_w Ĥ₂(e) = max_w V̂(e).
The IP is smooth and differentiable; to maximize its value, one can simply move in the direction of its gradient
(35) ∇V̂ = ∂V̂(e)/∂w = (1/(2σ²N²)) Σᵢ Σⱼ G_{σ√2}(e_j − e_i)(e_j − e_i)(φ(u_j) − φ(u_i)).
For online methods, especially KAF where the kernel trick introduces (super)linear complexity, the Gaussian quadratic stochastic information gradient (SIG) is typically used:
(36) ∇V̂_SIG = (1/(2σ²L)) Σ_{i=n−L}^{n−1} G_{σ√2}(e_n − e_i)(e_n − e_i)(φ(u_n) − φ(u_i)).
Using the EIPS approach, the full (expected value or double sum) IP gradient can be computed extremely efficiently. Using the shorthand z_i = z(e_i) for the EIPS mapping of the errors, the explicit feature mapping factorizes the double summation in the full IP gradient (35) into the following independent terms:
(37) ∇V̂ = (1/(σ²N²)) (T₁ S₁ − T₀ S₂), with S₁ = Σᵢ z_i, S₂ = Σᵢ e_i z_i, T₀ = Σᵢ φ(u_i) z_iᵀ, T₁ = Σᵢ e_i φ(u_i) z_iᵀ,
where the four independent single-sum terms can each be computed in one pass over the data. Since the errors are typically small and one-dimensional, without loss of generality, we elect to use the simple Taylor series expansion EIPS mapping for the error kernel.
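The factorization of the double sum is a purely algebraic identity and can be verified with arbitrary feature matrices. In this sketch, Z stands for the explicit error features z(e_i), U stands for the input features φ(u_i), and S1, S2, T0, T1 are our own shorthand for the four single-sum terms; the leading 1/(2σ²N²) constant is omitted since it is common to both sides.

```python
import numpy as np

rng = np.random.default_rng(5)
N, De, Du = 100, 8, 5
Z = rng.standard_normal((N, De))   # z(e_i): explicit features of the errors
U = rng.standard_normal((N, Du))   # stand-ins for the input features phi(u_i)
e = rng.standard_normal(N)         # error samples

# Direct O(N^2) double sum, with the error kernel replaced by the
# explicit inner product z(e_j)' z(e_i).
grad_direct = np.zeros(Du)
for i in range(N):
    for j in range(N):
        grad_direct += (Z[j] @ Z[i]) * (e[j] - e[i]) * (U[j] - U[i])

# Factorized O(N) form: four independent single-sum terms.
S1 = Z.sum(axis=0)                 # sum_i z(e_i)
S2 = (e[:, None] * Z).sum(axis=0)  # sum_i e_i z(e_i)
T0 = U.T @ Z                       # sum_i phi(u_i) z(e_i)'
T1 = U.T @ (e[:, None] * Z)        # sum_i e_i phi(u_i) z(e_i)'
grad_fact = 2 * (T1 @ S1 - T0 @ S2)
assert np.allclose(grad_direct, grad_fact)
```

Expanding (e_j − e_i)(φ_j − φ_i) gives four cross terms, which pair up by symmetry into the two products above, hence the factor of 2.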
IV. Simulation Results
Extensive comparisons between MSE and MEE techniques have already been performed in [33, 31, 34]; here, we focus on the speed of the EIPS kernel framework for ITL.
IV-A. Accelerating the Computation of ITL Quantities
First, we evaluate the validity of the proposed method using five benchmark datasets from the UCI machine learning repository [35]. We normalized them individually (iris, cancer, wine, yeast, and abalone) before computing the estimators: z-score followed by scaling by the global extrema. As all ITL quantities share similar forms, without loss of generality, we computed the Cauchy-Schwarz quadratic mutual information and correntropy coefficient estimates on all possible pairs of features for each dataset, using the direct method, the incomplete Cholesky decomposition, and the simple Taylor polynomial EIPS kernel method. The Gaussian kernel size and the desired ICD precision were fixed, with the precision corresponding to a minimum number of terms in the TS expansion via (11). Tables I and II summarize the results averaged over 10 independent trials. The experiments were performed using an Intel Core i7-7700 (at 3.60 GHz with 16 GB of RAM) and MATLAB. In each trial, the ITL descriptors' values and CPU times are accumulated over all feature pairs. Since the ICD is data-dependent, the average reduced rank is listed in a separate column. For comparison, we show the performance of EIPS kernels using Taylor polynomials of two different orders (a lower- and a higher-order expansion). As demonstrated in [14], the ICD is able to match the same value as the direct evaluation using Gram matrices with at least 6-digit accuracy (there is a tiny rounding error for the cancer dataset in the least significant digit after the decimal point, compared to the direct method, as the correntropy coefficients are accumulated over all possible feature pairs in each trial), in a significantly lower computation time. Remarkably, the EIPS method further outperforms the ICD's speed by another order of magnitude (with no accumulated rounding error for the cancer dataset when using the higher-order TS expansion).

As discussed above, the ICD does not control the space and time complexities directly, i.e., the reduced dimension cannot be fixed a priori. The ICD is useful only when the eigenvalues of the matrix drop sufficiently fast and the original Gram matrix can be represented by a low rank approximation with sufficient accuracy.
However, if this ideal condition fails to hold, e.g., if the dimensionality increases with respect to the number of samples, the ICD performance will suffer. The EIPS approach, on the other hand, defines an equivalent kernel function; as such, it is not merely an approximation method, but rather a new, exact kernel formulation within the theoretically-grounded unifying framework of the RKHS.
Not only can we compute ITL quantities with ease and accuracy, but we can also integrate it seamlessly into online KAF algorithms using ITL cost functions, demonstrated next.
TABLE I. Cauchy-Schwarz QMI estimates: accumulated values and CPU times, averaged over 10 trials.

Data (N, dim.)    | Direct: value / time (s) | ICD: value / time (s) / rank | EIPS low order: value / time (s) | EIPS high order: value / time (s)
iris (150, 4)     | 1.747235 / 0.0719        | 1.747235 / 0.0079 / 8.3      | 1.746707 / 0.0009                | 1.747235 / 0.0009
wine (178, 13)    | 6.466733 / 1.2174        | 6.466733 / 0.0464 / 7.9      | 6.465304 / 0.0027                | 6.466733 / 0.0029
cancer (198, 32)  | 112.470020 / 9.9328      | 112.470021 / 0.2189 / 6.4    | 112.463802 / 0.0124              | 112.470020 / 0.0133
yeast (1484, 8)   | 0.296951 / 30.3389       | 0.296951 / 0.0661 / 7.4      | 0.297262 / 0.0033                | 0.296951 / 0.0043
abalone (4177, 8) | 22.637017 / 328.6687     | 22.637017 / 0.0971 / 5.3     | 22.637014 / 0.0058               | 22.637017 / 0.0076
TABLE II. Correntropy coefficient estimates: accumulated values and CPU times, averaged over 10 trials.

Data (N, dim.)    | Direct: value / time (s) | ICD: value / time (s) / rank | EIPS low order: value / time (s) | EIPS high order: value / time (s)
iris (150, 4)     | 0.086585 / 0.0615        | 0.086585 / 0.0081 / 7.8      | 0.086538 / 0.0006                | 0.086585 / 0.0006
wine (178, 13)    | 0.094259 / 0.9411        | 0.094259 / 0.0496 / 7.3      | 0.094239 / 0.0024                | 0.094259 / 0.0028
cancer (198, 32)  | 0.059147 / 7.0841        | 0.059147 / 0.2353 / 6.0      | 0.059141 / 0.0106                | 0.059147 / 0.0140
yeast (1484, 8)   | 0.000155 / 23.0459       | 0.000155 / 0.0709 / 5.5      | 0.000155 / 0.0044                | 0.000155 / 0.0049
abalone (4177, 8) | 0.000237 / 217.0791      | 0.000237 / 0.1035 / 5.1      | 0.000237 / 0.0052                | 0.000237 / 0.0082
IV-B. NT Kernel Adaptive Information Filtering with Error Entropy and Error Correntropy Criteria
Here we perform one-step-ahead prediction on the Mackey-Glass (MG) chaotic time series [36], defined by the time-delay ordinary differential equation

dx(t)/dt = β x(t − τ) / (1 + xⁿ(t − τ)) − γ x(t),

discretized at a sampling period of 6 seconds using the fourth-order Runge-Kutta method. Chaotic dynamics are extremely sensitive to initial conditions: small differences in initial conditions yield widely diverging outcomes, rendering long-term prediction intractable, in general.
The data are standardized by subtracting the mean and dividing by the standard deviation, then scaled by the resulting maximum absolute value to guarantee that the sample values fall within the unit range. A fixed time-embedding or input dimension is used. The results are averaged over 200 independent trials. In each trial, 2000 consecutive samples with a random starting point in the time series are used for training, and testing consists of 200 consecutive samples located in the future.

In the first example, we compared the performance of the KMCC variants, as shown in Fig. 2. We fixed the finite-dimensional RKHS dimension for the input features, using a GQ rule with subsampled grids, RFFs (variants 1 and 2 in [15]), and the TS expansion. For a comparable resource allocation, we also compared the CPU time with that of the popular vector-quantization sparsification method (QKMCC), with the vector quantization parameter set such that the final dictionary size is 315. The Gaussian kernel size and the learning rate are fixed across algorithms. As expected, compared to the KLMS, the KMCC requires additional overhead to compute the correntropy. The information theoretic computation using EIPS kernels (GQ, RFF1, RFF2, and TS), on the other hand, significantly outperformed the conventional KAF formulations (KLMS and KMCC) and KAF with sparsification (QKMCC) in terms of speed. Again, as is the case for the NT MSE formulations [15], the average EIPS CPU time is constant across all iterations vs. the (super)linear growth of conventional kernel methods, making EIPS kernel methods ideal for large datasets and continuous online updates, e.g., streaming data.
Next, we evaluated the speed of various KMEE implementations in Fig. 3. The IP gradient was computed over the most recent samples or error history. Since extensive comparisons have already been performed between random features and deterministic features for NT KAF in [15], for clarity of presentation, we focused on the GQ NT formulations. To further showcase the computational efficiency of the EIPS kernel method, we pit the NT-KMEE algorithms using the full information potential (double sum) against the linear MEE, kernel MEE, and quantized kernel MEE using the much simpler, linear-complexity single-sum stochastic information gradient or SIG. As expected, the direct method to compute the full, expected value of the IP, NT-KMEE-GQ-Gauss(error), yielded the worst performance. Nonetheless, we see that the NT kernel method maintains constant complexity. In contrast, the CPU time for a conventional kernel method such as KMEE-SIG will continue to increase as the number of training samples (update iterations) grows beyond the 2000 used in this experiment. If we combine NT kernel adaptive filtering with the EIPS-ITL estimate, e.g., the low-order Taylor polynomial mapping used in NT-KMEE-GQ-TS(error), we obtain results comparable to the linear filter with SIG (LMEE-SIG) and substantially faster times than the kernel SIG methods, as shown in the bottom plot of Fig. 3. On the other hand, the LMEE-SIG performed the worst in maximizing the IP (equivalent to minimizing the error entropy), as shown in the top plot of Fig. 3. The NT-KMEE with the EIPS-ITL estimate converged to the maximum IP at the same iteration step as the conventional kernel methods (the nonlinear rate is due to the normalization of the double sum vs. the single sum for SIG) but using significantly lower, constant CPU time.
V Conclusion
In this paper, we proposed a family of fast, scalable, and accurate estimators for information theoretic learning using explicit inner product spaces. ITL replaces conventional second-order statistics with information theory descriptors based on nonparametric estimators of Rényi entropy. ITL is conceptually different from standard kernel methods as it is based on kernel density estimation. Although ITL kernels need not satisfy Mercer’s condition, positive definiteness is preferred due to numerical stability in computation. An RKHS for ITL defined on a space of probability density functions simplifies statistical inference for supervised or unsupervised learning. ITL criteria take into account the higher-order statistical behavior of the systems and signals as desired. However, this comes at an increased cost in complexity. By extending the no-trick kernel method to ITL using EIPS feature mappings with constant complexity for certain problems, information extraction from the signal is improved without compromising scalability. We outlined several methods (deterministic, random, and hybrid) to construct EIPS feature mappings. Through experiments, we demonstrated the superior performance of EIPS-ITL estimators and of NT kernel adaptive filtering combined with EIPS-ITL cost functions.
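As an illustration of the deterministic flavor of EIPS construction mentioned above, a truncated Taylor-series (TS) feature map for the scalar Gaussian kernel can be sketched as follows. This is a sketch under assumed settings, not the paper's implementation; the truncation degree and kernel size are illustrative, and the scalar argument matches the case of MEE error samples.

```python
import numpy as np
from math import factorial

def taylor_features(x, sigma, r):
    """Degree-r Taylor (TS) feature map for the 1-D Gaussian kernel, so that
    taylor_features(x) @ taylor_features(y) ~ exp(-(x - y)^2 / (2*sigma^2))."""
    n = np.arange(r + 1)
    coeff = 1.0 / (sigma ** n * np.sqrt([factorial(int(k)) for k in n]))
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * coeff * x ** n

sigma, r = 1.0, 10          # illustrative kernel size and truncation degree
x, y = 0.3, -0.2
approx = taylor_features(x, sigma, r) @ taylor_features(y, sigma, r)
exact = np.exp(-(x - y) ** 2 / (2 * sigma ** 2))
```

Unlike random features, this map is deterministic: the approximation error comes only from truncating the exponential series, and it decays rapidly when the arguments are small relative to the kernel size.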
In the future, we will extend the EIPS framework to more advanced ITL algorithms and cost functions, such as those arising in unsupervised learning and reinforcement learning.
References
 [1] J. C. Príncipe, Information Theoretic Learning: Renyi’s Entropy and Kernel Perspectives. New York, NY, USA: Springer, 2010.
 [2] E. Parzen, “On estimation of a probability density function and mode,” Ann. Math. Statist., vol. 33, no. 3, pp. 1065–1076, Sep. 1962. [Online]. Available: https://doi.org/10.1214/aoms/1177704472
 [3] J.-W. Xu, A. R. C. Paiva, I. Park, and J. C. Principe, “A reproducing kernel Hilbert space framework for information-theoretic learning,” IEEE Transactions on Signal Processing, vol. 56, no. 12, pp. 5891–5902, Dec 2008.
 [4] W. Liu, J. C. Príncipe, and S. Haykin, Kernel Adaptive Filtering: A Comprehensive Introduction. Hoboken, NJ, USA: Wiley, 2010.
 [5] K. Li and J. C. Príncipe, “The kernel adaptive autoregressive-moving-average algorithm,” IEEE Trans. Neural Netw. Learn. Syst., vol. 27, no. 2, pp. 334–346, Feb. 2016.
 [6] ——, “Biologically-inspired spike-based automatic speech recognition of isolated digits over a reproducing kernel Hilbert space,” Frontiers in Neuroscience, vol. 12, p. 194, 2018. [Online]. Available: https://www.frontiersin.org/article/10.3389/fnins.2018.00194
 [7] K. Li and J. C. Principe, “Functional Bayesian filter,” 2019. [Online]. Available: https://arxiv.org/abs/1911.10606
 [8] B. Chen, S. Zhao, P. Zhu, and J. C. Príncipe, “Quantized kernel least mean square algorithm,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 1, pp. 22–32, 2012.
 [9] K. Li and J. C. Príncipe, “Transfer learning in adaptive filters: The nearest instance centroid-estimation kernel least-mean-square algorithm,” IEEE Transactions on Signal Processing, vol. 65, no. 24, pp. 6520–6535, Dec 2017.
 [10] K. Li and J. C. Príncipe, “Surprise-novelty information processing for Gaussian online active learning (SNIP-GOAL),” in 2018 International Joint Conference on Neural Networks (IJCNN), July 2018, pp. 1–6.
 [11] A. J. Smola and B. Schölkopf, “Sparse greedy matrix approximation for machine learning,” in Proceedings of the Seventeenth International Conference on Machine Learning, ser. ICML ’00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2000, pp. 911–918. [Online]. Available: http://dl.acm.org/citation.cfm?id=645529.657980
 [12] C. K. I. Williams and M. Seeger, “The effect of the input density distribution on kernel-based classifiers,” in Proceedings of the Seventeenth International Conference on Machine Learning, ser. ICML ’00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2000, pp. 1159–1166. [Online]. Available: http://dl.acm.org/citation.cfm?id=645529.756511
 [13] S. Fine and K. Scheinberg, “Efficient SVM training using low-rank kernel representations,” J. Mach. Learn. Res., vol. 2, pp. 243–264, Mar. 2002. [Online]. Available: http://dl.acm.org/citation.cfm?id=944790.944812
 [14] S. Seth and J. C. Principe, “On speeding up computation in information theoretic learning,” in 2009 International Joint Conference on Neural Networks, June 2009, pp. 2883–2887.
 [15] K. Li and J. C. Principe, “No-trick (treat) kernel adaptive filtering using deterministic features,” 2019. [Online]. Available: https://arxiv.org/abs/1912.04530
 [16] L. Greengard and V. Rokhlin, “A fast algorithm for particle simulations,” Journal of Computational Physics, vol. 73, no. 2, pp. 325–348, 1987.
 [17] L. Greengard and J. Strain, “The fast gauss transform,” SIAM J. Sci. Stat. Comput., vol. 12, no. 1, pp. 79–94, Jan. 1991. [Online]. Available: https://doi.org/10.1137/0912004
 [18] Yang, Duraiswami, Gumerov, and Davis, “Improved fast Gauss transform and efficient kernel density estimation,” in Proceedings Ninth IEEE International Conference on Computer Vision, Oct 2003, pp. 664–671 vol. 1.
 [19] K. Li, B. Chen, and J. C. Príncipe, “Kernel adaptive filtering with confidence intervals,” in The 2013 International Joint Conference on Neural Networks (IJCNN), Aug 2013, pp. 1–6.
 [20] T. Dao, C. D. Sa, and C. Ré, “Gaussian quadrature for kernel features,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS’17. USA: Curran Associates Inc., 2017, pp. 6109–6119.
 [21] A. Rahimi and B. Recht, “Random features for large-scale kernel machines,” in Proceedings of the 20th International Conference on Neural Information Processing Systems, ser. NIPS’07. USA: Curran Associates Inc., 2007, pp. 1177–1184.
 [22] C. K. I. Williams and M. Seeger, “Using the Nyström method to speed up kernel machines,” in Proceedings of the 13th International Conference on Neural Information Processing Systems, ser. NIPS’00. Cambridge, MA, USA: MIT Press, 2000, pp. 661–667. [Online]. Available: http://dl.acm.org/citation.cfm?id=3008751.3008847
 [23] F. R. Bach and M. I. Jordan, “Predictive low-rank decomposition for kernel methods,” in Proceedings of the 22nd International Conference on Machine Learning, ser. ICML ’05. New York, NY, USA: ACM, 2005, pp. 33–40. [Online]. Available: http://doi.acm.org/10.1145/1102351.1102356
 [24] S. Bochner, Lectures on Fourier Integrals (AM-42), trans. M. Tenenbaum and H. Pollard. Princeton, NJ, USA: Princeton University Press, 1959. [Online]. Available: http://www.jstor.org/stable/j.ctt1b9s09r
 [25] S. A. Smolyak, “Quadrature and interpolation formulas for tensor products of certain classes of functions,” Dokl. Akad. Nauk SSSR, vol. 148, no. 5, pp. 1042–1045, 1963.
 [26] N. Aronszajn, “Theory of reproducing kernels,” Trans. Amer. Math. Soc., vol. 68, pp. 337–404, 1950.
 [27] E. Parzen, “Statistical methods on time series by Hilbert space methods,” Applied Mathematics and Statistics Laboratory, Stanford, CA, Tech. Rep. 23, 1959.
 [28] W. Liu, P. P. Pokharel, and J. C. Principe, “Correntropy: Properties and applications in non-Gaussian signal processing,” IEEE Transactions on Signal Processing, vol. 55, no. 11, pp. 5286–5298, Nov 2007.
 [29] I. Santamaria, P. P. Pokharel, and J. C. Príncipe, “Generalized correlation function: definition, properties, and application to blind equalization,” IEEE Trans. Signal Process., vol. 54, no. 6, pp. 2187–2197, 2006.
 [30] W. Liu, P. Pokharel, and J. C. Príncipe, “The kernel leastmeansquare algorithm,” IEEE Trans. Signal Process., vol. 56, no. 2, pp. 543–554, 2008.
 [31] S. Zhao, B. Chen, and J. C. Príncipe, “Kernel adaptive filtering with maximum correntropy criterion,” in The 2011 International Joint Conference on Neural Networks, July 2011, pp. 2012–2017.
 [32] S. Han, S. Rao, D. Erdogmus, K.-H. Jeong, and J. Principe, “A minimum-error entropy criterion with self-adjusting step-size (MEE-SAS),” Signal Processing, vol. 87, no. 11, pp. 2733–2745, 2007. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0165168407001788
 [33] D. Erdogmus and J. C. Principe, “Generalized information potential criterion for adaptive system training,” Trans. Neur. Netw., vol. 13, no. 5, pp. 1035–1044, Sep. 2002. [Online]. Available: https://doi.org/10.1109/TNN.2002.1031936
 [34] B. Chen, Z. Yuan, N. Zheng, and J. C. Príncipe, “Kernel minimum error entropy algorithm,” Neurocomput., vol. 121, pp. 160–169, Dec. 2013. [Online]. Available: http://dx.doi.org/10.1016/j.neucom.2013.04.037
 [35] “UCI machine learning repository.” [Online]. Available: http://archive.ics.uci.edu/ml
 [36] M. C. Mackey and L. Glass, “Oscillation and chaos in physiological control systems,” Science, vol. 197, no. 4300, pp. 287–289, Jul. 1977.