Information Theoretic Feature Transformation Learning for Brain Interfaces

by Ozan Özdenizci, et al.

Objective: A variety of pattern analysis techniques for model training in brain interfaces exploit neural feature dimensionality reduction based on feature ranking and selection heuristics. In light of broad evidence demonstrating the potential sub-optimality of ranking based feature selection by any criterion, we propose to extend this focus with an information theoretic learning driven feature transformation concept. Methods: We present a maximum mutual information linear transformation (MMI-LinT), and a nonlinear transformation (MMI-NonLinT) framework derived from a general definition of the feature transformation learning problem. Empirical assessments are performed based on electroencephalographic (EEG) data recorded during a four class motor imagery brain-computer interface (BCI) task. Exploiting state-of-the-art methods for initial feature vector construction, we compare the proposed approaches with conventional feature selection based dimensionality reduction techniques which are widely used in brain interfaces. Furthermore, for the multi-class problem, we present and exploit a hierarchical graphical model based BCI decoding system. Results: Both binary and multi-class decoding analyses demonstrate significantly better performance with the proposed methods. Conclusion: Information theoretic feature transformations are capable of tackling potential confounders of conventional approaches in various settings. Significance: We argue that this concept provides significant insights to extend the focus on feature selection heuristics to a broader definition of feature transformation learning in brain interfaces.






I Introduction

Over the last decades, electroencephalogram (EEG) based brain-computer interfaces (BCIs) have shown the promise of providing a direct neural communication and control channel in paralysis, and of reinforcing motor restoration in stroke [1, 2]. In the design of closed-loop brain/neural interfaces for people with neuromuscular disabilities, a variety of statistical signal processing and pattern analysis approaches have been considered. For supervised neural decoding model construction, improving generalization and optimally exploiting the information content of the extracted neural features with respect to their class conditions (i.e., labels) is essential given a finite number of training data samples. Furthermore, the number of daily training examples will be strictly limited for patients with severe neuromuscular disabilities, due to the constrained data collection times under adequate concentration and consciousness. This optimal exploitation of extracted features can be performed with various feature learning and dimensionality reduction frameworks, which enable elimination or weighting of redundant features that do not convey reliable statistical information for decoding, or avoid overfitting by reducing the constructed model complexity.

A theoretically optimal dimensionality reduction procedure, given a set of training examples and a specified classifier, would iteratively adjust a pre-determined feature learning framework until the best cross validated classification accuracy is achieved; such methods are known as wrapper approaches. Since this is generally infeasible, filter approaches provide an alternative by optimizing a feature learning framework based on an optimality criterion. Specifically, feature ranking and subset selection algorithms [3], and particularly feature selection based on information theoretic criteria, where salient statistical properties of features can be exploited in the form of a probabilistic dependence measure, have shown significant promise [4, 5]. Likewise, a vast body of contemporary brain interfaces rely on subject-specific feature selection methods for feature dimensionality reduction [6, 7, 8, 9, 10], particularly based on the maximum mutual information criterion [11, 12, 13], as also surveyed by two extensive, recent and complementary studies on BCIs [14, 15].

In other respects, there exists significant evidence that feature ranking by any criterion, including information theoretic criteria, is potentially sub-optimal [16, 17]. This argument follows from statistical demonstrations that two individually redundant features can be informative jointly, and that high correlation between features should not necessarily be interpreted as a lack of feature complementarity [3]. Based on this idea, information theoretic feature projection approaches were introduced in the form of linear projections [18, 19, 20, 21, 22] or rotation matrices [23]. This feature transformation approach can be interpreted as determining a manifold on which projections of the original extracted features carry maximal mutual information with the class labels. However, these specific approaches require computationally feasible practical approximations, and they have not yet been considered for feature learning in brain interfaces. We argue that exploiting such an approach may provide significant insights, particularly for multi-class BCIs, which are more inclined to overfitting with high-dimensional training feature spaces and to the confounders of sub-optimal feature selection based dimensionality reduction. From a neurophysiological standpoint, feature projection approaches align with the widely-acknowledged hypothesis that distributed networks of cortical sources are likely to generate brain responses associated with specific tasks [24, 25]. Hence, BCI decoder models could potentially benefit from arbitrary synergies of extracted EEG features representative of various neural activities, rather than a selected subset.

In this article, we propose a general definition for information theoretic feature transformation learning, which we stochastically estimate on finite training data sets for feature extraction in brain interfaces. We present a maximum mutual information linear transformation (MMI-LinT) approach, which we previously evaluated in binary decoding [26], and a nonlinear transformation (MMI-NonLinT) approach derived from the general definition. Furthermore, we introduce a graphical model based hierarchical multi-class decoding framework, which can be considered as an intuitively specified case of one-versus-rest binary classifiers. We argue that a hierarchical binary feature transformation learning approach in this multi-class framework is likely to outperform heuristic feature selection algorithms. We empirically assess MMI-LinT and MMI-NonLinT using EEG data recorded during a cue-based four class motor imagery BCI task [27]. Firstly, we exploit state-of-the-art methods for initial feature vector construction: common spatial patterns (CSP) [28, 29] and the filter bank CSP (FBCSP) extension [30]. Subsequently, we compare our feature learning and dimensionality reduction approach with both statistical testing based, as well as mutual information based, feature ranking and selection methods explored in previous BCI studies. Finally, we discuss the significance of our results and provide insights that extend the feature selection based focus to feature transformation learning in brain interfaces.

II Information Theoretic Feature Transformation Learning

In this section, we introduce the information theoretic feature transformation learning objective. We discuss mutual information in Bayesian optimal classification, present the stochastic estimation approach for the objective, and introduce the linear and nonlinear transformation schemes.

II-A Objective Formulation

Let $\mathcal{X} = \{x_1, x_2, \ldots, x_N\}$ denote the observational finite data set consisting of $N$ samples of a continuous valued random variable $x$, where $x_i \in \mathbb{R}^d$ is the $d$-dimensional feature vector (e.g., pre-processed EEG data) representing the $i$-th sample. Likewise, let $\mathcal{C} = \{c_1, c_2, \ldots, c_N\}$ denote the set of their respective class labels consisting of $N$ samples of a discrete valued random variable $c$, where each $c_i$ represents the class category varying between $1$ to $L$, with $L$ being the number of classes. The objective in the learning problem is to find a transformation $f$ that maps the $d$-dimensional input feature space to a $d'$-dimensional transformed feature space, while maximizing the mutual information between the transformed data and the corresponding class labels, based on the observational finite data set samples:

$$f^{*} = \arg\max_{f \in \mathcal{F}} I\big(f(x; \theta);\, c\big), \tag{1}$$

with the continuous random variable $y = f(x; \theta)$ having transformed data set samples $\{y_1, \ldots, y_N\}$ in the $d'$-dimensional feature space, $\theta$ denoting the parameters of the function $f$, $I(y; c)$ the mutual information between random variables $y$ and $c$, and $\mathcal{F}$ the feature transform function space. We will denote the probability density for the random variable $y$ with $p(y)$, and the probability mass function for $c$ with $P(c)$. We will assume $d' < d$ for dimensionality reduction in model training.

II-B Information Theoretic Bounds on Classification Error

In Bayesian optimal classification, upper and lower bounds on the probability of error in estimating a discrete valued random variable from an observational random variable can be derived by information theoretic criteria. Using the notation we provided above, for a binary classification problem, these bounds can be determined as:

$$H(c) - I(y;c) - 1 \;\leq\; P_e \;\leq\; \frac{1}{2}\big(H(c) - I(y;c)\big), \tag{2}$$

with $P_e = P(\hat{c} \neq c)$, where $\hat{c}$ is the predicted class label while estimating $c$ after observing a sample of $y$, and $H(\cdot)$ is Shannon's entropy. In Eq. 2, the lower bound on the probability of error is given by Fano's inequality [31], and the upper bound on the Bayes error is known as the Hellman-Raviv bound [32]. Together, these inequalities claim that the lowest possible Bayes error of any given classifier providing the class label prediction $\hat{c}$ can be achieved when the mutual information between the random variables $y$ and $c$ is maximized.
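As a quick numerical check of how Eq. 2 behaves, the following minimal sketch evaluates both bounds, assuming entropies are measured in bits (so $\log_2 L = 1$ for the binary case); the helper names `entropy_bernoulli` and `error_bounds` are our own, not from the paper:

```python
import math

def entropy_bernoulli(p):
    """Shannon entropy H(c) in bits of a binary label with P(c = 1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def error_bounds(H_c, I_yc):
    """Fano lower bound and Hellman-Raviv upper bound on the Bayes error
    for binary classification, using H(c|y) = H(c) - I(y;c)."""
    H_cond = H_c - I_yc
    lower = max(0.0, H_cond - 1.0)   # Fano's inequality (log2(2) = 1)
    upper = 0.5 * H_cond             # Hellman-Raviv bound
    return lower, upper

H_c = entropy_bernoulli(0.5)         # 1 bit for balanced binary classes
lo, hi = error_bounds(H_c, I_yc=0.6)
lo0, hi0 = error_bounds(H_c, I_yc=0.0)
```

Increasing $I(y;c)$ from 0 to 0.6 bits tightens the achievable-error region, which is exactly the motivation for maximizing mutual information in the transformed feature space.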

II-C Stochastic Mutual Information Gradient

Mutual information between the continuous random variable $y$ and the discrete class labels random variable $c$ is defined as $I(y;c) = H(c) - H(c|y)$. It is important to note that estimating Eq. 1 is a challenging problem, as recently studied [33, 34], since it includes both continuous and discrete random variables, where the entropy of a continuous random variable can have infinitely large positive or negative values, whereas the entropy of a discrete random variable is always non-negative. Formally, the mutual information is denoted as:

$$I(y;c) = \sum_{c} \int_{y} p(y, c) \log \frac{p(y, c)}{p(y)\, P(c)}\, dy. \tag{3}$$
We will approach the optimization problem stochastically based on the observational data set samples and their corresponding class labels. In this context, precise estimation of mutual information is not necessary, but we aim to adaptively estimate the optimal feature transformation function parameters under maximum mutual information criterion. This approach is motivated by similar work on stochastic entropy and mutual information estimation models [35, 19].

In our adaptive algorithm, the parameters $\theta$ will be iteratively updated based on the instantaneous estimate of the gradient of mutual information at each iteration $t$ (i.e., $\hat{\nabla}_{\theta} I(y;c)$), which we will refer to as the stochastic mutual information gradient. Here, in fact, we approximate the true gradient of the mutual information (i.e., $\nabla_{\theta} I(y;c)$) stochastically, and perform gradient ascent parameter updates based on the instantaneous gradient estimate evaluated with the instantaneous sample $(x_t, c_t)$ and the value of $\theta$ at iteration $t$. This stochastic quantity can be obtained by dropping the expectation operation over $(y, c)$ from:

$$\nabla_{\theta} I(y;c) = \nabla_{\theta}\, E_{y,c}\!\left[ \log \frac{p(y|c)}{p(y)} \right], \tag{4}$$

such that the resulting expression (i.e., the stochastic mutual information gradient at iteration $t$) will be expressed by:

$$\hat{\nabla}_{\theta} I = \nabla_{\theta} \log \frac{\hat{p}(y_t | c_t)}{\hat{p}(y_t)}. \tag{5}$$
In practice, the probability density for $y$ is not known, hence we can non-parametrically approximate it by kernel density estimations in the form of

$$\hat{p}(y) = \frac{1}{N} \sum_{i=1}^{N} K_{\sigma}(y - y_i),$$

with $K_{\sigma}(\cdot)$ being the size-$\sigma$ multivariate kernel function for a $d'$-dimensional vector [36]. Note that a continuously differentiable kernel is necessary for proper evaluation of the gradients. Here, the stochastic estimator in Eq. 5 is a biased estimator of the actual mutual information gradient in Eq. 4, since it is based on kernel density estimators with finite samples, which are biased estimators [37]. Going further, applying Bayes' theorem, the stochastic mutual information estimate $\log\big(\hat{p}(y_t|c_t)/\hat{p}(y_t)\big)$ from Eq. 5 can be expressed by:

$$\log \frac{\hat{p}(y_t | c_t)}{\sum_{c'} \hat{p}(y_t | c')\, P(c')}, \tag{6}$$

where $\hat{p}(y_t | c)$ at each iteration can be estimated either parametrically (e.g., Gaussian) or non-parametrically through Gaussian kernel density fitting on class conditional distributions of the transformed training data, and the class priors $P(c)$ can be determined again on the training data samples.
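The class-conditional form of this stochastic estimate can be sketched as follows, assuming Gaussian kernels of a fixed bandwidth `sigma` and empirical class priors; the function names and the toy data are illustrative, not from the paper:

```python
import numpy as np

def gaussian_kde(y, centers, sigma):
    """Gaussian kernel density estimate p̂(y) at point y from sample centers."""
    diffs = centers - y                                    # (n, d')
    d = diffs.shape[1]
    norm = (2.0 * np.pi * sigma ** 2) ** (d / 2.0)
    return np.mean(np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * sigma ** 2))) / norm

def smi_estimate(y_t, c_t, Y, C, sigma=0.5):
    """Stochastic MI estimate log p̂(y_t|c_t) / Σ_c p̂(y_t|c) P(c),
    with class conditionals and priors taken from the transformed training set."""
    classes, counts = np.unique(C, return_counts=True)
    priors = counts / counts.sum()                         # empirical class priors
    cond = [gaussian_kde(y_t, Y[C == c], sigma) for c in classes]
    marginal = float(np.dot(priors, cond))                 # Bayes-expanded p̂(y_t)
    return np.log(cond[list(classes).index(c_t)] / marginal)

# Toy example: two well-separated 1-D clusters of transformed features
Y = np.array([[-2.1], [-2.0], [-1.9], [1.9], [2.0], [2.1]])
C = np.array([0, 0, 0, 1, 1, 1])
```

For a sample lying inside its own class cluster the estimate is positive, while a mislabeled sample yields a negative value, which is the signal that gradient ascent exploits.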

During model training, we employ momentum stochastic gradient ascent [38]. The parameter update at iteration $t$ is determined by $\Delta\theta_t = \gamma\, \Delta\theta_{t-1} + \eta\, \hat{\nabla}_{\theta} I$, which further is employed as $\theta_{t+1} = \theta_t + \Delta\theta_t$, where $\gamma$ is the momentum parameter and $\eta$ is the step size for the gradient. Iterations are performed using all training data samples in randomized order with a batch size of one sample, and are repeated for a number of training epochs. The choice of the function $f$ and its parameters $\theta$ specifies the feature transformation scheme. In this paper, we propose a linear (MMI-LinT) and a nonlinear (MMI-NonLinT) transformation modality as presented henceforth.

II-D Linear Feature Transformation (MMI-LinT)

In the MMI-LinT framework, the transformation function is parameterized by a linear projection matrix. At each iteration $t$, the transformation function generates samples of the random variable $y$ according to $y_t = W_t\, x_t$, where the elements of the matrix $W$ of size $d' \times d$ are updated by the adaptive linear system. Accordingly, the stochastic mutual information gradient can be denoted as:

$$\hat{\nabla}_{W} I = \nabla_{W} \log \frac{\hat{p}(W x_t \,|\, c_t)}{\hat{p}(W x_t)}, \tag{7}$$

with $x_t$ one of the data set samples from $\mathcal{X}$ during model training. Using continuously differentiable class conditional kernel density estimations or a parametric density, the gradients with respect to $W$ can be obtained numerically. From a computational implementation perspective, this simply corresponds to backpropagation of the stochastic mutual information gradient through a single fully-connected layer neural network. The dimensionality $d'$ of the linear projection remains a parameter to be determined, alongside the number of training epochs.
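A minimal NumPy sketch of one training epoch of this adaptive linear system follows, using fixed-center Gaussian KDEs, a batch size of one sample in randomized order, and the momentum update of Section II-C; all names, the bandwidth, and the step sizes are our own illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_log_grad(y, centers, sigma):
    """Gradient of log p̂(y) with respect to y under a Gaussian KDE with fixed centers."""
    diffs = y - centers                                    # (n, d')
    logw = -np.sum(diffs ** 2, axis=1) / (2.0 * sigma ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                           # kernel responsibilities
    return -(w[:, None] * diffs).sum(axis=0) / sigma ** 2

def mmi_lint_epoch(W, X, C, sigma=0.5, eta=0.05, gamma=0.9):
    """One epoch of momentum stochastic MI gradient ascent for y = W x."""
    velocity = np.zeros_like(W)
    for t in rng.permutation(len(X)):                      # randomized order, batch of 1
        Y = X @ W.T                                        # current transformed samples
        y_t = W @ X[t]
        g = kde_log_grad(y_t, Y[C == C[t]], sigma) - kde_log_grad(y_t, Y, sigma)
        grad_W = np.outer(g, X[t])                         # chain rule through y = W x
        velocity = gamma * velocity + eta * grad_W
        W = W + velocity                                   # gradient ascent step
    return W

# Toy 2-class data in 3-D, reduced to d' = 1
X = np.vstack([rng.normal(-1, 0.3, (30, 3)), rng.normal(1, 0.3, (30, 3))])
C = np.repeat([0, 1], 30)
W = rng.normal(0, 0.1, (1, 3))
for _ in range(5):
    W = mmi_lint_epoch(W, X, C)
```

Treating the KDE centers as fixed within an update is a common simplification; the gradient then flows only through the instantaneous sample, matching the stochastic estimator above.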

II-E Nonlinear Feature Transformation (MMI-NonLinT)

A nonlinear transformation function can be parameterized in various modalities. We employ a multilayer perceptron framework for MMI-NonLinT. Specifically, in the two-layer perceptron case, which we employed in our demonstrations (c.f. Section IV), the transformation function is denoted as a combination of a linear input projection with a nonlinear activation function that outputs to the hidden layer, and a linear output layer projection. For the hidden layer nonlinearity we use a rectified linear unit (ReLU) transfer function.

Overall, by definition of the presented two-layer perceptron network, the nonlinear feature transformation at iteration $t$ can be formulated as a composition of two transformation functions $f_1: \mathbb{R}^d \to \mathbb{R}^h$ and $f_2: \mathbb{R}^h \to \mathbb{R}^{d'}$, where $h$ denotes the hidden layer dimensionality. These can be represented as:

$$y_t = f_2\big(f_1(x_t; W_1);\, W_2\big) = W_2 \max(0,\, W_1 x_t), \tag{8}$$

with $W_1$ of size $h \times d$ and $W_2$ of size $d' \times h$. During MMI-NonLinT feature learning implementations, stochastic mutual information gradients can be estimated by backpropagating $\hat{\nabla} I$ through the multilayer perceptron, to iteratively estimate the optimal projection matrices $W_1$ and $W_2$. The number of nodes in the hidden layer (i.e., the dimensionality $h$ of the projection $W_1$) is another parameter to be determined, alongside $d'$ and the number of training epochs.
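The forward map and the backpropagation step through this two-layer perceptron can be sketched as below; the helper names are hypothetical, and the upstream gradient `g_y` stands for the stochastic mutual information gradient with respect to the transformed sample:

```python
import numpy as np

def mmi_nonlint_forward(x, W1, W2):
    """Two-layer MMI-NonLinT map: y = W2 · ReLU(W1 · x)."""
    h = np.maximum(0.0, W1 @ x)            # hidden layer with ReLU nonlinearity
    return W2 @ h

def mmi_nonlint_backward(g_y, x, W1, W2):
    """Backpropagate an upstream gradient g_y = ∇_y log(p̂(y|c)/p̂(y))
    to the two projection matrices."""
    h = np.maximum(0.0, W1 @ x)
    grad_W2 = np.outer(g_y, h)             # output layer gradient
    g_h = (W2.T @ g_y) * (h > 0)           # gradient gated by the ReLU
    grad_W1 = np.outer(g_h, x)             # input layer gradient
    return grad_W1, grad_W2

# Dimensions matching the later experiments: d = 24 features, h = 30, d' = 2
rng = np.random.default_rng(1)
W1 = 0.1 * rng.normal(size=(30, 24))
W2 = 0.1 * rng.normal(size=(2, 30))
x = rng.normal(size=24)
y = mmi_nonlint_forward(x, W1, W2)
```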

III Hierarchical Multi-Class Decoding

In this section, we introduce the binary hierarchical classification scheme we employ for multi-class decoding. We present a graphical model based representation, express the Bayesian decision criterion, and provide a coherent extension for the proposed information theoretic feature learning protocol.

III-A Hierarchical Graphical Model

We decompose the multi-class ($L$ class) problem into $L-1$ binary sub-problems. This results in a hierarchically arranged tree with $L-1$ one-versus-rest classifiers. In the context of BCIs, we argue that the hierarchical arrangement of one-versus-rest binary sub-problems can be represented by an intuitive ordering rather than an arbitrary one. For instance, in hand gesture decoding, upper hierarchical levels can discriminate the choice of hand and palm opening, whereas lower levels will be decoding power versus precision grasp type, or thumb abduction versus adduction of a specific grasp [39]. This decomposition provides an application specific multi-class decoding scheme which we demonstrate in Section IV.

The hierarchical tree representation is depicted by the graphical model in Figure 1. At each sample, the overall decision $w$ is deterministically related with the states at each level. Each state variable $s_l$ at level $l$ represents the decision for a binary sub-problem (i.e., $l = 1$ to $L-1$ representing decisions from the highest to the lowest levels). Extracted features $y_l$ at level $l$ from observational data samples are probabilistically related with the sub-problem decisions. This hierarchical decoding approach can be interpreted as a special case of one-versus-rest multi-class decoding schemes with an intuitive ordering.

Fig. 1: Graphical representation of the decoding model. Features $y_l$, extracted from observational data samples at level $l$, are indicated with blue nodes representing the observed random variables. The decision $w$ at each sample is deterministically related (dashed lines) with the states $s_l$ at each level. Observed random variables are probabilistically related with the states (solid lines).

III-B Bayesian Decision Criterion

Classification based on $y_1, \ldots, y_{L-1}$ (i.e., extracted features from observational data samples) is performed by maximum-a-posteriori (MAP) estimation. Relying on the graphical model and the hierarchical decomposition for level-wise feature extraction, the MAP decision rule can be denoted as:

$$\hat{w} = \arg\max_{w} P(w \,|\, y_1, y_2, \ldots, y_{L-1}), \tag{9}$$

with $y_l$ denoting the extracted feature vector from the subset of observations corresponding only to level $l$. This ensures feature extraction to be performed between two classes at each level. Hence at the feature extraction step, the set $\mathcal{X}$ is split into one-versus-rest subsets based on the intuitive hierarchical ordering (c.f. Section III-C). Based on the graphical model, Eq. 9 can be denoted as:

$$\hat{w} = \arg\max_{w} P(s_1, s_2, \ldots, s_{L-1} \,|\, y_1, y_2, \ldots, y_{L-1}), \tag{10}$$

which can further be represented by the across-level independency assumptions imposed by the graphical model as:

$$\hat{w} = \arg\max_{w} \prod_{l=1}^{L-1} p(y_l \,|\, s_l)\, P(s_l), \tag{11}$$

with the first expression in the product denoting the class conditional density of the extracted features at level $l$, the second expression in the product denoting the class priors at level $l$, and the factors of levels below the one resolving $w$ taken as $1$ for consistency. Eq. 11 can be evaluated for a test sample using the likelihoods based on class conditional kernel density estimations obtained with the training data for both binary classes at all levels.
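Evaluating Eq. 11 for a four-class hierarchy can be sketched as follows, with the per-level binary likelihoods supplied externally (e.g., from kernel density estimates) and uniform level-wise priors; the path encoding and all names are our own illustrative construction:

```python
# Hypothetical 4-class hierarchy: each class is resolved by a path of binary
# decisions. Class 1 exits at level 1 (+); class 2 at level 2 (+); classes 3
# and 4 are separated at level 3.
PATHS = {1: [(0, "+")],
         2: [(0, "-"), (1, "+")],
         3: [(0, "-"), (1, "-"), (2, "+")],
         4: [(0, "-"), (1, "-"), (2, "-")]}

def hierarchical_map(level_likelihoods, prior=0.5):
    """MAP decision in the spirit of Eq. 11: product of level-wise class
    conditional likelihoods and uniform binary priors along each class's path.
    level_likelihoods[l][s] = p(y_l | s_l = s) for branch s in {'+', '-'}."""
    scores = {}
    for w, path in PATHS.items():
        score = 1.0
        for level, branch in path:
            score *= level_likelihoods[level][branch] * prior
        scores[w] = score
    return max(scores, key=scores.get)
```

Note that with uniform binary priors at each level, deeper classes implicitly receive smaller overall priors (one factor of 0.5 per traversed level), which is inherent to this tree-structured decomposition.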

III-C Hierarchical Feature Transformation Learning

We denote a consistent notation for combining the feature transformation learning approach with the hierarchical framework. For hierarchical feature extraction, the first level features $y_1$ are obtained using the complete set $\mathcal{X}$, with corresponding binary labels $s_1$ based on the first level hierarchical disjunction. However, $y_2$ are extracted based on the subset $\mathcal{X}_2 \subseteq \mathcal{X}$ with labels $s_2$, where $N_2$ denotes the number of data samples that correspond to the second level hierarchical disjunction. In mathematical terms, $\mathcal{X}_1 = \mathcal{X}$ with $N_1 = N$, $\mathcal{X}_2$ consists of the samples of $\mathcal{X}_1$ with negative first-level labels, and so on, with $N_l$ denoting the number of samples with negative labels at level $l-1$, which constitute all samples of level $l$. Here, the choice of the negative label for the continuing disjunction branches was arbitrary.

For the information theoretic objectives, transformation functions are obtained at every hierarchical level, based on the subset of the data samples and their corresponding binary labels at that level. Overall, this can be denoted as:

$$f_l^{*} = \arg\max_{f_l \in \mathcal{F}} I\big(f_l(x; \theta_l);\, s_l\big), \quad l = 1, \ldots, L-1, \tag{12}$$

with $y_l = f_l(x; \theta_l)$ a continuous valued random variable having transformed data set samples for level $l$, $\theta_l$ denoting the parameters of the transformation function at level $l$, and $s_l$ a binary random variable for level $l$.

IV Experimental Results

In this section, we implement and demonstrate the feasibility of our approach using EEG data recorded during a cue-based four class motor imagery BCI task [27]. For empirical assessments, we used data set 2a of the BCI Competition IV [40], which was provided by the Institute of Neural Engineering, Technische Universität Graz, Austria. We compare and discuss the results with conventional feature dimensionality reduction methods accordingly in binary and multi-class decoding.

IV-A Study Design

Nine healthy subjects (4 female; mean age = 23.11 ± 2.57) participated in EEG data collection for this data set [41]. During recordings, subjects were sitting in front of a computer screen on which the cue-based BCI paradigm consisting of four motor imagery tasks was presented to them. Each subject participated in the experiment for two sessions on different days, henceforth referred to as session 1 and session 2. Each of these sessions included six runs separated by short breaks, where each run consisted of 48 trials (12 for each one of the four classes), yielding a total of 288 trials per session.

At the beginning of each trial, a fixation cross was displayed on the black screen. After two seconds, a cue in the form of an arrow pointing up, down, right or left, corresponding to the four classes (i.e., tongue, feet, right hand or left hand imagination), appeared and stayed on the screen for 1.25 seconds. This instructed the subjects to perform the desired motor imagination task, with no feedback provided. Subjects were instructed to perform motor imagery until the fixation cross disappeared, which constituted a three-second imagery period for data processing. Afterwards, a short break was displayed on the screen and the next trial began. The order of the cues (i.e., classes) across trials was randomized.

Twenty-two electrodes placed on the scalp according to the 10-20 system were used for EEG recordings at locations: Fz, FC3, FC1, FCz, FC2, FC4, C5, C3, C1, Cz, C2, C4, C6, CP3, CP1, CPz, CP2, CP4, P1, Pz, P2, POz. Data were referenced to the left mastoid, grounded to the right mastoid, sampled at 250 Hz, and filtered with 0.5–100 Hz band pass and a 50 Hz notch filter. We did not exclude any trials or perform any electrooculography (EOG) based artifact reduction.

IV-B EEG Signal Processing Pipeline

The three-second imagery duration of each trial results in a trial data matrix of 22 channels by 750 samples. The corresponding multi-class labels across trials are: tongue (class 1), both feet (class 2), right hand (class 3) and left hand (class 4). In binary hierarchical decoding, we analyzed the trials in three one-versus-rest sub-problem levels, intuitively ordered as: (1) speech (class 1) versus motor (classes 2, 3, 4), (2) feet (class 2) versus hand (classes 3, 4), (3) right (class 3) versus left hand (class 4).
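The level-wise splitting of trials and labels into these one-versus-rest sub-problems can be sketched as follows (the function name is our own; labels follow the 1-4 coding above, and negatives continue down the hierarchy):

```python
import numpy as np

def hierarchical_splits(labels):
    """Split 4-class motor imagery labels into the three one-versus-rest
    binary sub-problems: (1) speech vs motor, (2) feet vs hand, (3) right vs left.
    Returns, per level, the trial indices used and their binary labels."""
    labels = np.asarray(labels)
    splits = []
    remaining = np.arange(len(labels))
    for positive in (1, 2, 3):                       # class resolved at each level
        binary = (labels[remaining] == positive).astype(int)
        splits.append((remaining, binary))
        remaining = remaining[binary == 0]           # negatives continue downward
    return splits
```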

State-of-the-art discriminative spatial filtering of EEG in motor imagery paradigms highlights the common spatial patterns (CSP) algorithm [28, 29], which aims to identify a discriminative basis for a multichannel signal recorded under different conditions, where signal representations maximally differ in variance between these conditions (i.e., classes). In the binary case, the CSP algorithm aims to solve the problem:

$$\max_{w} \frac{w^{T} \Sigma_1 w}{w^{T} (\Sigma_1 + \Sigma_2) w}, \tag{13}$$

where $\Sigma_1$ and $\Sigma_2$ denote the class covariance matrices of the data matrix with rows denoting the number of channels. The vector $w$ indicates the discriminative spatial filter to be applied over channels. Eq. 13 can be solved by the generalized eigenvalue problem $\Sigma_1 w = \lambda (\Sigma_1 + \Sigma_2) w$, which has as many possible solutions as the number of channels. The eigenvector corresponding to the highest eigenvalue indicates a basis where the variance of the class 1 data will be highest and that of class 2 lowest, and vice versa for the lowest eigenvalue. Data pre-processing is usually performed by combining $2m$ eigenvectors as $m$ pairs corresponding to the smallest and largest eigenvalues obtained, forming a spatial filter matrix, and spatially filtering the data with this matrix. Afterwards, $2m$-dimensional features are calculated as the log-normalized signal variances across the time-series of the CSP filtered data.
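A compact NumPy sketch of this CSP pipeline, solving the generalized eigenvalue problem via whitening of the composite covariance and returning log-normalized variance features; the synthetic trials and all names are illustrative:

```python
import numpy as np

def csp_filters(trials_1, trials_2, m=3):
    """CSP filters from the generalized eigenvalue problem Σ1 w = λ (Σ1 + Σ2) w,
    solved by whitening Σ1 + Σ2. Keeps the m smallest- and m largest-eigenvalue
    filters. trials_k: iterable of (channels x samples) arrays for class k."""
    avg_cov = lambda trials: sum(np.cov(tr) for tr in trials) / len(trials)
    S1 = avg_cov(trials_1)
    Sc = S1 + avg_cov(trials_2)
    d, U = np.linalg.eigh(Sc)
    P = U @ np.diag(d ** -0.5) @ U.T          # whitening transform for Σ1 + Σ2
    lam, V = np.linalg.eigh(P @ S1 @ P)       # eigenvalues ascending in [0, 1]
    W = (P @ V).T                             # rows are spatial filters w
    return np.vstack([W[:m], W[-m:]])

def csp_features(trial, W):
    """2m log-normalized variances of the spatially filtered trial."""
    var = np.var(W @ trial, axis=1)
    return np.log(var / var.sum())

# Synthetic 4-channel trials: class 1 has high variance on channel 0, class 2 on channel 1
rng = np.random.default_rng(0)
trials_1 = [np.diag([3.0, 1.0, 1.0, 1.0]) @ rng.normal(size=(4, 250)) for _ in range(10)]
trials_2 = [np.diag([1.0, 3.0, 1.0, 1.0]) @ rng.normal(size=(4, 250)) for _ in range(10)]
W = csp_filters(trials_1, trials_2, m=1)
```

The whitening route is equivalent to the generalized eigendecomposition and keeps the implementation NumPy-only.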

One prevalent extension of CSP is band pass filtering of EEG in several frequency sub-bands before applying CSP, and concatenating the outputs of each sub-band specific CSP in one higher dimensional feature vector, which is known as the filter bank CSP (FBCSP) approach [30]. We exploit FBCSP by band pass filtering the EEG data of each trial and electrode in four frequency sub-bands known to be relevant for motor imagery: 8–12 Hz (the μ rhythm), and the 12–16 Hz, 16–22 Hz, and 22–30 Hz β sub-bands. We used three pairs of CSP components ($m = 3$) from each frequency sub-band, resulting in a 24 dimensional feature vector at each trial.

Two Class - Session 1 P1 P2 P3 P4 P5 P6 P7 P8 P9 Mean
Speech vs Motor CSP 82.9 (1.3) 79.5 (1.5) 82.1 (2.2) 69.7 (1.7) 74.3 (1.0) 69.7 (1.0) 85.5 (1.1) 91.3 (0.5) 81.1 (1.2) 79.5 (1.2)
FBCSP 82.9 (0.3) 80.9 (0.8) 81.3 (1.0) 78.8 (1.9) 75.9 (1.2) 75.2 (2.0) 88.6 (1.0) 94.3 (0.4) 84.6 (0.3) 82.5 (0.9)
r²-Selection 81.7 (1.2) 78.4 (1.4) 84.7 (1.3) 78.3 (1.5) 70.5 (2.0) 74.0 (3.5) 86.6 (0.8) 92.2 (0.5) 83.2 (1.5) 81.0 (1.5)
SDA-Selection 80.9 (1.8) 80.6 (2.2) 82.4 (1.4) 77.2 (3.1) 69.6 (2.3) 73.2 (2.8) 87.3 (2.0) 93.2 (1.2) 82.8 (1.7) 80.8 (2.0)
mRMR-Selection 82.2 (1.3) 79.3 (1.9) 82.4 (1.7) 74.5 (1.8) 69.9 (2.0) 72.2 (1.6) 87.4 (2.4) 93.2 (1.1) 82.6 (2.9) 80.4 (1.8)
MMI-Selection 82.5 (1.5) 81.1 (1.5) 84.1 (1.5) 79.4 (3.0) 71.2 (2.5) 73.6 (2.0) 86.2 (1.7) 92.7 (0.4) 84.3 (1.2) 81.6 (1.7)
MMI-LinT 82.7 (1.6) 82.0 (0.8) 85.2 (1.4) 79.2 (1.5) 75.2 (1.3) 77.3 (2.0) 89.1 (1.4) 94.3 (0.7) 84.2 (1.4) 83.2 (1.3)
MMI-NonLinT 83.4 (1.6) 82.3 (1.4) 83.8 (0.5) 77.5 (1.1) 75.6 (1.0) 76.5 (0.9) 89.3 (1.6) 94.6 (1.0) 84.2 (1.9) 83.0 (1.2)
Feet vs Hand CSP 91.0 (1.6) 93.8 (0.9) 90.7 (1.5) 71.0 (2.4) 69.8 (2.0) 66.8 (2.4) 95.3 (0.5) 80.4 (1.8) 69.9 (2.4) 80.9 (1.7)
FBCSP 90.4 (1.6) 92.0 (0.6) 92.6 (1.6) 77.5 (2.0) 72.7 (2.1) 72.5 (1.8) 96.4 (1.2) 83.0 (1.3) 73.8 (3.0) 83.4 (1.6)
r²-Selection 90.5 (1.0) 90.3 (2.1) 93.8 (1.2) 75.3 (2.2) 68.4 (4.0) 67.0 (2.8) 97.5 (0.6) 82.2 (2.4) 74.6 (2.6) 82.1 (1.9)
SDA-Selection 91.5 (0.6) 90.2 (1.8) 94.8 (0.6) 73.9 (1.7) 71.0 (3.5) 74.3 (2.1) 96.5 (0.8) 81.2 (1.2) 74.3 (1.6) 83.0 (1.5)
mRMR-Selection 90.7 (1.9) 90.7 (1.0) 94.0 (1.3) 71.0 (2.2) 69.6 (4.2) 70.3 (3.9) 97.3 (1.2) 78.7 (2.3) 73.6 (0.9) 81.7 (2.1)
MMI-Selection 90.8 (1.2) 91.3 (2.0) 95.7 (1.0) 72.8 (3.0) 70.8 (1.9) 71.3 (2.6) 96.8 (1.1) 83.7 (2.9) 73.8 (2.6) 83.0 (2.0)
MMI-LinT 93.1 (1.5) 92.8 (0.7) 95.0 (0.9) 78.1 (3.0) 73.8 (2.8) 75.0 (1.5) 96.7 (0.3) 81.1 (1.9) 72.7 (1.8) 84.2 (1.6)
MMI-NonLinT 92.5 (1.0) 92.5 (0.8) 94.5 (1.1) 78.7 (2.1) 73.6 (1.2) 74.9 (3.1) 97.0 (0.5) 81.5 (0.8) 75.0 (2.5) 84.4 (1.4)
Right vs Left CSP 82.2 (1.5) 65.2 (4.9) 94.0 (1.4) 73.1 (4.1) 59.1 (2.0) 64.8 (1.6) 75.8 (3.2) 96.6 (0.9) 70.8 (2.4) 75.7 (2.4)
FBCSP 84.7 (0.4) 60.5 (3.1) 93.4 (1.7) 74.8 (2.2) 70.1 (2.1) 63.4 (1.7) 73.1 (3.5) 94.8 (1.8) 71.6 (1.5) 76.2 (2.0)
r²-Selection 83.8 (1.3) 59.3 (4.9) 93.6 (1.8) 70.0 (2.8) 68.7 (4.8) 59.0 (2.6) 75.2 (1.5) 94.1 (1.3) 68.4 (2.0) 74.6 (2.5)
SDA-Selection 84.5 (2.4) 58.7 (2.5) 94.1 (1.2) 70.6 (2.9) 69.8 (2.1) 63.6 (1.8) 72.9 (3.9) 94.0 (1.8) 70.8 (3.9) 75.4 (2.5)
mRMR-Selection 82.7 (2.1) 54.3 (3.2) 90.4 (1.3) 70.1 (5.1) 67.7 (2.7) 60.4 (4.1) 71.6 (3.1) 89.7 (1.7) 66.8 (3.3) 72.6 (2.9)
MMI-Selection 87.6 (1.9) 63.0 (4.8) 94.0 (2.6) 70.8 (1.3) 72.9 (2.0) 63.7 (2.2) 73.1 (0.7) 95.2 (0.9) 72.5 (2.8) 76.9 (2.1)
MMI-LinT 86.3 (1.8) 61.3 (3.4) 93.8 (1.5) 75.5 (2.4) 72.7 (5.4) 65.8 (4.6) 76.3 (2.3) 92.5 (1.5) 69.3 (1.7) 77.0 (2.7)
MMI-NonLinT 86.2 (2.6) 61.9 (3.3) 93.8 (2.7) 75.2 (4.5) 72.5 (4.3) 66.2 (2.0) 75.0 (2.5) 92.5 (1.4) 71.1 (4.7) 77.1 (3.1)
TABLE I: Session 1 binary classification accuracies (%) averaged over 5x5-fold cross validation repetitions. Values in parentheses indicate the standard deviations. Bold values indicate the highest mean accuracy across different feature learning methods.

IV-C Feature Learning Frameworks and Classification

We assess our approach in comparison to using raw CSP or FBCSP features as a methodological baseline, coefficient of determination based statistical feature selection (r²-Selection) from the FBCSP feature vectors [7], stepwise discriminant analysis based selection of features (SDA-Selection) as explored by [9], conventional maximum mutual information based feature ranking and selection (MMI-Selection) for dimensionality reduction [11], and another mutual information driven approach, minimum Redundancy Maximum Relevance feature selection (mRMR-Selection) [42], as explored by [12]. Implementations of the methods are presented below.

  1. CSP: A single 8-30 Hz band pass filter was applied. No dimensionality reduction was performed for the feature vector, resulting in $d = d' = 6$.

  2. FBCSP: No dimensionality reduction was performed. Likelihood density estimations were performed with feature vector dimensions $d = d' = 24$.

  3. r²-Selection: $r^2$ statistics based feature ranking and selection [7] was performed for the FBCSP feature vector ($d = 24$) to reduce it to $d' = 6$.

  4. SDA-Selection: SDA utilizes a combination of forward and backward statistical significance based selection steps: (1) weighting the training features using ordinary least squares regression (i.e., Fisher’s linear discriminant) to predict their labels, (2) starting with an empty set of selected features, the most significant input feature ($p < 0.05$) in prediction is selected and added to the discriminant function, (3) a backward step to remove the least significant input feature from the discriminant function ($p > 0.05$), (4) repeat until no more features satisfy the forward or backward criteria. In our implementations, the resulting number of selected features had a maximum dimensionality of six ($d' \leq 6$). The algorithm was implemented based on [43, 44].

  5. mRMR-Selection: The mRMR algorithm relies on a mutual information based minimal-redundancy-maximal-relevance criterion between features and labels for incremental feature selection. For the mutual information computations, the original algorithm suggests a priori discretizing the continuous feature variables. Hence, we discretized features into three states based on the mean and standard deviation across samples [42]. The number of selected features was chosen as 6, in consistency with the other methods ($d = 24$, $d' = 6$).

  6. MMI-Selection: Based on maximum mutual information ranking, selections are also performed in pairs by nature of CSP (i.e., the high/low eigenvector projection pair of any ranking based selected feature is also selected) [11]. We investigated MMI based selection of either 2, 4, or 6 features ($d = 24$, $d' \in \{2, 4, 6\}$). Only the highest decoding accuracies across these three are reported. Feature selection dimensionalities higher than 6 were not considered due to lower accuracies, since they tend to fail in kernel density estimations and are prone to overfitting.

  7. MMI-LinT: Dimensionality reduction of the FBCSP feature vector ($d = 24$) to two dimensions ($d' = 2$) is performed based on Section II-D. The number of training epochs was 20, with a gradient step size of 0.01 and a momentum parameter of 0.9.

  8. MMI-NonLinT: Dimensionality reduction of the 24-dimensional FBCSP feature vector to two dimensions is performed based on Section II-E. The number of nodes in the hidden layer was chosen as 30, with 20 training epochs, a gradient step size of 0.01, and a momentum parameter of 0.9 for optimization.
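The three-state discretization used for mRMR in item 5 above can be sketched as follows; the function name and the choice of mean ± one standard deviation as thresholds are illustrative assumptions (the text only states that discretization is based on the mean and standard deviation across samples):

```python
import numpy as np

def discretize_three_state(X):
    """Map each continuous feature column to states {-1, 0, 1}.

    Thresholds at mean +/- one standard deviation per feature,
    computed across samples (rows). The one-sigma multiplier is
    an illustrative choice; see [42] for variants.
    """
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    states = np.zeros(X.shape, dtype=int)
    states[X > mu + sigma] = 1
    states[X < mu - sigma] = -1
    return states

# Example: 5 samples x 3 features; the outlying fourth sample
# is mapped to the high state in every feature.
X = np.array([[0.10, 5.00, -1.00],
              [0.20, 5.10, -0.90],
              [0.15, 5.05, -1.10],
              [3.00, 9.00,  2.00],
              [0.12, 5.02, -1.05]])
print(discretize_three_state(X))
```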

To evaluate Eq. 11 for multi-class hierarchical decoding, class priors were assumed to be uniform, and the class conditional densities were derived by multivariate Gaussian kernel density estimation with bandwidth sizes determined by Silverman's rule [45]. Analogously, we demonstrate the feasibility of our approach for binary decoding level-wise. Here, classification was based on MAP estimation over two class labels using Gaussian kernel density estimates of the likelihoods, which can be interpreted as a kernel density classifier. In contrast to methods that inherit distributional assumptions (e.g., Gaussianity of likelihoods in linear discriminant analysis, which is widely favored for BCIs [6, 46]), the kernel density classifier is not parametrically restricted beyond the innate choice of kernels. Its vulnerability, however, arises from instability in high dimensional regions where there is little training data.
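As a minimal sketch of the level-wise kernel density classifier described above, one can use `scipy.stats.gaussian_kde` with Silverman's rule for the class-conditional likelihoods and take a MAP decision under uniform priors; the synthetic two-class, two-dimensional features are illustrative assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Illustrative two-dimensional transformed features for two classes.
X0 = rng.normal(loc=[-1.0, 0.0], scale=0.5, size=(100, 2))
X1 = rng.normal(loc=[+1.0, 0.0], scale=0.5, size=(100, 2))

# Class-conditional likelihoods via Gaussian KDE; "silverman"
# picks bandwidths by Silverman's rule [45]. gaussian_kde
# expects features in rows and samples in columns.
kde0 = gaussian_kde(X0.T, bw_method="silverman")
kde1 = gaussian_kde(X1.T, bw_method="silverman")

def kde_map_classify(x):
    """MAP decision with uniform class priors: argmax_c p(x | c)."""
    return int(kde1(x)[0] > kde0(x)[0])

print(kde_map_classify(np.array([-1.0, 0.0])))  # a point deep in class 0
print(kde_map_classify(np.array([+1.0, 0.0])))  # a point deep in class 1
```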

IV-D Binary Classification Results

We report binary decoding accuracies for the hierarchical sub-problems: (1) speech versus motor, (2) feet versus hand, (3) right hand versus left hand (see Table I). To demonstrate the feasibility of our approach in binary decoding, which we also previously studied in [26], we present these results only on the session 1 data sets, using 5-fold cross validation repeated 5 times. Across most of the subjects and on average, MMI-LinT and MMI-NonLinT outperform the baseline and feature selection frameworks in decoding.

Paired t-tests for statistical significance of performance (p < 0.05) between the best performing feature learning approach and the other methods at each level were performed. For speech versus motor, MMI-LinT revealed a significant difference from CSP (p = 0.006), -Selection (p = 0.002), SDA- and mRMR-Selection (p = 0.001), as well as MMI-Selection (p = 0.02); no significant difference to FBCSP (p = 0.18) or MMI-NonLinT (p = 0.42) was observed. For feet versus hand, MMI-NonLinT revealed a significant difference from CSP (p = 0.009), FBCSP and SDA-Selection (p = 0.02), -Selection (p = 0.04), and mRMR-Selection (p = 0.01), but not from MMI-Selection (p = 0.10) or MMI-LinT (p = 0.51). For right versus left, MMI-NonLinT revealed significant differences to - and SDA-Selection (p = 0.02), as well as mRMR-Selection (p = 0.001).

Four Class Decoding P1 P2 P3 P4 P5 P6 P7 P8 P9 Mean
Within Session 1 (5x5 Fold CV)
CSP 70.2 (0.5) 62.0 (1.8) 76.4 (1.8) 48.2 (2.5) 40.3 (1.3) 42.7 (2.0) 74.0 (1.2) 76.9 (1.7) 53.7 (0.5) 60.4 (1.4)
FBCSP 68.9 (2.0) 59.6 (2.1) 75.3 (2.1) 55.2 (2.5) 46.1 (2.4) 45.4 (1.8) 73.8 (2.5) 79.5 (2.2) 56.6 (1.3) 62.2 (2.1)
-Selection 67.6 (2.1) 58.6 (1.6) 78.5 (1.4) 49.7 (2.0) 43.0 (3.7) 43.0 (1.8) 72.2 (1.2) 76.8 (2.0) 56.1 (1.4) 60.6 (1.9)
SDA-Selection 70.3 (2.3) 59.5 (2.5) 77.0 (1.0) 52.1 (1.8) 44.8 (1.8) 45.3 (1.2) 73.3 (1.4) 80.4 (0.8) 56.6 (1.7) 62.1 (1.5)
mRMR-Selection 69.7 (3.7) 56.8 (1.4) 74.3 (2.9) 50.8 (4.8) 42.3 (3.4) 43.8 (2.8) 73.9 (4.7) 73.4 (1.5) 53.8 (3.1) 59.8 (3.1)
MMI-Selection 69.3 (3.5) 59.7 (2.3) 77.9 (2.4) 52.8 (1.7) 44.5 (3.8) 44.5 (2.2) 73.6 (1.7) 77.2 (0.9) 57.4 (1.9) 61.8 (2.2)
MMI-LinT 74.2 (1.8) 64.1 (2.3) 79.6 (2.2) 55.2 (2.0) 48.4 (2.2) 50.5 (2.5) 77.1 (2.4) 79.2 (0.8) 57.7 (2.4) 65.1 (2.0)
MMI-NonLinT 72.2 (2.0) 62.9 (1.3) 79.7 (1.7) 55.3 (3.7) 48.8 (3.0) 48.8 (4.4) 76.6 (1.5) 78.9 (1.9) 57.9 (1.8) 64.5 (2.3)
Within Session 2 (5x5 Fold CV)
CSP 66.8 (1.6) 57.2 (0.3) 69.3 (1.1) 59.7 (1.9) 40.8 (1.1) 38.0 (3.1) 81.3 (0.8) 72.9 (2.4) 83.3 (1.1) 63.2 (1.4)
FBCSP 70.2 (1.4) 55.8 (0.5) 75.1 (2.4) 60.6 (3.1) 39.4 (1.1) 38.8 (1.1) 81.7 (1.5) 70.4 (1.3) 82.9 (1.2) 63.8 (1.5)
-Selection 69.5 (1.5) 54.3 (1.3) 78.3 (1.8) 57.5 (1.9) 41.8 (3.1) 40.8 (3.0) 80.0 (1.7) 66.1 (1.6) 82.5 (1.6) 63.4 (1.9)
SDA-Selection 70.9 (1.3) 56.5 (2.0) 77.7 (1.5) 57.4 (2.3) 42.7 (2.3) 40.6 (4.0) 80.9 (2.1) 67.8 (2.1) 84.7 (1.4) 64.3 (2.1)
mRMR-Selection 68.1 (3.1) 52.5 (1.8) 76.5 (2.7) 56.8 (2.8) 40.4 (4.1) 42.2 (4.1) 76.2 (1.4) 63.4 (3.2) 82.8 (1.9) 62.1 (2.7)
MMI-Selection 70.9 (2.9) 55.6 (0.8) 78.7 (2.4) 58.6 (4.1) 43.5 (2.9) 41.9 (2.1) 82.3 (1.1) 68.4 (1.6) 86.2 (2.0) 65.1 (2.2)
MMI-LinT 72.4 (2.4) 60.9 (1.8) 79.6 (2.3) 65.9 (2.2) 44.2 (2.7) 48.4 (3.3) 81.1 (1.6) 72.2 (3.5) 84.4 (1.5) 67.6 (2.3)
MMI-NonLinT 72.0 (1.4) 61.4 (1.7) 80.4 (1.2) 65.0 (2.6) 44.9 (1.7) 49.4 (2.2) 82.2 (1.2) 71.8 (1.3) 83.6 (1.3) 67.8 (1.6)
Across Sessions
CSP 65.6 (3.4) 49.5 (3.7) 66.1 (3.2) 44.2 (6.1) 28.7 (2.7) 41.0 (1.9) 63.2 (1.0) 66.0 (2.9) 59.7 (9.3) 53.7 (3.8)
FBCSP 67.9 (0.3) 47.7 (1.7) 69.2 (1.2) 48.8 (13.5) 37.1 (3.9) 38.2 (1.5) 71.0 (2.2) 67.7 (3.4) 56.0 (4.4) 55.9 (3.5)
-Selection 62.0 (0.7) 48.2 (1.5) 68.4 (8.8) 45.5 (1.5) 35.2 (0.3) 37.8 (1.5) 64.6 (9.8) 64.2 (6.9) 56.4 (6.1) 53.5 (4.1)
SDA-Selection 66.7 (2.9) 40.8 (15.9) 68.2 (4.2) 43.2 (3.2) 36.6 (1.7) 44.6 (3.7) 70.1 (5.9) 60.2 (0.3) 57.6 (5.4) 54.2 (4.8)
mRMR-Selection 67.0 (0.5) 43.8 (4.9) 68.9 (2.7) 46.2 (5.4) 33.3 (4.4) 39.2 (1.0) 61.6 (5.7) 57.6 (0.5) 57.6 (6.4) 52.8 (3.5)
MMI-Selection 65.5 (2.7) 49.1 (2.7) 69.3 (7.6) 46.2 (3.4) 38.0 (2.9) 40.1 (0.7) 67.7 (5.9) 65.6 (4.9) 57.3 (2.9) 55.4 (3.7)
MMI-LinT 68.8 (5.4) 47.4 (4.7) 74.3 (2.9) 44.8 (2.5) 37.9 (0.5) 46.2 (4.4) 71.0 (10.6) 67.4 (3.9) 56.6 (3.9) 57.1 (4.3)
MMI-NonLinT 67.9 (4.7) 45.8 (11.8) 73.6 (0.0) 46.2 (2.5) 38.2 (3.4) 47.2 (2.5) 72.4 (6.6) 63.4 (3.7) 58.2 (4.7) 56.9 (4.4)
TABLE II: Four class classification accuracies (%) averaged over 5x5-fold cross validation repetitions in the within session analyses, and averaged over the two session-to-session decoding accuracies in the across sessions analyses (i.e., model training on session 1 and testing on session 2, and vice versa). Values in parentheses indicate standard deviations. Bold values indicate the highest mean accuracies across the different feature learning methods.

IV-E Multi-Class Classification Results

Multi-class classification based on the hierarchical decoding approach was performed as: (1) 5x5-fold cross validation on session 1 data, (2) 5x5-fold cross validation on session 2 data, (3) two across sessions analyses (i.e., training on session 1 and testing on session 2 data, and vice versa). Our results demonstrate that MMI-LinT and MMI-NonLinT outperform the other methods in multi-class decoding, where the problem is highly prone to overfitting with high-dimensional features or heuristic feature selection algorithms (see Table II). The highest mean decoding accuracies for the within session 1 and across sessions analyses are observed with MMI-LinT (65.1% and 57.1%), and for the within session 2 analyses with MMI-NonLinT (67.8%). Figure 2 depicts the four class decoding confusion matrices between actual and predicted class labels of these best performing feature learning approaches.

Paired t-tests between the proposed and the other methods for the within and across sessions analyses were performed. For within session 1, MMI-LinT revealed a significant difference from FBCSP (p = 0.004), as well as all the other methods (p = 0.001). Similarly, MMI-NonLinT revealed a significant difference from FBCSP (p = 0.003), SDA-Selection (p = 0.002), as well as all the other methods (p = 0.001). For within session 2, MMI-LinT revealed a significant difference from CSP and SDA-Selection (p = 0.01), -Selection and FBCSP (p = 0.004), mRMR- (p = 0.001) and MMI-Selection (p = 0.04). Likewise, MMI-NonLinT revealed a significant difference from CSP (p = 0.01), FBCSP (p = 0.005), - and mRMR-Selection (p = 0.001), SDA- (p = 0.009) and MMI-Selection (p = 0.03). For across sessions, MMI-LinT versus -, SDA- and mRMR-Selection (p = 0.01) showed significant differences. Similarly, MMI-NonLinT versus - (p = 0.03), mRMR- (p = 0.009) and SDA-Selection (p = 0.001) revealed significant differences. The other paired comparisons with respect to our methods did not show significant differences in the across sessions analyses (p > 0.05).

Figure 3 presents the dimensionality reduction method results from Table II, as well as a marked summary of these significance levels. We excluded CSP and FBCSP results from Figure 3 since they served as baselines with no dimensionality reduction and were usually statistically outperformed. For all analyses, we did not observe any significant differences by varying the reduced feature dimensionality for MMI-LinT or MMI-NonLinT.

Fig. 2: Predicted versus actual class decoding accuracies, shown in the [0, 1] range and averaged across subjects and cross validation repetitions, for the four class problem. Results are computed for the feature learning protocols that produced the highest mean accuracies in Table II: within session 1 with MMI-LinT, within session 2 with MMI-NonLinT, and across sessions with MMI-LinT. All values are rounded to the nearest hundredth.
Fig. 3: Accuracies, in the [0, 1] range, across subjects for the dimensionality reduction methods in Table II. The central line mark represents the median across subjects. The upper and lower edges of the box represent the first and third quartiles. Upper and lower ends of the dashed lines represent the extreme data points. Starred marks indicate the presence of a statistically significant difference across subjects. Significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001.

V Discussion

We formulate a general definition for information theoretic feature transformation learning, which we argue to be Bayes-optimal for classification and which is not based on feature selection heuristics. Derived from this definition, we present a linear and a nonlinear feature transformation framework. We evaluate the proposed approaches in decoding with respect to conventional CSP and FBCSP derived initial feature vectors as baselines, statistical testing oriented feature ranking and selection methods (including SDA), as well as information theoretic feature ranking and selection methods (mRMR and MMI). For multi-class problems, we introduce a graphical model based hierarchical decoding framework, which can be viewed as intuitively structured one-versus-rest classifiers. We believe this hierarchical binary feature transformation learning approach is likely to extend conventional multi-class BCIs. Binary and multi-class decoding results on a four class motor imagery BCI task demonstrate statistically significant performance increases by feature transformation learning over state-of-the-art feature selection methods.

In discriminative model learning, feature selection is a sub-optimal approach towards the ultimate objective of maximizing mutual information by feature transformations. However, estimating this objective in Eq. 1 is challenging, since it simultaneously involves multiple continuous and discrete random variables. A related line of work tackles the problem of finding global solutions to a similar objective in mutual information based feature selection contexts [47, 48]. There is also recent work on estimating mutual information for such discrete-continuous mixtures [33, 34]. One recent paper suggests measuring the joint entropy among multiple variables in a reproducing kernel Hilbert space, thus enabling estimation of mutual information between discrete and continuous variables without explicit probability density function estimation [49]. Alternatively, in this study, we propose a stochastic approximation to the problem, which was also previously studied with the same objective using various non-parametric entropy estimation schemes [18, 19, 20, 21].

The proposed feature transformation learning approach can be interpreted as determining a manifold on which projections/transformations of the original extracted features carry maximal mutual information with their corresponding class labels, where this projection ideally provides an information theoretic upper bound with respect to any maximum mutual information based feature ranking and selection criterion. Consistently, any MMI feature selection algorithm can be seen as a constrained version of MMI-LinT with sparse orthonormal matrix linear projections. Hereby, we provide a broader definition which is likely to overcome potential shortcomings of feature selection. It is important, however, to highlight the main drawback of the proposed method: it does not maintain the directly neurophysiologically interpretable nature of feature ranking and selection. Feature transformations exploit synergies across the initially constructed feature vectors, hence losing their physical meaning. For instance, in MMI-LinT, the obtained features correspond to a weighted combination across the initial feature vector. Nevertheless, this aligns with the hypothesized existence of large-scale cortical networks representative of specific tasks [24, 25].
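The remark that MMI feature selection is a constrained MMI-LinT can be made concrete: selecting a feature subset is itself a linear transformation whose matrix has one-hot rows, i.e., a sparse orthonormal projection. A small illustration, with arbitrary indices chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=24)          # an initial feature vector (24-D, as with FBCSP)

# Feature *selection* of (arbitrary) indices {3, 17} ...
selected = x[[3, 17]]

# ... equals a linear *transformation* with a sparse matrix
# whose rows are one-hot indicator vectors (orthonormal rows).
W = np.zeros((2, 24))
W[0, 3] = 1.0
W[1, 17] = 1.0

assert np.allclose(W @ x, selected)          # same reduced features
assert np.allclose(W @ W.T, np.eye(2))       # orthonormal rows
```

MMI-LinT simply removes the one-hot constraint on the rows of this matrix, which is why it can only do as well or better than any selection by the same criterion.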

Stochastic mutual information gradients rely on estimating class conditional densities at each iteration. Here, a parametric (e.g., Gaussian) density estimation choice would force the transformed data samples to follow a specified distribution, which may be restrictive when estimating mutual information [11, 15]. Alternatively, kernel density estimation can be performed over the two-dimensional transformed feature domain. Note that this approach is not equivalent to estimating high-dimensional raw EEG feature distributions with discretized kernels; therefore, these estimates in the transformed domain do not constitute crude approximations over the EEG features.
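A simplified sketch of this idea follows: the mutual information between the transformed features and the labels is plug-in estimated as I(Y; C) = H(Y) − Σ_c p(c) H(Y | c) with Gaussian KDEs, and a linear transform is ascended with momentum. Note that this replaces the analytic stochastic gradient of Eq. 6 with a crude finite-difference gradient for illustration only; the data, dimensions, and hyperparameters are illustrative assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def entropy_kde(Y):
    """Resubstitution entropy estimate with a Gaussian KDE (Silverman bandwidth)."""
    kde = gaussian_kde(Y.T, bw_method="silverman")
    return -np.mean(np.log(kde(Y.T) + 1e-12))

def mi_estimate(X, c, W):
    """Plug-in estimate of I(WX; C) = H(WX) - sum_c p(c) H(WX | c)."""
    Y = X @ W.T
    mi = entropy_kde(Y)
    for label in np.unique(c):
        Yc = Y[c == label]
        mi -= (len(Yc) / len(Y)) * entropy_kde(Yc)
    return mi

# Illustrative data: 24-D features, two classes separated along dimension 0.
X = rng.normal(size=(200, 24))
c = (rng.random(200) < 0.5).astype(int)
X[c == 1, 0] += 3.0

# Ascend a 2 x 24 linear transform by finite-difference gradients
# with momentum (a numerical stand-in for the analytic gradient).
W = rng.normal(scale=0.1, size=(2, 24))
vel = np.zeros_like(W)
eps, step, momentum = 1e-3, 0.01, 0.9
for _ in range(3):  # a few epochs, for brevity
    base = mi_estimate(X, c, W)
    grad = np.zeros_like(W)
    for idx in np.ndindex(*W.shape):
        Wp = W.copy()
        Wp[idx] += eps
        grad[idx] = (mi_estimate(X, c, Wp) - base) / eps
    vel = momentum * vel + step * grad
    W += vel

print(mi_estimate(X, c, W))
```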

Commonly, BCI user intent inference pipelines consist of sequential pre-processing, feature extraction, and feature selection steps. The proposed method can simply replace the feature selection based dimensionality reduction step with a stochastic MMI transformation estimator module. At training time, batch-wise iterative computations involve class conditional kernel density estimations, calculation of the gradient of Eq. 6, and parameter updates for a specified number of epochs. Computational complexity increases linearly with the number of training data samples for a fixed number of classes. At test time, computations simply amount to applying the transformation function (e.g., a single matrix multiplication in MMI-LinT).

A natural multi-class extension for CSP can be performed by combining pairwise CSP analyses for one-versus-rest classifiers as in our hierarchical approach, or by directly generating features using multi-class labels (e.g., joint approximate diagonalization of class covariances) [50, 51]. Our feature transformation learning formulation is also capable of directly learning with multi-class labels. However, we exploited a hierarchical decoding model for better level-wise binary feature learning, and reported our results in this framework for comparisons. Furthermore, the hierarchical graphical model based approach allows incorporating useful level transition priors into the BCI system [39]. Notably, our approach demonstrated a more pronounced advantage in the multi-class scenarios (cf. Table I versus Table II). We believe this is an expected result, since one-versus-rest multi-class decoding can combine level-wise confounders: deficiencies and/or redundancies of feature selections observed at pairwise comparisons can accumulate across levels.
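Under uniform priors, the hierarchical decoding scheme multiplies level-wise binary posteriors along each root-to-leaf path of the tree (speech vs. motor, then feet vs. hand, then left vs. right hand, following the binary sub-problems of Table I); a minimal sketch, with placeholder posterior values for a single hypothetical trial:

```python
def hierarchical_posteriors(p_motor, p_hand, p_right):
    """Combine level-wise binary posteriors into four-class posteriors.

    Tree (following the binary sub-problems of Table I):
      level 1: speech vs. motor,
      level 2 (motor branch): feet vs. hand,
      level 3 (hand branch): left vs. right hand.
    Each argument is the posterior of taking the second-named branch.
    """
    return {
        "speech":     1.0 - p_motor,
        "feet":       p_motor * (1.0 - p_hand),
        "left hand":  p_motor * p_hand * (1.0 - p_right),
        "right hand": p_motor * p_hand * p_right,
    }

# Placeholder level-wise posteriors for one hypothetical trial.
post = hierarchical_posteriors(p_motor=0.9, p_hand=0.7, p_right=0.4)
decoded = max(post, key=post.get)
print(decoded, round(post[decoded], 3))  # prints: left hand 0.378
```

Because each leaf posterior multiplies the binary posteriors along its path, any level-wise confounder propagates to all leaves below it, which is the accumulation effect noted above.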

The proposed approach did not reveal highly significant performance differences in some across sessions comparisons. There was also a drop in across sessions accuracies with respect to the within session results, due to the challenging nature of the session-to-session transfer learning problem. This is an important observation regarding the practicality of the current approach, which may be restricted by the number of sessions (two) considered in our experiments. We believe the current lack of generalizability can be a result of the across sessions instability of EEG and of the transformations we learn, which are based on single session-specific EEG data. One further exploration could be to exploit longitudinal BCI recordings performed over various sessions/days, and to investigate the practicality of our approach when multiple sessions' data are available for model training. Moreover, in such settings, one can explicitly impose session-invariance constraints on the feature transformation problem. This can be tackled in an adversarial learning framework, which we are currently exploring based on our preliminary works [52, 53], where additional session-invariance constraints imposed by an antagonistic objective regularize the feature learning pipeline. Another potential future direction is to consider information theoretic metric learning methods [54, 55]. This can be performed by learning distance metrics for the transforms based on data covariance matrices (e.g., the Mahalanobis distance), utilizing a mutual information based cost.

Generalization and optimal exploitation of the information content in the extracted features with respect to their class labels is essential for discriminative model learning. We addressed the significance of this issue in the design of brain/neural interfaces. Given the substantial evidence that feature selection is potentially sub-optimal in model learning [3, 16, 17], we argue that a feature transformation learning approach should be of important use in BCIs.

VI Conclusion

This work addresses the potential confounders caused by heuristic feature ranking and selection based dimensionality reduction methods that are widely used for brain interfaces. We extend this focus with a novel information theoretic feature transformation concept. We formulate a general definition for the feature learning problem, and present a linear and a nonlinear feature transformation approach derived by this definition. We further introduce a graphical model based, hierarchical binary feature transformation learning and decoding framework for multi-class scenarios. We empirically demonstrate that stochastic, mutual information based feature transformation learning significantly outperforms state-of-the-art feature selection heuristics, and yields significant insights for the growing field of neural interfaces.


  • [1] J. R. Wolpaw et al., “Brain-computer interfaces for communication and control,” Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.
  • [2] N. Birbaumer and L. G. Cohen, “Brain–computer interfaces: communication and restoration of movement in paralysis,” The Journal of Physiology, vol. 579, no. 3, pp. 621–636, 2007.
  • [3] I. Guyon and A. Elisseeff, “An introduction to variable and feature selection,” Journal of Machine Learning Research, vol. 3, no. Mar, pp. 1157–1182, 2003.
  • [4] R. Battiti, “Using mutual information for selecting features in supervised neural net learning,” IEEE Transactions on Neural Networks, vol. 5, no. 4, pp. 537–550, 1994.
  • [5] N. Kwak and C. H. Choi, “Input feature selection by mutual information based on parzen window,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1667–1671, 2002.
  • [6] D. Garrett et al., “Comparison of linear, nonlinear, and feature selection methods for EEG signal classification,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 141–144, 2003.
  • [7] K.-R. Müller et al., “Machine learning techniques for brain-computer interfaces,” Biomed. Tech, vol. 49, no. 1, pp. 11–22, 2004.
  • [8] T. N. Lal et al., “Support vector channel selection in BCI,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1003–1010, 2004.
  • [9] D. J. Krusienski et al., “Toward enhanced P300 speller performance,” Journal of Neuroscience Methods, vol. 167, no. 1, pp. 15–21, 2008.
  • [10] R. Tomioka and K.-R. Müller, “A regularized discriminative framework for EEG analysis with application to brain–computer interface,” NeuroImage, vol. 49, no. 1, pp. 415–432, 2010.
  • [11] K. K. Ang et al., “Mutual information-based selection of optimal spatial–temporal patterns for single–trial EEG-based BCIs,” Pattern Recognition, vol. 45, no. 6, pp. 2137–2144, 2012.
  • [12] C. Mühl et al., “EEG-based workload estimation across affective contexts,” Frontiers in Neuroscience, vol. 8, p. 114, 2014.
  • [13] R. Jenke et al., “Feature extraction and selection for emotion recognition from EEG,” IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 327–339, 2014.
  • [14] F. Lotte et al., “A review of classification algorithms for EEG-based brain–computer interfaces,” Journal of Neural Engineering, vol. 4, no. 2, p. R1, 2007.
  • [15] ——, “A review of classification algorithms for EEG-based brain-computer interfaces: A 10-year update,” Journal of Neural Engineering, vol. 15, no. 3, p. 031005, 2018.
  • [16] D. Erdoğmuş et al., “Information theoretic feature selection and projection,” in Speech, Audio, Image and Biomedical Signal Processing using Neural Networks, 2008, pp. 1–22.
  • [17] K. Torkkola, “Information-theoretic methods,” in Feature Extraction.   Springer, 2008, pp. 167–185.
  • [18] ——, “Feature extraction by non-parametric mutual information maximization,” Journal of Machine Learning Research, vol. 3, no. Mar, pp. 1415–1438, 2003.
  • [19] B. Chen et al., “Adaptive filtering under maximum mutual information criterion,” Neurocomputing, vol. 71, no. 16, pp. 3680–3684, 2008.
  • [20] H. Zhang et al., “An information theoretic linear discriminant analysis method,” in International Conference on Pattern Recognition, 2010, pp. 4182–4185.
  • [21] L. Faivishevsky and J. Goldberger, “Dimensionality reduction based on non-parametric mutual information,” Neurocomputing, vol. 80, pp. 31–37, 2012.
  • [22] Z. Nenadic, “Information discriminant analysis: Feature extraction with an information-theoretic objective,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 8, pp. 1394–1407, 2007.
  • [23] K. E. Hild et al., “Feature extraction using information–theoretic learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 9, pp. 1385–1392, 2006.
  • [24] D. Mantini et al., “Electrophysiological signatures of resting state networks in the human brain,” Proceedings of the National Academy of Sciences, vol. 104, no. 32, pp. 13 170–13 175, 2007.
  • [25] S. L. Bressler and V. Menon, “Large-scale brain networks in cognition: emerging methods and principles,” Trends in Cognitive Sciences, vol. 14, no. 6, pp. 277–290, 2010.
  • [26] O. Özdenizci et al., “Information theoretic feature projection for single-trial brain-computer interfaces,” in IEEE 27th International Workshop on Machine Learning for Signal Processing, 2017, pp. 1–6.
  • [27] G. Pfurtscheller and C. Neuper, “Motor imagery and direct brain-computer communication,” Proceedings of the IEEE, vol. 89, no. 7, pp. 1123–1134, 2001.
  • [28] H. Ramoser et al., “Optimal spatial filtering of single trial EEG during imagined hand movement,” IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 4, pp. 441–446, 2000.
  • [29] B. Blankertz et al., “Optimizing spatial filters for robust EEG single-trial analysis,” IEEE Signal Processing Magazine, vol. 25, no. 1, pp. 41–56, 2008.
  • [30] K. K. Ang et al., “Filter bank common spatial pattern (FBCSP) in brain-computer interface,” in IEEE International Joint Conference on Neural Networks, 2008, pp. 2390–2397.
  • [31] R. M. Fano, Transmission of information: A statistical theory of communications, 1961.
  • [32] M. Hellman and J. Raviv, “Probability of error, equivocation, and the Chernoff bound,” IEEE Transactions on Information Theory, vol. 16, no. 4, pp. 368–372, 1970.
  • [33] B. C. Ross, “Mutual information between discrete and continuous data sets,” PloS one, vol. 9, no. 2, p. e87357, 2014.
  • [34] W. Gao, S. Kannan, S. Oh, and P. Viswanath, “Estimating mutual information for discrete-continuous mixtures,” in Advances in Neural Information Processing Systems, 2017, pp. 5986–5997.
  • [35] D. Erdoğmuş et al., “Online entropy manipulation: Stochastic information gradient,” IEEE Signal Processing Letters, vol. 10, no. 8, pp. 242–245, 2003.
  • [36] J. C. Principe et al., “Information theoretic learning,” Unsupervised Adaptive Filtering, vol. 1, pp. 265–319, 2000.
  • [37] E. Parzen, “On estimation of a probability density function and mode,” The Annals of Mathematical Statistics, vol. 33, no. 3, pp. 1065–1076, 1962.
  • [38] N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural Networks, vol. 12, no. 1, pp. 145–151, 1999.
  • [39] O. Özdenizci et al., “Hierarchical graphical models for context-aware hybrid brain-machine interfaces,” in 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2018.
  • [40] M. Tangermann et al., “Review of the BCI competition IV,” Frontiers in Neuroscience, vol. 6, p. 55, 2012.
  • [41] C. Brunner et al., “BCI Competition 2008 – Graz data set A,” Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces), Graz University of Technology, vol. 16, 2008.
  • [42] H. Peng et al., “Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226–1238, 2005.
  • [43] R. I. Jennrich, “Stepwise discriminant analysis,” in Statistical Methods for Digital Computers, K. Enslein et al., Eds.   New York: John Wiley & Sons, 1977, pp. 76–95.
  • [44] N. R. Draper and H. E. Smith, Applied Regression Analysis.   New York: John Wiley & Sons, 1981.
  • [45] B. W. Silverman, Density Estimation for Statistics and Data Analysis.   Chapman & Hall, 1986.
  • [46] K.-R. Müller et al., “Linear and nonlinear methods for brain-computer interfaces,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 165–169, 2003.
  • [47] I. Rodriguez-Lujan et al., “Quadratic programming feature selection,” Journal of Machine Learning Research, vol. 11, pp. 1491–1516, 2010.
  • [48] X. V. Nguyen et al., “Effective global approaches for mutual information based feature selection,” in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2014, pp. 512–521.
  • [49] S. Yu et al., “Multivariate extension of matrix-based Rényi's α-order entropy functional,” arXiv preprint arXiv:1808.07912, 2018.
  • [50] G. Dornhege et al., “Boosting bit rates in noninvasive EEG single-trial classifications by feature combination and multiclass paradigms,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 993–1002, 2004.
  • [51] M. Grosse-Wentrup and M. Buss, “Multiclass common spatial patterns and information theoretic feature extraction,” IEEE Transactions on Biomedical Engineering, vol. 55, no. 8, pp. 1991–2000, 2008.
  • [52] O. Özdenizci et al., “Adversarial deep learning in EEG biometrics,” IEEE Signal Processing Letters, vol. 26, no. 5, pp. 710–714, 2019.
  • [53] ——, “Transfer learning in brain-computer interfaces with adversarial variational autoencoders,” in 9th International IEEE EMBS Conference on Neural Engineering, 2019.
  • [54] J. V. Davis et al., “Information-theoretic metric learning,” in International Conference on Machine Learning, 2007, pp. 209–216.
  • [55] G. Niu et al., “Information-theoretic semi-supervised metric learning via entropy regularization,” Neural Computation, vol. 26, no. 8, pp. 1717–1762, 2014.