Multilinear Class-Specific Discriminant Analysis

There has been a great effort to transfer linear discriminant techniques that operate on vector data to high-order data, generally referred to as Multilinear Discriminant Analysis (MDA) techniques. Many existing works focus on maximizing the ratio of inter-class to intra-class variance defined on tensor data representations. However, there has not been any attempt to employ class-specific discrimination criteria for tensor data. In this paper, we propose a multilinear subspace learning technique suitable for applications requiring class-specific tensor models. The method maximizes the discrimination of each individual class in the feature space while retaining the spatial structure of the input. We evaluate the efficiency of the proposed method on two problems, i.e. facial image analysis and stock price prediction based on limit order book data.


I Introduction

Over the past two decades, several subspace learning techniques have been proposed for computer vision and pattern recognition problems. The aim of subspace learning is to find a set of bases that optimizes a given objective function (or criterion), enhancing properties of interest in the learnt subspace. The obtained projection can subsequently be used either as a means of preprocessing or as a classifier. One of the methods used as a preprocessing step is Principal Component Analysis (PCA) ([1]), which finds a projection maximizing the data dispersion. While PCA retains most spectral information, it is an unsupervised method that does not utilize labeling information to increase class discrimination. Linear Discriminant Analysis (LDA) ([2]) is one of the most widely used discriminant learning techniques due to its success in many applications ([3, 4, 5]). In LDA, each class is assumed to follow a unimodal distribution and is represented by the corresponding class mean vector. LDA optimizes the ratio between the inter-class and intra-class scatters. Several extensions have been proposed in order to relax these two assumptions ([6, 7, 8]). An important characteristic of LDA is that the maximal dimensionality of the learnt subspace is limited by the number of classes forming the problem at hand. For problems where the objective is to discriminate one class from all other alternatives, i.e. for binary problems like face verification, this might not be an optimal choice for class discrimination.

To tackle the latter limitation of LDA, class-specific discriminant analysis techniques were proposed ([9, 10, 11, 12]). In the class-specific setting, a unimodal distribution is still assumed for the class of interest (hereafter noted as the positive class), and the objective is to determine class-specific projections discriminating the samples forming the positive class from the remaining samples (forming the negative class) in the subspace. By defining suitable out-of-class and in-class scatter matrices, the maximal subspace dimensionality is limited by the cardinality of the positive class, leading to better class discrimination and classification ([9, 11, 13]). Various extensions have been proposed that utilize the class-specific formulation. For example, ([13]) proposed a solution to optimize both the class representation and the projection; in addition, approximate and incremental learning solutions were proposed in ([12, 14]).

While able to overcome the limitation of LDA in subspace dimensionality, the existing Class-Specific Discriminant Analysis (CSDA) methods still have a limitation: they are defined on vector data. Since many types of data are naturally represented by (high-order) matrices, generally referred to as tensors, exploiting vector-based learning approaches might lead to the loss of spatial information available in the data. For example, a grayscale image is naturally represented as a matrix (i.e. a second-order tensor), a color image is represented as a third-order tensor and a multi-dimensional time series is represented as a third-order tensor. Vectorizing such high-order tensors results in high-dimensional vectors, leading to high computational costs and the small sample size problem ([15]). In order to address such issues, generalizations of many linear subspace learning methods to multilinear ones have been proposed, including MPCA ([16]) as the multilinear extension of PCA, and CMDA ([17]), GTDA ([18]) and DATER ([19]) as multilinear extensions of LDA.

Given the potential advantage of using tensorial data representations in (binary) verification problems, in this work we extend the class-specific discrimination criterion to tensor-based learning and formulate the Multilinear Class-Specific Discriminant Analysis (MCSDA) method. Moreover, we provide a time complexity analysis for the proposed method and compare it with its vector counterpart. We conducted experiments on two problems involving data naturally represented in tensor form, i.e. facial image analysis and stock price prediction based on limit order book data. Experimental results show that the proposed MCSDA is able to outperform related tensor-based and vector-based methods and to compare favourably with recent methods.

The rest of the paper is organized as follows. Section II introduces the notation used throughout the paper, as well as related prior work. In Section III, we formulate the proposed MCSDA method and provide an analysis of its time complexity. Section IV presents our experimental analysis, and conclusions are drawn in Section V.

II Notations and Prior Work

We start by introducing the notation used throughout the paper, along with related definitions from multilinear algebra. In addition, previous works on discriminant analysis utilizing multi-class and class-specific criteria are briefly reviewed.

II-A Multilinear Algebra Concepts

In this paper, we denote scalar values by either lower-case or upper-case letters ($x$, $X$), vectors by lower-case bold-face letters ($\mathbf{x}$), matrices by upper-case bold-face letters ($\mathbf{X}$) and tensors by calligraphic capital letters ($\mathcal{X}$). A tensor is a multilinear matrix with $N$ modes, and is defined as $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \dots \times I_N}$, where $I_n$ denotes the dimension in mode-$n$. The entry at the $i_n$th index in mode-$n$, for $n = 1, \dots, N$, is denoted as $\mathcal{X}_{i_1, \dots, i_N}$.

Definition 1 (Mode- Fiber and Mode- Unfolding)

The mode-$n$ fiber of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_N}$ is an $I_n$-dimensional vector obtained by fixing every index but $i_n$. The mode-$n$ unfolding of $\mathcal{X}$, also known as mode-$n$ matricization, transforms the tensor $\mathcal{X}$ to the matrix $\mathbf{X}_{(n)}$, which is formed by arranging the mode-$n$ fibers as columns. The shape of $\mathbf{X}_{(n)}$ is $\mathbb{R}^{I_n \times \bar{I}_n}$ with $\bar{I}_n = \prod_{k \neq n} I_k$.

Definition 2 (Mode- Product)

The mode-$n$ product between a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_N}$ and a matrix $\mathbf{U} \in \mathbb{R}^{J_n \times I_n}$ is another tensor of size $I_1 \times \dots \times J_n \times \dots \times I_N$, denoted by $\mathcal{X} \times_n \mathbf{U}$. The element of $\mathcal{X} \times_n \mathbf{U}$ is defined as $[\mathcal{X} \times_n \mathbf{U}]_{i_1, \dots, i_{n-1}, j_n, i_{n+1}, \dots, i_N} = \sum_{i_n=1}^{I_n} \mathcal{X}_{i_1, \dots, i_N} \, u_{j_n, i_n}$.

With the definitions of mode-$n$ product and mode-$n$ unfolding, the following equation holds:

$\mathcal{Y} = \mathcal{X} \times_n \mathbf{U} \iff \mathbf{Y}_{(n)} = \mathbf{U} \mathbf{X}_{(n)}. \quad (1)$

For convenience, we denote $\mathcal{X} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \cdots \times_N \mathbf{U}_N$ by $\mathcal{X} \prod_{n=1}^{N} \times_n \mathbf{U}_n$.
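To make these operations concrete, the following is a minimal NumPy sketch of the mode-$n$ unfolding and the mode-$n$ product via (1); the helper names and the column ordering of the unfolding are implementation choices, not part of the method:

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: arrange the mode-n fibers of X as columns,
    giving a matrix of shape (I_n, product of the remaining dims)."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: rebuild a tensor with the given target shape."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def mode_n_product(X, U, n):
    """Mode-n product X x_n U, computed through Eq. (1): Y_(n) = U X_(n)."""
    shape = list(X.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(X, n), n, shape)
```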

II-B Linear Discriminant Analysis

Let us denote by $\mathbf{x}_i \in \mathbb{R}^D$, $i = 1, \dots, N$, a set of $D$-dimensional vectors, each of which has an associated class label $c_i$ belonging to the label set $\{1, \dots, C\}$. $N_c$ is the number of samples in class $c$. Let $\mathbf{x}_{c,j}$ denote the $j$th sample of class $c$. The mean vector of class $c$ is calculated as $\boldsymbol{\mu}_c = \frac{1}{N_c} \sum_{j=1}^{N_c} \mathbf{x}_{c,j}$. The mean vector of the entire set is $\boldsymbol{\mu} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}_i$. Linear Discriminant Analysis (LDA) seeks an orthonormal projection matrix $\mathbf{W} \in \mathbb{R}^{D \times d}$ that maps each sample to a lower $d$-dimensional feature space in which samples from different classes are highly discriminated. $\mathbf{W}$ is obtained by maximizing the ratio between the inter-class and intra-class variances in the feature space ([20]), i.e.

$\mathcal{J}(\mathbf{W}) = \frac{\operatorname{tr}(\mathbf{W}^T \mathbf{S}_B \mathbf{W})}{\operatorname{tr}(\mathbf{W}^T \mathbf{S}_W \mathbf{W})}, \quad (2)$

where $\mathbf{S}_B = \sum_{c=1}^{C} N_c (\boldsymbol{\mu}_c - \boldsymbol{\mu})(\boldsymbol{\mu}_c - \boldsymbol{\mu})^T$ denotes the between-class scatter matrix and $\mathbf{S}_W = \sum_{c=1}^{C} \sum_{j=1}^{N_c} (\mathbf{x}_{c,j} - \boldsymbol{\mu}_c)(\mathbf{x}_{c,j} - \boldsymbol{\mu}_c)^T$ denotes the within-class scatter matrix. By maximizing (2), the dispersion between the data and the corresponding class mean is minimized while the dispersion between each class mean and the total mean is maximized in the projected subspace. The columns of $\mathbf{W}$ are formed by the eigenvectors corresponding to the $d$ largest eigenvalues of $\mathbf{S}_W^{-1} \mathbf{S}_B$.
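As an illustration, a compact LDA sketch under the definitions above; the ridge term and the generalized symmetric eigensolver are common implementation choices rather than part of (2):

```python
import numpy as np
import scipy.linalg

def lda(X, y, d, ridge=1e-6):
    """LDA per Eq. (2). X: (N, D) data matrix; y: integer class labels.
    Returns W (D, d), the leading eigenvectors of inv(S_W) S_B."""
    D = X.shape[1]
    mu = X.mean(axis=0)
    S_B, S_W = np.zeros((D, D)), np.zeros((D, D))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        S_B += len(Xc) * np.outer(mu_c - mu, mu_c - mu)   # between-class scatter
        S_W += (Xc - mu_c).T @ (Xc - mu_c)                # within-class scatter
    evals, evecs = scipy.linalg.eigh(S_B, S_W + ridge * np.eye(D))
    return evecs[:, np.argsort(evals)[::-1][:d]]
```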

II-C Class-Specific Discriminant Analysis

While LDA seeks to project all data samples to a common subspace in which the data samples of different classes are expected to be highly discriminated, Class-Specific Discriminant Analysis (CSDA) learns a subspace discriminating the class of interest from everything else. For a $C$-class classification problem, the application of CSDA leads to the determination of $C$ different discriminant subspaces in a One-versus-Rest manner, where $d_c$ is the dimensionality of the $c$th subspace that discriminates the samples of class $c$ from the rest.

Let us denote by $+1$ and $-1$ the positive and negative labels, respectively. The optimal mapping $\mathbf{W}$ is obtained by maximizing the following criterion:

$\mathcal{J}(\mathbf{W}) = \frac{D_{out}}{D_{in}}, \quad (3)$

where $D_{out} = \sum_{i: c_i = -1} \|\mathbf{W}^T (\mathbf{x}_i - \boldsymbol{\mu}_+)\|_2^2$ is the out-of-class distance and $D_{in} = \sum_{i: c_i = +1} \|\mathbf{W}^T (\mathbf{x}_i - \boldsymbol{\mu}_+)\|_2^2$ is the in-class distance, respectively. That is, the positive class is assumed to be unimodal, and the optimal projection matrix $\mathbf{W}$ maps the positive class vectors as close as possible to the positive class mean $\boldsymbol{\mu}_+$ while keeping the negative samples far away from $\boldsymbol{\mu}_+$ in the subspace. The criterion in (3) can be expressed as

$\mathcal{J}(\mathbf{W}) = \frac{\operatorname{tr}(\mathbf{W}^T \mathbf{S}_O \mathbf{W})}{\operatorname{tr}(\mathbf{W}^T \mathbf{S}_I \mathbf{W})}, \quad (4)$

with

$\mathbf{S}_O = \sum_{i: c_i = -1} (\mathbf{x}_i - \boldsymbol{\mu}_+)(\mathbf{x}_i - \boldsymbol{\mu}_+)^T, \qquad \mathbf{S}_I = \sum_{i: c_i = +1} (\mathbf{x}_i - \boldsymbol{\mu}_+)(\mathbf{x}_i - \boldsymbol{\mu}_+)^T \quad (5)$

denoting the out-of-class and in-class scatter matrices, respectively. The solution of (4) is given by the eigenvectors corresponding to the largest eigenvalues of $\mathbf{S}_I^{-1} \mathbf{S}_O$.

The optimal dimensionality may vary for each class. For classes that are already highly discriminated from the others, fewer dimensions may be needed compared to classes that are densely mixed with other classes. Since the rank of $\mathbf{S}_I$ is at most $N_+$ (where $N_+$ is the number of samples of the positive class), the dimensionality of the learnt subspace can be at most $N_+$.
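A class-specific counterpart of the previous sketch under Eqs. (3)-(5) differs from LDA only in the scatter matrices; again, the ridge term is an implementation choice:

```python
import numpy as np
import scipy.linalg

def csda(X, y, d, ridge=1e-6):
    """CSDA per Eqs. (3)-(5). X: (N, D); y: +1 for the positive class,
    -1 otherwise. Returns W (D, d), eigenvectors of inv(S_I) S_O."""
    mu_pos = X[y == 1].mean(axis=0)
    Xo = X[y == -1] - mu_pos              # negative samples, centered on mu_+
    Xi = X[y == 1] - mu_pos               # positive samples, centered on mu_+
    S_O = Xo.T @ Xo                       # out-of-class scatter
    S_I = Xi.T @ Xi + ridge * np.eye(X.shape[1])   # in-class scatter
    evals, evecs = scipy.linalg.eigh(S_O, S_I)
    return evecs[:, np.argsort(evals)[::-1][:d]]
```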

II-D Multilinear Discriminant Analysis

Several works have extended the multi-class discriminant analysis criterion in order to utilize the natural tensor representation of the input data ([21, 22, 19, 18, 17]). We denote the set of $M$ tensor samples as $\mathcal{X}_i \in \mathbb{R}^{I_1 \times \dots \times I_N}$, each with an associated class label $c_i$ belonging to the label set $\{1, \dots, C\}$, and by $M_c$ the number of samples in class $c$. The mean tensor of class $c$ is calculated as $\bar{\mathcal{X}}_c = \frac{1}{M_c} \sum_{j=1}^{M_c} \mathcal{X}_{c,j}$ and the total mean tensor is $\bar{\mathcal{X}} = \frac{1}{M} \sum_{i=1}^{M} \mathcal{X}_i$. MDA seeks a set of projection matrices $\mathbf{U}_n \in \mathbb{R}^{I_n \times P_n}$, $n = 1, \dots, N$, that map $\mathcal{X}_i$ to $\mathcal{Y}_i \in \mathbb{R}^{P_1 \times \dots \times P_N}$, with the subspace projection defined as

$\mathcal{Y}_i = \mathcal{X}_i \prod_{n=1}^{N} \times_n \mathbf{U}_n^T. \quad (6)$

Similar to LDA, the set of optimal projection matrices is obtained by maximizing the ratio between the inter-class and intra-class distances, measured in the tensor subspace:

$\{\mathbf{U}_n^*\} = \arg\max_{\{\mathbf{U}_n\}} \frac{D_b}{D_w}, \quad (7)$

where

$D_b = \sum_{c=1}^{C} M_c \left\| (\bar{\mathcal{X}}_c - \bar{\mathcal{X}}) \prod_{n=1}^{N} \times_n \mathbf{U}_n^T \right\|_F^2 \quad (8)$
$D_w = \sum_{c=1}^{C} \sum_{j=1}^{M_c} \left\| (\mathcal{X}_{c,j} - \bar{\mathcal{X}}_c) \prod_{n=1}^{N} \times_n \mathbf{U}_n^T \right\|_F^2 \quad (9)$

are, respectively, the between-class and within-class distances.

An iterative approach is usually employed to solve the optimization problem in (7). For example, ([17]) proposed the CMDA algorithm, which assumes orthogonality constraints on each projection matrix and optimizes (7) by iteratively solving the following trace-ratio problem for each mode-$n$:

$\mathbf{U}_n^* = \arg\max_{\mathbf{U}_n} \frac{\operatorname{tr}(\mathbf{U}_n^T \mathbf{S}_B^{(n)} \mathbf{U}_n)}{\operatorname{tr}(\mathbf{U}_n^T \mathbf{S}_W^{(n)} \mathbf{U}_n)}, \quad (10)$

where

$\mathbf{S}_B^{(n)} = \sum_{c=1}^{C} M_c \, \mathbf{B}_{c,(n)} \mathbf{B}_{c,(n)}^T, \qquad \mathcal{B}_c = (\bar{\mathcal{X}}_c - \bar{\mathcal{X}}) \prod_{k \neq n} \times_k \mathbf{U}_k^T \quad (11)$
$\mathbf{S}_W^{(n)} = \sum_{c=1}^{C} \sum_{j=1}^{M_c} \mathbf{V}_{c,j,(n)} \mathbf{V}_{c,j,(n)}^T, \qquad \mathcal{V}_{c,j} = (\mathcal{X}_{c,j} - \bar{\mathcal{X}}_c) \prod_{k \neq n} \times_k \mathbf{U}_k^T \quad (12)$

are the between-class and within-class scatter matrices in mode-$n$, and $\mathbf{B}_{c,(n)}$ and $\mathbf{V}_{c,j,(n)}$ denote the mode-$n$ unfoldings of $\mathcal{B}_c$ and $\mathcal{V}_{c,j}$.

CMDA first initializes each $\mathbf{U}_n$ with all ones. At each iteration, the algorithm sequentially updates $\mathbf{U}_1, \dots, \mathbf{U}_N$ by maximizing (10) while keeping the rest of the projection matrices fixed. One mode-$n$ update is sketched below.
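The following sketch assumes the unfold and mode_n_product helpers from Section II-A; solving (10) through a generalized eigenproblem is a common ratio-trace relaxation of the trace-ratio objective, not necessarily the exact solver of ([17]):

```python
import numpy as np
import scipy.linalg

def cmda_mode_update(tensors, labels, U, n, ridge=1e-6):
    """One CMDA update of U[n] via Eqs. (10)-(12).
    tensors: (M, I_1, ..., I_N); labels: integer classes; U: current factors."""
    I_n = tensors.shape[1 + n]
    mean = tensors.mean(axis=0)
    S_B, S_W = np.zeros((I_n, I_n)), np.zeros((I_n, I_n))

    def project_except_n(T):
        for k in range(len(U)):
            if k != n:
                T = mode_n_product(T, U[k].T, k)
        return unfold(T, n)

    for c in np.unique(labels):
        Xc = tensors[labels == c]
        mu_c = Xc.mean(axis=0)
        Bn = project_except_n(mu_c - mean)
        S_B += len(Xc) * Bn @ Bn.T                 # mode-n between-class scatter
        for x in Xc:
            Vn = project_except_n(x - mu_c)
            S_W += Vn @ Vn.T                       # mode-n within-class scatter
    evals, evecs = scipy.linalg.eigh(S_B, S_W + ridge * np.eye(I_n))
    return evecs[:, np.argsort(evals)[::-1][:U[n].shape[1]]]
```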

III Multilinear Class-Specific Discriminant Analysis

In this section, we formulate the proposed multilinear version of CSDA, called Multilinear Class-Specific Discriminant Analysis (MCSDA). MCSDA finds a set of projection matrices $\mathbf{U}_n \in \mathbb{R}^{I_n \times P_n}$, $n = 1, \dots, N$, that map the $(I_1 \times \dots \times I_N)$-dimensional tensor space to a smaller tensor space $\mathbb{R}^{P_1 \times \dots \times P_N}$ as defined in (6). The objective of MCSDA is to find a tensor subspace in which the distances of the negative samples from the positive mean tensor are maximized and the distances of the positive samples from it are minimized.

Let us denote by $\bar{\mathcal{P}}$ the mean tensor of the positive class. The out-of-class and in-class distances are defined as follows:

$D_{out} = \sum_{i: c_i = -1} \left\| (\mathcal{X}_i - \bar{\mathcal{P}}) \prod_{n=1}^{N} \times_n \mathbf{U}_n^T \right\|_F^2, \qquad D_{in} = \sum_{i: c_i = +1} \left\| (\mathcal{X}_i - \bar{\mathcal{P}}) \prod_{n=1}^{N} \times_n \mathbf{U}_n^T \right\|_F^2. \quad (13)$

The MCSDA criterion is then expressed as

$\{\mathbf{U}_n^*\} = \arg\max_{\{\mathbf{U}_n\}} \frac{D_{out}}{D_{in}}. \quad (14)$

As in the case of MDA, the objective in (14) exposes a dependency between the projection matrices $\mathbf{U}_n$. We therefore optimize (14) by applying an iterative optimization process. In order to optimize for each $\mathbf{U}_n$, $D_{out}$ and $D_{in}$ need to be expressed as functions of $\mathbf{U}_n$. This can be done for $D_{out}$ by utilizing the relation in (1), i.e.

$D_{out} = \sum_{i: c_i = -1} \left\| \mathbf{U}_n^T \tilde{\mathbf{X}}_{i,(n)} \right\|_F^2, \quad (15)$

where $\tilde{\mathbf{X}}_{i,(n)}$ denotes the mode-$n$ unfolding of the partially projected tensor $\tilde{\mathcal{X}}_i = (\mathcal{X}_i - \bar{\mathcal{P}}) \prod_{k \neq n} \times_k \mathbf{U}_k^T$. Let us denote by $\mathbf{S}_O^{(n)}$ the out-of-class scatter matrix in mode-$n$, which is defined as

$\mathbf{S}_O^{(n)} = \sum_{i: c_i = -1} \tilde{\mathbf{X}}_{i,(n)} \tilde{\mathbf{X}}_{i,(n)}^T. \quad (16)$

Then $D_{out}$ in (15) is expressed as $D_{out} = \operatorname{tr}(\mathbf{U}_n^T \mathbf{S}_O^{(n)} \mathbf{U}_n)$. In a similar manner, the in-class distance calculated in mode-$n$ is expressed as $D_{in} = \operatorname{tr}(\mathbf{U}_n^T \mathbf{S}_I^{(n)} \mathbf{U}_n)$ with

$\mathbf{S}_I^{(n)} = \sum_{i: c_i = +1} \tilde{\mathbf{X}}_{i,(n)} \tilde{\mathbf{X}}_{i,(n)}^T. \quad (17)$

Finally, the class-specific criterion with respect to $\mathbf{U}_n$ is

$\mathbf{U}_n^* = \arg\max_{\mathbf{U}_n} \frac{\operatorname{tr}(\mathbf{U}_n^T \mathbf{S}_O^{(n)} \mathbf{U}_n)}{\operatorname{tr}(\mathbf{U}_n^T \mathbf{S}_I^{(n)} \mathbf{U}_n)}. \quad (18)$

MCSDA starts by initializing each $\mathbf{U}_n$ with ones. At each iteration, it updates $\mathbf{U}_1, \dots, \mathbf{U}_N$ by maximizing (18) while keeping the rest of the projection matrices fixed. A detailed description of the MCSDA optimization process is presented in Algorithm 1.

III-A Complexity Discussion

It is clear that the number of parameters of the tensor model is much lower compared to the vector model. Suppose the dimensionality of each tensor sample is $I_1 \times \dots \times I_N$, which corresponds to a vectorized sample in $\mathbb{R}^D$ with $D = \prod_{n=1}^{N} I_n$. Given that the tensor subspace is $\mathbb{R}^{P_1 \times \dots \times P_N}$, the number of parameters for MCSDA is equal to $\sum_{n=1}^{N} I_n P_n$. The corresponding CSDA model projects each vectorized sample from $\mathbb{R}^D$ to $\mathbb{R}^d$ with $d = \prod_{n=1}^{N} P_n$, requiring $Dd$ parameters. To better understand the difference between the two cases, consider an image of $I_1 \times I_2$ pixels projected to a scalar: the tensor model learns $I_1 + I_2$ parameters, while the vector model learns $I_1 I_2$ parameters.
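A two-line check of this comparison, with assumed sizes for illustration:

```python
import numpy as np

def param_counts(in_dims, out_dims):
    """Tensor model: sum of I_n * P_n; vector model: prod(I_n) * prod(P_n)."""
    tensor = sum(i * p for i, p in zip(in_dims, out_dims))
    vector = int(np.prod(in_dims)) * int(np.prod(out_dims))
    return tensor, vector

print(param_counts((64, 64), (1, 1)))   # e.g. a 64x64 image to a scalar: (128, 4096)
```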

Regarding time complexity, let us denote by $D = \prod_{n=1}^{N} I_n$ the total number of elements in the input space, by $d = \prod_{n=1}^{N} P_n$ the total number of elements in the learnt subspace, and by $M$ the number of training samples. The solution of CSDA involves the following steps:

  • Calculation of $\mathbf{S}_O$ and $\mathbf{S}_I$ defined in (5), having time complexity $O(M D^2)$.

  • Calculation of $\mathbf{S}_I^{-1} \mathbf{S}_O$, which requires the matrix inversion of $\mathbf{S}_I$ and the matrix multiplication between $\mathbf{S}_I^{-1}$ and $\mathbf{S}_O$, having time complexity $O(D^3)$.

  • Eigenvalue decomposition of $\mathbf{S}_I^{-1} \mathbf{S}_O$, having time complexity $O(D^3)$.

Thus the total time complexity of CSDA is

$C_{CSDA} = O(M D^2) + O(D^3). \quad (19)$

Input: Training tensors $\mathcal{X}_i$ and respective labels $c_i \in \{+1, -1\}$; subspace dimensionalities $P_1, \dots, P_N$; maximum number of iterations $T$ and threshold $\epsilon$.

1: Initialize each $\mathbf{U}_n$ with ones
2: for $t = 1$ to $T$ do
3:     for $n = 1$ to $N$ do
4:         Calculate $\mathbf{S}_O^{(n)}$ according to (16) using $\{\mathbf{U}_k\}_{k \neq n}$
5:         Calculate $\mathbf{S}_I^{(n)}$ according to (17) using $\{\mathbf{U}_k\}_{k \neq n}$
6:         Update $\mathbf{U}_n$ by solving (18)
7:     end for
8:     if the change of the criterion (14) is below $\epsilon$ then
9:         Terminate
10:    end if
11: end for

Output: Projection matrices $\mathbf{U}_1, \dots, \mathbf{U}_N$

Algorithm 1 MCSDA
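A compact end-to-end sketch of Algorithm 1, assuming the unfold and mode_n_product helpers from Section II-A; for brevity it runs a fixed number of iterations and omits the convergence test of step 8, and it solves (18) through the usual generalized-eigenproblem relaxation:

```python
import numpy as np
import scipy.linalg

def mcsda(tensors, labels, dims, T=10, ridge=1e-6):
    """MCSDA (Algorithm 1). tensors: (M, I_1, ..., I_N); labels: +/-1;
    dims: target dimensionalities (P_1, ..., P_N). Returns [U_1, ..., U_N]."""
    in_dims = tensors.shape[1:]
    N = len(in_dims)
    P_mean = tensors[labels == 1].mean(axis=0)    # positive-class mean tensor
    X = tensors - P_mean                          # center on the positive mean
    U = [np.ones((I, P)) for I, P in zip(in_dims, dims)]   # step 1: ones
    for _ in range(T):
        for n in range(N):
            S_O = np.zeros((in_dims[n],) * 2)     # Eq. (16)
            S_I = np.zeros((in_dims[n],) * 2)     # Eq. (17)
            for x, c in zip(X, labels):
                for k in range(N):                # project along all modes but n
                    if k != n:
                        x = mode_n_product(x, U[k].T, k)
                Xn = unfold(x, n)
                if c == 1:
                    S_I += Xn @ Xn.T
                else:
                    S_O += Xn @ Xn.T
            evals, evecs = scipy.linalg.eigh(S_O, S_I + ridge * np.eye(in_dims[n]))
            U[n] = evecs[:, np.argsort(evals)[::-1][:dims[n]]]   # solve Eq. (18)
    return U
```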

MCSDA employs an iterative process parameterized by the terminating threshold $\epsilon$ and the maximum number of iterations $T$. At each iteration, updating mode-$n$ requires the following computation steps:

  • Calculation of $\mathbf{S}_O^{(n)}$ and $\mathbf{S}_I^{(n)}$, which requires the projection of the samples along all modes but $n$, having time complexity $O(M D \sum_{k \neq n} P_k)$.

  • Calculation of $\mathbf{S}_I^{(n)-1} \mathbf{S}_O^{(n)}$, which requires the matrix inversion of $\mathbf{S}_I^{(n)}$ and the matrix multiplication between $\mathbf{S}_I^{(n)-1}$ and $\mathbf{S}_O^{(n)}$, having time complexity $O(I_n^3)$.

  • Eigenvalue decomposition of $\mathbf{S}_I^{(n)-1} \mathbf{S}_O^{(n)}$, having time complexity $O(I_n^3)$.

Hence, the computational cost of updating $\mathbf{U}_n$ in MCSDA is $O(M D \sum_{k \neq n} P_k) + O(I_n^3)$. With $T$ the maximum number of iterations, the maximum computational cost of MCSDA is

$C_{MCSDA} = T \sum_{n=1}^{N} \Big[ O\big(M D \textstyle\sum_{k \neq n} P_k\big) + O(I_n^3) \Big]. \quad (20)$

Due to the iterative nature of MCSDA, it is not straightforward to compare its time complexity with that of CSDA. Our experiments showed that, with the maximum number of iterations set to a small value, MCSDA already achieves good performance. In addition, for frequently encountered data types, the number of tensor modes ranges from $N = 2$ to $N = 4$. For example, grayscale images, EEG multichannel data and financial time-series have $N = 2$, RGB images have $N = 3$ and video data has $N = 4$. Comparing the first terms of (19) and (20) and noting that the dimensions of the projected space are usually much smaller than those of the input, it is easy to see that the per-mode scatter computation of MCSDA is far cheaper than the scatter computation of CSDA. Comparing the second terms of (19) and (20), it is also clear that $O(I_n^3) \ll O(D^3)$.

To conclude, the solution of the vector model is more costly in terms of computation compared to the tensor model. Moreover, the vector approach, whose cost grows as $O(D^3)$, becomes impractical when $D$ scales to the order of thousands or more, which is usually the case. In contrast, the tensor approach, whose cubic terms involve only the individual mode dimensions $I_n$, is scalable to high-dimensional input.

IV Experiments

In this section, we present experiments conducted in order to evaluate the effectiveness of the proposed MCSDA and compare it with related discriminant analysis methods, namely vector-based Class-Specific Discriminant Analysis (CSDA) and Multilinear Discriminant Analysis (MDA) ([17]). It should be noted that the class-specific methods treat each class as a binary problem; we therefore conducted experiments in which one-versus-rest MDA classifiers are learned. We performed the benchmark on three publicly available datasets coming from two application domains: face verification and stock price prediction based on limit order book data. A detailed description of the datasets and the corresponding experimental settings is provided in the following subsections.

Since all the competing methods are subspace methods, after learning the optimal projection matrices one can train any classifier on the data representations in the discriminant subspace to boost performance. For example, the distances between a training sample and the class mean vectors can be used as the training data for an SVM classifier. In the test phase, a test sample is projected to the discriminant subspace, and the distances between the test sample and each class mean are used as a feature vector fed to the learnt SVM classifier, similar to ([13]). Since the goal of this paper is to directly compare the discrimination ability of MCSDA with that of CSDA and MDA, we do not train any additional classifier in the discriminant space. In the test phase, the similarity score is calculated as the inverse of the distance between the test sample and the positive mean in the discriminant space. The similarity scores are used to evaluate the performance of each algorithm, based on different metrics as described next.
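The test-phase scoring described above can be sketched as follows, assuming the mode_n_product helper from Section II-A; the small epsilon guarding against division by zero is an implementation choice:

```python
import numpy as np

def similarity_scores(test_tensors, U, P_mean, eps=1e-12):
    """Inverse distance between each projected test sample and the
    projected positive-class mean tensor."""
    def project(T):
        for k, Uk in enumerate(U):
            T = mode_n_product(T, Uk.T, k)
        return T
    mean_proj = project(P_mean)
    return np.array([1.0 / (np.linalg.norm(project(x) - mean_proj) + eps)
                     for x in test_tensors])
```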

IV-A Facial Image Datasets

Since tensors are a natural representation of image data, we employ two facial image datasets of different sizes, namely ORL and CMU PIE, to compare the performance of the tensor-based and vector-based methods. The ORL dataset ([23]) consists of 400 facial images depicting 40 persons (10 images each). The images were taken at different times with different conditions in terms of lighting, facial expressions (smile/neutral) and facial details (open/closed eyes, with/without glasses). All of the images were captured in frontal position with a tolerance for rotation and tilting of up to 20 degrees. The CMU PIE dataset ([24]) consists of 68 individuals with 41,368 facial images in total. The images were captured with different camera positions and flashes under varying pose, illumination and expression conditions. All images in five near-frontal positions of the 68 individuals were used in our experiments. Moreover, all images used in our experiments are in grayscale format.

Using the above datasets, we formulate multiple face verification problems. That is, a model is learned for a person of interest, using either the class-specific or the multi-class (in this case binary) criterion. During the test phase, a test image is presented and the model decides whether the image depicts the person of interest or not ([9, 12, 14]). We measure the performance of each model by calculating the Average Precision (AP) metric. This process is applied multiple times (equal to the number of persons in each dataset), and the performance of each approach is obtained by calculating the mean Average Precision (mAP) over all sub-experiments. We applied multiple experiments based on five different train/test split sizes, where a given percentage of the data is randomly selected for training and the rest for testing. For each split size, the experiment was repeated 5 times and the average result is reported.

Regarding the preprocessing step, all facial images were cropped and resized to a common resolution. For the tensor-based approaches, we kept the projected dimensions of mode-1 and mode-2 equal, varying them over a range of values. The maximum number of iterations and the terminating threshold $\epsilon$ were fixed across all runs. To ensure stability, we regularized $\mathbf{S}_W^{(n)}$ in MDA, $\mathbf{S}_I^{(n)}$ in MCSDA and $\mathbf{S}_I$ in CSDA by adding a scaled version of the identity matrix. We also investigated the case where additional information is available by generating HOG images ([25]) from the original images and concatenating each original image with its HOG image to form a 3-mode tensor. The results of this enriched version are denoted by CSDA-H, MCSDA-H and MDA-H.
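One possible way to build the enriched 3-mode samples, using scikit-image; the HOG parameters below are assumptions for illustration, not the settings used in the experiments:

```python
import numpy as np
from skimage.feature import hog

def enrich_with_hog(image):
    """Stack a grayscale image with its HOG visualization along a
    third mode, giving a tensor of shape (H, W, 2)."""
    _, hog_image = hog(image, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), visualize=True)
    return np.stack([image, hog_image], axis=-1)
```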

IV-B Limit Order Book Dataset

In addition to image data, (multi-dimensional) time-series, such as limit order book (LOB) data, also have a natural representation as tensors of two modes. In our experiments, a recently introduced LOB dataset, called FI-2010 ([26]), was used. FI-2010 collected the order information of 5 Finnish stocks over 10 consecutive trading days, resulting in more than 4 million order events. For every 10 consecutive order events, a 144-dimensional feature vector is extracted, and a corresponding label is defined indicating the prospective change (increase, decrease or stationary) of the mid-price after a given horizon of order events. For the vector models, each sample is a single 144-dimensional feature vector, representing information from the most recent order events. In order to take more information from the recent past into account, our tensor models exploit a second-order tensor sample formed by stacking several of the most recent feature vectors.
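Forming the 2-mode tensor samples from the per-event feature matrix amounts to a sliding window; the window length below is a hypothetical choice:

```python
import numpy as np

def make_tensor_samples(features, window=10):
    """Stack the `window` most recent feature vectors into 2-mode samples.
    features: (num_steps, 144), one row per extracted feature vector.
    Returns an array of shape (num_steps - window + 1, window, 144)."""
    return np.stack([features[t - window:t]
                     for t in range(window, len(features) + 1)])
```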

We followed the standard day-based anchored cross-validation splits provided by the dataset, with 9 folds in total. For the tensor-based models, we varied the projected dimension of each mode over a range of values. The maximum number of iterations and the threshold $\epsilon$ were the same as those used in the face verification experiments. Since FI-2010 is an imbalanced dataset, with the mid-price remaining stationary most of the time, we performed cross-validation based on the average F1 score per class, and we also report the corresponding accuracy, average precision per class and average recall per class. Since our experimental protocol is the same as that used in ([27]) for the Bag-of-Words (BoF) and Neural Bag-of-Words (N-BoF) models, we directly report their results. In addition, we report the baseline results from the dataset paper ([26]) using a Single Layer Feed-forward Network (SLFN) and Ridge Regression (RR).

IV-C Results

The results on the two facial datasets are presented in Table I and Table II. Moreover, the last column of Tables I and II shows the relative computation time of each method, measured on the same machine and normalized with respect to the computation time of the proposed MCSDA. Comparing the vector and tensor models utilizing the class-specific criterion, it is clear that CSDA slightly outperforms the proposed MCSDA. However, the computation time of CSDA is higher by one to two orders of magnitude. The computational efficiency of the proposed MCSDA over CSDA becomes more significant as the dimensionality of the input scales up: while the number of elements in the input doubles, the computation time of MCSDA-H scales favourably, whereas CSDA-H requires several times more computation than CSDA. This result corroborates our analysis in the time complexity discussion above. Comparing the two tensor-based approaches, the proposed MCSDA outperforms MDA in most configurations, while their computation times are similar. Regarding the exploitation of enriched information, we observe that all competing methods achieve some improvement. The benefit of the additional information is marginal when the training set is small, but clearly visible for the tensor-based methods when a larger percentage of the data is used for training. In contrast, the benefit of the additional information for the vector-based model is very small.

The results for stock prediction using LOB data are presented in Table III. While the performance of MCSDA was not better than that of its vector counterpart in the face verification experiments above, MCSDA outperforms all competing methods in the stock prediction problem, including the more complex neural network-based bag-of-words model, N-BoF ([27]).

The difference in the relative performance of the vector-based and tensor-based variants in the two application domains can be explained by looking into the optimal dimensionality of the subspaces obtained for CSDA and MCSDA. In the two face verification problems, the optimal subspace dimensionality obtained for MCSDA is the same for both the ORL and CMU PIE datasets, while the optimal subspace dimensionality obtained for CSDA is considerably higher on both datasets. This result shows that in the CSDA case the number of parameters is much higher compared to its tensor counterpart. In facial images, several visual cues are usually necessary to perform verification. Since the vector approach estimates many more parameters, more visual cues can be captured, which leads to better performance compared to MCSDA. However, this comes at a much higher computational cost.

In the stock prediction problem, the difference between the numbers of parameters estimated by MCSDA and CSDA is small; averaged over the folds, the two models estimate comparable numbers of parameters. Since the multilinear class-specific projection of MCSDA is applied along the temporal mode as well, MCSDA can potentially capture important temporal cues required to predict future movements of the stock price. The stock price prediction experiment shows the potential of multilinear techniques in general, and MCSDA in particular, in exploiting the multilinear structure of time-series data.

TABLE I: Performance (mAP) on the ORL dataset (rows: CSDA, MDA, MCSDA, CSDA-H, MDA-H, MCSDA-H).
TABLE II: Performance (mAP) on the CMU PIE dataset (rows: CSDA, MDA, MCSDA, CSDA-H, MDA-H, MCSDA-H).
TABLE III: Performance on the FI-2010 dataset (rows: RR, SLFN, CSDA, MDA, MCSDA, BoF, N-BoF; columns: Accuracy, Precision, Recall, F1).

V Conclusions

In this paper, we proposed a tensor subspace learning method that combines the intrinsic tensor representation of the data with the class-specific discrimination criterion. We provided a theoretical discussion of the time complexity of the proposed method compared with its vector counterpart. Experimental results show that MCSDA is computationally efficient and scalable, achieving performance close to its vector counterpart in face verification problems, while outperforming the other competing methods in a stock price prediction problem based on limit order book data.

References

  • [1] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and intelligent laboratory systems, vol. 2, no. 1-3, pp. 37–52, 1987.
  • [2] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd Edition. New York: Wiley, 2001.
  • [3] A. Iosifidis, A. Tefas, and I. Pitas, “Activity-based person identification using fuzzy representation and discriminant learning,” IEEE Transactions on Information Forensics and Security, vol. 7, no. 2, pp. 530–542, 2012.
  • [4] A. Iosifidis, A. Tefas, N. Nikolaidis, and I. Pitas, “Multi-view human movement recognition based on fuzzy distances and linear discriminant analysis,” Computer Vision and Image Understanding, vol. 116, no. 3, pp. 347–360, 2012.
  • [5] C.-X. Ren, Z. Lei, D.-Q. Dai, and S. Z. Li, “Enhanced local gradient order features and discriminant analysis for face recognition,” IEEE Transactions on Cybernetics, vol. 46, no. 11, pp. 2656–2669, 2016.
  • [6] M. Zhu and A. M. Martinez, “Subclass discriminant analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1274–1286, 2006.
  • [7] A. Iosifidis, A. Tefas, and I. Pitas, “On the optimal class representation in linear discriminant analysis,” IEEE transactions on neural networks and learning systems, vol. 24, no. 9, pp. 1491–1497, 2013.
  • [8] A. Iosifidis, A. Tefas, and I. Pitas, “Kernel reference discriminant analysis,” Pattern Recognition Letters, vol. 49, pp. 85–91, 2014.
  • [9] G. Goudelis, S. Zafeiriou, A. Tefas, and I. Pitas, “Class-specific kernel-discriminant analysis for face verification,” IEEE Transactions on Information Forensics and Security, vol. 2, no. 3, pp. 570–587, 2007.
  • [10] S. Zafeiriou, G. Tzimiropoulos, M. Petrou, and T. Stathaki, “Regularized kernel discriminant analysis with a robust kernel for face recognition and verification,” IEEE transactions on neural networks and learning systems, vol. 23, no. 3, pp. 526–534, 2012.
  • [11] S. R. Arashloo and J. Kittler, “Class-specific kernel fusion of multiple descriptors for face verification using multiscale binarised statistical image features,” IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2100–2109, 2014.
  • [12] A. Iosifidis and M. Gabbouj, “Class-specific kernel discriminant analysis revisited: Further analysis and extensions,” IEEE Transactions on Cybernetics, 2016.
  • [13] A. Iosifidis, A. Tefas, and I. Pitas, “Class-specific reference discriminant analysis with application in human behavior analysis,” IEEE Transactions on Human-Machine Systems, vol. 45, no. 3, pp. 315–326, 2015.
  • [14] A. Iosifidis and M. Gabbouj, “Scaling up class-specific kernel discriminant analysis for large-scale face verification,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 11, pp. 2453–2465, 2016.
  • [15] L. Chen, M. Liao, M. Ko, J. Lin, and G. Yu, “A new LDA-based face recognition system which can solve the small sample size problem,” Pattern Recognition, vol. 33, no. 10, pp. 1713–1726, 2000.
  • [16] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “MPCA: Multilinear principal component analysis of tensor objects,” IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
  • [17] Q. Li and D. Schonfeld, “Multilinear discriminant analysis for higher-order tensor data classification,” IEEE transactions on pattern analysis and machine intelligence, vol. 36, no. 12, pp. 2524–2537, 2014.
  • [18] D. Tao, X. Li, X. Wu, and S. J. Maybank, “General tensor discriminant analysis and gabor features for gait recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, 2007.
  • [19] S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H.-J. Zhang, “Discriminant analysis with tensor representation,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1, pp. 526–532, IEEE, 2005.
  • [20] M. Welling, “Fisher linear discriminant analysis,” Department of Computer Science, University of Toronto, vol. 3, no. 1, 2005.
  • [21] H. Kong, E. K. Teoh, J. G. Wang, and R. Venkateswarlu, “Two-dimensional fisher discriminant analysis: forget about small sample size problem [face recognition applications],” in Acoustics, Speech, and Signal Processing, 2005. Proceedings.(ICASSP’05). IEEE International Conference on, vol. 2, pp. ii–761, IEEE, 2005.
  • [22] J. Ye, R. Janardan, and Q. Li, “Two-dimensional linear discriminant analysis,” in Advances in neural information processing systems, pp. 1569–1576, 2005.
  • [23] F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Applications of Computer Vision, 1994., Proceedings of the Second IEEE Workshop on, pp. 138–142, IEEE, 1994.
  • [24] T. Sim, S. Baker, and M. Bsat, “The CMU Pose, Illumination, and Expression (PIE) database,” in Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, pp. 53–58, IEEE, 2002.
  • [25] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1, pp. 886–893, IEEE, 2005.
  • [26] A. Ntakaris, M. Magris, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Benchmark dataset for mid-price prediction of limit order book data,” arXiv preprint arXiv:1705.03233, 2017.
  • [27] N. Passalis, A. Tsantekidis, A. Tefas, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Time-series classification using neural bag-of-features,” in European Signal Processing Conference (EUSIPCO), Kos, Greece, 2017.