I Introduction
Over the past two decades, several subspace learning techniques have been proposed for computer vision and pattern recognition problems. The aim of subspace learning is to find a set of bases that optimizes a given objective function (or criterion) enhancing properties of interest in the learnt subspace. The obtained projection can subsequently be used either as a means of preprocessing or as a classifier. One of the methods used as a preprocessing step is Principal Component Analysis (PCA) ([1]), which finds a projection maximizing the data dispersion. While PCA retains most spectral information, it is an unsupervised method that does not utilize labeling information to increase class discrimination. Linear Discriminant Analysis (LDA) ([2]) is one of the most widely used discriminant learning techniques due to its success in many applications ([3, 4, 5]). In LDA, each class is assumed to follow a unimodal distribution and is represented by the corresponding class mean vector. LDA optimizes the ratio between the interclass and intraclass scatters. Several extensions have been proposed in order to relax these two assumptions ([6, 7, 8]). An important characteristic of LDA is that the maximal dimensionality of the learnt subspace is limited by the number of classes forming the problem at hand. For problems where the objective is to discriminate one class from all other alternatives, i.e. for binary problems like face verification, this might not be an optimal choice for class discrimination. To tackle the latter limitation of LDA, class-specific discriminant analysis techniques were proposed ([9, 10, 11, 12]). In the class-specific setting, a unimodal distribution is still assumed for the class of interest (hereafter referred to as the positive class), and the objective is to determine class-specific projections discriminating the samples of the positive class from the remaining samples (forming the negative class) in the subspace. By defining suitable out-of-class and in-class scatter matrices, the maximal subspace dimensionality is limited by the cardinality of the positive class, leading to better class discrimination and classification ([9, 11, 13]). Various extensions have been proposed to utilize the class-specific formulation. For example, [13] proposed a solution optimizing both the class representation and the projection; in addition, approximate and incremental learning solutions were proposed in [12, 14].
While being able to overcome the limitation in subspace dimensionality of LDA, the existing Class-Specific Discriminant Analysis (CSDA) methods still have a limitation: they are defined on vector data. Since many types of data are naturally represented by (high-order) matrices, generally referred to as tensors, exploiting vector-based learning approaches might lead to the loss of spatial information available in the data. For example, a grayscale image is naturally represented as a matrix (i.e. a second-order tensor), a color image is represented as a third-order tensor, and a multidimensional time series is represented as a second-order tensor. Vectorizing such high-order tensors results in high-dimensional vectors, leading to high computational costs and the small sample size problem ([15]). In order to address such issues, generalizations of many linear subspace learning methods to multilinear ones have been proposed, including MPCA ([16]) as the multilinear extension of PCA, and CMDA ([17]), GTDA ([18]) and DATER ([19]) as multilinear extensions of LDA.
With the potential advantage of using tensorial data representations in (binary) verification problems in mind, in this work we extend the class-specific discrimination criterion to tensor-based learning and formulate the Multilinear Class-Specific Discriminant Analysis (MCSDA) method. Moreover, we provide a time complexity analysis of the proposed method and compare it with its vector counterpart. We conducted experiments on two problems involving data naturally represented in tensor form, i.e. facial image analysis and stock price prediction based on limit order book data. Experimental results show that the proposed MCSDA is able to outperform related tensor-based and vector-based methods and to compare favourably with recent methods.
The rest of the paper is organized as follows. Section II introduces the notation used throughout the paper, as well as related prior work. In Section III, we formulate the proposed MCSDA method and provide our analysis of its time complexity. Section IV presents our experimental analysis, and conclusions are drawn in Section V.
II Notations and Prior Work
We start by introducing the notation used throughout the paper and related definitions from multilinear algebra. In addition, previous works on discriminant analysis utilizing multiclass and class-specific criteria are briefly reviewed.
II-A Multilinear Algebra Concepts
In this paper, we denote scalar values by either lowercase or uppercase letters ($x$, $X$), vectors by lowercase boldface letters ($\mathbf{x}$), matrices by uppercase boldface letters ($\mathbf{X}$) and tensors by calligraphic capital letters ($\mathcal{X}$). A tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is a multidimensional array with $N$ modes, where $I_n$ denotes the dimension in mode $n$. The entry at the $i_n$th index in mode $n$, $n = 1, \ldots, N$, is denoted as $\mathcal{X}_{i_1 \ldots i_N}$.
Definition 1 (Mode-$n$ Fiber and Mode-$n$ Unfolding)
The mode-$n$ fiber of a tensor $\mathcal{X}$ is an $I_n$-dimensional vector obtained by fixing every index but $i_n$. The mode-$n$ unfolding of $\mathcal{X}$, also known as mode-$n$ matricization, transforms the tensor into the matrix $\mathbf{X}_{(n)}$, which is formed by arranging the mode-$n$ fibers as columns. The shape of $\mathbf{X}_{(n)}$ is $I_n \times I_{\bar{n}}$ with $I_{\bar{n}} = \prod_{k \neq n} I_k$.
Definition 2 (Mode-$n$ Product)
The mode-$n$ product between a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and a matrix $\mathbf{U} \in \mathbb{R}^{J \times I_n}$ is another tensor of size $I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N$, denoted by $\mathcal{X} \times_n \mathbf{U}$. The elements of $\mathcal{X} \times_n \mathbf{U}$ are defined as $(\mathcal{X} \times_n \mathbf{U})_{i_1 \ldots i_{n-1} j i_{n+1} \ldots i_N} = \sum_{i_n = 1}^{I_n} \mathcal{X}_{i_1 \ldots i_N} \mathbf{U}_{j i_n}$.
With the definitions of the mode-$n$ product and the mode-$n$ unfolding, the following equation holds:
$\mathcal{Y} = \mathcal{X} \times_n \mathbf{U} \;\Leftrightarrow\; \mathbf{Y}_{(n)} = \mathbf{U} \mathbf{X}_{(n)} \qquad (1)$
For convenience, we denote $\mathcal{X} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \cdots \times_N \mathbf{U}_N$ by $\mathcal{X} \prod_{n=1}^{N} \times_n \mathbf{U}_n$.
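These definitions map directly onto array primitives. The following NumPy sketch (the helper functions are our own, not code from the paper) implements the mode-$n$ unfolding and the mode-$n$ product and checks the relation in (1):

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: arrange the mode-n fibers of X as columns."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mode_product(X, U, n):
    """Mode-n product X x_n U: contract mode n of X with the rows of U."""
    Y = np.tensordot(U, X, axes=(1, n))   # the new mode ends up first
    return np.moveaxis(Y, 0, n)           # move it back to position n

# Check relation (1): (X x_n U)_(n) == U @ X_(n)
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))        # a third-order tensor
U = rng.standard_normal((3, 5))           # projects mode 2 (0-based index 1)
Y = mode_product(X, U, 1)
assert Y.shape == (4, 3, 6)
assert np.allclose(unfold(Y, 1), U @ unfold(X, 1))
```

Note that any consistent ordering of the mode-$n$ fibers as columns yields a valid unfolding; the relation in (1) holds as long as the same ordering is used on both sides.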
II-B Linear Discriminant Analysis
Let us denote by $\{\mathbf{x}_i \in \mathbb{R}^{D}\}_{i=1}^{M}$ a set of $D$-dimensional vectors, each of which has an associated class label $c_i$ belonging to the label set $\mathcal{C} = \{1, \ldots, C\}$. $M_c$ is the number of samples in class $c$, and $\mathbf{x}_{c,i}$ denotes the $i$th sample of class $c$. The mean vector of class $c$ is calculated as $\mathbf{m}_c = \frac{1}{M_c} \sum_{i=1}^{M_c} \mathbf{x}_{c,i}$, and the mean vector of the entire set is $\mathbf{m} = \frac{1}{M} \sum_{i=1}^{M} \mathbf{x}_i$. Linear Discriminant Analysis (LDA) seeks an orthonormal projection matrix $\mathbf{W} \in \mathbb{R}^{D \times d}$ that maps each sample $\mathbf{x}_i$ to a lower-dimensional feature space, $\mathbf{z}_i = \mathbf{W}^T \mathbf{x}_i$, in which samples from different classes are highly discriminated. $\mathbf{W}$ is obtained by maximizing the ratio between the interclass and intraclass variances in the feature space ([20]), i.e.
$J(\mathbf{W}) = \frac{\operatorname{tr}(\mathbf{W}^T \mathbf{S}_b \mathbf{W})}{\operatorname{tr}(\mathbf{W}^T \mathbf{S}_w \mathbf{W})} \qquad (2)$
where $\mathbf{S}_b = \sum_{c=1}^{C} M_c (\mathbf{m}_c - \mathbf{m})(\mathbf{m}_c - \mathbf{m})^T$ denotes the between-class scatter matrix and $\mathbf{S}_w = \sum_{c=1}^{C} \sum_{i=1}^{M_c} (\mathbf{x}_{c,i} - \mathbf{m}_c)(\mathbf{x}_{c,i} - \mathbf{m}_c)^T$ denotes the within-class scatter matrix. By maximizing $J(\mathbf{W})$ in (2), the dispersion between the samples and the corresponding class means is minimized while the dispersion between each class mean and the total mean is maximized in the projected subspace. The columns of $\mathbf{W}$ are formed by the eigenvectors corresponding to the $d$ largest eigenvalues of $\mathbf{S}_w^{-1} \mathbf{S}_b$.

II-C Class-Specific Discriminant Analysis
While LDA seeks to project all data samples to a common subspace where the data samples of different classes are expected to be highly discriminated, class-specific discriminant analysis (CSDA) learns a subspace discriminating the class of interest from everything else. For a $C$-class classification problem, application of CSDA leads to the determination of $C$ different discriminant subspaces in a One-versus-Rest manner, where $d_c$ is the dimensionality of the $c$th subspace that discriminates the samples of class $c$ from the rest.
Let us denote by $\mathcal{P}$ and $\mathcal{N}$ the index sets of the samples with positive and negative labels, respectively, and by $\mathbf{m}_p$ the mean vector of the positive class. The optimal mapping $\mathbf{W}$ is obtained by maximizing the following criterion
$J(\mathbf{W}) = \frac{D_o}{D_i} \qquad (3)$
where $D_o = \sum_{i \in \mathcal{N}} \|\mathbf{W}^T(\mathbf{x}_i - \mathbf{m}_p)\|_2^2$ is the out-of-class distance and $D_i = \sum_{i \in \mathcal{P}} \|\mathbf{W}^T(\mathbf{x}_i - \mathbf{m}_p)\|_2^2$ is the in-class distance. That is, the positive class is assumed to be unimodal, and the optimal projection matrix maps the positive class vectors as close as possible to the positive class mean $\mathbf{m}_p$ while keeping the negative samples far away from it in the subspace. $J(\mathbf{W})$ in (3) can be expressed as
$J(\mathbf{W}) = \frac{\operatorname{tr}(\mathbf{W}^T \mathbf{S}_o \mathbf{W})}{\operatorname{tr}(\mathbf{W}^T \mathbf{S}_i \mathbf{W})} \qquad (4)$
with
$\mathbf{S}_o = \sum_{i \in \mathcal{N}} (\mathbf{x}_i - \mathbf{m}_p)(\mathbf{x}_i - \mathbf{m}_p)^T, \qquad \mathbf{S}_i = \sum_{i \in \mathcal{P}} (\mathbf{x}_i - \mathbf{m}_p)(\mathbf{x}_i - \mathbf{m}_p)^T \qquad (5)$
denoting the out-of-class and in-class scatter matrices, respectively. The solution of (4) is obtained by the eigenvectors corresponding to the $d$ largest eigenvalues of $\mathbf{S}_i^{-1} \mathbf{S}_o$.
The optimal dimensionality may vary for each class. For classes that are already highly discriminated from the others, fewer dimensions may be needed as compared to classes that are densely mixed with other classes. Since the rank of $\mathbf{S}_i$ is at most $M_p$, where $M_p$ is the number of samples of the positive class, the dimensionality of the learnt subspace can be at most $M_p$.
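As a concrete illustration, a minimal NumPy sketch of the CSDA solution follows. The function is our own helper, and the small ridge term added to $\mathbf{S}_i$ for invertibility is an assumption, mirroring the regularization used later in the experiments:

```python
import numpy as np

def csda(X_pos, X_neg, d, reg=1e-4):
    """Class-specific discriminant projection.
    X_pos: (M_p, D) positive samples; X_neg: (M_n, D) negative samples;
    d: subspace dimensionality (at most M_p). Returns W: (D, d)."""
    m_p = X_pos.mean(axis=0)                    # positive class mean
    A_o = X_neg - m_p                           # out-of-class deviations
    A_i = X_pos - m_p                           # in-class deviations
    S_o = A_o.T @ A_o                           # out-of-class scatter, eq. (5)
    S_i = A_i.T @ A_i + reg * np.eye(len(m_p))  # in-class scatter (regularized)
    # eigenvectors of S_i^{-1} S_o with the largest eigenvalues
    evals, evecs = np.linalg.eig(np.linalg.solve(S_i, S_o))
    order = np.argsort(-evals.real)[:d]
    return evecs[:, order].real

rng = np.random.default_rng(1)
X_pos = 0.1 * rng.standard_normal((20, 10))     # tight positive class
X_neg = 3.0 + rng.standard_normal((80, 10))     # negatives far from it
W = csda(X_pos, X_neg, d=3)
assert W.shape == (10, 3)
```

After projection, negatives should lie far from the projected positive mean while positives stay close, which is exactly what criterion (3) rewards.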
II-D Multilinear Discriminant Analysis
Several works have extended the multiclass discriminant analysis criterion in order to utilize the natural tensor representation of the input data ([21, 22, 19, 18, 17]). We denote the set of tensor samples as $\{\mathcal{X}_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}\}_{i=1}^{M}$, each with an associated class label $c_i$ belonging to the label set $\mathcal{C}$. The mean tensor of class $c$ is calculated as $\mathcal{M}_c = \frac{1}{M_c} \sum_{i=1}^{M_c} \mathcal{X}_{c,i}$ and the total mean tensor is $\mathcal{M} = \frac{1}{M} \sum_{i=1}^{M} \mathcal{X}_i$. MDA seeks a set of projection matrices $\{\mathbf{U}_n \in \mathbb{R}^{d_n \times I_n}\}_{n=1}^{N}$ that map each $\mathcal{X}_i$ to $\mathcal{Y}_i \in \mathbb{R}^{d_1 \times \cdots \times d_N}$, with the subspace projection defined as
$\mathcal{Y}_i = \mathcal{X}_i \prod_{n=1}^{N} \times_n \mathbf{U}_n \qquad (6)$
Similar to LDA, the set of optimal projection matrices is obtained by maximizing the ratio between the interclass and intraclass distances, measured in the tensor subspace:
$\{\mathbf{U}_n^*\}_{n=1}^{N} = \operatorname*{arg\,max}_{\{\mathbf{U}_n\}} \frac{D_b}{D_w} \qquad (7)$
where
$D_b = \sum_{c=1}^{C} M_c \Big\| (\mathcal{M}_c - \mathcal{M}) \prod_{n=1}^{N} \times_n \mathbf{U}_n \Big\|_F^2 \qquad (8)$
$D_w = \sum_{c=1}^{C} \sum_{i=1}^{M_c} \Big\| (\mathcal{X}_{c,i} - \mathcal{M}_c) \prod_{n=1}^{N} \times_n \mathbf{U}_n \Big\|_F^2 \qquad (9)$
are, respectively, the between-class and within-class distances.
An iterative approach is usually employed to solve the optimization problem in (7). For example, [17] proposed the CMDA algorithm, which assumes orthogonality constraints on each projection matrix and optimizes (7) by iteratively solving the following trace ratio problem for each mode $k$:
$\mathbf{U}_k^* = \operatorname*{arg\,max}_{\mathbf{U}_k} \frac{\operatorname{tr}(\mathbf{U}_k \mathbf{S}_b^{(k)} \mathbf{U}_k^T)}{\operatorname{tr}(\mathbf{U}_k \mathbf{S}_w^{(k)} \mathbf{U}_k^T)} \qquad (10)$
where
$\mathbf{S}_b^{(k)} = \sum_{c=1}^{C} M_c \bar{\mathbf{M}}_{c,(k)} \bar{\mathbf{M}}_{c,(k)}^T \qquad (11)$
$\mathbf{S}_w^{(k)} = \sum_{c=1}^{C} \sum_{i=1}^{M_c} \bar{\mathbf{X}}_{c,i,(k)} \bar{\mathbf{X}}_{c,i,(k)}^T \qquad (12)$
are the between-class and within-class scatter matrices in mode $k$; here $\bar{\mathbf{M}}_{c,(k)}$ and $\bar{\mathbf{X}}_{c,i,(k)}$ denote the mode-$k$ unfoldings of $(\mathcal{M}_c - \mathcal{M})$ and $(\mathcal{X}_{c,i} - \mathcal{M}_c)$, respectively, after projection in all modes but mode $k$.
CMDA first initializes every $\mathbf{U}_k$ with all ones. At each iteration, the algorithm sequentially updates each $\mathbf{U}_k$ by maximizing (10) while keeping the rest of the projection matrices fixed.
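As an illustration of one such mode-wise update, a minimal NumPy sketch for two-mode samples follows. The function name, the ridge term, and the ratio-trace relaxation of the trace-ratio problem (10) are our assumptions rather than details of [17]:

```python
import numpy as np

def cmda_mode1_update(X, y, U2, d1, reg=1e-4):
    """One CMDA update of the mode-1 projection for 2-mode samples X: (M, I1, I2),
    with the mode-2 projection U2: (d2, I2) kept fixed. Returns U1: (d1, I1)."""
    M_tot = X.mean(axis=0)
    I1 = X.shape[1]
    S_b = np.zeros((I1, I1))
    S_w = np.zeros((I1, I1))
    for c in np.unique(y):
        Xc = X[y == c]
        M_c = Xc.mean(axis=0)
        B = (M_c - M_tot) @ U2.T            # project mode 2: (I1, d2)
        S_b += len(Xc) * B @ B.T            # between-class scatter, eq. (11)
        for x in Xc:
            Wd = (x - M_c) @ U2.T           # (I1, d2)
            S_w += Wd @ Wd.T                # within-class scatter, eq. (12)
    # ratio-trace relaxation of the trace-ratio problem (10)
    evals, evecs = np.linalg.eig(np.linalg.solve(S_w + reg * np.eye(I1), S_b))
    return evecs[:, np.argsort(-evals.real)[:d1]].real.T

rng = np.random.default_rng(2)
X = np.concatenate([rng.standard_normal((30, 8, 6)),
                    2.0 + rng.standard_normal((30, 8, 6))])
y = np.array([0] * 30 + [1] * 30)
U2 = np.ones((3, 6))                        # all-ones initialization, as in CMDA
U1 = cmda_mode1_update(X, y, U2, d1=2)
```

In the full algorithm this update is applied to every mode in turn, until the criterion stops improving or a maximum number of iterations is reached.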
III Multilinear Class-Specific Discriminant Analysis
In this section, we formulate the proposed multilinear version of CSDA, called Multilinear Class-Specific Discriminant Analysis (MCSDA). MCSDA finds a set of projection matrices $\{\mathbf{U}_n \in \mathbb{R}^{d_n \times I_n}\}_{n=1}^{N}$ that map the $I_1 \times \cdots \times I_N$-dimensional tensor space to a smaller tensor space of size $d_1 \times \cdots \times d_N$, as defined in (6). The objective of MCSDA is to find a tensor subspace in which the distances of the negative samples from the positive mean tensor are maximized and the distances of the positive samples from it are minimized.
Let us denote by $\mathcal{M}_p$ the mean tensor of the positive class. The out-of-class and in-class distances are defined as follows:
$D_o = \sum_{i \in \mathcal{N}} \Big\| (\mathcal{X}_i - \mathcal{M}_p) \prod_{n=1}^{N} \times_n \mathbf{U}_n \Big\|_F^2, \qquad D_i = \sum_{i \in \mathcal{P}} \Big\| (\mathcal{X}_i - \mathcal{M}_p) \prod_{n=1}^{N} \times_n \mathbf{U}_n \Big\|_F^2 \qquad (13)$
The MCSDA criterion is then expressed as
$\{\mathbf{U}_n^*\}_{n=1}^{N} = \operatorname*{arg\,max}_{\{\mathbf{U}_n\}} \frac{D_o}{D_i} \qquad (14)$
As in the case of MDA, the objective in (14) exposes a dependency between the projection matrices $\mathbf{U}_n$. We therefore optimize (14) by applying an iterative optimization process. In order to optimize for each $\mathbf{U}_k$, $D_o$ and $D_i$ need to be expressed as functions of $\mathbf{U}_k$. This can be done for $D_o$ by utilizing the relation in (1), i.e.
$D_o = \sum_{i \in \mathcal{N}} \|\mathbf{U}_k \bar{\mathbf{X}}_{i,(k)}\|_F^2 \qquad (15)$
where $\bar{\mathbf{X}}_{i,(k)}$ denotes the mode-$k$ unfolding of $(\mathcal{X}_i - \mathcal{M}_p)$ after projecting the tensor in all modes but mode $k$. Let us denote by $\mathbf{S}_o^{(k)}$ the out-of-class scatter matrix in mode $k$, which is defined as
$\mathbf{S}_o^{(k)} = \sum_{i \in \mathcal{N}} \bar{\mathbf{X}}_{i,(k)} \bar{\mathbf{X}}_{i,(k)}^T \qquad (16)$
Then $D_o$ in (15) is expressed as $D_o = \operatorname{tr}(\mathbf{U}_k \mathbf{S}_o^{(k)} \mathbf{U}_k^T)$. In a similar manner, the in-class distance calculated in mode $k$ is expressed as $D_i = \operatorname{tr}(\mathbf{U}_k \mathbf{S}_i^{(k)} \mathbf{U}_k^T)$ with
$\mathbf{S}_i^{(k)} = \sum_{i \in \mathcal{P}} \bar{\mathbf{X}}_{i,(k)} \bar{\mathbf{X}}_{i,(k)}^T \qquad (17)$
Finally, the class-specific criterion with respect to $\mathbf{U}_k$ is
$\mathbf{U}_k^* = \operatorname*{arg\,max}_{\mathbf{U}_k} \frac{\operatorname{tr}(\mathbf{U}_k \mathbf{S}_o^{(k)} \mathbf{U}_k^T)}{\operatorname{tr}(\mathbf{U}_k \mathbf{S}_i^{(k)} \mathbf{U}_k^T)} \qquad (18)$
MCSDA starts by initializing every $\mathbf{U}_n$ with ones. At each iteration, it updates each $\mathbf{U}_k$ by maximizing (18), while keeping the rest of the projection matrices fixed. A detailed description of the MCSDA optimization process is presented in Algorithm 1.
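As a hedged illustration of this iteration, the following NumPy sketch implements MCSDA for two-mode samples following the update rule in (18). The function names, the ridge regularization, and the ratio-trace relaxation of the trace-ratio problem are our assumptions, not details taken from Algorithm 1:

```python
import numpy as np

def mcsda(X_pos, X_neg, dims, n_iter=5, reg=1e-4):
    """MCSDA sketch for 2-mode samples.
    X_pos: (M_p, I1, I2) positive samples; X_neg: (M_n, I1, I2) negatives;
    dims = (d1, d2). Returns U1: (d1, I1) and U2: (d2, I2)."""
    M_p = X_pos.mean(axis=0)                   # positive mean tensor
    A_pos = X_pos - M_p                        # centered positive samples
    A_neg = X_neg - M_p                        # centered negative samples
    I1, I2 = M_p.shape
    U = [np.ones((dims[0], I1)), np.ones((dims[1], I2))]   # all-ones init

    def mode_scatter(A, k):
        """Mode-k scatter of samples projected in the other mode, eqs. (16)-(17)."""
        if k == 0:
            P = A @ U[1].T                               # (M, I1, d2)
        else:
            P = np.einsum('mij,di->mdj', A, U[0])        # (M, d1, I2)
            P = P.transpose(0, 2, 1)                     # (M, I2, d1)
        return np.einsum('mab,mcb->ac', P, P)            # sum_m P_m P_m^T

    for _ in range(n_iter):
        for k in (0, 1):                       # sequentially update each mode
            S_o = mode_scatter(A_neg, k)       # out-of-class scatter, eq. (16)
            S_i = mode_scatter(A_pos, k) + reg * np.eye(U[k].shape[1])
            # ratio-trace relaxation of (18): top eigenvectors of S_i^{-1} S_o
            evals, evecs = np.linalg.eig(np.linalg.solve(S_i, S_o))
            U[k] = evecs[:, np.argsort(-evals.real)[:dims[k]]].real.T
    return U

rng = np.random.default_rng(0)
X_pos = 0.1 * rng.standard_normal((25, 8, 6))   # tight positive class
X_neg = 2.0 + rng.standard_normal((100, 8, 6))  # negatives away from it
U1, U2 = mcsda(X_pos, X_neg, dims=(2, 2))
```

In practice the loop would also monitor the criterion (14) and stop early once its improvement falls below the terminating threshold.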
III-A Complexity Discussion
It is clear that the number of parameters of the tensor model is much lower compared to the vector model. Suppose the dimensionality of each tensor sample is $I_1 \times \cdots \times I_N$, which corresponds to a vectorized sample in $\mathbb{R}^{D}$ with $D = \prod_{n=1}^{N} I_n$. Given a tensor subspace of size $d_1 \times \cdots \times d_N$, the number of parameters of MCSDA is equal to $\sum_{n=1}^{N} d_n I_n$. The corresponding CSDA model projects each vectorized sample from $\mathbb{R}^{D}$ to $\mathbb{R}^{d}$ with $d = \prod_{n=1}^{N} d_n$, requiring $Dd$ parameters. In order to better understand the difference between the two cases, let us consider the following example: for an image of $I_1 \times I_2$ pixels projected to a scalar ($d_1 = d_2 = 1$), the tensor model learns $I_1 + I_2$ parameters, while the vector model learns $I_1 I_2$ parameters.
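As a worked instance of this count (the paper's concrete example figures were lost in extraction, so the 30x40 image size below is a hypothetical choice of ours):

```python
# Hypothetical example: a 30x40 grayscale image projected to a 1x1 subspace.
I = (30, 40)          # input dimensions I_1, I_2
d = (1, 1)            # subspace dimensions d_1, d_2

tensor_params = sum(dn * In for dn, In in zip(d, I))   # sum_n d_n * I_n
vector_params = (I[0] * I[1]) * (d[0] * d[1])          # D * d

print(tensor_params)  # 70 parameters for the tensor (MCSDA) model
print(vector_params)  # 1200 parameters for the vector (CSDA) model
```

The gap widens rapidly with the input size, since the vector count grows with the product of the mode dimensions while the tensor count grows with their sum.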
Regarding time complexity, let us denote by $D = \prod_{n=1}^{N} I_n$ the total number of elements in the input space and by $d = \prod_{n=1}^{N} d_n$ the total number of elements in the learnt subspace. The solution of CSDA involves the following steps:

- Calculation of $\mathbf{S}_o$ and $\mathbf{S}_i$ defined in (5), having time complexity $O(MD^2)$.

- Calculation of $\mathbf{S}_i^{-1}\mathbf{S}_o$, which requires the matrix inversion of $\mathbf{S}_i$ and the matrix multiplication between $\mathbf{S}_i^{-1}$ and $\mathbf{S}_o$, having time complexity $O(D^3)$.

- Eigenvalue decomposition of $\mathbf{S}_i^{-1}\mathbf{S}_o$, having time complexity $O(D^3)$.

Thus the total time complexity of CSDA is
$O(MD^2 + D^3) \qquad (19)$
MCSDA employs an iterative process parameterized by the terminating threshold $\epsilon$ and the maximum number of iterations $T$. At each iteration, MCSDA requires the following computation steps for each mode $k$:

- Calculation of $\mathbf{S}_o^{(k)}$ and $\mathbf{S}_i^{(k)}$, which requires the projection of every sample in all modes but $k$ and the accumulation of the mode-$k$ scatters, having time complexity $O(MD\sum_{n} d_n + M I_k^2 \prod_{n \neq k} d_n)$.

- Calculation of $(\mathbf{S}_i^{(k)})^{-1}\mathbf{S}_o^{(k)}$, which requires the matrix inversion of $\mathbf{S}_i^{(k)}$ and the matrix multiplication between $(\mathbf{S}_i^{(k)})^{-1}$ and $\mathbf{S}_o^{(k)}$, having time complexity $O(I_k^3)$.

- Eigenvalue decomposition of $(\mathbf{S}_i^{(k)})^{-1}\mathbf{S}_o^{(k)}$, having time complexity $O(I_k^3)$.

Hence, with $T$ the maximum number of iterations, the overall computational cost of MCSDA is at most
$O\Big(T \sum_{k=1}^{N} \big(MD\textstyle\sum_n d_n + M I_k^2 \prod_{n \neq k} d_n + I_k^3\big)\Big) \qquad (20)$
Due to the iterative nature of MCSDA, it is not straightforward to compare the time complexity of MCSDA with that of CSDA. Our experiments showed that with a small maximum number of iterations, MCSDA already achieves good performance. In addition, for frequently encountered data, the number of tensor modes typically ranges from 2 to 4: grayscale images, multichannel EEG data and financial time series have $N = 2$, RGB images have $N = 3$, and video data has $N = 3$ or $N = 4$. Comparing the scatter-computation terms of (19) and (20) and noting that the dimensions of the projected space are usually much smaller than those of the input ($d_n \ll I_n$), the corresponding cost of MCSDA is much smaller than $O(MD^2)$. Comparing the cubic terms of (19) and (20), it is also clear that $\sum_{k} I_k^3 \ll D^3$.
To conclude, the solution of the vector model is more costly in terms of computation as compared to the tensor model. Moreover, the vector approach, with its $O(D^3)$ term, becomes impractical when $D$ scales to the order of thousands or more, which is usually the case. In contrast, the tensor approach, whose cubic terms involve only the individual mode dimensions $I_k$, scales well to high-dimensional input.
IV Experiments
In this section, we provide experiments conducted in order to evaluate the effectiveness of the proposed MCSDA and compare it with related discriminant analysis methods, namely vector-based Class-Specific Discriminant Analysis (CSDA) and Multilinear Discriminant Analysis (MDA) ([17]). It should be noted that the class-specific methods treat each class as a binary problem; we therefore conducted experiments in which one-vs-rest MDA classifiers are learned. We performed the benchmark on three publicly available datasets coming from two application domains: face verification and stock price prediction based on limit order book data. Detailed descriptions of the datasets and the corresponding experimental settings are provided in the following subsections.
Since all the competing methods are subspace methods, after learning the optimal projection matrices one can train any classifier on the data representations in the discriminant subspace to boost performance. For example, the distances between a training sample and each class mean can be used as the training data for an SVM classifier. In the test phase, a test sample is projected to the discriminant subspace, and the distances between the test sample and each class mean are used as a feature vector fed to the learnt SVM classifier, similar to ([13]). Since the goal of this paper is to directly compare the discrimination ability of MCSDA with that of CSDA and MDA, we do not train any other classifier in the discriminant space. In the test phase, the similarity score is calculated as the inverse of the distance between the test sample and the positive mean in the discriminant space. The similarity scores are used to evaluate the performance of each algorithm, based on the different metrics described next.
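For two-mode samples, the scoring step just described can be sketched as follows (the helper name and the small epsilon guarding against division by zero are our additions):

```python
import numpy as np

def similarity_score(x, m_p, U1, U2, eps=1e-12):
    """Inverse distance between a projected test sample and the projected
    positive mean, used as the verification score."""
    diff = U1 @ (x - m_p) @ U2.T   # project the deviation to the subspace
    return 1.0 / (np.linalg.norm(diff) + eps)

rng = np.random.default_rng(4)
m_p = rng.standard_normal((8, 6))              # positive mean tensor
U1 = rng.standard_normal((2, 8))               # learnt projections (stand-ins)
U2 = rng.standard_normal((2, 6))
close = similarity_score(m_p + 0.01 * np.ones((8, 6)), m_p, U1, U2)
far = similarity_score(m_p + 5.0 * np.ones((8, 6)), m_p, U1, U2)
```

A sample close to the positive mean receives a higher score than one far from it, so thresholding (or ranking) these scores yields the verification decision.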
IV-A Facial Image Datasets
Since tensors are a natural representation of image data, we employ two facial image datasets of different sizes, namely ORL and CMU PIE, to compare the performance of the tensor-based and vector-based methods. The ORL dataset ([23]) consists of 400 facial images depicting 40 persons (10 images each). The images were taken at different times under different conditions in terms of lighting, facial expressions (smiling/neutral) and facial details (open/closed eyes, with/without glasses). All of the images were captured in frontal position, with a tolerance for some rotation and tilting. The CMU PIE dataset ([24]) consists of 68 individuals with 41,368 facial images in total. The images were captured with different camera positions and flashes, under different pose, illumination and expression conditions. All images in 5 near-frontal positions of 8 individuals were used in our experiments. Moreover, all images used in our experiments are in grayscale format.
Using the above datasets, we formulate multiple face verification problems. That is, a class-specific model is learned for a person of interest, using either the class-specific or the multiclass (in this case binary) criterion. During the test phase, a test image is presented and the model decides whether the image depicts the person of interest or not ([9, 12, 14]). We measure the performance of each model by calculating the Average Precision (AP) metric. This process is applied multiple times (equal to the number of persons in each dataset), and the performance of each approach is obtained by calculating the mean Average Precision (mAP) over all sub-experiments. We applied multiple experiments based on five different train/test split sizes, where a given percentage of the data is randomly selected for training and the rest for testing. For each split size, 5 experiments were repeated and the average result is reported.
Regarding the preprocessing step, all facial images were cropped and resized to a common size. For the tensor-based approaches, we keep the projected dimensions of mode 1 and mode 2 equal, varied over a range of values. The maximum number of iterations $T$ and the terminating threshold $\epsilon$ were fixed for all experiments. To ensure stability, we regularized $\mathbf{S}_w^{(k)}$ in MDA, $\mathbf{S}_i^{(k)}$ in MCSDA and $\mathbf{S}_i$ in CSDA by adding a scaled version of the identity matrix. We also investigated the case where additional information is available, by generating HOG images ([25]) from the original images and concatenating each original image with its HOG image to form a third-order tensor. The results of this enriched version are denoted by CSDA-H, MCSDA-H and MDA-H.

IV-B Limit Order Book Dataset
In addition to image data, (multidimensional) time series, like limit order book (LOB) data, also have a natural representation as tensors of two modes. In our experiments, a recently introduced LOB dataset, called FI-2010 ([26]), was used. FI-2010 collected the order information of 5 Finnish stocks over 10 consecutive days, resulting in more than 4 million order events. For every 10 consecutive order events, a 144-dimensional feature vector is extracted, and a corresponding label is defined indicating the prospective change (increase, decrease or stationary) of the mid-price after the next 10 order events. For the vector models, each sample is a single such feature vector, representing information from the most recent order events. In order to take into account more information from the recent past, our tensor models exploit a two-mode tensor sample formed by stacking several of the most recent feature vectors.
We followed the standard day-based anchored cross-validation splits provided by the database, with 9 folds in total. For the tensor-based models, we varied the projected dimensions of the two modes over ranges of values. The values of the maximum number of iterations $T$ and the terminating threshold $\epsilon$ were the same as those used in the face verification experiments. Since FI-2010 is an unbalanced dataset, with the mid-price remaining stationary most of the time, we cross-validated based on the average F1 score per class, and also report the corresponding accuracy, average precision per class and average recall per class. Since our experimental protocol is the same as that used in ([27]) for the Bag-of-Words (BoF) and Neural Bag-of-Words (NBoF) models, we directly report their results. In addition, we report the baseline results from the database ([26]) using a Single Layer Feedforward Network (SLFN) and Ridge Regression (RR).
IV-C Results
The results on the two facial datasets are presented in Table I and Table II. Moreover, the last column of Tables I and II shows the relative computation time of each method, measured on the same machine and normalized with respect to the computation time of the proposed MCSDA. Comparing the vector model and the tensor model utilizing the class-specific criterion, it is clear that CSDA slightly outperforms the proposed MCSDA. However, as can be seen, the computation time of CSDA is higher by one to two orders of magnitude. The computational efficiency of the proposed MCSDA over CSDA becomes more significant as the dimensionality of the input scales up: while the number of elements in the input doubles in the enriched setting, the computation time of MCSDA-H scales favourably, whereas CSDA-H requires several times more computation than CSDA. This result justifies our analysis in the time complexity discussion above. Comparing the two tensor-based approaches, the proposed MCSDA outperforms MDA in most configurations, while their computation times are similar. Regarding the exploitation of enriched information, we can observe that all competing methods achieved some improvement. The benefit of the additional information is marginal when the training set is small, but clearly visible for the tensor-based methods when a large portion of the data is used for training. In contrast, the benefit of the additional information for the vector-based model is very small.
The results for stock prediction using LOB data are presented in Table III. While the performance of MCSDA was not better than that of its vector counterpart in the above face verification experiments, MCSDA outperforms all competing methods in the stock prediction problem, including the more complex neural network-based bag-of-words model NBoF ([27]). The difference in the relative performance between the vector-based and tensor-based variants in the two application domains can be explained by looking into the optimal dimensionality of the subspaces obtained for CSDA and MCSDA. In the two face verification problems, the optimal subspace dimensionality obtained for CSDA corresponds to a much larger number of parameters than that obtained for MCSDA, on both ORL and CMU PIE. In facial images, several visual cues are usually necessary to perform the verification. Since the vector approach estimates many more parameters, more visual cues can be captured, which leads to better performance compared to MCSDA. However, this comes at a much higher computational cost.
In the stock prediction problem, the difference between the numbers of parameters estimated by MCSDA and CSDA is small: averaged over the 9 folds, the two models estimate numbers of parameters of the same order. Since the multilinear class-specific projection (MCSDA) can also perform the projection along the temporal mode, MCSDA can potentially capture important temporal cues required to predict future movements in stock price. The experiment in the stock price prediction problem shows the potential of multilinear techniques in general, and MCSDA in particular, in exploiting the multilinear structure of time-series data.
Table I: Verification results (mAP) and relative computation time on the ORL dataset for CSDA, MDA, MCSDA and their enriched variants CSDA-H, MDA-H and MCSDA-H.
Table II: Verification results (mAP) and relative computation time on the CMU PIE dataset for CSDA, MDA, MCSDA and their enriched variants CSDA-H, MDA-H and MCSDA-H.
Table III: Accuracy, average precision, average recall and average F1 on the FI-2010 dataset for RR, SLFN, CSDA, MDA, MCSDA, BoF and NBoF.
V Conclusions
In this paper, we proposed a tensor subspace learning method that utilizes the intrinsic tensor representation of the data together with the class-specific discrimination criterion. We provided a theoretical discussion of the time complexity of the proposed method, compared with its vector counterpart. Experimental results show that MCSDA is computationally efficient and scalable, with performance close to its vector counterpart in face verification problems, while outperforming the other competing methods in a stock price prediction problem based on limit order book data.
References
 [1] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1–3, pp. 37–52, 1987.
 [2] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed. New York: Wiley, 2001.
 [3] A. Iosifidis, A. Tefas, and I. Pitas, “Activity-based person identification using fuzzy representation and discriminant learning,” IEEE Transactions on Information Forensics and Security, vol. 7, no. 2, pp. 530–542, 2012.
 [4] A. Iosifidis, A. Tefas, N. Nikolaidis, and I. Pitas, “Multi-view human movement recognition based on fuzzy distances and linear discriminant analysis,” Computer Vision and Image Understanding, vol. 116, no. 3, pp. 347–360, 2012.

 [5] C.-X. Ren, Z. Lei, D.-Q. Dai, and S. Z. Li, “Enhanced local gradient order features and discriminant analysis for face recognition,” IEEE Transactions on Cybernetics, vol. 46, no. 11, pp. 2656–2669, 2016.
 [6] M. Zhu and A. M. Martinez, “Subclass discriminant analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1274–1286, 2006.
 [7] A. Iosifidis, A. Tefas, and I. Pitas, “On the optimal class representation in linear discriminant analysis,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 9, pp. 1491–1497, 2013.
 [8] A. Iosifidis, A. Tefas, and I. Pitas, “Kernel reference discriminant analysis,” Pattern Recognition Letters, vol. 49, pp. 85–91, 2014.
 [9] G. Goudelis, S. Zafeiriou, A. Tefas, and I. Pitas, “Class-specific kernel-discriminant analysis for face verification,” IEEE Transactions on Information Forensics and Security, vol. 2, no. 3, pp. 570–587, 2007.
 [10] S. Zafeiriou, G. Tzimiropoulos, M. Petrou, and T. Stathaki, “Regularized kernel discriminant analysis with a robust kernel for face recognition and verification,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 3, pp. 526–534, 2012.
 [11] S. R. Arashloo and J. Kittler, “Class-specific kernel fusion of multiple descriptors for face verification using multiscale binarised statistical image features,” IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2100–2109, 2014.
 [12] A. Iosifidis and M. Gabbouj, “Class-specific kernel discriminant analysis revisited: Further analysis and extensions,” IEEE Transactions on Cybernetics, 2016.
 [13] A. Iosifidis, A. Tefas, and I. Pitas, “Class-specific reference discriminant analysis with application in human behavior analysis,” IEEE Transactions on Human-Machine Systems, vol. 45, no. 3, pp. 315–326, 2015.
 [14] A. Iosifidis and M. Gabbouj, “Scaling up class-specific kernel discriminant analysis for large-scale face verification,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 11, pp. 2453–2465, 2016.
 [15] L. Chen, M. Liao, M. Ko, J. Lin, and G. Yu, “A new LDA-based face recognition system which can solve the small sample size problem,” Pattern Recognition, vol. 33, no. 10, pp. 1713–1726, 2000.
 [16] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “MPCA: Multilinear principal component analysis of tensor objects,” IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
 [17] Q. Li and D. Schonfeld, “Multilinear discriminant analysis for higher-order tensor data classification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 12, pp. 2524–2537, 2014.
 [18] D. Tao, X. Li, X. Wu, and S. J. Maybank, “General tensor discriminant analysis and gabor features for gait recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, 2007.
 [19] S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H.J. Zhang, “Discriminant analysis with tensor representation,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1, pp. 526–532, IEEE, 2005.
 [20] M. Welling, “Fisher linear discriminant analysis,” Department of Computer Science, University of Toronto, vol. 3, no. 1, 2005.
 [21] H. Kong, E. K. Teoh, J. G. Wang, and R. Venkateswarlu, “Two-dimensional Fisher discriminant analysis: forget about small sample size problem [face recognition applications],” in Acoustics, Speech, and Signal Processing, 2005. Proceedings (ICASSP ’05). IEEE International Conference on, vol. 2, pp. ii–761, IEEE, 2005.
 [22] J. Ye, R. Janardan, and Q. Li, “Two-dimensional linear discriminant analysis,” in Advances in Neural Information Processing Systems, pp. 1569–1576, 2005.
 [23] F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Applications of Computer Vision, 1994., Proceedings of the Second IEEE Workshop on, pp. 138–142, IEEE, 1994.
 [24] T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and expression (PIE) database,” in Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, pp. 53–58, IEEE, 2002.
 [25] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1, pp. 886–893, IEEE, 2005.
 [26] A. Ntakaris, M. Magris, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Benchmark dataset for midprice prediction of limit order book data,” arXiv preprint arXiv:1705.03233, 2017.
 [27] N. Passalis, A. Tsantekidis, A. Tefas, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Timeseries classification using neural bagoffeatures,” in European Signal Processing Conference (EUSIPCO), Kos, Greece, 2017.