I Introduction
High-Frequency Trading (HFT) is a form of automated trading that exploits rapid, subtle changes in the markets to buy or sell assets. The main characteristics of HFT are high speed and a short-term investment horizon. Unlike long-term investors, high-frequency traders profit from low-margin price changes with large volume within a relatively short time. This requires the ability to observe the dynamics of the market in order to predict prospective changes and act accordingly. In quantitative analysis, mathematical models have been employed to simulate certain aspects of the financial market in order to predict asset prices, stock trends, etc. The performance of traditional mathematical models relies heavily on handcrafted features. With recent advances in computational power, more and more machine learning models have been introduced to predict financial market behaviors. Popular machine learning methods in HFT include regression analysis
[1, 2, 3, 4, 5], multilayer feed-forward networks [6], [7], [8], convolutional neural networks [9], and recurrent neural networks [10], [11], [12].

With the large volume of data and the erratic behavior of the market, neural-network-based solutions have been widely adopted to learn both a suitable representation of the data and the corresponding classifiers. This resolves the limitations of handcrafted models. All kinds of deep architectures have been proposed, ranging from traditional multilayer feed-forward models
[6], [7], [8] to Convolutional Neural Networks (CNN) [9] and Recurrent Neural Networks (RNN) [10], [11], [12], [13], [14], [15]. For example, in [9] a CNN with both 2D and 1D convolution masks was trained to predict stock price movements. On a similar benchmark HFT dataset, an RNN with Long Short-Term Memory units (LSTM)
[12] or a Neural Bag-of-Features (N-BoF) network [16], generalizing the (discriminant) Bag-of-Features model (BoF) [17], was proposed to perform the same prediction task.

Tensor representations offer a natural way to represent time-series data, with time corresponding to one of the tensor modes. It is therefore intuitive to investigate machine learning models that utilize tensor representations. In traditional vector-based models, features are extracted from the time-series representation and form an input vector to the model. The preprocessing step that converts a tensor representation to a vector representation may lead to the loss of temporal information; that is, the learned classifiers may fail to capture the interactions between spatiotemporal features due to vectorization. Because many neural-network-based solutions, such as CNNs or RNNs, learn directly from data in tensor form, this could explain why many neural network implementations outperform traditional vector-based models with handcrafted features. With advances in mathematical tools and algorithms dealing with tensor input, many multilinear discriminant techniques, as well as tensor regression models, have been proposed for image and video classification problems [18], [19], [20], [21], [22], [23], [24], [25], [26], [27]. However, there are few works investigating the performance of tensor-based multilinear methods on financial problems [28]. In contrast to the neural network methodology, which requires heavy tuning of network topology and parameters, the beauty of tensor-based multilinear techniques is that the objective function is straightforward to interpret and very few parameters are required to tune the model. In this work, we propose to use two multilinear techniques based on the tensor representation of time-series financial data to predict the mid-price movement from information obtained from Limit Order Book (LOB) data. Specifically, the contributions of this paper are as follows:

We investigate the effectiveness of tensor-based discriminant techniques, particularly Multilinear Discriminant Analysis (MDA), in a large-scale prediction problem of mid-price movement with high-frequency limit order book data.

We propose a simple regression classifier that operates on the tensor representation, utilizing both current and past information of the stock limit order book to boost the performance of the vector-based regression technique. Based on observations of the learning dynamics of the proposed algorithm, an efficient scheme to select the model's best state is also discussed.
The rest of the paper is organized as follows. Section 2 reviews the midprice movement prediction problem given the information collected from LOB as well as related methods that were proposed to tackle this problem. In Section 3, MDA and our proposed tensor regression scheme are presented. Section 4 shows the experimental analysis of the proposed methods compared with existing results on a largescale dataset. Finally, conclusions are drawn in Section 5.
II High-Frequency Limit Order Data
In finance, a limit order placed with a bank or a brokerage is a type of trade order to buy or sell a set amount of assets at a specified price. There are two types of limit orders: buy limit orders and sell limit orders. In a sell limit order (ask), a minimum sell price and the corresponding volume of assets are specified. For example, a sell limit order with a specified minimum price per share indicates that the investor wishes to sell the shares at that price or higher only. Similarly, in a buy limit order (bid), a maximum buy price and its respective volume must be specified. The two types of limit orders consequently form the two sides of the LOB, the bid side and the ask side. The LOB aggregates and sorts the orders from both sides based on the given price. The best bid price at a given time instance is defined as the highest available price that a buyer is willing to pay per share. The best ask price is, in turn, the lowest available price at a given time instance at which a seller is willing to sell per share. The LOB is sorted so that the best bid and ask prices are at the top of the book. Trading happens through a matching mechanism based on several conditions: when the best bid price meets or exceeds the best ask price, a trade takes place between the two investors. In addition to executions, orders can also disappear from the order book through cancellations.
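As a toy illustration of the matching mechanism described above (a hypothetical sketch for intuition only, not part of the FI-2010 data or the paper's methods; real matching engines also handle time priority and many order types):

```python
import heapq

class ToyLOB:
    """Toy price-level limit order book: price levels only, no time priority."""
    def __init__(self):
        self._bids = []  # max-heap via negated prices: (-price, volume)
        self._asks = []  # min-heap: (price, volume)

    def add_bid(self, price, volume):
        heapq.heappush(self._bids, (-price, volume))
        self._match()

    def add_ask(self, price, volume):
        heapq.heappush(self._asks, (price, volume))
        self._match()

    def best_bid(self):
        return -self._bids[0][0] if self._bids else None

    def best_ask(self):
        return self._asks[0][0] if self._asks else None

    def _match(self):
        # a trade occurs whenever the best bid meets or exceeds the best ask
        while self._bids and self._asks and -self._bids[0][0] >= self._asks[0][0]:
            bp, bv = heapq.heappop(self._bids)
            ap, av = heapq.heappop(self._asks)
            traded = min(bv, av)
            if bv > traded:  # push back the unfilled remainder
                heapq.heappush(self._bids, (bp, bv - traded))
            if av > traded:
                heapq.heappush(self._asks, (ap, av - traded))
```

A bid placed above the best ask immediately crosses the spread and executes against it; otherwise both orders rest in the book.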
Given the availability of LOB data, several problems can be formulated, such as price trend prediction, order flow distribution estimation, or detection of anomalous events that cause turbulence in the price change. One popular task is to predict the mid-price movement, i.e. to classify whether the mid-price increases, decreases or remains stable based on a set of measurements. The mid-price is defined as the mean of the best bid price and the best ask price at a given time, i.e.
p_t = \frac{p_a(t) + p_b(t)}{2},    (1)

where p_a(t) and p_b(t) denote the best ask and best bid prices at time t. The mid-price gives a good estimate of the price trend.
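A minimal sketch of the mid-price computation and of one possible movement-labeling rule (the relative-change threshold band is an illustrative assumption, not the dataset's exact labeling procedure):

```python
def mid_price(best_bid, best_ask):
    # Eq. (1): the mid-price is the mean of the best bid and best ask prices
    return (best_bid + best_ask) / 2.0

def movement_label(p_now, p_future, threshold):
    """Label the mid-price movement: 1 = up, -1 = down, 0 = stationary.
    `threshold` (a relative-change band) is a hypothetical choice."""
    change = (p_future - p_now) / p_now
    if change > threshold:
        return 1
    if change < -threshold:
        return -1
    return 0
```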
The LOB dataset [29] used in this paper, referred to as FI-2010, was collected from different Finnish stocks (Kesko, Outokumpu, Sampo, Rautaruukki and Wartsila) in different industrial sectors. The collection period spans from the 1st of June to the 14th of June 2010, producing order data for the working days in that period. The provided data was extracted based on event inflow [30], aggregating to millions of events. Each event contains information from the top orders on each side of the LOB. Since each order consists of a price (bid or ask) and a corresponding volume, each order event is represented by a fixed-size vector. In [29], a feature vector was extracted for every block of consecutive events, leading to a large set of feature vector samples. For each feature vector, FI-2010 includes an associated label indicating the movement of the mid-price (increasing, decreasing, stationary) over the subsequent order events. In order to avoid the effects of the different scales of each dimension, the data was standardized using z-score normalization:
x' = \frac{x - \bar{x}}{\sigma},    (2)

where \bar{x} and \sigma denote the mean and standard deviation of each feature dimension.
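The per-dimension standardization in (2) can be sketched as follows (the small epsilon guarding against zero variance is our addition):

```python
import numpy as np

def zscore_normalize(X, eps=1e-12):
    """Z-score normalization applied per feature dimension.
    X: array of shape (n_samples, n_features)."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / (std + eps)  # eps avoids division by zero
```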
Given the large scale of FI-2010, many neural network solutions have been proposed to predict the prospective movement of the mid-price. In [9], a CNN that operates on the raw data was proposed. The network's input layer contains the vector representations of a window of consecutive events, and its hidden layers contain both 2D and 1D convolution layers as well as max pooling layers. In [12], an RNN architecture with LSTM units that operates on a similar raw data representation was proposed, with separate normalization schemes for order prices and volumes. Besides conventional deep architectures, an N-BoF classifier [16] was proposed for the mid-price prediction problem. The N-BoF network in [16] was trained on the concatenated feature vectors of the most recent order events and predicted the movement over the subsequent order events.

It should be noted that all of the above-mentioned neural network solutions utilize not only information from the current order events but also information from the recent past. We believe that the information from recent order events plays a significant role in modeling the dynamics of the mid-price. The next section presents the MDA classifier and our proposed regression model, both of which take into account the contribution of past order information.
III Tensor-based Multilinear Methods for Financial Data
Before introducing the classifiers used to tackle the mid-price prediction problem, we first present the notation and concepts used in multilinear algebra.
III-A Multilinear Algebra Concepts
In this paper, we denote scalar values by lowercase or uppercase characters (x, X), vectors by lowercase boldface characters (\mathbf{x}), matrices by uppercase boldface characters (\mathbf{X}) and tensors by calligraphic capitals (\mathcal{X}). A tensor with K modes and dimension I_k in mode-k is represented as \mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_K}. The entry at the i_k-th index in mode-k, for k = 1, \dots, K, is denoted as \mathcal{X}_{i_1, \dots, i_K}.
Definition 1 (Mode-k Fiber and Mode-k Unfolding)
The mode-k fiber of a tensor \mathcal{X} is the vector of dimension I_k obtained by fixing every index but the k-th one. The mode-k unfolding of \mathcal{X}, also known as mode-k matricization, transforms the tensor into the matrix \mathbf{X}_{(k)}, which is formed by arranging the mode-k fibers as columns. The shape of \mathbf{X}_{(k)} is I_k \times \bar{I}_k with \bar{I}_k = \prod_{j \neq k} I_j.
Definition 2 (Mode-k Product)
The mode-k product between a tensor \mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_K} and a matrix \mathbf{W} \in \mathbb{R}^{J_k \times I_k} is another tensor of size I_1 \times \dots \times J_k \times \dots \times I_K, denoted by \mathcal{X} \times_k \mathbf{W}. Its elements are defined as (\mathcal{X} \times_k \mathbf{W})_{i_1, \dots, j_k, \dots, i_K} = \sum_{i_k=1}^{I_k} \mathcal{X}_{i_1, \dots, i_k, \dots, i_K} \mathbf{W}_{j_k, i_k}.
With the definitions of the mode-k product and the mode-k unfolding, the following identity holds:

\mathcal{Y} = \mathcal{X} \times_k \mathbf{W} \iff \mathbf{Y}_{(k)} = \mathbf{W} \mathbf{X}_{(k)}.    (3)
For convenience, we denote \mathcal{X} \times_1 \mathbf{W}_1 \times_2 \mathbf{W}_2 \dots \times_K \mathbf{W}_K by \mathcal{X} \prod_{k=1}^{K} \times_k \mathbf{W}_k.
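The mode-k unfolding and mode-k product above can be sketched in NumPy; the ordering of the remaining axes in the unfolding is an implementation choice, and the product is computed via the identity (3):

```python
import numpy as np

def unfold(X, k):
    # mode-k unfolding: the mode-k fibers become the columns of a matrix
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def mode_product(X, W, k):
    # mode-k product X x_k W, computed via identity (3): Y_(k) = W X_(k)
    Yk = W @ unfold(X, k)
    new_shape = [W.shape[0]] + [s for i, s in enumerate(X.shape) if i != k]
    return np.moveaxis(Yk.reshape(new_shape), 0, k)
```

For a 3-mode tensor, the mode-1 product agrees with the elementwise definition, e.g. as computed by `np.einsum('jb,abc->ajc', W, X)`.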
III-B Multilinear Discriminant Analysis
MDA is an extended version of Linear Discriminant Analysis (LDA) that utilizes the Fisher criterion [31] as the optimality criterion of the learned subspace. Instead of seeking an optimal vector subspace, MDA learns a tensor subspace in which data from different classes are separated by maximizing the inter-class distances and minimizing the intra-class distances. The objective function thus maximizes the ratio between inter-class and intra-class distances in the projected space. Formally, let us denote the set of N tensor samples as \{\mathcal{X}_1, \dots, \mathcal{X}_N\}, each with an associated class label c_i \in \{1, \dots, C\}. In addition, \mathcal{X}_{c,i} denotes the i-th sample from class c and N_c denotes the number of samples in class c. The mean tensor of class c is calculated as \bar{\mathcal{X}}_c = \frac{1}{N_c} \sum_{i=1}^{N_c} \mathcal{X}_{c,i} and the total mean tensor is \bar{\mathcal{X}} = \frac{1}{N} \sum_{i=1}^{N} \mathcal{X}_i.
MDA seeks a set of projection matrices \mathbf{W}_k \in \mathbb{R}^{I_k \times D_k}, k = 1, \dots, K, that map \mathcal{X}_i \in \mathbb{R}^{I_1 \times \dots \times I_K} to \mathcal{Y}_i \in \mathbb{R}^{D_1 \times \dots \times D_K}, with the subspace projection defined as

\mathcal{Y}_i = \mathcal{X}_i \prod_{k=1}^{K} \times_k \mathbf{W}_k^T.    (4)
The set of optimal projection matrices is obtained by maximizing the ratio between inter-class and intra-class distances, measured in the tensor subspace. Particularly, MDA maximizes the following criterion:

J(\mathbf{W}_1, \dots, \mathbf{W}_K) = \frac{D_b}{D_w},    (5)
where

D_b = \sum_{c=1}^{C} N_c \left\| (\bar{\mathcal{X}}_c - \bar{\mathcal{X}}) \prod_{k=1}^{K} \times_k \mathbf{W}_k^T \right\|_F^2    (6)

and

D_w = \sum_{c=1}^{C} \sum_{i=1}^{N_c} \left\| (\mathcal{X}_{c,i} - \bar{\mathcal{X}}_c) \prod_{k=1}^{K} \times_k \mathbf{W}_k^T \right\|_F^2    (7)
are the inter-class and intra-class distances, respectively. The subscript F in (6) and (7) denotes the Frobenius norm. D_b measures the total squared distance between each class mean and the global mean after the projection, while D_w measures the total squared distance between each sample and its respective class mean tensor. By maximizing (5), we seek a tensor subspace in which the dispersion of data within each class is minimal while the dispersion between the classes is maximal. Classification can then be performed by simply selecting the class whose mean has minimum distance to the test sample in the discriminant subspace. Since the projection in (4) introduces dependencies between the modes, the matrices \mathbf{W}_k cannot be optimized independently. An iterative approach is usually employed to solve the optimization in (5) [27], [26], [32]. In this work, we propose to use the CMDA algorithm [32], which assumes orthogonality constraints on each projection matrix and solves (5) by iteratively solving a trace ratio problem for each mode. Specifically, D_b and D_w can be calculated by unfolding the tensors in mode-k as follows:
D_b = \mathrm{tr}\left( \sum_{c=1}^{C} N_c \, \bar{\mathbf{Y}}_{c,(k)} \bar{\mathbf{Y}}_{c,(k)}^T \right)    (8)

and

D_w = \mathrm{tr}\left( \sum_{c=1}^{C} \sum_{i=1}^{N_c} \mathbf{Y}_{c,i,(k)} \mathbf{Y}_{c,i,(k)}^T \right),    (9)

with \bar{\mathbf{Y}}_{c,(k)} and \mathbf{Y}_{c,i,(k)} the mode-k unfoldings of (\bar{\mathcal{X}}_c - \bar{\mathcal{X}}) \prod_{k=1}^{K} \times_k \mathbf{W}_k^T and (\mathcal{X}_{c,i} - \bar{\mathcal{X}}_c) \prod_{k=1}^{K} \times_k \mathbf{W}_k^T, respectively,
where \mathrm{tr}(\cdot) in (8) and (9) denotes the trace operator. By utilizing the identity in (3), D_b and D_w are further expressed as
D_b = \mathrm{tr}\left( \mathbf{W}_k^T \mathbf{S}_b^{(k)} \mathbf{W}_k \right)    (10)

and

D_w = \mathrm{tr}\left( \mathbf{W}_k^T \mathbf{S}_w^{(k)} \mathbf{W}_k \right),    (11)
where \mathbf{S}_b^{(k)} and \mathbf{S}_w^{(k)} in (10) and (11) denote the inter-class and intra-class scatter matrices in mode-k. The criterion in (5) can then be converted into a trace ratio problem with respect to \mathbf{W}_k, while keeping the other projection matrices fixed:
\mathbf{W}_k^* = \arg\max_{\mathbf{W}_k} \frac{\mathrm{tr}\left( \mathbf{W}_k^T \mathbf{S}_b^{(k)} \mathbf{W}_k \right)}{\mathrm{tr}\left( \mathbf{W}_k^T \mathbf{S}_w^{(k)} \mathbf{W}_k \right)}.    (12)
With the orthogonality constraint on \mathbf{W}_k, the solution of (12) is given by the eigenvectors corresponding to the D_k largest eigenvalues of (\mathbf{S}_w^{(k)})^{-1} \mathbf{S}_b^{(k)}. Usually, a small positive constant is added to the diagonal of \mathbf{S}_w^{(k)} as regularization, which also enables stable computation in case \mathbf{S}_w^{(k)} is not a full-rank matrix. In the training phase, after randomly initializing the projection matrices, the CMDA algorithm iteratively goes through each mode k, optimizing the Fisher ratio with respect to \mathbf{W}_k while keeping the other projection matrices fixed. The algorithm terminates when the change in the projection matrices falls below a threshold or the specified maximum number of iterations is reached. In the test phase, the test sample is assigned to the class whose mean is closest to the sample in the tensor subspace.

III-C Weighted Multichannel Time-series Regression
For the FI-2010 dataset, in order to take past information into account, one could concatenate the feature vectors corresponding to the most recent order events to form a 2-mode tensor sample, i.e. a matrix \mathbf{X}_i \in \mathbb{R}^{D \times T}. The T columns represent the information at T time-instances, with the T-th column containing the latest order information, while each of the D rows encodes the temporal evolution of one feature (or channel) through time. Generally, given 2-mode tensors \mathbf{X}_i, i = 1, \dots, N, belonging to C classes indicated by the class labels c_i, the proposed Weighted Multichannel Time-series Regression (WMTR) learns the following mapping function:
f(\mathbf{X}_i) = \mathbf{W}_1^T \mathbf{X}_i \mathbf{w}_2,    (13)
where \mathbf{W}_1 \in \mathbb{R}^{D \times C} and \mathbf{w}_2 \in \mathbb{R}^{T} are the learnable parameters. The function in (13) maps each input tensor to a C-dimensional (target) vector. One way to interpret (13) is that \mathbf{W}_1 maps the D-dimensional representation of each time-instance to a C-dimensional (sub)space, while \mathbf{w}_2 combines the contributions of the time-instances into a single vector using a weighted-average approach. In order to deal with unbalanced datasets such as FI-2010, the parameters \mathbf{W}_1, \mathbf{w}_2 of the WMTR model are determined by minimizing the following weighted least squares criterion:
\min_{\mathbf{W}_1, \mathbf{w}_2} \; \sum_{i=1}^{N} \lambda_i \left\| \mathbf{W}_1^T \mathbf{X}_i \mathbf{w}_2 - \mathbf{y}_i \right\|_2^2 + \gamma_1 \left\| \mathbf{W}_1 \right\|_F^2 + \gamma_2 \left\| \mathbf{w}_2 \right\|_2^2,    (14)
where \mathbf{y}_i \in \mathbb{R}^C is the target vector of the i-th sample, in which only the c_i-th element is set to the positive target value, and \gamma_1 and \gamma_2 are predefined regularization parameters associated with \mathbf{W}_1 and \mathbf{w}_2. We set the predefined weight \lambda_i inversely proportional to the number of training samples belonging to the class of sample i, so that errors in smaller classes contribute more to the loss. The strength of this weighting is controlled by a parameter: the smaller its value, the larger the contribution of the minority classes to the loss. The unweighted least squares criterion is a special case of (14) in which all weights are equal, i.e. \lambda_i = 1.
We solve (14) by applying an iterative optimization process that alternately keeps one parameter fixed while optimizing the other. Specifically, by fixing \mathbf{w}_2 we obtain the following minimization problem:

\min_{\mathbf{W}_1} \; \mathrm{tr}\left( (\mathbf{Z} \mathbf{W}_1 - \mathbf{Y})^T \boldsymbol{\Lambda} (\mathbf{Z} \mathbf{W}_1 - \mathbf{Y}) \right) + \gamma_1 \left\| \mathbf{W}_1 \right\|_F^2,    (15)
where \mathbf{Z} = [\mathbf{X}_1 \mathbf{w}_2, \dots, \mathbf{X}_N \mathbf{w}_2]^T, \mathbf{Y} = [\mathbf{y}_1, \dots, \mathbf{y}_N]^T, and \boldsymbol{\Lambda} is a diagonal matrix with \boldsymbol{\Lambda}_{ii} = \lambda_i. By solving for the stationary point, we obtain the solution of (15) as

\mathbf{W}_1 = \left( \mathbf{Z}^T \boldsymbol{\Lambda} \mathbf{Z} + \gamma_1 \mathbf{I} \right)^{-1} \mathbf{Z}^T \boldsymbol{\Lambda} \mathbf{Y},    (16)

where \mathbf{I} is the identity matrix of the appropriate size.
Similarly, by fixing \mathbf{W}_1, we obtain the following regression problem with respect to \mathbf{w}_2:

\min_{\mathbf{w}_2} \; (\mathbf{V} \mathbf{w}_2 - \tilde{\mathbf{y}})^T \tilde{\boldsymbol{\Lambda}} (\mathbf{V} \mathbf{w}_2 - \tilde{\mathbf{y}}) + \gamma_2 \left\| \mathbf{w}_2 \right\|_2^2,    (17)

where \mathbf{V} \in \mathbb{R}^{NC \times T} stacks the matrices \mathbf{W}_1^T \mathbf{X}_i vertically, \tilde{\mathbf{y}} \in \mathbb{R}^{NC} stacks the targets \mathbf{y}_i, and \tilde{\boldsymbol{\Lambda}} is a diagonal matrix repeating each weight \lambda_i C times. Similarly to \mathbf{W}_1, the optimal \mathbf{w}_2 is obtained by solving for the stationary point of (17), which is given as

\mathbf{w}_2 = \left( \mathbf{V}^T \tilde{\boldsymbol{\Lambda}} \mathbf{V} + \gamma_2 \mathbf{I} \right)^{-1} \mathbf{V}^T \tilde{\boldsymbol{\Lambda}} \tilde{\mathbf{y}}.    (18)
The above process alternates between two convex problems, each of which is solved to global optimality. Thus, the overall process is guaranteed to reach a local optimum of the combined regression criterion. The algorithm terminates when the changes in \mathbf{W}_1 and \mathbf{w}_2 fall below a threshold, or when the maximum number of iterations is reached. In the test phase, the function in (13) maps a test sample to the feature space, and the class label is inferred from the index of the maximum element of the projected test sample.
Usually, multilinear methods (including multilinear regression ones) are randomly initialized. In our case, one would randomly initialize the parameters in \mathbf{w}_2 in order to define the optimal regression matrix \mathbf{W}_1 on the first iteration. However, since in WMTR applied to LOB data the values of \mathbf{w}_2 encode the contribution of each time-instance to the overall regression, we chose to initialize it as \mathbf{w}_2 = [0, \dots, 0, 1]^T. That is, the first iteration of WMTR corresponds to vector-based regression using only the representation of the current time-instance. After obtaining this mapping, the optimal weighted average of all time-instances is determined by solving for \mathbf{w}_2.
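The alternating updates and the initialization described above can be sketched as follows. This is our illustrative reimplementation, not the authors' code; the shapes (samples as D x T matrices), the +/-1 one-hot-style targets and the inverse-class-frequency weights are assumptions consistent with the description in the text:

```python
import numpy as np

def wmtr_fit(Xs, y, n_classes, gamma1=1e-3, gamma2=1e-3, n_iter=20, tol=1e-6):
    """Sketch of WMTR's alternating weighted ridge updates.
    Xs: array (N, D, T) of 2-mode samples; y: integer class labels (N,)."""
    N, D, T = Xs.shape
    C = n_classes
    # sample weights inversely proportional to class size
    counts = np.bincount(y, minlength=C)
    lam = 1.0 / counts[y]
    # one-hot-style targets: +1 for the true class, -1 elsewhere (assumed)
    Y = -np.ones((N, C))
    Y[np.arange(N), y] = 1.0
    # initialize w2 so the first step is vector-based regression
    # on the most recent time-instance only
    w2 = np.zeros(T)
    w2[-1] = 1.0
    W1 = np.zeros((D, C))
    for _ in range(n_iter):
        W1_old, w2_old = W1.copy(), w2.copy()
        # update W1 with w2 fixed: weighted ridge regression
        Z = Xs @ w2                                   # (N, D)
        A = Z.T @ (lam[:, None] * Z) + gamma1 * np.eye(D)
        W1 = np.linalg.solve(A, Z.T @ (lam[:, None] * Y))
        # update w2 with W1 fixed: weighted ridge regression
        V = np.einsum('dc,ndt->nct', W1, Xs)          # (N, C, T)
        Vf = V.reshape(N * C, T)
        lamf = np.repeat(lam, C)
        yf = Y.reshape(N * C)
        B = Vf.T @ (lamf[:, None] * Vf) + gamma2 * np.eye(T)
        w2 = np.linalg.solve(B, Vf.T @ (lamf * yf))
        if np.linalg.norm(W1 - W1_old) + np.linalg.norm(w2 - w2_old) < tol:
            break
    return W1, w2

def wmtr_predict(Xs, W1, w2):
    # predicted class = index of the maximum element of the projected sample
    scores = np.einsum('dc,ndt,t->nc', W1, Xs, w2)
    return scores.argmax(axis=1)
```

On synthetic data whose class signal sits in the last time-instance, the first iteration already recovers most of the structure, and subsequent iterations refine the temporal weights.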
IV Experiments
IV-A Experiment Setting
We conducted extensive experiments on the FI-2010 dataset to compare the performance of the multilinear methods, i.e. MDA and the proposed WMTR, with that of other existing methods, including LDA, Ridge Regression (RR), the Single-hidden-Layer Feedforward Network (SLFN), BoF and N-BoF. In addition, we compared WMTR with its unweighted version, denoted MTR, to illustrate the effect of weighting on the learned function. Regarding the train/test evaluation protocol, we followed the anchored forward cross-validation splits provided with the database [29]. Specifically, there are 9 cross-validation folds defined on a day basis: the training set increases by one day for each consecutive fold, and the day following the last training day is used for testing. That is, in the first fold, data from the first day is used for training and data from the second day for testing; in the second fold, data from the first and second days is used for training and data from the third day for testing; in the last fold, data from the first 9 days is used for training and the 10th day for testing.

Regarding the input representation of the proposed multilinear techniques, MDA and WMTR both accept an input tensor whose columns contain the information of consecutive blocks of order events, with the last column containing the most recent information of the stock. For LDA, RR and SLFN, each input vector is the last column of the tensor input given to MDA and WMTR, i.e. the most current information only. The label of both the tensor input and the vector input is the movement of the mid-price over the subsequent order events, representing the future movement that we would like to predict. Since we followed the same experimental protocol as in [29] and [16], we directly report the results of RR, SLFN, BoF and N-BoF from those works.
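The anchored forward protocol described above can be sketched as follows; the day indices are illustrative placeholders for the dataset's actual day boundaries:

```python
def anchored_forward_folds(n_days):
    """Anchored forward cross-validation: fold k trains on days 1..k
    and tests on day k+1 (the training window is anchored at day 1)."""
    return [(list(range(1, k + 1)), k + 1) for k in range(1, n_days)]
```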
The parameter settings of each model are as follows. For WMTR, we fixed the maximum number of iterations and the terminating threshold, and the regularization parameters \gamma_1 and \gamma_2 were selected from a predefined grid. For MTR, all parameter settings were the same as for WMTR, except that all sample weights were equal. For MDA, the maximum number of iterations and the terminating threshold were set as for WMTR, while the projected dimensions of the first and second modes were varied over predefined ranges. In addition, a small regularization amount was added to the diagonal of the intra-class scatter matrix.
IV-B Performance Evaluation
Table 1: Average performance (with standard deviation) over all 9 folds.

Method  Accuracy  Precision  Recall  F1
RR
SLFN
LDA
MDA
MTR
WMTR
BoF
N-BoF

It should be noted that FI-2010 is a highly unbalanced dataset, with most samples having a stationary mid-price. Therefore we use the average F1 score per class [33] as the performance measure for selecting model parameters, since F1 expresses a trade-off between precision and recall. More specifically, for each cross-validation fold, the competing methods were trained with all combinations of the above-mentioned parameter settings on the training data. We selected the learned model that achieved the highest F1 score on the training set and report its performance on the test set. In addition to F1, the corresponding average precision per class, average recall per class and accuracy are also reported. Accuracy measures the percentage of predicted labels that match the ground truth. Precision is the ratio of true positives over the number of samples predicted as positive, and recall is the ratio of true positives over the total number of true positives and false negatives. F1 is the harmonic mean of precision and recall. For all measures, higher values indicate better performance.
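The macro-averaged metrics described above can be sketched as follows (treating zero denominators as zero scores, one common convention):

```python
def macro_scores(y_true, y_pred, n_classes):
    """Average per-class precision, recall and F1 (macro averaging)."""
    precisions, recalls, f1s = [], [], []
    for c in range(n_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = n_classes
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

def accuracy(y_true, y_pred):
    # fraction of predicted labels matching the ground truth
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```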
Table 1 shows the average performance, with standard deviation, over all 9 folds for the competing methods. Comparing the two discriminant methods, i.e. LDA and MDA, it is clear that MDA significantly outperforms LDA on all performance measures. This is due to the fact that MDA operates on the tensor input, which retains both current and past information as well as the temporal structure of the data. The improvement of tensor-based over vector-based approaches is also consistent in the case of regression (WMTR vs. RR). Comparing the multilinear techniques with N-BoF, MDA and WMTR perform much better than N-BoF in terms of F1, accuracy and precision, while the recall scores nearly match. WMTR outperforming MTR by a large margin suggests that weighting is important for highly unbalanced datasets such as FI-2010. Overall, MDA and WMTR are the leading methods in this mid-price prediction problem.

IV-C WMTR Analysis
Figure 1 shows the dynamics of the learning process of WMTR on the training data of the first fold. An interesting phenomenon can be observed during training: in the first iterations, all performance measures improve consistently; after some iterations, the F1 score drops a little and then remains stable, while accuracy continues to improve. This phenomenon can be observed in every parameter setting. Since WMTR minimizes the squared error between the target label and the predicted label, the constant improvement of training accuracy before convergence is expected. The drop in F1 score after some iterations can be explained as follows: in the first iterations, WMTR truly learns the generating process behind the training samples; at a certain point, however, WMTR starts to overfit the data and becomes biased towards the dominant class. The same phenomenon was observed for MTR, with a more significant drop in F1, since without weighting MTR overfits the dominant class severely. Figure 2 shows the training dynamics of MTR with a similar parameter setting, except for the class weights in the loss function. Due to this behavior, in order to select the best learned state of WMTR and MTR, we measured the F1 score on the training data at each iteration and selected the model state that produced the best F1. The question is whether the selected model also performs well on the test data. Figures 3 and 4 plot the accuracy and F1 of WMTR and MTR measured on the training set and the test set at each iteration. It is clear that the learned model that produced the best F1 during training also performed best on the test data. The margin between training and testing performance is relatively small for both WMTR and MTR, which shows that the proposed algorithm does not suffer from overfitting. Although the behaviors of WMTR and MTR are similar, the best model learned by MTR is biased towards the dominant class, resulting in the inferior performance shown in the experimental results.

V Conclusions
In this work, we have investigated the effectiveness of multilinear discriminant analysis for financial data prediction based on Limit Order Book data. In addition, we proposed a simple bilinear regression algorithm that utilizes both current and past information of a stock to boost the performance of traditional vector-based regression. Experimental results showed that the proposed methods outperform their counterparts exploiting vectorial representations, as well as existing solutions utilizing (possibly deep) neural network architectures.
VI Acknowledgement
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 675044 "BigDataFinance".
References
 [1] B. Zheng, E. Moulines, and F. Abergel, “Price jump prediction in limit order book,” 2012.
 [2] L. G. Alvim, C. N. dos Santos, and R. L. Milidiu, “Daily volume forecasting using high frequency predictors,” in Proceedings of the 10th IASTED International Conference, vol. 674, p. 248, 2010.

 [3] P.-F. Pai and C.-S. Lin, "A hybrid arima and support vector machines model in stock price forecasting," Omega, vol. 33, no. 6, pp. 497–505, 2005.
 [4] B. Detollenaere and C. D'hondt, "Identifying expensive trades by monitoring the limit order book," Journal of Forecasting, vol. 36, no. 3, pp. 273–290, 2017.
 [5] E. Panayi, G. W. Peters, J. Danielsson, and J.P. Zigrand, “Designating market maker behaviour in limit order book markets,” Econometrics and Statistics, 2016.
 [6] J. Levendovszky and F. Kia, "Prediction-based high frequency trading on financial time series," Periodica Polytechnica. Electrical Engineering and Computer Science, vol. 56, no. 1, p. 29, 2012.

 [7] J. Sirignano, "Deep learning for limit order books," 2016.
 [8] S. Galeshchuk, “Neural networks performance in exchange rate prediction,” Neurocomputing, vol. 172, pp. 446–452, 2016.
 [9] A. Tsantekidis, N. Passalis, A. Tefas, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Forecasting stock prices from the limit order book using convolutional neural networks,” in IEEE Conference on Business Informatics (CBI), Thessaloniki, Greece, 2017.
 [10] M. Dixon, “High frequency market making with machine learning,” 2016.
 [11] M. Rehman, G. M. Khan, and S. A. Mahmud, “Foreign currency exchange rates prediction using cgp and recurrent neural network,” IERI Procedia, vol. 10, pp. 239–244, 2014.
 [12] A. Tsantekidis, N. Passalis, A. Tefas, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Using deep learning to detect price change indications in financial markets,” in European Signal Processing Conference (EUSIPCO), Kos, Greece, 2017.
 [13] A. Sharang and C. Rao, “Using machine learning for medium frequency derivative portfolio trading,” arXiv preprint arXiv:1512.06228, 2015.
 [14] J. Hallgren and T. Koski, "Testing for causality in continuous time bayesian network models of high-frequency data," arXiv preprint arXiv:1601.06651, 2016.
 [15] J. Sandoval and G. Hernández, "Computational visual analysis of the order book dynamics for creating high-frequency foreign exchange trading strategies," Procedia Computer Science, vol. 51, pp. 1593–1602, 2015.
 [16] N. Passalis, A. Tsantekidis, A. Tefas, J. Kanniainen, M. Gabbouj, and A. Iosifidis, "Time-series classification using neural bag-of-features," in European Signal Processing Conference (EUSIPCO), Kos, Greece, 2017.
 [17] A. Iosifidis, A. Tefas, and I. Pitas, “Discriminant bag of words based representation for human action recognition,” Pattern Recognition Letters, vol. 49, pp. 185–192, 2014.
 [18] A. Shashua and A. Levin, "Linear image coding for regression and classification using the tensor-rank principle," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. I–I, IEEE, 2001.
 [19] J. Yang, D. Zhang, A. F. Frangi, and J.-y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.

 [20] K. Liu, Y.-Q. Cheng, and J.-Y. Yang, "Algebraic feature extraction for image recognition based on an optimal discriminant criterion," Pattern Recognition, vol. 26, no. 6, pp. 903–911, 1993.
 [21] H. Kong, E. K. Teoh, J. G. Wang, and R. Venkateswarlu, "Two-dimensional fisher discriminant analysis: forget about small sample size problem [face recognition applications]," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'05), vol. 2, pp. ii–761, IEEE, 2005.
 [22] J. Ye, R. Janardan, and Q. Li, "Two-dimensional linear discriminant analysis," in Advances in Neural Information Processing Systems, pp. 1569–1576, 2005.
 [23] X. He, D. Cai, and P. Niyogi, “Tensor subspace analysis,” in Advances in neural information processing systems, pp. 499–506, 2006.
 [24] D. Cai, X. He, and J. Han, “Subspace learning based on tensor analysis,” tech. rep., 2005.
 [25] M. A. O. Vasilescu and D. Terzopoulos, “Multilinear subspace analysis of image ensembles,” in Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, vol. 2, pp. II–93, IEEE, 2003.
 [26] S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H.J. Zhang, “Discriminant analysis with tensor representation,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1, pp. 526–532, IEEE, 2005.
 [27] D. Tao, X. Li, X. Wu, and S. J. Maybank, “General tensor discriminant analysis and gabor features for gait recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, 2007.
 [28] Q. Li, Y. Chen, L. L. Jiang, P. Li, and H. Chen, “A tensorbased information framework for predicting the stock market,” ACM Transactions on Information Systems (TOIS), vol. 34, no. 2, p. 11, 2016.
 [29] A. Ntakaris, M. Magris, J. Kanniainen, M. Gabbouj, and A. Iosifidis, "Benchmark dataset for mid-price prediction of limit order book data," arXiv preprint arXiv:1705.03233, 2017.
 [30] X. Li, H. Xie, R. Wang, Y. Cai, J. Cao, F. Wang, H. Min, and X. Deng, “Empirical analysis: stock market prediction via extreme learning machine,” Neural Computing and Applications, vol. 27, no. 1, pp. 67–78, 2016.
 [31] M. Welling, “Fisher linear discriminant analysis,” Department of Computer Science, University of Toronto, vol. 3, no. 1, 2005.
 [32] Q. Li and D. Schonfeld, "Multilinear discriminant analysis for higher-order tensor data classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 12, pp. 2524–2537, 2014.
 [33] D. M. Powers, "Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation," 2011.