Matrix factorization methods, which decompose a matrix into a product of factors, are extensively employed to understand complex data. Factors are often useful for highlighting and interpreting special observations (outliers), clusters of similar observations, groups of related variables, and crossed relationships between observations and variables.
$$\mathbf{X} = \mathbf{T}\mathbf{P}^\top + \mathbf{E} \quad (1)$$

where $\mathbf{X}$ is a data matrix, $\mathbf{T}$ is the score matrix containing the projection of the objects onto the principal components (PCs) sub-space, $\mathbf{P}$ is the loading matrix containing the linear combination of the variables represented in each of the PCs, and $\mathbf{E}$ is the matrix of residuals. PCA satisfies:

$$\min_{\mathbf{T},\mathbf{P}} \|\mathbf{X} - \mathbf{T}\mathbf{P}^\top\|_F^2 \quad (2)$$
where $\|\cdot\|_F$ stands for the Frobenius norm. Depending on the area of application, loading vectors are constrained to unit length, in order to leave the data variance in the scores and ease interpretation. One interesting property of PCA is that loading vectors can be computed simultaneously or sequentially with exactly the same parameter estimates. This property is a consequence of the components being orthogonal, and while that leads to nice mathematical properties, it seldom reflects the underlying biological or chemical reality.
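The equivalence of simultaneous and sequential estimation is easy to verify numerically. The following sketch (notation and variable names are ours, not from the paper) compares the loadings of a two-component truncated SVD with those obtained one at a time via deflation:

```python
import numpy as np

# Simultaneous vs. sequential PCA: the loadings from a single truncated
# SVD match those extracted one at a time with deflation, up to sign.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))
X -= X.mean(axis=0)                      # mean-center the columns

# simultaneous: first two right singular vectors
P_sim = np.linalg.svd(X, full_matrices=False)[2][:2].T

# sequential: extract one loading, project it out, repeat
Xd, cols = X.copy(), []
for _ in range(2):
    p = np.linalg.svd(Xd, full_matrices=False)[2][0]
    cols.append(p)
    Xd -= np.outer(Xd @ p, p)            # deflate the fitted component
P_seq = np.array(cols).T
```

Both matrices agree column by column, up to the arbitrary sign of each singular vector.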
When used for interpretation by exploring the PCs, PCA has a major shortcoming: the PCs are linear combinations of all the variables, and often combine unrelated sources of variance Camacho. On the other hand, for easier interpretation, it is desirable to find factorizations that correspond to a limited number of original variables. This can be achieved by means of rotation Jolliffe02 or sparse methods like sparse principal component analysis (SPCA) Jolliffe2003; Zou2006. Another approach is to constrain the loadings to agree with the structure of the correlation matrix, as in group-wise principal component analysis (GPCA) GPCA.
While previous methods focus on the mode of the variables (columns) of the data, PCA shows exactly the same limitation for interpreting the mode of the observations: i) score vectors are typically non-sparse, and ii) unrelated observations can provide very similar scores. Extensions that apply the sparsity idea to both modes already exist, like some variants of co-clustering coclustering or the penalized matrix decomposition (PMD) PMD .
In this paper, we introduce the cross-product penalized component analysis (XCAN). XCAN is a matrix factorization based on a loss function that allows a trade-off between variance maximization and structural preservation, aimed at solving the aforementioned problems. The result is a flexible modeling approach that can be used for data exploration in a large variety of problems. We will demonstrate its use with applications from different disciplines.
The rest of the paper is organized as follows. Section 2 introduces the methods on which XCAN is based. Section 3 presents the XCAN algorithm. Section 4 illustrates XCAN through four case studies, including simulated as well as real data from different application fields. Conclusions and future work are discussed in Section 5.
2 Related methods
The proposed XCAN method is inspired by previous developments, notably (i) the SPCA framework based on the lasso (least absolute shrinkage and selection operator), (ii) extensions of SPCA to constrain both modes of the factorization, like co-clustering or the PMD, and (iii) GPCA. From SPCA, we inherit the approach of defining a set of meta-parameters to define the loss function as a trade-off. This trade-off is between captured variance and structural penalties, following the GPCA strategy. The new loss function is intended to reflect both the structure among observations and among variables, in a similar way as in co-clustering or PMD.
2.1 Sparse PCA
The SPCA idea is grounded on the work of several authors more than two decades ago, as described in Mackey2008. There are various versions of SPCA, most based on the modification of the PCA loss in eq. (2) by including sparsity-inducing constraints or penalties with the $\ell_0$ or $\ell_1$ norms Richt2012. The $\ell_0$-norm of a vector refers to the number of non-zero elements in the vector, and the $\ell_1$-norm of a vector computes the sum of the absolute values of the vector entries. The application of the $\ell_1$-norm in a regression setting was originally called the least absolute shrinkage and selection operator (lasso) (Tibshirani1994). In this section, we focus on the lasso versions of SPCA for their widespread use in model interpretability Rasmussen2012.
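As a minimal illustration of the two norms just defined (purely didactic, not part of any of the cited methods):

```python
import numpy as np

# The l0 "norm" counts non-zero entries; the l1 norm sums absolute values.
v = np.array([0.0, -3.0, 0.0, 1.5])
l0 = np.count_nonzero(v)      # 2 non-zero entries
l1 = np.abs(v).sum()          # |-3| + |1.5| = 4.5
```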
The SCoTLASS algorithm (Jolliffe2003) incorporates the lasso in the PCA calibration as follows:

$$\max_{\mathbf{p}} \ \mathbf{p}^\top \mathbf{X}^\top \mathbf{X}\,\mathbf{p} \quad \text{s.t.} \quad \|\mathbf{p}\|_2 = 1, \ \|\mathbf{p}\|_1 \le t \quad (3)$$
where $\mathbf{p}$ is the resulting sparse loading, and $\|\cdot\|_2$ and $\|\cdot\|_1$ refer to the $\ell_2$ and $\ell_1$ norms, respectively. To obtain successive components, the SCoTLASS optimization constrains the second and further sparse loadings to be orthogonal to the previous ones.
The SCoTLASS criterion is very computationally demanding Hastie:2015:SLS:2834535 and has numerical limitations. For this reason, Zou et al. Zou2006 introduced an alternative formulation to generate sparse components based on regularized regression, using a criterion close to the naive elastic net (Zou2005) (the regular elastic net makes use of a scaling factor between the lasso and the ridge penalty), which is a combination of both the lasso and the ridge ($\ell_2$-norm) penalties:

$$\min_{\mathbf{A},\mathbf{B}} \|\mathbf{X} - \mathbf{X}\mathbf{B}\mathbf{A}^\top\|_F^2 + \lambda \sum_{a=1}^{K} \|\mathbf{b}_a\|_2^2 + \sum_{a=1}^{K} \lambda_{1,a} \|\mathbf{b}_a\|_1 \quad \text{s.t.} \quad \mathbf{A}^\top\mathbf{A} = \mathbf{I}_K \quad (4)$$
where we distinguish between sparse loadings $\mathbf{B}$ (sometimes also referred to as weights in similar modeling frameworks) and orthonormal loadings $\mathbf{A}$; $\mathbf{b}_a$ represents the $a$th column vector in $\mathbf{B}$, with $K$ the number of components. The solution proposed for eq. (4) is a biconvex optimization where sparse weights and orthogonal loadings are obtained using an alternating approach. The algorithm is simultaneous, in the sense that all components are computed in the same alternating iteration. An alternative sequential variant is defined in Sjostrand2012. A particular solution of SPCA in eq. (4) is obtained when $\lambda \to \infty$, which is the most popular choice when the number of columns in the data is much higher than the number of rows. Then, the sparse loadings can be computed by soft-thresholding, simplifying and improving the efficiency of the computation: $\mathbf{b}_a = \left(\left|\mathbf{X}^\top\mathbf{X}\,\mathbf{a}_a\right| - \tfrac{\lambda_{1,a}}{2}\right)_{+} \operatorname{sign}\!\left(\mathbf{X}^\top\mathbf{X}\,\mathbf{a}_a\right)$.
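The soft-thresholding operator at the core of this solution can be sketched in a few lines (a generic implementation, not code from any of the cited works):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: entries with |z| <= t become exactly
    zero, and the rest are shrunk toward zero by t, yielding sparsity."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
```

For example, applying a threshold of 0.5 to the vector [2.0, -0.3, 0.8, -1.5] zeroes the small entry and shrinks the others.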
2.2 Extensions of sparsity to both modes
Witten et al. PMD propose a sparse algorithm referred to as the penalized matrix decomposition (PMD). It can be used to constrain both the number of observations and the number of variables contributing to each factor using soft-thresholding. The PMD follows:

$$\max_{\mathbf{u},\mathbf{v}} \ \mathbf{u}^\top\mathbf{X}\mathbf{v} \quad \text{s.t.} \quad \|\mathbf{u}\|_2^2 \le 1, \ \|\mathbf{v}\|_2^2 \le 1, \ \|\mathbf{u}\|_1 \le c_1, \ \|\mathbf{v}\|_1 \le c_2 \quad (5)$$
The corresponding pseudo-singular value is then obtained as:

$$d = \mathbf{u}^\top\mathbf{X}\mathbf{v} \quad (6)$$
After each component is obtained, projection deflation is performed as:

$$\mathbf{X} \leftarrow (\mathbf{I} - \mathbf{u}\mathbf{u}^\top)\,\mathbf{X}\,(\mathbf{I} - \mathbf{v}\mathbf{v}^\top) \quad (7)$$
The authors show that this solution, when only applying sparsity to the loadings, is connected with SCoTLASS and SPCA PMD .
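The rank-one step of PMD can be sketched as alternating soft-thresholded power iterations. This is a simplified illustration under our own naming: the actual algorithm tunes the thresholds by binary search so the $\ell_1$ budgets hold exactly, whereas here they are fixed by hand, and plain rank-one subtraction is used for deflation:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: shrink z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def pmd_rank1(X, t_u=0.0, t_v=0.0, n_iter=100):
    """Sketch of one PMD factor: alternate soft-thresholded power
    iterations on u and v; with zero thresholds this reduces to the
    ordinary power method for the leading singular pair."""
    u = np.ones(X.shape[0]) / np.sqrt(X.shape[0])
    v = np.ones(X.shape[1]) / np.sqrt(X.shape[1])
    for _ in range(n_iter):
        v = soft(X.T @ u, t_v)
        v /= max(np.linalg.norm(v), 1e-12)
        u = soft(X @ v, t_u)
        u /= max(np.linalg.norm(u), 1e-12)
    d = float(u @ X @ v)        # pseudo-singular value
    return u, d, v

# usage: extract one factor, then deflate before computing the next
X = np.array([[3.0, 0.0],
              [0.0, 1.0]])
u, d, v = pmd_rank1(X)
X_defl = X - d * np.outer(u, v)   # simple rank-one deflation
```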
A similar approach was introduced in coclustering with the goal of performing co-clustering:

$$\min_{\mathbf{A},\mathbf{B}} \|\mathbf{X} - \mathbf{A}\mathbf{B}^\top\|_F^2 + \lambda \sum_{a=1}^{K} \left(\|\mathbf{a}_a\|_1 + \|\mathbf{b}_a\|_1\right) \quad (8)$$
where $\mathbf{a}_a$ represents the $a$th column vector in $\mathbf{A}$, and the same penalty variable $\lambda$ is used in both modes. This loss is used within an alternating optimization which, unlike PMD, produces all components in a single run.
2.3 Group-wise PCA
GPCA limits each component to represent the variability of a single group of variables, which in turn allows each component to be interpreted independently (provided the groups of variables do not overlap). GPCA starts with the identification of a set of (possibly overlapping) groups of correlated variables. This is achieved by applying a threshold to a pseudo-correlation matrix $\mathbf{M}$ of the data. In the original formulation of GPCA, the MEDA approach (Missing-data for Exploratory Data Analysis) Camacho2011missing was implemented to obtain $\mathbf{M}$.
Moreover, GPCA may be simpler to use in practice than sparse methods based on the lasso, because by inspecting the pseudo-correlation map we can often identify suitable values for the threshold, or even recognize when the GPCA model is not appropriate at all. In comparison, the main challenge when using sparse methods is to find suitable values for meta-parameters like those in (3), (4), (5) or (8). This advantage, however, comes at a price. While the capability to reflect the structure in the map is an appealing property, GPCA is an “all or nothing” approach, meaning that a variable is either in a group or not, and different GPCA models can be obtained for very similar values of the threshold.
3 Cross-product Penalized Component Analysis (XCAN)
With XCAN, we would like to make the most of the advantages of sparse methods in one or both modes and combine them with the idea behind GPCA. Like PMD, XCAN factorizes a matrix into three matrices:

$$\mathbf{X} = \mathbf{A}\mathbf{S}\mathbf{B}^\top + \mathbf{E} \quad (9)$$
The loss function for the XCAN factorization is as follows:

$$L = \|\mathbf{X} - \mathbf{A}\mathbf{S}\mathbf{B}^\top\|_F^2 + \lambda_A\,\phi_A + \lambda_B\,\phi_B \quad (10)$$
where the first part is the actual model, with $\mathbf{S}$ constrained to be diagonal, and $\phi_A$ and $\phi_B$ are defined to constrain the structure of the model. The meta-parameters $\lambda_A$ and $\lambda_B$ control the level of these penalties. We define the penalties as follows:

$$\phi_A = \sum_{k=1}^{K} \left\|\left(\mathbf{a}_k\mathbf{a}_k^\top\right) \oslash \mathbf{C}_A\right\|_F^2, \qquad \phi_B = \sum_{k=1}^{K} \left\|\left(\mathbf{b}_k\mathbf{b}_k^\top\right) \oslash \mathbf{C}_B\right\|_F^2 \quad (11)$$
where $\mathbf{a}_k$ and $\mathbf{b}_k$ are the $k$th column vectors in $\mathbf{A}$ and $\mathbf{B}$, respectively, $K$ is the total number of XCAN components (XCs), and $\oslash$ is the Hadamard (element-wise) division. $\mathbf{C}_A$ and $\mathbf{C}_B$ denote maps of relationship between observations and variables, respectively, and are given as inputs. To avoid numerical problems in the divisions, values below a threshold in $\mathbf{C}_A$ and $\mathbf{C}_B$ are set to that threshold. In this paper, we fixed the value of this threshold to 0.01 in all experiments.
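The effect of the element-wise division can be sketched as follows. This is a hypothetical reading of the penalty under our assumed notation (squared Frobenius norm of each divided cross-product), not the authors' implementation: component coefficients that link pairs with a relationship value clipped at the floor make the penalty explode, pushing one of the two coefficients to zero.

```python
import numpy as np

def structural_penalty(B, C, floor=0.01):
    """Hypothetical sketch of one XCAN structural penalty: for each
    component vector b, penalize the cross-product b b^T divided
    element-wise by the relationship map C. Unrelated pairs (entries
    clipped at `floor`) are penalized heavily."""
    Cc = np.maximum(np.abs(C), floor)     # clip near-zero entries
    total = 0.0
    for k in range(B.shape[1]):
        b = B[:, k]
        total += np.sum((np.outer(b, b) / Cc) ** 2)
    return total
```

A component loading two correlated variables incurs a small penalty, while one loading two unrelated variables incurs a very large penalty.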
3.1 Cross-product matrices and rationale
We will generally refer to $\mathbf{C}_A$ and $\mathbf{C}_B$ as cross-product matrices, because several of their possible definitions can be computed as cross-products, and we expect them to be symmetric. This is also the reason for the name XCAN, where “X” stands for “cross”. For instance, $\mathbf{C}_B$ may be set to the correlation matrix of the variables and $\mathbf{C}_A$ to the correlation matrix of the observations. $\mathbf{C}_B$ can also be set to any map used in GPCA.
The rationale behind the definition of the loss in (10)-(11) is as follows. $\mathbf{C}_A$ and $\mathbf{C}_B$ contain the correlation structure in the observations and variables, respectively. Values close to 0 in those matrices identify unrelated variables or observations. Using an element-wise division, we prevent unrelated elements from being part of the same component. That way, we obtain sparse components that agree with the structure enforced by the input cross-product matrices.
The choice of matrices $\mathbf{C}_A$ and $\mathbf{C}_B$ is central in XCAN, and should be made carefully, taking into account the goal of the analysis, in a similar way to when selecting the data pre-processing. For instance, it is customary to mean-center data for some types of analyses, but not for others (e.g., for spectra). Although not compulsory, using matrices $\mathbf{C}_A$ and $\mathbf{C}_B$ that are consistent with the data pre-processing is expected to provide more coherent results. Taking that into account, we use the following definition of cross-product in our experiments in this paper, inspired by Pearson’s correlation:

$$\mathbf{C}_B(i,j) = \frac{\mathbf{x}_i^\top\mathbf{x}_j}{\|\mathbf{x}_i\|_2\,\|\mathbf{x}_j\|_2} \quad (13)$$

$$\mathbf{C}_A(m,n) = \frac{\tilde{\mathbf{x}}_m^\top\tilde{\mathbf{x}}_n}{\|\tilde{\mathbf{x}}_m\|_2\,\|\tilde{\mathbf{x}}_n\|_2} \quad (14)$$

where $\mathbf{x}_i$ denotes the $i$th column of $\mathbf{X}$ and $\tilde{\mathbf{x}}_m$ its $m$th row.
The advantage of these definitions is that they do not require data to be column-wise or row-wise mean-centered, like correlation matrices do.
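Under our reading of these definitions (a cosine-similarity-style map, which needs no mean-centering), the two matrices can be computed as:

```python
import numpy as np

def cross_product_map(X, mode="variables"):
    """Pearson-inspired cross-product without mean-centering: normalize
    the rows (observations) or columns (variables) to unit length and
    take inner products. Function name and exact form are our own."""
    M = X.T if mode == "variables" else X
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    Mn = M / np.maximum(norms, 1e-12)     # guard against zero rows
    return Mn @ Mn.T

X = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
Cb = cross_product_map(X, "variables")     # 2 x 2 variable map
Ca = cross_product_map(X, "observations")  # 3 x 3 observation map
```

Both maps are symmetric with a unit diagonal; orthogonal rows or columns map to 0.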
Cross-product matrices can be conveniently post-processed to modify the behavior of XCAN, which adds flexibility to the modeling approach. For instance, a possible post-processing operation is thresholding. As discussed before, GPCA was defined with the convenient property that its meta-parameter (i.e., the threshold) can be set upon visual inspection of the map. Similarly, we can inherit this idea in XCAN by thresholding $\mathbf{C}_A$ and $\mathbf{C}_B$, in order to discard minor correlations from the analysis. This provides a useful means to further impose sparsity, as in GPCA. We can also use thresholding in such a way that only positive correlations are kept in $\mathbf{C}_A$ and $\mathbf{C}_B$. This can be useful to derive loading and score vectors in XCAN where all non-zero elements have the same sign, in line with non-negativity constraints. In the examples, we will use this thresholding idea or non-negativity constraints, when suitable.
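Both post-processing variants can be sketched with a single helper (an illustrative function of our own, not the paper's exact procedure):

```python
import numpy as np

def threshold_map(C, t=0.5, positive_only=False):
    """Post-process a cross-product matrix: zero out entries with
    magnitude below t; optionally keep only positive correlations."""
    Ct = np.where(np.abs(C) >= t, C, 0.0)
    if positive_only:
        Ct = np.where(Ct > 0.0, Ct, 0.0)
    return Ct
```

Thresholding at 0.5 discards a weak 0.3 correlation but keeps a strong negative one; the `positive_only` variant discards the negative entry as well.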
Since cross-product matrices can be considered a way to include structural penalties in both modes of the data in XCAN, we can also use them in different ways, e.g., as in the chemometrics literature, to impose smoothness, to connect different data sets of the same individuals or variables (data fusion), or to include a priori information into a model. Studying those applications in detail is out of the scope of this paper. We will, however, show an application of XCAN that incorporates the class labels of the samples into the model.
3.2 Algorithmic Approach
The XCAN model is fitted to the data by solving for all components simultaneously using gradient-based all-at-once optimization. To constrain the vectors in $\mathbf{A}$ and $\mathbf{B}$ to unit length, like in the SVD, a suitable way is to redefine the loss as:

$$L' = L + \lambda_n \sum_{k=1}^{K}\left[\left(\|\mathbf{a}_k\|_2^2 - 1\right)^2 + \left(\|\mathbf{b}_k\|_2^2 - 1\right)^2\right] \quad (15)$$
If $\lambda_n$ is set to a sufficiently large value, this additional term in the loss will serve the purpose of normalizing the factors $\mathbf{A}$ and $\mathbf{B}$. In our experiments in Section 4, we set $\lambda_n$ to a large constant value.
We solve eq. (15) by computing the partial derivatives of the loss function with respect to $\mathbf{A}$, $\mathbf{S}$ and $\mathbf{B}$ (as given in the Appendix), constructing the gradient, and then using a gradient-based optimization algorithm. In our experiments, we use the Poblano Toolbox DuKoAc10, which provides several unconstrained gradient-based optimization algorithms such as nonlinear conjugate gradient (NCG) and limited-memory BFGS (L-BFGS) NoWr06. When non-negativity constraints are desired, we can also use limited-memory BFGS with bound constraints (L-BFGS-B; we use the implementation at https://github.com/stephenbeckr/L-BFGS-B-C).
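The paper works in MATLAB with the Poblano Toolbox; as a rough Python analogue, the all-at-once fitting strategy can be sketched with SciPy's quasi-Newton solver. This is our own toy sketch, with the structural penalties omitted for brevity and all names invented; it only illustrates how a single parameter vector holding $\mathbf{A}$, $\mathbf{B}$ and the diagonal of $\mathbf{S}$ is optimized jointly, with the quadratic term pulling the factor columns toward unit length:

```python
import numpy as np
from scipy.optimize import minimize

def xcan_fit(X, K=1, lam_norm=100.0, seed=0):
    """Toy all-at-once fit of X ~ A diag(s) B^T, with a quadratic
    penalty encouraging unit-length columns in A and B (structural
    penalties omitted). Gradients are approximated numerically."""
    I, J = X.shape
    rng = np.random.default_rng(seed)
    x0 = 0.1 * rng.standard_normal(K * (I + J) + K)

    def unpack(x):
        A = x[:I * K].reshape(I, K)
        B = x[I * K:I * K + J * K].reshape(J, K)
        s = x[I * K + J * K:]
        return A, B, s

    def loss(x):
        A, B, s = unpack(x)
        rec = X - A @ np.diag(s) @ B.T
        norm_pen = np.sum((np.sum(A**2, axis=0) - 1.0) ** 2) \
                 + np.sum((np.sum(B**2, axis=0) - 1.0) ** 2)
        return np.sum(rec**2) + lam_norm * norm_pen

    res = minimize(loss, x0, method="L-BFGS-B")  # bound-capable solver
    return unpack(res.x)
```

With box constraints added to `minimize`, the same solver family covers the non-negative variants mentioned above.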
It should be noted that the XCAN optimization can have many local minima. In the experiments, we initialized the algorithm using the PCA solution, but ideally, multiple random starts would help with the local minima problem.
4 Case studies

4.1 Simulated data

We start with a simulated experiment to demonstrate the properties and flexibility of XCAN. We simulate three data sets, $\mathbf{X}_1$, $\mathbf{X}_2$ and $\mathbf{X}_3$, each with 5 observations and 5 variables. Each data set is generated with high correlation between the variables, using the simuleMV tool CAMACHO201740. We then construct $\mathbf{X}$ as follows:
where the noise is randomly drawn from the normal distribution with 0 mean and unit variance. Finally, the cross-product matrices $\mathbf{C}_A$ and $\mathbf{C}_B$ follow eq. (13) and eq. (14).
Matrices $\mathbf{C}_B$ and $\mathbf{C}_A$ for one of the simulations are shown in Figure 1. We can see that the variables (Figure 1(a)) are approximately arranged in two groups of 5, and the observations (Figure 1(b)) describe three major groups.
Figure 2 shows the result of applying XCAN with three components and different values of the meta-parameters $\lambda_A$ and $\lambda_B$. The figure shows an upper bar plot with the scores, computed as $\mathbf{A}\mathbf{S}$, and a lower bar plot with the loadings $\mathbf{B}$. Figure 2(a) shows regular PCA, since the structural penalties are deactivated, which is used as a baseline. In such a setting, each loading/score vector contains information about all variables/observations, respectively. Figures 2(b), 2(c) and 2(d) show the application of XCAN with structural penalties in the loadings, the scores, and both loadings and scores, respectively. In all cases, the XCAN model works as expected, and the variance remains reasonably close to the PCA model. This shows that the penalties meet the true data structure at a minor price in terms of explained variance.
In order to see the performance of XCAN with different numbers of components, we have also compared the models with one, two, and three XCs in the example, with the structural penalties activated in both loadings and scores. Results can be inspected by comparing Figures 3 and 2(d). We observe that the XCAN model of the simulated data is very stable.
As discussed before, we can apply thresholding in XCAN. If we set all values in $\mathbf{C}_A$ and $\mathbf{C}_B$ with magnitude less than 0.5, i.e., within the interval $(-0.5, 0.5)$, to zero, we obtain the matrices in Figure 4, where most of the spurious correlations are discarded. Figure 5 compares 3-component XCAN models with and without thresholding in the cross-product matrices. We can see that thresholding can be effective in terms of imposing sparsity. In the current example, we can achieve a sparser model using lower values of the meta-parameters and capture higher variance.
4.2 Animal Data
This case study makes use of a toy data set created to illustrate the co-clustering algorithm in coclustering . The data contains 34 observations, most of them representing animals, each with 17 attributes, including binary and continuous attributes. The data values are non-negative. Binary attributes are used to describe if an animal has (1) or does not have (0) a specific feature. For instance, in the feature ‘Carnivore’, ‘Lion’ contains a 1 and ‘Cow’ a 0.
Scores and loadings plots for the first two PCs of PCA are shown in Figure 6, for auto-scaled data, that is, mean-centered across the animals mode and scaled to unit variance within the attributes. The first PC is dominated by birds, e.g., ‘Eagle’, ‘Blackbird’ or ‘Chicken’, which share features like ‘Wings’, ‘Feathers’ or ‘Has a beak’. Birds are small, and for this reason they are located opposite to big animals, in particular, to the ‘House’, which was included in the data set as an outlier. The second PC contrasts ‘Dangerous’ and ‘Extinct’ animals, most notably the ‘T. Rex’, with those ‘Domesticized’ and ‘Eaten by Caucasians’, which are also correlated with ‘Breathe under water’. Both components are interpretable, but still complex in the sense that they mix different concepts: birds + small vs big, dangerous vs domesticated + fish. As such, this data set is a perfect example of the limitations of PCA reported in the introduction.
To perform the XCAN analysis, we auto-scaled the data. Since the goal is to understand differences among individuals and group them as in (co-)clustering, it makes sense to center the data so that we study the variability among the set of individuals, instead of with respect to some center of coordinates. Scaling serves to equalize the influence of the variables in the model. Since we want each component to be a cluster of similar (and not antagonistic) individuals, we constrain $\mathbf{C}_A$ to be non-negative. In particular, every entry in $\mathbf{C}_A$ with a value below 0.5 is hard-thresholded to 0, i.e., values within the interval $(-1, 0.5)$ are thresholded. That way, components will be extra sparse and yield only positive (or only negative) scores. In case a component contains negative scores, we simply change its sign. For this analysis, we are not concerned with the signs of the loadings, but we still want the loadings to be extra sparse, so we threshold the entries of $\mathbf{C}_B$ using the threshold value of 0.5, i.e., setting every entry with a value within the interval $(-0.5, 0.5)$ to 0. This allows negative correlations of -0.5 or lower. Resulting cross-product matrices are shown in Figure 7.
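The auto-scaling used above is standard and can be sketched in a few lines (a generic helper, not code from the paper):

```python
import numpy as np

def autoscale(X):
    """Auto-scaling: mean-center each column (attribute), then divide by
    its standard deviation so every attribute has unit variance."""
    std = X.std(axis=0, ddof=1)
    return (X - X.mean(axis=0)) / np.where(std > 0, std, 1.0)
```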
The 9-component XCAN model is shown in Figure 8. We can see that all XCs are sparse in both scores and loadings, and all scores are non-negative, as expected. The model extracts meaningful components that can be interpreted as co-clusters. The first component represents wild birds, with their corresponding features. Interestingly, ‘Chicken’ is not included in the component. The reason for that can be seen in Figure 9, which shows a zoom of matrix $\mathbf{C}_A$: even though ‘Chicken’ and ‘Eagle’ share the features in the first component, they are completely different animals according to matrix $\mathbf{C}_A$, showing a correlation close to 0. Thus, XCAN does not place them in the same component. Since ‘Eagle’ is strongly correlated with the other birds, the model selects it for the scores of the first component. The second component represents domesticated animals consumed for food. The third component focuses on the extinct feature, but only two (‘T.Rex’ and ‘Neanderthal’) out of the five extinct animals (which also include ‘Mamouth’, ‘Sabre Tiger’ and ‘Triceratops’) show relevant scores. The reason for that is similar to before and is also apparent in Figure 9: ‘Neanderthal’ and ‘Triceratops’ are uncorrelated, and for this reason they cannot share the same component. The rest of the components are more or less self-explanatory.
The combination of sparsity and grouping in components 1 and 3 is coherent with GPCA, but not with PCA/SPCA models, where one would expect that all individuals with feathers and wings, or all those extinct, would be placed in the same component. If, for some reason, we are interested only in sparsity but not in the grouping characteristic of XCAN, then traditional sparse methods should be used for the analysis. With XCAN, we explicitly avoid, within a component, objects or features that are not associated in the cross-product matrices.
In conclusion, we observe that the XCAN model performs exactly as expected: components are sparse and reflect accurately the structure in the cross-product matrices.
4.3 Real Data
4.3.1 Vast Challenge for Cybersecurity
The increase in the number of cybersecurity incidents, coupled with the shortage of specialized professionals, has created a need for efficient data analysis tools to support the detection, triaging and analysis of incidents SIEMMarket . In particular, anomaly-based Intrusion Detection Systems (IDS) Garcia09 are fundamental resources to unveil new attack strategies. A large number of intrusion detection approaches based on PCA have been proposed in the last two decades Lakhina2004a ; Delimargas2014 ; Callegari2014 ; Aiello2016 ; MSNM2016 . In a recent paper theron_vizsec_2017 , the multivariate detection approach was extended to GPCA.
With GPCA, we can identify anomalies in the data following a straightforward approach. Group-wise components simplify the interpretation by reducing the number of variables to examine. Components can be interpreted one at a time, following a workflow very similar to the one security analysts use in traditional software tools. Since XCAN inherits the features of GPCA, we will explore its performance in the cybersecurity domain. In comparison to GPCA, XCAN can also impose sparsity in the rows, which in the cybersecurity domain typically correspond to time-resolved data. This is useful to speed up the analysis of an incident, so that the analyst can focus on a few points in time to troubleshoot the problem.
The data of the present case study comes from the VAST 2012 2nd mini-challenge VAST2012 and was captured in a corporate network during a time frame of two days. During that time period, a botnet compromised the network, causing performance problems and the emergence of spyware. The raw data is parsed into a total of 265 features at 1-minute intervals, yielding a 2345 × 265 data matrix. More details can be found in MBDA. Cross-product matrices following eq. (13) and eq. (14) are shown in Figure 10. The plots illustrate that applying a group-wise constrained model is reasonable, both in observations and features, since the cross-product matrices contain squares of high correlation.
We used the following strategy for the application of XCAN in this problem domain. Cross-product matrices were thresholded to keep only positive values above 0.7 and 0.3, respectively. The data was auto-scaled for similar reasons as in the previous case study: the mean-centering is useful to detect anomalies as deviations from the mean, and the scaling normalizes the relevance of the variables in the model.
Figure 11 compares the first four components by GPCA (left column) with the 4-component model by XCAN with sparse loadings (middle column) and with both sparse scores and loadings (right column). We re-ordered the components of XCAN to match those in GPCA. To improve the visualization for security analysts, we included statistical control limits in the scores of each component, so that anomalies can be easily identified. Both GPCA and XCAN provide useful results for anomaly detection, and components can be easily interpreted one at a time. For instance, the first component of GPCA (upper-left figure) describes a number of anomalies around sampling time 400, which are related to the variables in the loadings. So a brief description of the location in time of the anomalies, and their diagnosis (the variables related to the abnormal behavior), is obtained in a single plot. In comparison to PCA (e.g., see MSNM2016), this approach for anomaly detection is much simpler and easier to understand for security analysts.
We can also see that GPCA and XCAN with sparse loadings are very similar. If we also apply sparsity in the rows in XCAN, a reduced set of observations is identified for each component, allowing the analyst to focus on a subset of time points and proceed with a more detailed forensic analysis.
4.3.2 Olive Data
Oil samples from 24 brands and several types (olive, corn, sesame, etc.) were obtained. Infrared spectra were measured using a Nicolet 5-DX FT-IR system. Each spectrum consisted of 1556 measurements from 3600 to 600 cm⁻¹, of which two regions were used in the analysis here, in accordance with the original publication doi:10.1366/0003702971941935. The resulting spectra are shown in Figure 12, with different colors representing different brands of oil.
One meaningful way to model spectra with matrix factorization is:

$$\mathbf{X} = \mathbf{1}\mathbf{m}^\top + \mathbf{A}\mathbf{B}^\top + \mathbf{E} \quad (16)$$
where $\mathbf{A}$ and $\mathbf{B}$ are non-negative, and $\mathbf{m}$ takes the role of a baseline. For this example, we will follow the same approach with XCAN, and also incorporate the baseline and non-negativity constraints on $\mathbf{m}$, $\mathbf{A}$ and $\mathbf{B}$. The result is shown in Figure 13. The top plot shows the baseline, contained in $\mathbf{m}$. We observe that the components are generally not sparse.
Since in this case study the data is not centered and is far from the origin of coordinates, the cross-product expressions in eqs. (13) and (14) are impractical: they are almost completely filled with 1s. For this reason, we post-processed the cross-product matrices by subtracting a baseline, computed with the minimum values of each column (row) for the variable (observation) map. This approach gives us the cross-product matrices in Figure 14(a) and (b). We can see that the observation cross-product matrix still does not indicate any groups of observations. For this reason, we will not use this cross-product matrix with XCAN. Instead, here we will perform a different analysis to illustrate another potential use of XCAN with data whose samples carry class information, and define $\mathbf{C}_A$ based on the classes of the samples, as in Figure 14(c). Each matrix entry contains 1 if the corresponding observations belong to the same class, or 0 otherwise. This illustrates how we can use external information to influence the XCAN outcome, instead of using the cross-product matrix definitions given earlier.
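Building such a class-based observation map is straightforward (an illustrative helper with a name of our own):

```python
import numpy as np

def class_map(labels):
    """Observation relationship map from class labels: entry (m, n) is 1
    if observations m and n share a class, and 0 otherwise."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)
```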
The 4-component XCAN model constrained only with $\mathbf{C}_B$, along with the baseline, is shown in Figure 15. The model is similar to the one maximizing variance, but with sparse loadings. Figure 16 shows the same analysis using the observation class map within XCAN. Now each component shows the particularities of a different oil brand. Notice that this works as a description of the oil brands, not a discrimination. For instance, the first component shows the pattern of the yellow class, without an attempt to distinguish it from the other brands. It might not be a complete description, though. For instance, consider the fourth component. It is focused on the blue class of oil, which differs from the rest in the last interval of wavelengths. However, XCAN only shows one wavelength in this component, because this wavelength is uncorrelated with the others where the blue brand shows its peculiarities. This is interesting information that can improve interpretation but cannot be found with similar sparse methods.
5 Conclusions

In this paper, we introduce the cross-product penalized component analysis (XCAN) method and illustrate its use with examples in different application domains. XCAN combines variance maximization and structural penalties, which are specified in the form of cross-product matrices and result in sparse matrix factorizations. This provides a flexible modeling framework to explore complex data and enhance the structural information. We plan to extend the application of XCAN to a variety of problems, in particular to data fusion and the derivation of gray models with a priori information.
Appendix: Partial Derivatives
Let $L'$ denote the loss function in eq. (15). The partial derivatives are as follows:
where all partial derivatives with respect to $\mathbf{A}$, $\mathbf{S}$ and $\mathbf{B}$ are of the same dimension as the corresponding matrices. The partial derivatives of the penalty terms $\phi_A$ and $\phi_B$ in the loss function can be calculated element-wise, using the symmetric structure of $\mathbf{C}_A$ and $\mathbf{C}_B$:
with $\mathbf{c}^A_i$ being the $i$-th row of $\mathbf{C}_A$, and $\mathbf{c}^B_j$ being the $j$-th row of $\mathbf{C}_B$.
This work is partly supported by the Spanish Ministry of Economy and Competitiveness and FEDER funds through project TIN2017-83494-R and the “Plan Propio de la Universidad de Granada”.
- (1) Jolliffe I.T.. Principal component analysis. EEUU: Springer Verlag Inc. 2002.
- (2) Jackson J.E.. A User’s Guide to Principal Components. England: Wiley-Interscience 2003.
- (3) Camacho J.. Missing-data theory in the context of exploratory data analysis Chemometrics and Intelligent Laboratory Systems. 2010;103:8–18.
- (4) Jolliffe I.T., Trendafilov N.T., Uddin M.. A modified principal component technique based on the LASSO Journal of Computational and Graphical Statistics. 2003.
- (5) Zou Hui, Hastie Trevor, Tibshirani Robert. Sparse Principal Component Analysis Journal of Computational and Graphical Statistics. 2006;15:265–286.
- (6) Camacho José, Rodríguez-Gómez Rafael A., Saccenti Edoardo. Group-wise Principal Component Analysis for Exploratory Data Analysis Journal of Computational and Graphical Statistics. 2017;26:501-512.
- (7) Bro R., Papalexakis Evangelos E., Acar E., Sidiropoulos Nicholas D.. Coclustering - a useful tool for chemometrics Journal of Chemometrics. 2012;26:256-263.
- (8) Witten Daniela M., Tibshirani Robert, Hastie Trevor. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis Biostatistics. 2009;10:515–534.
- (9) Mackey Lester. Deflation Methods for Sparse PCA. Nips. 2008:1–8.
- (10) Richtárik Peter, Takáč Martin. Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes 2012 IEEE International Conference on Big Data. 2012:1–20.
- (11) Tibshirani Robert. Regression Shrinkage and Selection via the Lasso Journal of the Royal Statistical Society, Series B. 1996;58:267–288.
- (12) Rasmussen Morten Arendt, Bro Rasmus. A tutorial on the Lasso approach to sparse modeling Chemometrics and Intelligent Laboratory Systems. 2012;119:21–31.
- (13) Hastie Trevor, Tibshirani Robert, Wainwright Martin. Statistical Learning with Sparsity: The Lasso and Generalizations. Chapman & Hall/CRC 2015.
- (14) Zou H, Hastie T. Regularization and variable selection via the elastic-net Journal of the Royal Statistical Society. 2005;67:301–320.
- (15) Sjöstrand Karl, Clemmensen Line, Larsen Rasmus, Einarsson Gudmundur, Ersbøll Bjarne. SpaSM: A MATLAB Toolbox for Sparse Statistical Modeling Journal of Statistical Software, Articles. 2018;84:1–37.
- (16) Camacho J.. Missing-data theory in the context of exploratory data analysis Chemometrics and Intelligent Laboratory Systems. 2010;103:8–18.
- (17) Dunlavy Daniel M., Kolda Tamara G., Acar Evrim. Poblano v1.0: A Matlab Toolbox for Gradient-Based Optimization Tech. Rep. SAND2010-1422Sandia National Laboratories, Albuquerque, NM and Livermore, CA 2010.
- (18) Nocedal Jorge, Wright Stephen J.. Numerical Optimization. Springer 2006.
- (19) Camacho J.. On the generation of random multivariate data Chemometrics and Intelligent Laboratory Systems. 2017;160:40 - 51.
- (20) Forni A.A., Meulen R.. Market Insight: Security Market Transformation Disrupted by the Emergence of Smart, Pervasive and Efficient Security; Critical Capabilities for Security Information and Event Management Gartner. 2017.
- (21) García-Teodoro P., Díaz-Verdejo J.E., Maciá-Fernández G., Vázquez E.. Anomaly-based network intrusion detection: Techniques, systems and challenges Computers & Security. 2009:18–28.
- (22) Lakhina Anukool, Crovella Mark, Diot Christiphe. Characterization of network-wide anomalies in traffic flows Proceedings of the 4th ACM SIGCOMM conference on Internet measurement - IMC ’04. 2004;6:201.
- (23) Delimargas Athanasios, Skevakis Emmanouil, Halabian Hassan, Lambadaris Ioannis. Evaluating a modified PCA approach on network anomaly detection Fifth International Conference on Next Generation Networks and Services (NGNS). 2014:124–131.
- (24) Callegari Christian, Gazzarrini Loris, Giordano Stefano, Pagano Michele, Pepe Teresa. Improving PCA-based anomaly detection by using multiple time scale analysis and Kullback-Leibler divergence International Journal of Communication Systems. 2014;27:1731–1751.
- (25) Aiello Maurizio, Mongelli Maurizio, Cambiaso Enrico, Papaleo Gianluca. Profiling DNS tunneling attacks with PCA and mutual information Logic Journal of IGPL. 2016:1–14.
- (26) Camacho José, Pérez-Villegas Alejandro, García-Teodoro Pedro, Maciá-Fernández Gabriel. PCA-based Multivariate Statistical Network Monitoring for anomaly detection Computers & Security. 2016;59:118-137.
- (27) Network-wide intrusion detection supported by multivariate analysis and interactive visualization (Phoenix, AZ, USA): IEEE 2017.
- (28) VAST Challenge 2012 http://www.vacommunity.org/VAST+Challenge+2012.
- (29) Camacho J, García-Giménez JM, Fuentes-García NM, Maciá-Fernández G. Multivariate Big Data Analysis for Intrusion Detection: 5 steps from the haystack to the needle Submitted to Computers and Security. 2019.
- (30) Dahlberg Donald B., Lee Shawn M., Wenger Seth J., Vargo Julie A.. Classification of Vegetable Oils by FT-IR Applied Spectroscopy. 1997;51:1118-1124.