1 Introduction
Although the basic models for tensor (i.e., multiway array) decompositions and factorizations, such as the Tucker and CP (Canonical Decomposition/PARAFAC) models, were proposed a long time ago, they have recently been emerging as promising tools for exploratory analysis of multidimensional data in diverse applications, especially in multiway blind source separation (BSS), feature extraction, classification, prediction, and multiway clustering [1, 2, 3, 4, 5]. Recent advances in neuroimaging technologies (e.g., high-density array EEG/MEG, fMRI, NIRS) have generated massive amounts of brain data exhibiting high dimensionality, multiple modalities, multiple couplings and functional connectivity. By virtue of their multiway nature, tensors provide powerful tools for the analysis and fusion of such massive data, together with a mathematical backbone for the discovery of underlying hidden complex data structures [1, 6, 7]. Constrained matrix factorizations, sometimes called penalized matrix decompositions (e.g., ICA, SCA, NMF, SVD/PCA, CCA), and their tensor extensions, i.e., penalized tensor decompositions or constrained multidimensional models (Tucker, CP, NTF, NTD [6, 1, 7, 8]), with constraints such as statistical independence, decorrelation, orthogonality, sparseness, nonnegativity, and smoothness, have recently been proposed as meaningful and efficient representations of signals, images and, in general, natural multidimensional data [1]. From a signal processing and data analysis point of view, tensor decompositions are very attractive because they take into account spatial, temporal and spectral information and provide links among the various data and extracted factors or hidden (latent) components with physical or physiological meaning and interpretation [1, 8, 10, 11, 12]. In fact, tensor decompositions are emerging techniques for data fusion, dimensionality reduction, pattern recognition, object detection, classification, multiway clustering, sparse representation and coding, and multilinear blind source separation (MBSS)
[1, 2, 13, 14, 15].

Basic notations. Tensors (i.e., multiway arrays) are denoted by underlined capital boldface letters, e.g., $\underline{\mathbf{Y}} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$. The order of a tensor is the number of modes, also known as ways or dimensions (e.g., space, time, frequency, subjects, trials, classes, groups, conditions). In contrast, matrices (two-way tensors) are denoted by boldface capital letters, e.g., $\mathbf{Y}$; vectors (one-way tensors) are denoted by boldface lowercase letters, e.g., the columns of the matrix $\mathbf{A}$ by $\mathbf{a}_j$; and scalars are denoted by lowercase letters, e.g., $a_{ij}$. The mode-$n$ product $\underline{\mathbf{Y}} = \underline{\mathbf{G}} \times_n \mathbf{A}$ of a tensor $\underline{\mathbf{G}} \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_N}$ and a matrix $\mathbf{A} \in \mathbb{R}^{I_n \times J_n}$ is a tensor $\underline{\mathbf{Y}} \in \mathbb{R}^{J_1 \times \cdots \times J_{n-1} \times I_n \times J_{n+1} \times \cdots \times J_N}$, with elements $y_{j_1,\ldots,j_{n-1},\,i_n,\,j_{n+1},\ldots,j_N} = \sum_{j_n=1}^{J_n} g_{j_1,\ldots,j_N}\, a_{i_n, j_n}$. Unfolding (matricization, flattening) of a tensor $\underline{\mathbf{Y}}$ in mode $n$ is denoted as $\mathbf{Y}_{(n)} \in \mathbb{R}^{I_n \times \prod_{m \neq n} I_m}$, which consists of arranging all possible mode-$n$ tubes (fibers) of $\underline{\mathbf{Y}}$ as columns of a matrix [2]. Throughout this paper, standard notations and basic tensor operations are used [1].
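The unfolding and mode-$n$ product operations defined above can be sketched in a few lines of NumPy (an illustrative sketch, not from the original paper; the helper names `unfold`, `fold` and `mode_n_product` are ours):

```python
import numpy as np

def unfold(Y, n):
    """Mode-n unfolding: arrange all mode-n fibers (tubes) of Y as columns."""
    return np.moveaxis(Y, n, 0).reshape(Y.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: restore a matrix to a tensor of the given shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_n_product(Y, A, n):
    """Mode-n product Y x_n A: multiply every mode-n fiber of Y by the matrix A."""
    new_shape = list(Y.shape)
    new_shape[n] = A.shape[0]
    return fold(A @ unfold(Y, n), n, new_shape)

Y = np.random.rand(3, 4, 5)
A = np.random.rand(6, 4)
Z = mode_n_product(Y, A, 1)
print(Z.shape)  # (3, 6, 5)
```

Note how the mode-$n$ product changes only the size of mode $n$ (here from 4 to 6), which is exactly the dimensionality-reduction mechanism used by the Tucker models discussed below.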
2 Basic Models for Multilinear BSS/ICA
Most of the linear blind source separation (BSS) models can be represented as constrained matrix factorization problems, with suitable constraints imposed on the factor matrices (or their columns, referred to as components):

$$\mathbf{Y} = \mathbf{A}\mathbf{X}^T + \mathbf{E} = \sum_{j=1}^{J} \mathbf{a}_j \circ \mathbf{x}_j + \mathbf{E}, \qquad (1)$$

where $\circ$ denotes the outer product (the outer product of two vectors, $\mathbf{a} \circ \mathbf{x} = \mathbf{a}\mathbf{x}^T$, builds up a rank-one matrix, and the outer product of three vectors, $\mathbf{a} \circ \mathbf{b} \circ \mathbf{c}$, builds up a third-order rank-one tensor with entries $a_i b_j c_k$), $\mathbf{Y} \in \mathbb{R}^{I \times T}$ is a known data matrix (representing observations or measurements), $\mathbf{E} \in \mathbb{R}^{I \times T}$ represents errors or noise, $\mathbf{A} = [\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_J] \in \mathbb{R}^{I \times J}$ is an unknown basis (mixing) matrix with basis vectors $\mathbf{a}_j$, and the columns $\mathbf{x}_j$ of the matrix $\mathbf{X} \in \mathbb{R}^{T \times J}$ represent unknown components, latent variables or sources.
Remark: Note that the factorization is symmetric: for Eq. (1) we could just as easily write $\mathbf{Y}^T = \mathbf{X}\mathbf{A}^T + \mathbf{E}^T$, so the meanings of sources and mixtures are often somewhat arbitrary.
Our primary objective in BSS is to estimate uniquely (neglecting the unavoidable scaling and permutation ambiguities) the factor matrices $\mathbf{A}$ and $\mathbf{X}$, subject to various specific constraints imposed on the vectors $\mathbf{a}_j$ and/or $\mathbf{x}_j$, such as mutual statistical independence (ICA), sparseness (SCA), smoothness (SmoCA), nonnegativity (NMF), orthogonality (PCA/SVD), uncorrelatedness, etc. In some applications the data matrix is factorized into three or more factors [1]. In the special case of a singular value decomposition (SVD) of the data matrix $\mathbf{Y} \in \mathbb{R}^{I \times T}$, we have the following factorization:

$$\mathbf{Y} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T = \sum_{j=1}^{J} \sigma_j \, \mathbf{u}_j \circ \mathbf{v}_j, \qquad (2)$$

where $\mathbf{U}$ and $\mathbf{V}$ are orthogonal matrices and $\boldsymbol{\Sigma}$ is a diagonal matrix containing only nonnegative singular values $\sigma_j$. The SVD and its generalizations play key roles in signal processing and data analysis [15].
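As a concrete illustration of the SVD factorization (2), the following NumPy sketch (ours, not from the paper) computes the best rank-$k$ approximation of a matrix by truncation, and checks the Eckart–Young property that the Frobenius error equals the energy of the discarded singular values:

```python
import numpy as np

np.random.seed(0)
Y = np.random.rand(8, 6)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

# Best rank-k approximation: keep only the k largest singular values.
k = 3
Y_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The Frobenius error equals the energy of the discarded singular values.
err = np.linalg.norm(Y - Y_k)
print(np.isclose(err, np.sqrt(np.sum(s[k:] ** 2))))  # True
```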
Often multiple-subject, multiple-task data sets can be represented by a set of data matrices $\mathbf{Y}_s$, and it is necessary to perform simultaneous constrained matrix factorizations (this form of factorization is typical for EEG/MEG data for multiple subjects and multiple tasks, while the factorization of the transposed matrices $\mathbf{Y}_s^T$ is typical for fMRI data [16, 17, 18]):

$$\mathbf{Y}_s = \mathbf{A}_s \mathbf{X}_s^T + \mathbf{E}_s, \qquad (s = 1, 2, \ldots, S), \qquad (3)$$

or, in preprocessed form with dimensionality reduction,

$$\overline{\mathbf{Y}}_s = \overline{\mathbf{A}}_s \overline{\mathbf{X}}_s^T + \overline{\mathbf{E}}_s, \qquad (4)$$
subject to various additional constraints (e.g., the matrices $\mathbf{X}_s$ and their columns are mutually independent and/or sparse). This problem is related to various models of group ICA, with suitable preprocessing, dimensionality reduction and postprocessing procedures [16, 17, 18]. In this paper, we introduce the group multiway BSS concept, which is more general and flexible than group ICA, since various constraints can be imposed on the factor matrices in different modes (i.e., not only mutual independence but also nonnegativity, sparseness, smoothness or orthogonality). There is neither a theoretical nor an experimental basis for assuming that statistical independence (ICA) is the only right concept for extracting brain sources or isolating brain networks [6]. In real-world scenarios, latent (hidden) components (e.g., brain sources) have various complex properties and features. In other words, the true unknown components are seldom all statistically independent. Therefore, if we apply only one single criterion like ICA, we may fail to extract all desired components with a physical interpretation. We need instead to apply an ensemble of strategies, employing several suitably chosen criteria and associated learning algorithms, to extract all desired components [1, 6]. For these reasons, we have developed multiway BSS methods that are based not only on the statistical independence of components, but rather exploit multiple criteria or diversities of the factor matrices in various modes, in order to extract physiologically meaningful components with specific features and statistical properties [1, 6]. By diversity, we mean different characteristics, features or morphology of source signals or hidden latent variables [7]. Since multiway data can always be interpreted in many different ways, some a priori knowledge is needed to determine which diversities, characteristics, features or properties represent true latent (hidden) components with physical meaning.

It should be noted that although standard 2D BSS approaches (constrained matrix factorizations), such as ICA, NMF, SCA, SmoCA, PCA/SVD, and their variants, are invaluable tools for feature extraction and selection, dimensionality reduction, noise reduction, and data mining, they use only two-way representations (typically, space and time), and their use is therefore limited. In many neuroscience applications the data structures often contain higher-order ways (modes) such as subjects, groups, trials, classes and conditions, together with the intrinsic dimensions of space, time, and frequency. For example, studies in neuroscience often involve multiple subjects (people or animals) and trials, leading to experimental data structures conveniently represented by multiway arrays or blocks of multiway data.
The simple linear BSS models (1)–(3) can be naturally extended for multidimensional data to multiway BSS models using constrained tensor decompositions. In this paper, we consider a general and flexible approach based on the Tucker decomposition model, also called the Tucker-$N$ model (see Fig. 1 (a)) [5, 1, 2]:

$$\underline{\mathbf{Y}} = \underline{\mathbf{G}} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \cdots \times_N \mathbf{A}^{(N)} + \underline{\mathbf{E}} = \underline{\widehat{\mathbf{Y}}} + \underline{\mathbf{E}}, \qquad (5)$$

where $\underline{\mathbf{Y}} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is the given data tensor, $\underline{\mathbf{G}} \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_N}$ is a core tensor of reduced dimension, $\mathbf{A}^{(n)} = [\mathbf{a}^{(n)}_1, \ldots, \mathbf{a}^{(n)}_{J_n}] \in \mathbb{R}^{I_n \times J_n}$ ($n = 1, 2, \ldots, N$) are factor (component) matrices representing components, latent variables, common factors or loadings, $\underline{\widehat{\mathbf{Y}}}$ is an approximation of the measurement $\underline{\mathbf{Y}}$, and $\underline{\mathbf{E}}$ denotes the approximation error or noise, depending on the context [1, 5, 2]. The objective is to estimate the factor matrices $\mathbf{A}^{(n)}$, with components (vectors) $\mathbf{a}^{(n)}_{j_n}$, and the core tensor $\underline{\mathbf{G}}$, assuming that the number of factors $J_n$ in each mode is known or can be estimated [1].
If the factor matrices and the core tensor are orthogonal, the Tucker model can be considered an extension of the SVD model (2), known as the Higher-Order SVD (HOSVD) [15]. While the best low-rank approximation of a matrix can be obtained by truncation of its SVD, the optimal tensor approximation (with a minimal norm of the error) cannot, in general, be obtained by truncation of the Tucker decomposition. However, it has been shown that truncation of this particular constrained version usually yields a quite good approximation [15]. In general, orthogonality constraints are not imposed in the Tucker model. Instead, we consider alternative constraints, such as sparseness, nonnegativity, smoothness or statistical mutual independence. It should be noted that the Tucker model is in general not unique, even if we impose some weak constraints. However, the model becomes unique if we impose suitable constraints (e.g., sparseness or independence).
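The (truncated) HOSVD mentioned above can be sketched in NumPy: each factor matrix holds the leading left singular vectors of the corresponding unfolding, and the core is obtained by multiplying the data tensor by the factor transposes in every mode (an illustrative sketch under our own helper names, not the paper's implementation):

```python
import numpy as np

def unfold(Y, n):
    return np.moveaxis(Y, n, 0).reshape(Y.shape[n], -1)

def hosvd(Y, ranks):
    """Truncated HOSVD: factors from the leading left singular vectors of
    each unfolding; the core is Y contracted with the factor transposes."""
    factors = []
    for n, r in enumerate(ranks):
        U = np.linalg.svd(unfold(Y, n), full_matrices=False)[0]
        factors.append(U[:, :r])
    G = Y
    for n, A in enumerate(factors):
        # mode-n product with A.T, keeping the mode order intact
        G = np.moveaxis(np.tensordot(A.T, G, axes=(1, n)), 0, n)
    return G, factors

np.random.seed(0)
Y = np.random.rand(5, 6, 7)
G, factors = hosvd(Y, (2, 3, 4))
print(G.shape)  # (2, 3, 4)
```

With full ranks the reconstruction is exact; with truncated ranks it gives the quasi-optimal approximation discussed above.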
This leads to the concept of group multiway BSS. There are two possible interpretations or concepts when employing the Tucker decomposition for multiway BSS. In the first concept, the columns of the factor matrices represent the desired components or latent variables, and the core tensor represents a "mixing process"; more precisely, the core tensor shows the links among the components in different modes, while the data tensor represents a collection of 1D or 2D mixed signals. In the second concept, the core tensor represents the desired but unknown (hidden) $N$-dimensional signal (e.g., a 3D MRI image or 4D video), and the factor matrices represent various transformations, e.g., time-frequency transformations or wavelet dictionaries (mixing or filtering processes), while the data tensor represents the observed $N$-dimensional signal, which is distorted, transformed, compressed or mixed, depending on the application. In this paper, we consider only the first interpretation.
The Tucker model (5) can be represented by approximative matrix factorizations with three factors:

$$\mathbf{Y}_{(n)} \cong \mathbf{A}^{(n)} \, \mathbf{G}_{(n)} \, \mathbf{B}^{(n)\,T}, \qquad (n = 1, 2, \ldots, N), \qquad (6)$$

where $\mathbf{B}^{(n)} = \mathbf{A}^{(N)} \otimes \cdots \otimes \mathbf{A}^{(n+1)} \otimes \mathbf{A}^{(n-1)} \otimes \cdots \otimes \mathbf{A}^{(1)}$ and $\otimes$ denotes the Kronecker product.
Moreover, the Tucker model (5) can be compressed to Tucker-1 models, with only one factor matrix $\mathbf{A}^{(n)}$ in each mode (see Fig. 1 (b)):

$$\underline{\mathbf{Y}} \cong \underline{\mathbf{G}}_n \times_n \mathbf{A}^{(n)}, \qquad (n = 1, 2, \ldots, N), \qquad (7)$$

where $\underline{\mathbf{G}}_n \in \mathbb{R}^{I_1 \times \cdots \times J_n \times \cdots \times I_N}$. Note that in matricized form, $\mathbf{Y}_{(n)} \cong \mathbf{A}^{(n)} \, [\mathbf{G}_n]_{(n)}$, the above models correspond to the group BSS/ICA models (3), under the assumption that we impose the desired constraints on the factor matrices $\mathbf{A}^{(n)}$.
Most of the existing approaches exploit only the CP model and impose only statistical independence constraints, leading to tensor probabilistic ICA [19] or ICA-CPA models [22, 23]. Our approach is quite different: we use Tucker models, and we do not restrict ourselves to ICA assumptions but exploit multiple criteria and allow for diversities of the components. The advantage of the Tucker model over the CP model is that it is more general: the number of components in each mode can be different, and the components are linked via a core tensor, which allows us to model more complex hidden data structures. Note that the Tucker decomposition simplifies to the CP model in the special case where the "hypercube" core tensor (with $J_1 = J_2 = \cdots = J_N = J$) has nonzero elements only on the superdiagonal. Under some mild conditions, the CP factorization is unique without imposing any constraints.
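The relationship between the two models can be verified numerically: a Tucker decomposition whose core is nonzero only on the superdiagonal reproduces exactly the CP sum of rank-one outer products (a small sketch of ours for a third-order tensor):

```python
import numpy as np

np.random.seed(0)
R = 3
A = np.random.rand(4, R)
B = np.random.rand(5, R)
C = np.random.rand(6, R)
lam = np.random.rand(R)

# CP model: a sum of R rank-one terms lam_r * a_r o b_r o c_r
Y_cp = np.einsum('r,ir,jr,kr->ijk', lam, A, B, C)

# Tucker model with a "hypercube" core nonzero only on the superdiagonal
G = np.zeros((R, R, R))
G[np.arange(R), np.arange(R), np.arange(R)] = lam
Y_tucker = np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)

print(np.allclose(Y_cp, Y_tucker))  # True
```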
For the Tucker and CP models there exist many efficient algorithms [1, 2]. Most of them are based on ALS (Alternating Least Squares) and HALS (Hierarchical ALS) [1, 8, 24, 11], and on CUR (Column-Row) decompositions [25]. However, a description of these algorithms is beyond the scope of this paper.
We can implement multilinear BSS algorithms in several ways. First of all, we can minimize a global cost function, with suitable penalty and/or regularization terms, to estimate the desired components (see Eq. (5)):

$$D(\underline{\mathbf{Y}}) = \left\| \underline{\mathbf{Y}} - \underline{\mathbf{G}} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \cdots \times_N \mathbf{A}^{(N)} \right\|_F^2 + \sum_{n=1}^{N} \alpha_n J_n(\mathbf{A}^{(n)}), \qquad (8)$$

where $\alpha_n \geq 0$ are penalty coefficients and $J_n(\mathbf{A}^{(n)})$ are penalty terms, which are added to achieve specific characteristics of the components. For example, if we need to impose mutual independence constraints, the penalty terms can be built from suitable nonlinear functions of the components. In principle, this method, referred to as penalized tensor decomposition, allows us to find the factor matrices $\mathbf{A}^{(n)}$, with unique components, and an associated core tensor $\underline{\mathbf{G}}$, but it involves heavy computations and is time consuming.
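As a toy illustration of a single step of such a penalized optimization, the sketch below (ours; a quadratic ridge penalty is chosen purely for illustration, since it admits a closed-form update) solves one penalized least-squares subproblem for one factor matrix, with the rest of the model held fixed:

```python
import numpy as np

def penalized_factor_update(Yn, Z, alpha):
    """Ridge-penalized least-squares update of one factor A, minimizing
    ||Yn - A @ Z||_F^2 + alpha * ||A||_F^2, where Yn is the mode-n
    unfolding of the data and Z is the fixed remaining part of the model."""
    J = Z.shape[0]
    return Yn @ Z.T @ np.linalg.inv(Z @ Z.T + alpha * np.eye(J))

np.random.seed(0)
Yn = np.random.rand(10, 40)   # mode-n unfolding of the data tensor
Z = np.random.rand(4, 40)     # core multiplied by the other (fixed) factors
A = penalized_factor_update(Yn, Z, alpha=0.1)
print(A.shape)  # (10, 4)
```

With `alpha = 0` this reduces to the ordinary (unpenalized) ALS update; other penalty terms (sparseness, smoothness, independence) generally require iterative rather than closed-form updates.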
Another approach is to apply a standard Tucker decomposition method, without imposing any of the desired constraints, using ALS [1], CUR [25] or HOOI/HOSVD [15, 20] algorithms, and in the next step to apply to each factor matrix a standard constrained matrix factorization algorithm (e.g., ICA, NMF or SCA) [21, 26]. Assuming that each unconstrained factor $\mathbf{A}^{(n)}$ in the model (5) can be further factorized using standard BSS algorithms as $\mathbf{A}^{(n)} \cong \mathbf{B}^{(n)} \mathbf{X}^{(n)}$, we can formulate the following decomposition model:

$$\underline{\mathbf{Y}} \cong \underline{\mathbf{G}} \times_1 \left(\mathbf{B}^{(1)}\mathbf{X}^{(1)}\right) \times_2 \left(\mathbf{B}^{(2)}\mathbf{X}^{(2)}\right) \cdots \times_N \left(\mathbf{B}^{(N)}\mathbf{X}^{(N)}\right) \qquad (9)$$

$$= \underline{\widetilde{\mathbf{G}}} \times_1 \mathbf{B}^{(1)} \times_2 \mathbf{B}^{(2)} \cdots \times_N \mathbf{B}^{(N)}, \qquad (10)$$

where $\underline{\widetilde{\mathbf{G}}} = \underline{\mathbf{G}} \times_1 \mathbf{X}^{(1)} \times_2 \mathbf{X}^{(2)} \cdots \times_N \mathbf{X}^{(N)}$. It is worth noting that, in each mode, we can apply a different criterion for the matrix factorization.
Alternatively, a simpler approach is to perform the unfolding of the data tensor $\underline{\mathbf{Y}}$ for each mode $n$, according to the Tucker-1 decompositions (7), and to apply directly a suitable constrained matrix factorization (or a penalized matrix decomposition)

$$\mathbf{Y}_{(n)} \cong \mathbf{A}^{(n)} \, \mathbf{X}^{(n)}, \qquad (n = 1, 2, \ldots, N), \qquad (11)$$

subject to the desired constraints, by employing standard efficient BSS algorithms (e.g., NMF or ICA) [1, 26]. (Unfortunately, this approach does not guarantee the best fit of the model to the data or a minimum norm of the errors, but the solutions are usually close to optimal. Note that, in practice, for large-scale problems we do not need to perform the unfolding of the full data tensor explicitly; instead, we may apply fast sampling of tubes (columns) of the data tensor [25].) Since some matrices $\mathbf{Y}_{(n)}$ may have large dimensions, we usually need to apply efficient methods for dimensionality reduction [26]. Finally, the core tensor, which shows the links among the components in different modes, can be computed as

$$\underline{\mathbf{G}} = \underline{\mathbf{Y}} \times_1 \mathbf{A}^{(1)\,\dagger} \times_2 \mathbf{A}^{(2)\,\dagger} \cdots \times_N \mathbf{A}^{(N)\,\dagger}, \qquad (12)$$

where $\dagger$ denotes the Moore–Penrose pseudoinverse of a matrix.
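The core-tensor computation of Eq. (12) amounts to applying the pseudoinverse of each factor matrix in its mode. The following NumPy sketch (ours) verifies on synthetic data that this exactly recovers a known core when the factors have full column rank:

```python
import numpy as np

def mode_n_product(G, A, n):
    return np.moveaxis(np.tensordot(A, G, axes=(1, n)), 0, n)

def core_from_factors(Y, factors):
    """Recover the core tensor by applying the Moore-Penrose pseudoinverse
    of each factor matrix in its own mode (Eq. (12))."""
    G = Y
    for n, A in enumerate(factors):
        G = mode_n_product(G, np.linalg.pinv(A), n)
    return G

# Sanity check: build a Tucker tensor with a known core, then recover it.
np.random.seed(0)
G_true = np.random.rand(2, 3, 4)
factors = [np.random.rand(5, 2), np.random.rand(6, 3), np.random.rand(7, 4)]
Y = G_true
for n, A in enumerate(factors):
    Y = mode_n_product(Y, A, n)

print(np.allclose(core_from_factors(Y, factors), G_true))  # True
```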
Finally, we can apply the Block Tensor Decomposition (BTD) approach [3], with suitable constraints imposed on all, or only on some preselected, factors or components. In this case, the simplest scenario is to employ a constrained Block-Oriented Decomposition (BOD) model (combining or averaging Tucker-1 models) [1]:

$$\underline{\mathbf{Y}} \cong \frac{1}{N} \sum_{n=1}^{N} \underline{\mathbf{G}}_n \times_n \mathbf{A}^{(n)}. \qquad (13)$$

This model allows us to estimate all desired components in parallel or sequentially by minimizing the norm of the total error $\underline{\mathbf{E}} = \underline{\mathbf{Y}} - \underline{\widehat{\mathbf{Y}}}$, subject to specific constraints.
3 Dimensionality Reduction, Feature Extraction and Classification of Multiway Data
Dimensionality reduction, feature extraction and selection are essential problems in the analysis of multidimensional datasets with a large number of variables [14]. We shall first illustrate the basic concepts of dimensionality reduction and feature extraction for a set of large-scale sample matrices. Let us assume that we have available a set of matrices (2D samples) $\mathbf{Y}_k \in \mathbb{R}^{I_1 \times I_2}$ $(k = 1, 2, \ldots, K)$ that represent multi-subject or multi-trial 2D data belonging to different classes or categories (e.g., multiple mental task/state data or different mental diseases). In order to perform dimensionality reduction and to extract essential features for all the training samples, we apply simultaneous (approximative and constrained) matrix factorizations (see Fig. 2 (a)):

$$\mathbf{Y}_k \cong \mathbf{A} \, \mathbf{G}_k \, \mathbf{B}^T, \qquad (k = 1, 2, \ldots, K), \qquad (14)$$

where the two common factors (basis matrices) $\mathbf{A} \in \mathbb{R}^{I_1 \times J_1}$ and $\mathbf{B} \in \mathbb{R}^{I_2 \times J_2}$ code (explain) each sample $\mathbf{Y}_k$ simultaneously along the horizontal and vertical dimensions, and the extracted features are represented by the matrices $\mathbf{G}_k \in \mathbb{R}^{J_1 \times J_2}$, typically with $J_1 \ll I_1$ and $J_2 \ll I_2$. In special cases the $\mathbf{G}_k$ are square diagonal matrices. This problem can be considered as a generalization of Joint Approximate Diagonalization (JAD) [1, 27].
The common method for solving such matrix factorization problems is to minimize a set of cost functions, sequentially or in parallel, with respect to all the factor matrices. We can solve the problem more efficiently by concatenating (tensorizing) all matrices $\mathbf{Y}_k$ along the third dimension to form a three-dimensional data tensor and performing the standard Tucker decomposition (see Fig. 2 (b)).
In order to obtain meaningful components and a unique decomposition, it is often convenient to impose additional constraints [1, 14]. In the special case when the feature matrices $\mathbf{G}_k$ are diagonal, the Tucker-2 model reduces to a special form of the unique CP model [1].
This approach can be naturally and quite straightforwardly extended to multidimensional data. Assume that we have available a set of multidimensional tensors $\underline{\mathbf{Y}}_k \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ $(k = 1, 2, \ldots, K)$, representing training data belonging to different classes or categories. Each training sample $\underline{\mathbf{Y}}_k$ is given a label indicating the category (class) to which it belongs.
In order to perform dimensionality reduction and to extract significant features, we apply simultaneous approximative tensor decompositions (see Fig. 3):

$$\underline{\mathbf{Y}}_k \cong \underline{\mathbf{G}}_k \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \cdots \times_N \mathbf{A}^{(N)}, \qquad (k = 1, 2, \ldots, K), \qquad (15)$$

where the compressed core tensors $\underline{\mathbf{G}}_k \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_N}$ representing the features are of much lower dimension than the original data tensors $\underline{\mathbf{Y}}_k$, and the factors (basis matrices) $\mathbf{A}^{(n)} \in \mathbb{R}^{I_n \times J_n}$ are assumed to be common for all data tensors.
To compute the factor matrices $\mathbf{A}^{(n)}$ and the core tensors $\underline{\mathbf{G}}_k$, we concatenate all training (sample) tensors into one $(N{+}1)$-order training data tensor $\underline{\mathbf{Y}} \in \mathbb{R}^{I_1 \times \cdots \times I_N \times K}$ and perform its Tucker decomposition [14]:

$$\underline{\mathbf{Y}} \cong \underline{\mathbf{G}} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \cdots \times_N \mathbf{A}^{(N)}, \qquad (16)$$

where the sample tensors $\underline{\mathbf{Y}}_k$ can be extracted back from the concatenated tensor by fixing the $(N{+}1)$-th index at a value $k$, and the individual features (corresponding to different classes) are extracted from the concatenated core tensor $\underline{\mathbf{G}} \in \mathbb{R}^{J_1 \times \cdots \times J_N \times K}$ in the same way. In other words, the features of a specific training sample $\underline{\mathbf{Y}}_k$ are represented by the $k$-th row of the mode-$(N{+}1)$ matricized version $\mathbf{G}_{(N+1)}$ of the core tensor $\underline{\mathbf{G}}$.
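The concatenate-and-decompose procedure can be sketched in NumPy for two-way samples (a hypothetical toy example of ours: the common bases come from plain truncated SVDs of the unfoldings rather than a full constrained Tucker algorithm, and `labels` and `X_test` are made up for illustration):

```python
import numpy as np

np.random.seed(0)

def unfold(Y, n):
    return np.moveaxis(Y, n, 0).reshape(Y.shape[n], -1)

# K training samples of size I1 x I2, stacked along a third (sample) mode.
K, I1, I2, r1, r2 = 20, 16, 12, 3, 4
samples = np.random.rand(K, I1, I2)
Y = np.moveaxis(samples, 0, 2)                       # shape (I1, I2, K)

# Common bases: leading left singular vectors of the first two unfoldings.
A1 = np.linalg.svd(unfold(Y, 0), full_matrices=False)[0][:, :r1]
A2 = np.linalg.svd(unfold(Y, 1), full_matrices=False)[0][:, :r2]

# Core G = Y x_1 A1^T x_2 A2^T (identity in the sample mode); the feature
# vector of sample k is the k-th row of the sample-mode unfolding of G.
G = np.tensordot(np.tensordot(Y, A1, axes=(0, 0)), A2, axes=(0, 0))  # (K, r1, r2)
features = G.reshape(K, -1)

# Classify an unlabeled test sample: project it onto the same bases and
# compare with the training features (1-nearest-neighbour).
labels = np.random.randint(0, 3, size=K)             # hypothetical class labels
X_test = samples[5] + 0.01 * np.random.rand(I1, I2)  # noisy copy of sample 5
g_test = (A1.T @ X_test @ A2).ravel()
pred = labels[np.argmin(np.linalg.norm(features - g_test, axis=1))]
print(features.shape, pred == labels[5])  # (20, 12) True
```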
The above procedure for feature extraction has been applied to a wide class of classification problems, as illustrated in Fig. 4 (a) [14]. In the first stage, we extracted features and estimated the basis matrices for the concatenated tensor of all training data with labels, and we built a projection filter to extract the features of the test data (without labels). In the next step, we extracted the features of the test data using the estimated common bases. The extracted features were then compared with the training features using a standard classifier, e.g., LDA, KNN or SVM [14]. We applied the procedure described above to various feature extraction and classification problems [14]. For example, we applied this procedure to successfully classify three groups of human subjects: age-matched control subjects (CTRL), Mild Cognitive Impairment (MCI) patients, and probable Alzheimer's disease (AD) patients (see Fig. 4 (b)) on the basis of spontaneous, follow-up EEG data [28, 14]. All the training EEG data were organized in the form of three-way tensors using a time-frequency representation obtained by applying Morlet wavelets. Each frontal slice represented the preprocessed data of one subject. We applied the standard Tucker decomposition using the HOOI and NTD algorithms to extract the basis matrices and the reduced features represented by a core tensor. The new test data, in the form of matrices, were projected via the basis matrices to extract features. The extracted test features were compared with the training features, and classification was performed using LDA, KNN and SVM. We obtained quite promising classification accuracies in predicting AD, ranging from 82% to 96% depending on the EEG data set.

4 Linked Multiway BSS/ICA
In neuroscience, we often need to perform a so-called group analysis, which seeks to identify mental-task- or stimulus-driven brain patterns that are common to two or more subjects in a group.
Various group ICA methods have been proposed that combine the results across multiple subjects and tasks only after ICA has been carried out on the individual subjects; usually no constraints are imposed on the components, which are allowed to differ with respect to their spatial and spectral maps, as well as their temporal patterns. However, in many scenarios some links need to be considered to analyze the variability and consistency of the components across subjects. Furthermore, some components need not necessarily be independent; they can instead be sparse, smooth or nonnegative (e.g., spectral components). Moreover, it is often necessary to impose constraints in order to estimate components that are identical or maximally correlated across subjects with regard to their spatial distributions, spectral or temporal patterns. This leads to a new concept and model of linked multiway BSS (or linked multiway ICA, if statistical independence criteria are used).
In linked multiway BSS, we perform approximative decompositions of a set of data tensors $\underline{\mathbf{Y}}^{(s)} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ $(s = 1, 2, \ldots, S)$, representing multiple subjects and/or multiple tasks (see Fig. 5):

$$\underline{\mathbf{Y}}^{(s)} = \underline{\mathbf{G}}^{(s)} \times_1 \mathbf{A}^{(1,s)} \times_2 \mathbf{A}^{(2,s)} \cdots \times_N \mathbf{A}^{(N,s)} + \underline{\mathbf{E}}^{(s)}, \qquad (17)$$

where each factor (basis matrix) $\mathbf{A}^{(n,s)} = [\mathbf{A}^{(n)}_C, \; \mathbf{A}^{(n,s)}_I]$ is composed of two bases: $\mathbf{A}^{(n)}_C$, which are common bases for all subjects in the group and correspond to identical or maximally correlated components, and $\mathbf{A}^{(n,s)}_I$, which correspond to stimuli/task-independent individual characteristics.
For example, $\underline{\mathbf{Y}}^{(s)}$ may denote the brain activity in the space-time-frequency domain for the $s$-th subject. Based on a set of such data, we can compute the common factors and the interactions between them. For example, the linked multiway BSS approach may reveal couplings of brain regions, possibly at different time slots and/or different frequency bins.
To solve problems formulated this way, we can apply procedures similar to those described in the previous two sections. If all components in a specific mode $n$ are common, then we can concatenate all data tensors along this mode, perform a tensor decomposition, and apply any standard BSS algorithm to compute the desired components in that mode (e.g., to estimate independent components, we apply any standard ICA algorithm). In the more general case, when the numbers of common components are unknown, we perform an additional unfolding of each data tensor in each mode and then perform a set of constrained matrix factorizations (by applying standard algorithms for ICA, NMF, SCA, SmoCA, PCA/SVD, etc.):

$$\mathbf{Y}^{(s)}_{(n)} \cong \mathbf{A}^{(n,s)} \, \mathbf{X}^{(n,s)}, \qquad (18)$$

for $n = 1, 2, \ldots, N$ and $s = 1, 2, \ldots, S$, where the matrices $\mathbf{X}^{(n,s)}$ represent the individual linked components, while the matrices $\mathbf{A}^{(n,s)}$ represent basis vectors or linked mixing processes.
In the next stage, we need to perform a statistical analysis and compare the individual components extracted in each mode $n$ for all subjects $s$, performing clustering and ordering of the components to identify common or similar (highly correlated) components (see, e.g., [1, 17]). In the last stage, we compute the core tensors, which describe the functional links between the components (see Eq. (12)).
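The matching of components across subjects can be sketched as follows (an illustrative toy example of ours): columns of per-subject component matrices are compared by absolute correlation, which is invariant to the sign and permutation ambiguities inherent in BSS.

```python
import numpy as np

def match_components(X1, X2):
    """Match columns of X1 to columns of X2 by maximal absolute correlation,
    which is invariant to the sign/permutation ambiguities of BSS."""
    Z1 = (X1 - X1.mean(axis=0)) / X1.std(axis=0)
    Z2 = (X2 - X2.mean(axis=0)) / X2.std(axis=0)
    C = np.abs(Z1.T @ Z2) / X1.shape[0]       # |correlation| for all pairs
    return C.argmax(axis=1), C.max(axis=1)

rng = np.random.default_rng(0)
S = rng.standard_normal((500, 3))             # three shared components
X1 = S                                        # subject 1
X2 = -S[:, [2, 0, 1]]                         # subject 2: permuted, sign-flipped

idx, corr = match_components(X1, X2)
print(idx, corr)  # [1 2 0] [1. 1. 1.]
```

In practice the matched correlations are below one, and a threshold decides which components are declared common across the group.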
For linked multiway BSS, we can also exploit constrained Block Tensor Decomposition (BTD) models for a data tensor averaged across subjects:

$$\overline{\underline{\mathbf{Y}}} = \frac{1}{S} \sum_{s=1}^{S} \underline{\mathbf{Y}}^{(s)} \cong \sum_{r=1}^{R} \underline{\mathbf{G}}_r \times_1 \mathbf{A}^{(1)}_r \times_2 \mathbf{A}^{(2)}_r \cdots \times_N \mathbf{A}^{(N)}_r. \qquad (19)$$

Such a model can provide us with additional information.
In group and linked multiway BSS and ICA, we usually seek stimulus-driven event-related potentials (ERPs) and/or task-related common components, or common bases reflecting both intra-subject and inter-subject features, which are independent of the individual ongoing brain activities [29]. In other words, we seek event/task-related components that are identical in each mode, or maximally correlated across subjects, and event/task-independent individual bases, which are independent or even as far apart as possible (anti-correlated) [29].
Note that our linked multiway BSS is different from linked ICA [23] and group NMF [29, 30], since we do not limit component diversity by constraining the components to be only statistically independent or only nonnegative, and our model is based on constrained Tucker models instead of the rather restrictive trilinear CP model.
How to select the common components will depend on the validity of the underlying assumptions and on a priori knowledge. For instance, identical spatial distributions may well be assumed for homogeneous subject groups, but may be unacceptable in studies that include patients of different ages or with different mental diseases or abnormalities. Temporal components may be the same for stimulus- or task-evoked responses that are related to a specific mental task or paradigm, but they will vary for the spontaneous fluctuations that occur in resting-state experiments. In some experiments, responses may be assumed similar or identical within, but different across, subgroups [18, 17].
We conclude that the proposed linked BSS model provides a framework that is very flexible and general, and it may substantially supplement many of the currently available techniques for group ICA and feature extraction models. Moreover, the model can be extended to a multiway Canonical Correlation Analysis (MCCA), in which we impose maximal correlations among normalized factor matrices (or subsets of components) and/or core tensors.
5 Multiway Partial Least Squares
The Partial Least Squares (PLS) methods, originally developed in chemometrics and econometrics, are particularly suitable for the analysis of relationships among multimodal brain data (e.g., EEG, MEG, ECoG (electrocorticogram), fMRI) or relationships between measures of brain activity and behavioral data [31, 32]. The standard PLS approaches have recently been summarized by H. Abdi [31], and their suitability for modeling relationships between brain activity and behavior (experimental design) has been highlighted by A. Krishnan et al. [32].
In computational neuroscience, there are two related basic PLS methods: PLS correlation (PLSC), which analyzes correlations or associations between two or more sets of data (e.g., two modalities of brain data, or brain and behavioral data), and PLS regression (PLSR), which attempts to predict one set of data from another, independent, set that constitutes the predictors (e.g., predicting experimental behavioral data from brain data such as multichannel ECoG, or scalp EEG from ECoG, using simultaneous recordings from epileptic patients).
In order to predict response variables represented by a matrix $\mathbf{Y}$ from the independent variables $\mathbf{X}$, the standard PLSR techniques find a set of common orthogonal latent variables (also called latent vectors, score vectors or components) by projecting both $\mathbf{X}$ and $\mathbf{Y}$ onto a new subspace that ensures maximal correlation between the latent variables of $\mathbf{X}$ and $\mathbf{Y}$ [31]. In other words, the prediction is achieved by simultaneous approximative decompositions of the training data sets $\mathbf{X}$ and $\mathbf{Y}$ into components:

$$\mathbf{X} = \mathbf{T}\mathbf{P}^T + \mathbf{E}, \qquad (20)$$

$$\mathbf{Y} = \mathbf{T}\mathbf{D}\mathbf{C}^T + \mathbf{F}, \qquad (21)$$

where $\mathbf{D}$ is a scaling diagonal matrix, with the constraint that these components explain as much as possible of the covariance between $\mathbf{X}$ and $\mathbf{Y}$ [31, 33]. The latent components are defined as $\mathbf{T} = \mathbf{X}\mathbf{W}$, where $\mathbf{W}$ consists of direction vectors $\mathbf{w}_j$. The basic concept of standard PLS is to find these direction vectors from successive optimization problems of the form

$$\mathbf{w}_j = \arg\max_{\|\mathbf{w}\| = 1} \; \operatorname{cov}\!\left(\mathbf{X}_j \mathbf{w}, \, \mathbf{Y}\right),$$

for $j = 1, 2, \ldots$, where $\mathbf{X}_j$ denotes the data matrix deflated by the previously extracted components.
Such decompositions, and the relationships between the components, can be used to predict the values of the dependent variables in new situations, given the values of the corresponding independent variables.
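A minimal sketch of this idea for a single response vector is a NIPALS-style PLS1 loop (our illustration, not the paper's algorithm): each direction vector maximizes the covariance between the latent score and the current (deflated) response.

```python
import numpy as np

def pls1(X, y, n_components):
    """NIPALS-style PLS1: each direction vector w maximizes the covariance
    between the latent score t = X w and the (deflated) response y."""
    W, T, P, q = [], [], [], []
    Xk, yk = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # direction of maximal covariance
        t = Xk @ w                      # latent score vector
        p = Xk.T @ t / (t @ t)          # X-loading used for deflation
        c = yk @ t / (t @ t)            # regression weight for y
        Xk = Xk - np.outer(t, p)        # remove the part of X explained by t
        yk = yk - c * t                 # remove the part of y explained by t
        W.append(w); T.append(t); P.append(p); q.append(c)
    return np.array(W).T, np.array(T).T, np.array(P).T, np.array(q)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.3, -1.0])   # noiseless linear response

W, T, P, q = pls1(X, y, n_components=5)
y_hat = T @ q
print(np.linalg.norm(y - y_hat))  # ~0: full-rank PLS1 reproduces the OLS fit
```

The score vectors in `T` are mutually orthogonal by construction, which is the property the multiway PLS models of this section generalize to tensor-valued data.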
Since brain data are often multidimensional and multimodal and can be naturally represented by tensors, attempts have been made to extend the PLS methods to multiway models [33]. In this section, we briefly describe multiway PLS based on a constrained (penalized) Tucker model: we are given an $N$th-order independent data tensor $\underline{\mathbf{X}}$ and an $M$th-order dependent data tensor $\underline{\mathbf{Y}}$, with the same size in at least one mode, typically the first. (The first mode usually corresponds to the samples mode or the time mode. However, in some applications, for example simultaneous recordings of EEG and ECoG data, two modes may have the same size, representing time and frequency, i.e., the temporal and spectral modes, respectively.)
Our objective is to perform a simultaneous constrained Tucker decomposition, with at least one (or two) common or maximally correlated factor(s) (see Fig. 6):

$$\underline{\mathbf{X}} = \underline{\mathbf{G}}_X \times_1 \mathbf{T} \times_2 \mathbf{P}^{(2)} \cdots \times_N \mathbf{P}^{(N)} + \underline{\mathbf{E}}, \qquad (22)$$

$$\underline{\mathbf{Y}} = \underline{\mathbf{G}}_Y \times_1 \mathbf{T} \times_2 \mathbf{C}^{(2)} \cdots \times_M \mathbf{C}^{(M)} + \underline{\mathbf{F}}, \qquad (23)$$

where the additional constraint is that the factor $\mathbf{T}$ in the first mode of $\underline{\mathbf{X}}$ is common with (or maximally correlated to) the corresponding factor of $\underline{\mathbf{Y}}$, while the other factor matrices are essentially different (e.g., orthogonal or mutually independent). Note that the core tensors $\underline{\mathbf{G}}_X$ and $\underline{\mathbf{G}}_Y$ have special block-diagonal structures (see Fig. 6), which indicates sparseness. New algorithms for constrained Tucker-based multiway PLS models have been developed in [33].
Such models allow for different types of structure in $\underline{\mathbf{X}}$ and $\underline{\mathbf{Y}}$ and provide a general framework for solving multiway regression problems that explore complex relationships between multidimensional dependent and independent variables. For example, tensor decompositions can be applied in emerging neuroimaging-genetics studies to investigate links between biological parameters measured with brain imaging and genetic variability [34].
6 Conclusions
Tensor decomposition is a fascinating, emerging field of research, with many applications in multilinear blind source separation, linked multiway BSS/ICA, feature extraction, classification and prediction. In this review/tutorial paper, we have briefly discussed several new and promising models for the decomposition of linked or correlated sets of tensors, obtained by imposing various constraints on the factors, components or latent variables, depending on the specific application. We have developed several promising and efficient algorithms for such models, which can be found in our recent book, publications and reports.
References
 [1] A. Cichocki, R. Zdunek, A.H. Phan, and S. Amari, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multiway Data Analysis and Blind Source Separation, Wiley, Chichester, 2009.
 [2] T. Kolda and B. Bader, “Tensor decompositions and applications,” SIAM Review, vol.51, no.3, pp.455–500, 2009.
 [3] L. De Lathauwer, “Decompositions of a higher-order tensor in block terms – Parts I and II,” SIAM Journal on Matrix Analysis and Applications (SIMAX), vol.30, no.3, pp.1022–1066, 2008.
 [4] F. Hitchcock, “Multiple invariants and generalized rank of a p-way matrix or tensor,” Journal of Mathematics and Physics, vol.7, pp.39–79, 1927.
 [5] L. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol.31, pp.279–311, 1966.
 [6] A. Cichocki, “Generalized Component Analysis and Blind Source Separation Methods for Analyzing Multichannel Brain Signals,” in Statistical and Process Models for Cognitive Neuroscience and Aging, pp.201–272, Lawrence Erlbaum Associates, 2007.
 [7] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing, John Wiley & Sons Ltd, New York, 2003.
 [8] A. Cichocki and A.H. Phan, “Fast local algorithms for large scale nonnegative matrix and tensor factorizations,” IEICE Transactions on Fundamentals (invited paper), vol.E92-A, no.3, pp.708–721, March 2009.
 [9] A. Cichocki, R. Zdunek, and S.I. Amari, “Hierarchical ALS algorithms for nonnegative matrix and 3D tensor factorization,” Springer, Lecture Notes on Computer Science, LNCS4666, pp.169–176, 2007.
 [10] M. Mørup, L.K. Hansen, C.S. Herrmann, J. Parnas, and S.M. Arnfred, “Parallel factor analysis as an exploratory tool for wavelet transformed event-related EEG,” NeuroImage, vol.29, no.3, pp.938–947, 2006.
 [11] E. Acar, D.M. Dunlavy, T.G. Kolda, and M. Mørup, “Scalable tensor factorizations for incomplete data,” Chemometrics and Intelligent Laboratory Systems, vol.106 (1), pp.41–56, 2011.
 [12] F. Miwakeichi, E. Martínez-Montes, P. Valdés-Sosa, N. Nishiyama, H. Mizuhara, and Y. Yamaguchi, “Decomposing EEG data into space-time-frequency components using parallel factor analysis,” NeuroImage, vol.22, no.3, pp.1035–1045, 2004.
 [13] A. Cichocki, R. Zdunek, S. Choi, R. Plemmons, and S. Amari, “Nonnegative tensor factorization using Alpha and Beta divergences,” Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP07), Honolulu, Hawaii, USA, pp.1393–1396, April 15–20, 2007.
 [14] A.H. Phan and A. Cichocki, “Tensor decompositions for feature extraction and classification of high dimensional datasets,” Nonlinear Theory and Its Applications, NOLTA IEICE (invited paper), vol.1, pp.37–68, 2010.
 [15] L. De Lathauwer, B.D. Moor, and J. Vandewalle, “On the best rank-1 and rank-(R1,R2,…,RN) approximation of higher-order tensors,” SIAM Journal on Matrix Analysis and Applications, vol.21, no.4, pp.1324–1342, 2000.
 [16] V. Calhoun, J. Liu, and T. Adali, “A review of group ICA for fMRI data and ICA for joint inference of imaging, genetic, and ERP data,” Neuroimage, vol.45, pp.163–172, 2009.
 [17] Y. Guo and G. Pagnoni, “A unified framework for group independent component analysis for multi-subject fMRI data,” Neuroimage, vol.42, no.3, pp.1078–1093, 2008.
 [18] D. Langers, “Unbiased group-level statistical assessment of independent component maps by means of automated retrospective matching,” Human Brain Mapping, vol.31, pp.727–742, 2010.
 [19] C. Beckmann and S. Smith, “Tensorial extensions of independent component analysis for multi-subject fMRI analysis,” NeuroImage, vol.25, no.1, pp.294–311, 2005.
 [20] M. Vasilescu and D. Terzopoulos, “Multilinear independent components analysis,” Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp.547–553, 2005.
 [21] S. Unkel, A. Hannachi, N. Trendafilov, and I. Jolliffe, “Independent component analysis for three-way data with an application from atmospheric science,” Journal of Agricultural, Biological, and Environmental Statistics (in press), 2011.
 [22] M. De Vos, D. Nion, S. Van Huffel, and L. De Lathauwer, “A combination of parallel factor and independent component analysis,” tech. rep., 2011.
 [23] A. Groves, C. Beckmann, S. Smith, and M. Woolrich, “Linked independent component analysis for multimodal data fusion,” NeuroImage, vol.54, no.1, pp.2198–2217, 2011.
 [24] I. Kopriva, I. Jeric, and A. Cichocki, “Blind decomposition of infrared spectra using flexible component analysis,” Chemometrics and Intelligent Laboratory Systems, vol.97, pp.170–178, 2009.
 [25] C. Caiafa and A. Cichocki, “Generalizing the column-row matrix decomposition to multi-way arrays,” Linear Algebra and its Applications, vol.433, no.3, pp.557–573, 2010.
 [26] G. Zhou and A. Cichocki, “Fast and unique Tucker decompositions via Multiway Blind Source Separation,” 2011 (submitted).
 [27] L. De Lathauwer, “A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization,” SIAM Journal on Matrix Analysis and Applications, vol.28, pp.642–666, 2006.
 [28] A. Cichocki, S. Shishkin, T. Musha, Z. Leonowicz, T. Asada, and T. Kurachi, “EEG filtering based on blind source separation (BSS) for early detection of Alzheimer disease,” Clinical Neurophysiology, vol.116, pp.729–737, 2005.
 [29] H. Lee and S. Choi, “Group nonnegative matrix factorization for EEG classification,” in Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS 2009), Clearwater Beach, Florida, April 16–18, 2009.
 [30] H. Lee, Y.D. Kim, A. Cichocki, and S. Choi, “Nonnegative tensor factorization for continuous EEG classification,” International Journal of Neural Systems, vol.17, no.4, pp.1–13, August 2007.
 [31] H. Abdi, “Partial least square regression, projection on latent structure regression, PLS-regression,” Wiley Interdisciplinary Reviews: Computational Statistics, vol.2, pp.97–106, 2010.
 [32] A. Krishnan, L. Williams, H. McIntosh, and H. Abdi, “Partial Least Squares (PLS) methods for neuroimaging: A tutorial and review,” NeuroImage, vol.56, pp.455–475, 2011.
 [33] Q. Zhao, C. Caiafa, Z. Chao, Y. Nagasaka, D. Mandic, N. Fujii, L. Zhang, and A. Cichocki, “Higher-order partial least squares (HOPLS): A generalized multilinear regression method,” IEEE Transactions on Pattern Analysis and Machine Intelligence (submitted), 2012.
 [34] J. Poline, C. Lalanne, A. Tenenhaus, E. Duchesnay, B. Thirion, and V. Frouin, “Imaging genetics: bioinformatics and biostatistics challenges,” in Proceedings of the 19th International Conference on Computational Statistics, Paris, France, 2010.