1 Introduction
It is imperative to incorporate latent variables in any modeling framework. Latent variables can capture the effect of hidden causes that are not directly observed. Learning these hidden factors is central to many applications, e.g., identifying latent diseases through observed symptoms, identifying latent communities through observed social ties, and so on. Moreover, latent variable models (LVMs) can provide an efficient representation of the observed data, and learning these representations can lead to improved performance on various tasks such as classification. The recent performance gains in domains such as speech and computer vision can be largely attributed to efficient representation learning
(Bengio et al., 2012). Moreover, it has been shown that learning overcomplete representations is crucial to achieving these impressive gains (Coates et al., 2011b). In an overcomplete representation, the dimensionality of the latent space exceeds the observed dimensionality. Overcomplete representations are known to be more robust to noise, and can provide greater flexibility in modeling (Lewicki and Sejnowski, 2000). Although overcomplete representations have led to huge performance gains in practice, theoretical guarantees for learning them are mostly lacking. In many domains, we face the challenging task of unsupervised or semi-supervised learning, since it is expensive to obtain labeled samples and we typically have access to a large number of unlabeled samples, e.g., (Coates et al., 2011b; Le et al., 2011). Therefore, it is imperative to develop novel guaranteed methods for efficient unsupervised/semi-supervised learning of overcomplete models.
In this paper, we bridge the gap between theory and practice, and establish that a wide range of overcomplete LVMs can be learned efficiently through simple spectral learning techniques. We perform spectral decomposition of the higher order moment tensors (estimated using unlabeled samples) to obtain the model parameters. A recent line of work has shown that tensor decompositions can be employed for unsupervised learning of a wide range of LVMs, e.g., independent components
(De Lathauwer et al., 2007), topic models, Gaussian mixtures, hidden Markov models
(Anandkumar et al., 2012a), network community models (Anandkumar et al., 2013b), and so on. The method involves the decomposition of a multivariate moment tensor, and is guaranteed to provide a consistent estimate of the model parameters. The sample and computational requirements of the tensor method are only a low order polynomial in the latent dimensionality (Anandkumar et al., 2012a; Song et al., 2013). However, a major drawback of these works is that they mostly consider the undercomplete setting, where the latent dimensionality cannot exceed the observed dimensionality.

In this work, we establish guarantees for tensor decomposition in learning overcomplete LVMs, such as multiview mixtures, independent component analysis, Gaussian mixtures and sparse coding models. Note that learning general overcomplete models is ill-posed, since the latent dimensionality exceeds the observed dimensionality. We impose a natural incoherence condition on the components, which can be viewed as a soft orthogonality constraint that limits the redundancy among the components. We establish that this constraint not only makes learning well-posed, but also enables efficient learning through tensor methods. Incoherence constraints are natural in the overcomplete regime, and have been considered before, e.g., in compressed sensing (Donoho, 2006), independent component analysis (Le et al., 2011), and sparse coding (Arora et al., 2013; Agarwal et al., 2013).

1.1 Summary of results
In this paper, we provide semi-supervised and unsupervised learning guarantees for LVMs such as multiview mixtures, Independent Component Analysis (ICA), Gaussian mixtures and sparse coding models. For the learning algorithm, we exploit the tensor decomposition algorithm of Anandkumar et al. (2014), which performs alternating asymmetric power updates on the input tensor modes (or symmetric power updates if the input tensor is symmetric). In the semi-supervised setting, we establish that highly overcomplete models can be learned efficiently through tensor decomposition methods. The moment tensors are constructed using unlabeled samples, and the labeled samples are used to provide a rough initialization for the tensor decomposition algorithm. In the unsupervised setting, we propose a simple initialization strategy for the tensor method, and require stricter conditions on the extent of overcompleteness for guaranteed learning. In addition, we provide tight concentration bounds on the empirical tensors through novel covering arguments, which imply efficient sample complexity bounds for learning using the tensor method.
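As a minimal sketch (ours, not the authors' exact implementation), the alternating asymmetric power update for a third-order tensor cycles the rule $a \leftarrow T(I, b, c)/\|T(I, b, c)\|$ over the three modes; the function name and parameters below are our own choices.

```python
import numpy as np

def alternating_power_updates(T, a0, b0, c0, iters=50):
    """Alternating asymmetric rank-1 power updates on a 3rd-order tensor T:
    a <- T(I, b, c) / ||T(I, b, c)||, then cyclically for b and c."""
    a = a0 / np.linalg.norm(a0)
    b = b0 / np.linalg.norm(b0)
    c = c0 / np.linalg.norm(c0)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c)   # T(I, b, c): combination of mode-1 fibers
        a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c)   # T(a, I, c)
        b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)   # T(a, b, I)
        c /= np.linalg.norm(c)
    weight = np.einsum('ijk,i,j,k->', T, a, b, c)   # estimated weight T(a, b, c)
    return weight, a, b, c
```

On a noiseless rank-1 tensor a single sweep already converges; for rank-k tensors the algorithm is run from many initializations, which is exactly where the labeled samples (semi-supervised) or the initialization strategy (unsupervised) enter.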
We now summarize the results for learning a multiview mixtures model with incoherent components. (We use the term incoherence to mean that the deterministic condition in the appendix of Anandkumar et al. (2014) is satisfied, which imposes soft-orthogonality constraints on the components; that condition is also shown to hold with high probability when the components are drawn i.i.d. uniformly from the unit sphere.) Let $k$ be the number of hidden components, and $d$ be the observed dimensionality. In the semi-supervised setting, we prove guaranteed learning when $k = o(d^{p/2})$, where $p$ is the order of the observed moment employed for tensor decomposition. We prove that in the "low" noise regime (where the norm of the noise is of the same order as that of the component means), an extremely small number of labeled samples for each label is sufficient, independent of the final precision. This is far less than the number of unlabeled samples required. Note that in most applications, labeled samples are expensive or hard to obtain, while many more unlabeled samples are easily available, e.g., see Le et al. (2011); Coates et al. (2011a). Furthermore, we show that our sample complexity bound for unlabeled samples is minimax optimal up to logarithmic factors.
We also provide unsupervised learning guarantees when no labels are available. Here, the initialization is obtained by performing a rank-1 SVD on random slices of the moment tensor. This imposes additional conditions on the rank and the sample complexity. We prove that when $k = \beta d$ (for an arbitrary constant $\beta$, which can be larger than 1), the model parameters can be learned using a polynomial number of initializations and a polynomial number of samples, which is efficient.
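A sketch of this initialization strategy (our illustrative rendering, not the paper's exact procedure): take a random combination of the 3rd-mode slices, $T(I, I, \theta) = \sum_l \theta_l\, T(:, :, l)$, and use its top singular vector pair as one candidate initialization.

```python
import numpy as np

def random_slice_init(T, rng):
    """One candidate initialization from a rank-1 SVD of a random
    combination of slices, T(I, I, theta) = sum_l theta_l * T[:, :, l]."""
    theta = rng.normal(size=T.shape[2])
    theta /= np.linalg.norm(theta)
    S = np.einsum('ijl,l->ij', T, theta)   # random combination of 3rd-mode slices
    U, _, Vt = np.linalg.svd(S)
    return U[:, 0], Vt[0]                  # initial estimates for the (a, b) directions
```

For a rank-1 tensor $a \otimes b \otimes c$, the combined slice equals $\langle c, \theta \rangle\, a b^\top$, so the top singular vectors recover $a$ and $b$ exactly (up to sign); for rank-$k$ tensors, repeating over many random $\theta$ yields the polynomial number of initializations discussed above.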
We also provide semi-supervised and unsupervised learning guarantees for the ICA model. By the semi-supervised setting in ICA, we mean that some prior information is available which provides good initializations (with a constant error on the columns) for the tensor decomposition algorithm. In the semi-supervised setting, we show that the ICA model can be efficiently learned from fourth order moments with a polynomial number of unlabeled samples, even when the number of components grows as $k = o(d^2)$. In the unsupervised setting, we show that the ICA model can be learned in polynomial time under stricter bounds on the extent of overcompleteness.
We also provide learning results for the sparse coding model, when the coefficients are independently drawn from a Bernoulli-Gaussian distribution. Note that this corresponds to a sparse ICA model, since the hidden coefficients are independent. Let $s$ be the expected sparsity level of the hidden variables. In the semi-supervised setting (where prior information gives a good initialization), we require a polynomial number of unlabeled samples for learning, with the bound depending on the sparsity level $s$ rather than on the total number of components $k$. Note that in the special case when $s$ is a constant, the sample complexity is akin to that of learning multiview mixtures, and when $s = \Theta(k)$, it is akin to that of learning the "dense" ICA model. Thus, the sparse coding model bridges the range of models between multiview mixtures and ICA. Furthermore, we also extend the learning results to the dependent sparsity setting, but with worse performance guarantees.

Although we prove strong theoretical guarantees for learning overcomplete models, there are two main caveats to our approach. First, we recover the model parameters only up to an approximation error that decays with the dimension $d$ in the regime $k = o(d^{p/2})$
. This is because the actual mixture components are not stationary points of the tensor algorithm updates (even in the noiseless setting), since the components are not strictly orthogonal. This bias can presumably be removed by performing joint updates (e.g., alternating least squares), where the objective is to fit the learned vectors to the input tensor; we leave this for future study. Second, the setting is not suited for topic models, where there is a non-negativity constraint on the topic-word matrix. Here, incoherence can only be enforced through sparsity, and since our method does not exploit sparsity, we believe that other formulations may be better suited for learning in this setting.
Overview of techniques:
We establish tight concentration bounds for empirical tensors when the samples are drawn from multiview linear mixtures, Gaussian mixtures, ICA or sparse coding models. The concentration bound involves bounding the spectral norm of the error tensor, and this relies on the construction of $\epsilon$-nets to cover all vectors (on the sphere). A naive $\epsilon$-net argument is however too loose, since it results in a large number of vectors without a "fine-grained" distinction between them. A more refined notion is to employ an entropy-concentration tradeoff, as proposed in Rudelson and Vershynin (2009), where the vectors in the net are classified into sparse and dense vectors, which are then analyzed separately. The sparse vectors can result in large correlations, but the number of such vectors is small, while the dense vectors have small correlations, although their number is larger. In our setting, however, this classification is still not enough, and we need a more refined analysis. We group the data samples into "buckets" based on their correlation with a given vector, and bound each "bucket" separately. We impose additional conditions on the factor and noise matrices to bound the size of the buckets.
For multiview linear mixtures, we impose a restricted isometry property (RIP) on the noise matrices and a bounded $\ell_2 \to \ell_3$ norm condition on the factor matrices (which is weaker than RIP). For Gaussian mixtures, the RIP property on the noise is satisfied, and we only require a bounded-norm condition on the matrix of component mean vectors. These constraints allow us to bound the size of the "buckets", where each bucket corresponds to noise or factor vectors with a certain level of correlation with a fixed vector. Intuitively, the number of samples having a high correlation with a fixed vector (i.e., the size of a "bucket") cannot be too large, due to the RIP/bounded $\ell_2 \to \ell_3$ norm constraints. We apply Bernstein's bound on each of these buckets separately and combine them to obtain the final bound. Our construction has only a logarithmic number of buckets (since we vary correlation levels geometrically), and therefore the overall concentration bound only incurs additional logarithmic factors when we combine the results.
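Schematically, the bucketing step can be sketched as follows (our notation, not the paper's exact statement): for a fixed unit vector $u$ and noise vectors $\varepsilon^{(1)}, \dots, \varepsilon^{(n)}$, the sample indices are partitioned by correlation level,

```latex
% Bucketing by correlation level (illustrative sketch, our notation).
[n] \;=\; \bigcup_{t=0}^{T} S_t,
\qquad
S_t \;:=\; \Bigl\{\, i \in [n] \;:\;
   \bigl|\langle u, \varepsilon^{(i)} \rangle\bigr|
   \in \bigl( 2^{-(t+1)}\beta,\; 2^{-t}\beta \bigr] \,\Bigr\},
\qquad T = O(\log d),
```

where $\beta$ is the largest correlation level considered. The RIP (or bounded $\ell_2 \to \ell_3$ norm) condition bounds $|S_t|$ at each level, Bernstein's inequality is applied within each bucket $S_t$, and the union over the $O(\log d)$ buckets costs only logarithmic factors.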
For the ICA model, the conditions and analysis are somewhat different. This is because all the hidden sources "mix" together in each sample, in contrast to the mixture model, where each sample is generated from only one component. Establishing concentration bounds involves two steps, viz., first bounding the fourth order empirical moment of the hidden sources, assuming they are sub-Gaussian and kurtotic (note that while the kurtosis, i.e., the 4th order cumulant, of a Gaussian random variable is zero, the kurtosis of a sub-Gaussian random variable is in general nonzero; the analysis can also be extended to sub-exponential random variables),
and then converting the bound to the observed space. This involves a spectral norm bound on the linear map between the hidden sources and the observations.

We then consider the sparse coding model, where the hidden variables are assumed to be sparsely activated. In the special case when the hidden variables are independent, this corresponds to a sparse ICA model. We derive the concentration bound for Bernoulli-Gaussian variables, assuming that the dictionary satisfies the RIP property (e.g., a Gaussian matrix). In this case, we establish that the concentration bound depends only on the sparsity level, and not on the total number of dictionary elements. Here, we partition the vectors into "buckets" based on their correlation with the dictionary elements, and the RIP property allows us to bound the size of the buckets, as before in the case of multiview mixtures. In addition, we exploit the sparsity of the coefficients to obtain a tighter bound in the sparse coding setting.
Thus, we obtain tight concentration bounds on empirical tensors for multiview and Gaussian mixtures, ICA and sparse coding models. The conditions on the noise (RIP) and factor matrices (bounded $\ell_2 \to \ell_3$ norm) are fairly benign and natural to impose. Our novel bucketing arguments could be applicable in other settings involving matrix and tensor concentration bounds.
We then employ the concentration bounds in conjunction with the alternating rank-1 updates algorithm to obtain learning guarantees for the above models. In our recent work (Anandkumar et al., 2014), we establish local and global convergence guarantees for this algorithm when the components are incoherent. We combine these guarantees with the concentration bounds to establish that a wide range of latent variable models can be learned with low computational and sample complexities.
1.2 Related work
Tensor decomposition for learning undercomplete models:
Several latent variable models can be learned through tensor decomposition, including independent component analysis (De Lathauwer et al., 2007), topic models, Gaussian mixtures, hidden Markov models (Anandkumar et al., 2012a) and network community models (Anandkumar et al., 2013b). In the undercomplete setting, Anandkumar et al. (2012a) analyze robust tensor power iteration for learning LVMs, and Song et al. (2013) extend the analysis to the nonparametric setting. These works require the tensor factors to have full column rank, which rules out overcomplete models. Moreover, they require whitening the input data, and hence the sample complexity depends on the condition number of the factor matrices. For instance, for random factor matrices with $k$ close to $d$, the condition number blows up, and the previous tensor approaches in Song et al. (2013); Anandkumar et al. (2013a) have a significantly worse sample complexity, while our result provides an improved sample complexity assuming incoherent components.
Learning overcomplete models:
In general, learning overcomplete models is challenging, and they may not even be identifiable. The FOOBI procedure of De Lathauwer et al. (2007) shows that a polynomial-time procedure can recover the components of the ICA model (with generic factors) from the fourth order moment even in highly overcomplete regimes. However, the procedure does not work for third-order overcomplete tensors. For the fifth order tensor, Goyal et al. (2013); Bhaskara et al. (2013) perform simultaneous diagonalization on the matricized versions of random slices of the tensor and provide a careful perturbation analysis. But this procedure cannot handle the same level of overcompleteness as FOOBI, since an additional dimension is required for obtaining two (or more) fourth order tensor slices. In addition, Goyal et al. (2013) provide stronger results for ICA, where the tensor slices can be obtained in the Fourier domain. They need a polynomial number of unlabeled samples for learning ICA (where the polynomial factor is not explicitly characterized), while we provide an explicit sample complexity bound. Anderson et al. (2013) convert the problem of learning Gaussian mixtures to an ICA problem and exploit the Fourier PCA method of Goyal et al. (2013)
. More precisely, for a Gaussian mixtures model with known identical covariance matrices, when the number of components is polynomial in the dimension, the model can be learned in polynomial time (as long as a certain non-degeneracy condition is satisfied).

Arora et al. (2013); Agarwal et al. (2013); Barak et al. (2014) provide guarantees for the sparse coding model (also known as the dictionary learning problem). Arora et al. (2013); Agarwal et al. (2013) provide clustering-based approaches for approximately learning incoherent dictionaries and then refining them through alternating minimization to obtain exact recovery of both the dictionary and the coefficients. They can handle a limited sparsity level per sample, while the size of the dictionary can be arbitrary. Barak et al. (2014) consider tensor decomposition and dictionary learning using the sum-of-squares (SOS) method. In contrast to the simple iterative updates considered here, SOS involves solving semidefinite programs. They provide guaranteed recovery in polynomial time when the size of the dictionary and the sparsity level are suitably bounded. They also provide guarantees for sparsity levels up to a (small enough) constant fraction of $d$, but the computational complexity of the algorithm then becomes quasi-polynomial. They can also handle a higher level of overcompleteness at the expense of a reduced sparsity level. They do not require any incoherence conditions on the factor matrices, and they can handle a constant signal-to-noise ratio. Thus, their work has strong guarantees, but at the expense of running a complicated algorithm. In contrast, we consider a simple alternating rank-1 updates algorithm, but require more stringent conditions on the model.
There are other recent works which can learn overcomplete models, but under different settings than the one considered in this paper. Anandkumar et al. (2013c) learn overcomplete sparse topic models, and provide guarantees for Tucker tensor decomposition under sparsity constraints. Specifically, the model is identifiable from higher order moments when the latent dimension and the sparsity level of the factor matrix are suitably bounded in terms of the observed dimension $d$. The Tucker decomposition is more general than the CP decomposition considered here, and the techniques in (Anandkumar et al., 2013c) differ significantly from ours, since they incorporate sparsity, while we incorporate incoherence.
Concentration Bounds:
We obtain tight concentration bounds for empirical tensors in this paper. In contrast, applying matrix concentration bounds, e.g., Tropp (2012), leads to strictly worse bounds, since they require matricizations of the tensor. Latala (2006) provides an upper bound on the moments of the Gaussian chaos, but the result is limited to independent Gaussian distributions (and can be extended to other cases such as the Rademacher distribution). The principle of entropy-concentration tradeoff (Rudelson and Vershynin, 2009), employed in this paper, has been used in other contexts. For instance, Nguyen et al. (2010) provide a spectral norm bound for random tensors. They first apply a symmetrization argument, which reduces the problem to bounding the spectral norm of a random Gaussian tensor, and then employ the entropy-concentration tradeoff to bound its spectral norm. They also exploit bounds on Lipschitz functions of Gaussian random variables. While Nguyen et al. (2010) employ a rough classification of the vectors (to be covered) into dense and sparse vectors, we require a finer classification of the vectors into different "buckets" (based on their inner products with given vectors) to obtain the tight concentration bounds in this paper. Moreover, we do not impose a Gaussian assumption in this paper, and instead require more general conditions such as RIP or bounded $\ell_2 \to \ell_3$ norms.
1.3 Notations and tensor preliminaries
Define $[n] := \{1, 2, \dots, n\}$. Let $\|v\|$ denote the $\ell_2$ norm of a vector $v$, and let the induced $\ell_p \to \ell_q$ norm of a matrix $A$ be defined as $\|A\|_{p \to q} := \sup_{\|x\|_p = 1} \|Ax\|_q$.
Notice that while the standard asymptotic notation is to write $f(d) = O(g(d))$ and $f(d) = \Omega(g(d))$, we sometimes use $f(d) \le O(g(d))$ and $f(d) \ge \Omega(g(d))$ for additional clarity. We also use the asymptotic notation $f(d) = \tilde{O}(g(d))$ if and only if $f(d) \le C g(d) \log^c(d)$ for all $d \ge d_0$, for some constants $C, c, d_0 > 0$; i.e., $\tilde{O}(\cdot)$ hides $\operatorname{polylog}$ factors. Similarly, we say $f(d) = \tilde{\Omega}(g(d))$ if and only if $f(d) \ge C g(d) / \log^c(d)$ for all $d \ge d_0$, for some constants $C, c, d_0 > 0$.
Tensor preliminaries
A real $p$-th order tensor $T \in \bigotimes_{i=1}^{p} \mathbb{R}^{d_i}$ is a member of the tensor product of Euclidean spaces $\mathbb{R}^{d_i}$, $i \in [p]$. For convenience, we restrict to the case where $d_1 = d_2 = \cdots = d_p = d$, and simply write $T \in \bigotimes^{p} \mathbb{R}^{d}$. As is the case for vectors (where $p = 1$) and matrices (where $p = 2$), we may identify a $p$-th order tensor with the $p$-way array of real numbers $[T_{i_1, \dots, i_p} : i_1, \dots, i_p \in [d]]$, where $T_{i_1, \dots, i_p}$ is the $(i_1, \dots, i_p)$-th coordinate of $T$ with respect to a canonical basis. For convenience, we limit to third order tensors for the rest of this section, while the results for higher order tensors are similar.
The different dimensions of the tensor are referred to as modes. For instance, for a matrix, the first mode refers to columns and the second mode refers to rows. In addition, fibers are higher order analogues of matrix rows and columns. A fiber is obtained by fixing all but one of the indices of the tensor (and is arranged as a column vector). For instance, for a matrix, a mode-1 fiber is any matrix column, while a mode-2 fiber is any row. For a third order tensor $T \in \mathbb{R}^{d \times d \times d}$, the mode-1 fiber is given by $T(:, j, l)$, the mode-2 fiber by $T(i, :, l)$, and the mode-3 fiber by $T(i, j, :)$. Similarly, slices are obtained by fixing all but two of the indices of the tensor. For example, for the third order tensor $T$, the slices along the 3rd mode are given by $T(:, :, l)$.
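In NumPy-style indexing (an illustrative aside, not from the paper), fibers and slices are simply partial index fixings:

```python
import numpy as np

T = np.arange(24.0).reshape(2, 3, 4)   # a 2 x 3 x 4 third-order tensor, entries T[i, j, l]

mode1_fiber = T[:, 1, 2]   # fix all indices but the first: the fiber T(:, 1, 2)
mode3_fiber = T[0, 1, :]   # fix all indices but the third: the fiber T(0, 1, :)
slice_mode3 = T[:, :, 3]   # fix only the third index: a slice along the 3rd mode
```

Here `mode1_fiber` has length 2 (the first dimension), `mode3_fiber` has length 4, and `slice_mode3` is a 2 x 3 matrix.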
We view a tensor $T \in \mathbb{R}^{d \times d \times d}$ as a multilinear form. Consider matrices $M_l \in \mathbb{R}^{d \times d_l}$, $l \in \{1, 2, 3\}$. Then the tensor $T(M_1, M_2, M_3) \in \mathbb{R}^{d_1 \times d_2 \times d_3}$ is defined as

T(M_1, M_2, M_3)_{i_1, i_2, i_3} := \sum_{j_1, j_2, j_3 \in [d]} T_{j_1, j_2, j_3} \, M_1(j_1, i_1) \, M_2(j_2, i_2) \, M_3(j_3, i_3).    (1)

In particular, for vectors $v, w \in \mathbb{R}^d$, we have (compare with the matrix case, where for $M \in \mathbb{R}^{d \times d}$, $M(I, v) = M v := \sum_{j \in [d]} v_j M(:, j) \in \mathbb{R}^d$)

T(I, v, w) = \sum_{j, l \in [d]} v_j w_l \, T(:, j, l) \in \mathbb{R}^d,    (2)

which is a multilinear combination of the tensor mode-1 fibers. Similarly, $T(u, v, w) \in \mathbb{R}$ is a multilinear combination of the tensor entries, and $T(I, I, w) \in \mathbb{R}^{d \times d}$ is a linear combination of the tensor slices.
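The multilinear form (1) can be checked numerically; the helper below is an illustration of the definition, not library code:

```python
import numpy as np

def multilinear(T, M1, M2, M3):
    """T(M1, M2, M3)_{i1,i2,i3} = sum_{j1,j2,j3} T_{j1,j2,j3} M1[j1,i1] M2[j2,i2] M3[j3,i3]."""
    return np.einsum('abc,ai,bj,ck->ijk', T, M1, M2, M3)

rng = np.random.default_rng(0)
d = 4
T = rng.normal(size=(d, d, d))
u, v, w = rng.normal(size=(3, d))
I = np.eye(d)

# Vectors as d x 1 matrices give the scalar T(u, v, w);
# the identity in one mode gives the fiber combination T(I, v, w).
scalar = multilinear(T, u[:, None], v[:, None], w[:, None])[0, 0, 0]
fiber_comb = multilinear(T, I, v[:, None], w[:, None])[:, 0, 0]
```

The scalar agrees with the direct contraction of the tensor entries, and `fiber_comb` agrees with the multilinear combination of mode-1 fibers in (2).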
A 3rd order tensor $T \in \mathbb{R}^{d \times d \times d}$ is said to be rank-1 if it can be written in the form

T = w \cdot a \otimes b \otimes c \iff T_{i_1, i_2, i_3} = w \cdot a(i_1) \, b(i_2) \, c(i_3),    (3)

where the notation $\otimes$ represents the outer product, $w \in \mathbb{R}$, and $a, b, c \in \mathbb{R}^d$ are unit vectors (without loss of generality). A tensor $T \in \mathbb{R}^{d \times d \times d}$ is said to have a CP rank $k \ge 1$ if it can be written as the sum of $k$ rank-1 tensors

T = \sum_{i \in [k]} w_i \, a_i \otimes b_i \otimes c_i, \qquad w_i \in \mathbb{R}, \; a_i, b_i, c_i \in \mathbb{R}^d.    (4)
This decomposition is closely related to the multilinear form. In particular, for vectors $x, y, z \in \mathbb{R}^d$, we have

T(x, y, z) = \sum_{i \in [k]} w_i \, \langle a_i, x \rangle \langle b_i, y \rangle \langle c_i, z \rangle.
Consider the decomposition in equation (4), denote the factor matrix $A := [a_1 \; a_2 \; \cdots \; a_k] \in \mathbb{R}^{d \times k}$, and similarly define $B$ and $C$. Without loss of generality, we assume that the matrices have normalized columns (in $\ell_2$ norm), since we can always rescale them and adjust the weights $w_i$ appropriately.
For a vector $v \in \mathbb{R}^d$, we define $v^{\otimes p} := v \otimes v \otimes \cdots \otimes v \in \bigotimes^p \mathbb{R}^d$ as its $p$-th tensor power.
Throughout, $\|v\|$ denotes the Euclidean ($\ell_2$) norm of a vector $v$, and $\|M\|$ denotes the spectral (operator) norm of a matrix $M$. Furthermore, $\|T\|$ and $\|T\|_F$ denote the spectral (operator) norm and the Frobenius norm of a tensor, respectively. In particular, for a 3rd order tensor, we have

\|T\| := \sup_{\|u\| = \|v\| = \|w\| = 1} |T(u, v, w)|, \qquad \|T\|_F := \Bigl( \sum_{i, j, l \in [d]} T_{i, j, l}^2 \Bigr)^{1/2}.
2 Tensor Decomposition for Learning Latent Variable Models
In this section, we show that the problem of learning several latent variable models reduces to the tensor decomposition problem. We show that the observed moments of these latent variable models can be written in a CP tensor decomposition form when appropriate modifications are performed. This is done for the multiview linear mixtures model, spherical Gaussian mixtures, and ICA (Independent Component Analysis). For a more detailed discussion on the connection between observed moments of LVMs and tensor decomposition, see Section 3 in Anandkumar et al. (2012a).
Therefore, an efficient tensor decomposition method leads to an efficient learning procedure for a wide range of latent variable models. In Section 4.1, we provide the tensor decomposition algorithm introduced in Anandkumar et al. (2014), and exploit it for learning latent variable models, providing sample complexity results in the subsequent sections. Note that the sample complexity guarantees are argued through the tensor concentration bounds proposed in Section 3.
2.1 Multiview linear mixtures model
Consider a multiview linear mixtures model as in Figure 1 with $k$ components and $p$ views. Throughout the paper, we assume $p = 3$ for simplicity, while the results can also be extended to higher order. Suppose that the hidden variable $h$ is a discrete categorical random variable with $\Pr[h = j] = w_j$, $j \in [k]$. The variables (views) $x_1, x_2, x_3 \in \mathbb{R}^d$ are conditionally independent given the categorical latent variable $h$, and the conditional means are

\mathbb{E}[x_1 \mid h = j] = a_j, \qquad \mathbb{E}[x_2 \mid h = j] = b_j, \qquad \mathbb{E}[x_3 \mid h = j] = c_j,

where $A := [a_1 \; \cdots \; a_k] \in \mathbb{R}^{d \times k}$ denotes the factor matrix, and $B, C \in \mathbb{R}^{d \times k}$ are similarly defined. The goal of the learning problem is to recover the parameters of the model (factor matrices) $A$, $B$, and $C$ given observations.
For this model, the third order observed moment has the form (see Anandkumar et al. 2012a)

\mathbb{E}[x_1 \otimes x_2 \otimes x_3] = \sum_{j \in [k]} w_j \, a_j \otimes b_j \otimes c_j.    (5)
The decomposition in (5) is referred to as the CP decomposition (Carroll and Chang, 1970), and $k$ denotes the CP tensor rank. Hence, given the third order observed moment, the unsupervised learning problem (recovering the factor matrices $A$, $B$, and $C$) reduces to computing a tensor decomposition as in (5).
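The relation (5) can be sanity-checked by simulation; the toy dimensions, noise level and helper names below are our own choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 3, 50000

def unit_cols(M):
    return M / np.linalg.norm(M, axis=0)

# unit-norm component means for the three views
A = unit_cols(rng.normal(size=(d, k)))
B = unit_cols(rng.normal(size=(d, k)))
C = unit_cols(rng.normal(size=(d, k)))
w = np.full(k, 1.0 / k)

h = rng.choice(k, size=n, p=w)                  # hidden states for each sample
sigma = 1.0 / np.sqrt(d)                        # "low noise" regime
X1 = A[:, h] + sigma * rng.normal(size=(d, n))
X2 = B[:, h] + sigma * rng.normal(size=(d, n))
X3 = C[:, h] + sigma * rng.normal(size=(d, n))

T_hat = np.einsum('in,jn,kn->ijk', X1, X2, X3) / n   # empirical 3rd order moment
T_cp = np.einsum('r,ir,jr,kr->ijk', w, A, B, C)      # sum_j w_j a_j ⊗ b_j ⊗ c_j
rel_err = np.linalg.norm(T_hat - T_cp) / np.linalg.norm(T_cp)
```

As the number of samples grows, `rel_err` shrinks, which is exactly the concentration phenomenon quantified in Section 3.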
In addition, suppose that given the hidden state $h$, the observed variables have conditional distributions

x_1 = a_h + \varepsilon_1, \qquad x_2 = b_h + \varepsilon_2, \qquad x_3 = c_h + \varepsilon_3,

where $\varepsilon_1, \varepsilon_2, \varepsilon_3 \in \mathbb{R}^d$ are independent random vectors with zero mean and covariance $\sigma^2 I_d$, and $\sigma^2$ is a scalar denoting the variance of each entry. We also assume that the noise vectors $\varepsilon_l$ are independent of the hidden state $h$. In addition, let all the component mean vectors $a_j, b_j, c_j$ have unit norm. Furthermore, since the $w_j$'s are the mixture probabilities, for simplicity we consider $w_j = \Theta(1/k)$, $j \in [k]$. When $\sigma = \Theta(1/\sqrt{d})$, the $\ell_2$ norm of the noise is roughly the same as the norm of the components. We call this the low noise regime. When $\sigma = \Theta(1)$, the norm of the noise in every dimension is roughly the same as the norm of the components. We call this the high noise regime.
2.2 Spherical Gaussian mixtures
Consider a mixture of $k$ different Gaussian distributions with spherical covariances. Let $w_j$, $j \in [k]$, denote the proportion for choosing each mixture component. For each Gaussian component $j \in [k]$, $a_j \in \mathbb{R}^d$ is the mean, and $\sigma_j^2 I_d$ is the spherical covariance. For simplicity, we restrict to the case where all the components have the same spherical variance, i.e., $\sigma_j^2 = \sigma^2$ for all $j \in [k]$. The generalization is discussed in Hsu and Kakade (2012). In addition, in order to generalize the learning result to the overcomplete setting, we assume that the variance parameter $\sigma^2$ is known (see Remark 1 for more discussion). The following lemma shows that the problem of estimating the parameters of this mixture model can be formulated as a tensor decomposition problem. This is a special case of Theorem 1 in Hsu and Kakade (2012), where we assume the variance parameter is known.
Lemma 1 (Hsu and Kakade 2012).
If

M_3 := \mathbb{E}[x \otimes x \otimes x] - \sigma^2 \sum_{i \in [d]} \bigl( \mathbb{E}[x] \otimes e_i \otimes e_i + e_i \otimes \mathbb{E}[x] \otimes e_i + e_i \otimes e_i \otimes \mathbb{E}[x] \bigr),    (6)

then

M_3 = \sum_{j \in [k]} w_j \, a_j \otimes a_j \otimes a_j.
In order to provide the learning guarantee, we define the following empirical estimates. Let $\hat{\mathbb{E}}[x]$, $\hat{\mathbb{E}}[x \otimes x]$, and $\hat{\mathbb{E}}[x \otimes x \otimes x]$ respectively denote the empirical estimates of the raw moments $\mathbb{E}[x]$, $\mathbb{E}[x \otimes x]$, and $\mathbb{E}[x \otimes x \otimes x]$. Then, the empirical estimate of the third order modified moment in (6) is

\hat{M}_3 := \hat{\mathbb{E}}[x \otimes x \otimes x] - \sigma^2 \sum_{i \in [d]} \bigl( \hat{\mathbb{E}}[x] \otimes e_i \otimes e_i + e_i \otimes \hat{\mathbb{E}}[x] \otimes e_i + e_i \otimes e_i \otimes \hat{\mathbb{E}}[x] \bigr).    (7)
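A direct implementation of the empirical modified moment (7) can be sketched as follows (σ² is assumed known, as in the text; the function name is ours):

```python
import numpy as np

def modified_third_moment(X, sigma2):
    """Empirical M3: E_hat[x⊗x⊗x] minus the sigma^2 correction terms of (6)-(7).

    X has shape (d, n); sigma2 is the known spherical noise variance."""
    d, n = X.shape
    M3 = np.einsum('in,jn,kn->ijk', X, X, X) / n
    mu = X.mean(axis=1)                      # empirical estimate of E[x]
    I = np.eye(d)
    M3 -= sigma2 * (np.einsum('i,jk->ijk', mu, I)     # sum_i mu ⊗ e_i ⊗ e_i
                    + np.einsum('j,ik->ijk', mu, I)   # sum_i e_i ⊗ mu ⊗ e_i
                    + np.einsum('k,ij->ijk', mu, I))  # sum_i e_i ⊗ e_i ⊗ mu
    return M3
```

For a single spherical Gaussian with mean $a$, the correction exactly cancels the noise contribution in expectation, so the output concentrates around $a \otimes a \otimes a$.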
Remark 1 (Variance parameter estimation).
Notice that we assume the variance $\sigma^2$ is known in order to generalize the learning result to the overcomplete setting. Since $\sigma^2$ is a scalar parameter, it is reasonable to try different values of $\sigma^2$ until we get a good reconstruction. On the other hand, in the undercomplete setting, the variance can also be estimated as proposed in Hsu and Kakade (2012), where the estimate $\hat{\sigma}^2$ is the $k$-th largest eigenvalue of the empirical covariance matrix.

2.3 Independent component analysis (ICA)
In the standard ICA model (Comon, 1994; Cardoso and Comon, 1996; Hyvarinen and Oja, 2000; Comon and Jutten, 2010), random independent latent signals are linearly mixed and perturbed with noise to generate the observations. Let $h \in \mathbb{R}^k$ be the random latent signal, where its coordinates are independent, $A \in \mathbb{R}^{d \times k}$ be the mixing matrix, and $z \in \mathbb{R}^d$ be Gaussian noise. In addition, $h$ and $z$ are independent of each other. Then, the observed random vector is

x = A h + z.
Figure 2 depicts a graphical representation of the ICA model, where the coordinates of $h$ are independent.
The following lemma shows that the problem of estimating parameters of the ICA model can be formulated as a tensor decomposition problem.
Lemma 2 (Comon and Jutten 2010).
Define

M_4 := \mathbb{E}[x \otimes x \otimes x \otimes x] - T,    (8)

where $T \in \bigotimes^4 \mathbb{R}^d$ is the fourth order tensor with

T_{i_1, i_2, i_3, i_4} := \mathbb{E}[x_{i_1} x_{i_2}] \mathbb{E}[x_{i_3} x_{i_4}] + \mathbb{E}[x_{i_1} x_{i_3}] \mathbb{E}[x_{i_2} x_{i_4}] + \mathbb{E}[x_{i_1} x_{i_4}] \mathbb{E}[x_{i_2} x_{i_3}], \quad i_1, i_2, i_3, i_4 \in [d].    (9)

Let $\kappa_j := \mathbb{E}[h_j^4] - 3$, $j \in [k]$, denote the excess kurtosis of the (unit-variance) sources. Then, we have

M_4 = \sum_{j \in [k]} \kappa_j \, a_j \otimes a_j \otimes a_j \otimes a_j.    (10)
See Hsu and Kakade (2012) for a proof of this result in this form. Let $\hat{M}_4$ be the empirical estimate of $M_4$ given $n$ samples.
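The empirical version of (8)–(10) can be sketched as follows for zero-mean observations (the function name is ours; the three subtracted terms are the Wick pairings of the empirical covariance, matching (9)):

```python
import numpy as np

def empirical_fourth_cumulant(X):
    """M4_hat: empirical 4th moment of zero-mean X (shape (d, n)) minus the
    three pairings of the empirical covariance, as in (8)-(9)."""
    d, n = X.shape
    M4 = np.einsum('an,bn,cn,dn->abcd', X, X, X, X) / n
    S = (X @ X.T) / n                                   # empirical covariance
    wick = (np.einsum('ab,cd->abcd', S, S)
            + np.einsum('ac,bd->abcd', S, S)
            + np.einsum('ad,bc->abcd', S, S))
    return M4 - wick
```

For $x = Ah$ with orthonormal $A$ and unit-variance Rademacher sources ($\kappa_j = \mathbb{E}[h_j^4] - 3 = -2$), evaluating the output as a multilinear form at a column $a_j$ recovers $\kappa_j$.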
Sparse ICA
We also consider the sparse ICA model, which is the ICA model with the additional constraint that the hidden vector $h$ is sparse.
This is related to the dictionary learning or sparse coding model, where the observations are sparse combinations of dictionary atoms (columns of $A$) through the sparse vector $h$. If, in addition, the coordinates of $h$ are random and independent, the dictionary learning model is the same as the sparse ICA model. Others have studied the general sparse coding problem, which is briefly discussed in the related work section.
3 Tensor Concentration Bounds
In this section, we provide tensor concentration results for the latent variable models proposed above. For each LVM, consider the higher order observed moment (tensor) described in Section 2. The tensor concentration result bounds the spectral norm of the error between the true moment tensor and its empirical estimate from $n$ samples.
3.1 Multiview linear mixtures model
For the multiview linear mixtures model, we provide the tensor concentration result for the 3rd order observed moment in (5).
Consider the multiview linear mixtures model described in Section 2.1. Let $x_l^{(i)}$, $i \in [n]$, denote $n$ samples of view $x_l$, $l \in \{1, 2, 3\}$. Since the main focus is on recovering the components, we bound the spectral norm of the difference between the empirical tensor estimate

\hat{T} := \frac{1}{n} \sum_{i \in [n]} x_1^{(i)} \otimes x_2^{(i)} \otimes x_3^{(i)}

and

T := \mathbb{E}\bigl[ \hat{T} \,\bigm|\, h^{(1)}, \dots, h^{(n)} \bigr],

where the expectation is conditioned on the choice of hidden states for the $n$ samples, and taken over the randomness of the noise. Here, $h^{(i)}$ denotes the hidden state for sample $i$. Notice that the tensor $T$ has the same form as the true tensor in (5), with the weights replaced by the empirical frequencies $\hat{w}_j := |\{ i : h^{(i)} = j \}| / n$ of the different hidden states. It is easy to see that when $n$ is large enough, all the empirical frequencies $\hat{w}_j$ are within a constant factor of the true $w_j$. Therefore, the tensor decomposition of $T$ has the same eigenvectors and similar eigenvalues as the true expectation (over both the noise and the hidden variables), and hence it suffices to bound $\|\hat{T} - T\|$, which is provided as follows.

Theorem 1 (Tensor concentration bound for multiview linear mixtures model).
Consider $n$ samples from the multiview linear mixtures model with corresponding hidden states $h^{(1)}, \dots, h^{(n)}$. Assume the factor matrices $A$, $B$ and $C$ have bounded $\ell_2 \to \ell_3$ norm, and the noise matrices $E_1$, $E_2$ and $E_3$ defined in (12) satisfy the RIP condition (RIP) (see Remark 3 for details on the RIP condition). For $\hat{T}$ and $T$ as above, if $n$ is large enough, then with high probability (over the choice of the hidden states and the noise), the spectral norm of the error $\|\hat{T} - T\|$ is bounded by a sum of terms, each of which dominates in a different noise regime.
See the proof in Appendix C.1. The main ideas are described later in this section.
The above bound holds for any level of noise, but in each specific regime of noise one of the terms is dominant and the bound simplifies. We now specialize the bound to the high noise and low noise regimes introduced in Section 2.1. In the high noise regime $\sigma = \Theta(1)$, the noise-dependent term in Theorem 1 is dominant, and in the low noise regime $\sigma = \Theta(1/\sqrt{d})$, the other term is dominant. This concentration bound is later used in Section 5 to provide sample complexity guarantees for learning the multiview linear mixtures model.
Remark 2 (Application of Theorem 1 to whiteningbased approaches).
In the undercomplete setting, a guaranteed approach for tensor decomposition is to first orthogonalize the tensor through a whitening step, and then perform the orthogonal tensor eigen-decomposition through the power method (Anandkumar et al., 2012a). The whitening step leads to a dependence on the condition number of the factor matrices in the sample complexity result. Applying the proposed tensor concentration bound in Theorem 1 to this approach, we get a similar dependence on the condition number, but a better dependence on the dimension $d$. This improvement comes at the cost of the additional bounded $\ell_2 \to \ell_3$ norm condition on the factor matrices.
Concretely, following the analysis in Anandkumar et al. (2012a); Song et al. (2013), we have the error in recovery (up to permutation) bounded as in

(11)

where $\epsilon_T$ denotes the error in estimating the third order moment, $\epsilon_M$ denotes the error in estimating the second order moment, and the bound depends inversely on the minimum singular value of the factor matrices. While the bound on $\epsilon_M$ can be obtained by matrix Bernstein bounds as before (e.g., see Anandkumar et al. (2012b)), we have an improved bound on $\epsilon_T$ from Theorem 1, compared to previous results. Note that the term corresponding to $\epsilon_T$ is the dominant one, and we improve its scaling.
Remark 3 (RIP property).
Given samples for the model proposed in Section 2.1, define the noise matrix
(12) 
where is the th sample of the noise vector . and are similarly defined. These matrices need to satisfy the following RIP property, adapted from Candes and Tao (2006).

(RIP) The matrix satisfies a weak RIP condition: for any subset of number of columns, the spectral norm of restricted to those columns is bounded by .
It is known that when , the above condition is satisfied with high probability for many random models, e.g., when the entries are i.i.d. zero-mean Gaussian or Bernoulli random variables.
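As a sanity check of this claim, the weak RIP condition can be probed numerically for a Gaussian noise matrix. The sketch below (not a proof; the dimensions, subset size, and number of sampled subsets are illustrative choices) estimates the largest spectral norm over random column subsets of a matrix with i.i.d. N(0, 1/n) entries.

```python
import numpy as np

# Empirical sanity check (a sketch, not a proof) of the weak RIP condition: for
# a matrix E with i.i.d. N(0, 1/n) entries, the spectral norm of E restricted
# to any small subset of columns stays bounded. Dimensions, the subset size,
# and the number of sampled subsets are illustrative choices.
rng = np.random.default_rng(0)
n, k = 500, 2000                  # rows (samples) and columns
E = rng.standard_normal((n, k)) / np.sqrt(n)

subset_size = 50
worst = 0.0
for _ in range(200):              # sample random column subsets
    cols = rng.choice(k, size=subset_size, replace=False)
    worst = max(worst, np.linalg.norm(E[:, cols], 2))

print(f"largest restricted spectral norm over sampled subsets: {worst:.3f}")
```

For these sizes, the restricted spectral norm stays close to 1 even though the full matrix has many more columns than rows.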
Proof ideas:
The basic idea for proving the concentration result in Theorem 1 is an ε-net argument. We construct an ε-net and then show that, with high probability, the norm of the error tensor is bounded for every vector in the net.
In some cases even a usual net of size is good enough. But in many other cases the usual net construction does not provide a useful result, since the failure probability is not small enough and the union bound over all vectors in the net fails (or incurs additional polynomial factors in the sample complexity result). In particular, for a vector highly correlated with the data, we get a worse concentration bound. The key observation is that there cannot be too many vectors that are highly correlated with the data. Therefore, for each fixed vector in the net, we partition the terms in the error into two sets: one set corresponds to the small terms (where the vector is not highly correlated with the data), and the other corresponds to the large terms. For the small terms, the usual net argument still works. For the large terms, we show that the number of such terms is limited. This is done either via the RIP property of the noise matrices or via the bounded norm of the factor matrices , and . See the proofs of Claims 1–3 for more details. This partitioning argument is inspired by the entropy-concentration tradeoff proposed in Rudelson and Vershynin (2009); however, here we have a finer partitioning into several sets, while in Rudelson and Vershynin (2009) the partitioning is into only two sets.
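The counting idea behind the partitioning can be illustrated numerically: for a fixed unit vector, only a very small fraction of i.i.d. Gaussian samples are highly correlated with it, so the set of "large terms" is necessarily small. The dimension, sample size, and threshold below are illustrative.

```python
import numpy as np

# Numerical illustration of the counting idea: for a fixed unit vector u and
# i.i.d. Gaussian samples x_i, only a small fraction of the inner products
# <x_i, u> exceed a constant threshold, so the set of "large terms" is small.
# The dimension, sample size, and threshold are illustrative.
rng = np.random.default_rng(1)
d, n = 100, 100_000
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
X = rng.standard_normal((n, d))   # each row is one sample x_i

corr = X @ u                      # each <x_i, u> is distributed as N(0, 1)
frac_large = np.mean(np.abs(corr) > 3.0)
print(f"fraction of large terms: {frac_large:.4f}")
```

The Gaussian tail bound predicts a fraction of roughly 0.003 at this threshold, and the empirical fraction matches.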
Spherical Gaussian mixtures:
A similar tensor concentration bound holds for the spherical Gaussian mixtures model by exploiting the symmetrization trick as follows. In the spherical Gaussian mixtures model, the modified higher order moment (tensor) in (6) is symmetric, and hence the noise matrices , and are all the same. This can cause a problem because some square terms in the error tensor are not zero mean, and we need to show their concentration around the mean. The well-known symmetrization technique can be exploited here: we draw two independent sets of samples and show that the difference between the two is small with high probability. This technique is widely applied to show concentration around the median, and in all our cases the median is very close to the mean.
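The effect of symmetrization can be seen in a toy computation: an empirical average of squares has nonzero mean, but the difference between two independent copies of that average is zero mean and concentrates tightly around 0. The sizes below are illustrative.

```python
import numpy as np

# Toy computation illustrating symmetrization: an average of squares has
# nonzero mean, but the difference between two independent copies of that
# average is zero mean and concentrates around 0. Sizes are illustrative.
rng = np.random.default_rng(0)
n, trials = 10_000, 200
devs = []
for _ in range(trials):
    z1 = rng.standard_normal(n)   # first independent set of samples
    z2 = rng.standard_normal(n)   # second independent set of samples
    devs.append(np.mean(z1**2) - np.mean(z2**2))   # zero-mean difference
print(f"max |difference| over trials: {np.max(np.abs(devs)):.4f}")
```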
3.2 ICA and sparse ICA
For the ICA model, we provide tensor concentration results for the modified th order observed moment (tensor) in (8), in both the dense and sparse cases.
Theorem 2 (Tensor concentration bound for ICA).
Consider samples from the ICA model with mixing matrix . Suppose and the entries of are independent subgaussian variables with and a constant nonzero th order cumulant. For the th order cumulant in (8) and its empirical estimate , if , we have with high probability
See the proof in Appendix C.2. We have an improved bound for the sparse ICA setting as follows.
Theorem 3 (Tensor concentration bound for sparse overcomplete ICA).
In the ICA model , suppose , where the ’s are i.i.d. Bernoulli random variables with , and the ’s are independent subgaussian random variables. Consider independent samples , where each is distributed as . Suppose satisfies the (RIP) property (see Remark 3 for details on the RIP condition). For the th order cumulant in (8) and its empirical estimate , if , we have with high probability
See the proof in Appendix C.3.
Dependence on : It may seem counterintuitive that the bound in Theorem 3 does not depend on . The dependence on actually enters through the expectation: the expected tensor in is close to , and we typically require the deviation to be less than the expected value.
Proof ideas:
The proof ideas are similar to those for the multiview mixtures model, where we provide net arguments and partition the terms into small and large ones. In addition, for the ICA model, we exploit the subgaussian property of the ’s to provide a concentration bound for the sum of subgaussian random variables raised to the th power (see Claim 4). This implies the concentration bound for the th order term in (see Claim 5). For the 2nd order term in , the bound is argued using the matrix Bernstein inequality (see Claim 6). For the sparse ICA model, the RIP property of is exploited to bound the size of the intersection between the supports of the (partitioned) vectors in the net and the supports of the sparse vectors (see Claim 7).
4 Learning Algorithm
In this section, we first introduce the tensor decomposition algorithm. Then, we provide some basic definitions and assumptions used throughout the learning results. We conclude the section by describing the organization of the learning guarantees provided in the subsequent sections.
4.1 Tensor decomposition algorithm
We exploit the tensor decomposition algorithm of Anandkumar et al. (2014) to learn the parameters of the latent variable models; it is given in Algorithm 1. The main step in (14) performs alternating asymmetric power updates (this is exactly the generalization of the asymmetric matrix power update to 3rd order tensors) on the different tensor modes. Notice that the updates alternate among the different modes of the tensor, which can be viewed as a rank-1 form of the standard alternating least squares (ALS) method. For vectors , recall the definition of the multilinear form in (2), where is a multilinear combination of the tensor mode fibers.
Intuition about the performance of the tensor power update under non-orthogonal components is provided in Anandkumar et al. (2014), which we recall here. For a rank tensor as in (4), suppose we start at the correct vectors and , for some . Then the numerator of the tensor power update in (14) expands as
(13) 
We observe that under orthogonal components the second term is zero, and thus the true vectors and are stationary points of the power update procedure. However, under incoherent (soft-orthogonal) components, the stationary points of the power update procedure are approximate estimates of the true components, with small error.
The purpose of the clustering step is to identify which initializations are successful in recovering the true components in the unsupervised setting. For a more detailed discussion of the algorithm, see Anandkumar et al. (2014).
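The alternating power update can be sketched as follows on a dense 3rd order tensor. The function name, the fixed iteration count, and the single random initialization are illustrative simplifications; the full algorithm runs many random initializations and then applies the clustering step described above.

```python
import numpy as np

# A minimal sketch of the alternating asymmetric power update in (14) on a
# dense 3rd order tensor T. The function name, fixed iteration count, and
# single random initialization are illustrative; the full algorithm runs many
# random initializations followed by a clustering step.
def power_update(T, n_iter=50, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    a, b, c = (rng.standard_normal(d) for d in T.shape)
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c)  # T(I, b, c): contract modes 2, 3
        a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c)  # T(a, I, c)
        b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)  # T(a, b, I)
        c /= np.linalg.norm(c)
    weight = np.einsum('ijk,i,j,k->', T, a, b, c)  # estimated component weight
    return weight, a, b, c

# Example: on a symmetric rank-1 tensor the update recovers the weight exactly.
rng = np.random.default_rng(0)
u = rng.standard_normal(8)
u /= np.linalg.norm(u)
T = 1.7 * np.einsum('i,j,k->ijk', u, u, u)
w, a, b, c = power_update(T, n_iter=20, rng=rng)
print(round(w, 4))  # 1.7
```

On a rank-1 input the iteration converges in a single sweep; the interesting regime discussed in the text is the rank-k incoherent case, where the stationary points are only approximately the true components.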
Notice that in this paper, the input tensor is the higher order moment of the LVMs described in Section 2. More details are given in the learning results in the next sections.
(14) 
(15) 
Efficient implementation given samples:
In Algorithm 1, a given tensor
is input, and we then perform the updates. However, in many settings (especially machine learning applications), the tensor is not available beforehand and needs to be computed from samples. Computing and storing the tensor can be enormously expensive for high-dimensional problems. Here, we provide a simple observation on how we can manipulate the samples directly to carry out the update procedure in Algorithm 1 as multilinear operations, leading to efficient computational complexity. Consider the multiview mixtures model described in Section 2.1, where the goal is to decompose the empirical moment tensor of the form
(16) 
where is the sample from view . Applying the power update (14) in Algorithm 1 to , we have
(17) 
where denotes the Hadamard product. Here, . Thus, the update can be computed efficiently using simple matrix and vector operations. The update in (17) is also easily parallelizable; in particular, the different initializations can be run in parallel, making the algorithm scalable to large problems.
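The Hadamard-product update can be sketched as follows (all names are illustrative): the contraction of the empirical moment tensor against two of its modes is computed directly from the n × d sample matrices, and the result matches the explicit contraction without ever forming the d × d × d tensor.

```python
import numpy as np

# A sketch of the sample-based power update in (17), with illustrative names:
# the contraction T(I, b, c) of the empirical moment tensor is computed
# directly from the n x d sample matrices via a Hadamard (elementwise)
# product, without ever forming the d x d x d tensor.
def implicit_power_step(X1, X2, X3, b, c):
    n = X1.shape[0]
    # T(I, b, c) for T = (1/n) sum_i x1_i (x) x2_i (x) x3_i
    return X1.T @ ((X2 @ b) * (X3 @ c)) / n

rng = np.random.default_rng(0)
n, d = 1000, 20
X1, X2, X3 = (rng.standard_normal((n, d)) for _ in range(3))
b, c = rng.standard_normal(d), rng.standard_normal(d)

fast = implicit_power_step(X1, X2, X3, b, c)

# Check against the explicit contraction (which needs O(d^3) memory).
T = np.einsum('ni,nj,nk->ijk', X1, X2, X3) / n
slow = np.einsum('ijk,j,k->i', T, b, c)
assert np.allclose(fast, slow)
```

The implicit step costs O(nd) per update instead of the O(d^3) storage and O(d^3) contraction of the explicit tensor, which is what makes the algorithm practical in high dimension.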
4.2 Basic definitions and assumptions
The error bounds in the subsequent results are provided in terms of the distance between the estimated and true vectors.
Definition 1.
For any two vectors , the distance between them is defined as
(18) 
Note that the distance function is invariant with respect to the norms of the input vectors and . The distance also provides an upper bound on the error between unit vectors and as (see Lemma A.1 of Agarwal et al. (2013))
Incorporating the distance notion resolves the sign ambiguity issue in recovering the components: note that a third order tensor is unchanged if the sign along one mode is fixed and the signs along the other two modes are flipped.
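A sketch of this distance, assuming the standard definition dist(u, v) = sup over z orthogonal to u of ⟨z, v⟩/(‖z‖‖v‖) used in Anandkumar et al. (2014), which equals the sine of the angle between u and v and is therefore invariant to both scaling and sign flips:

```python
import numpy as np

# A sketch of the distance, assuming the standard definition
#   dist(u, v) = sup_{z : <z, u> = 0} <z, v> / (||z|| ||v||)
# from Anandkumar et al. (2014), which equals the sine of the angle between
# u and v and is therefore invariant to both scaling and sign flips.
def dist(u, v):
    cos = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.sqrt(max(0.0, 1.0 - cos**2))

u = np.array([1.0, 0.0])
assert dist(u, 3.0 * u) == 0.0                            # scale invariant
assert dist(u, -u) == 0.0                                 # sign invariant
assert abs(dist(u, np.array([0.0, 2.0])) - 1.0) < 1e-12   # orthogonal vectors
```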
Here, we review some of the assumptions and settings used throughout the learning results in the next sections. Consider the tensor decomposition form in (4). Let denote the factor matrix. Similar factor matrices and are defined in the asymmetric cases, e.g., the multiview linear mixtures model. For simplicity and without loss of generality, we assume the columns of the factor matrices have unit norm, since we can always rescale them and adjust the weights appropriately. Also, for simplicity, we assume the are i.i.d. uniformly drawn from the unit dimensional sphere (see Remark 6 for more details).
In this paper, we focus on learning in the challenging overcomplete regime, where the number of components/mixtures is larger than the observed dimension. Precisely, we assume . Note that the results can be easily adapted to the highly undercomplete regime when .
4.3 Learning results organization
In Section 2, we described how learning different latent variable models can be formulated as a tensor decomposition problem by performing appropriate modifications on the observed moments. For those LVMs, tensor concentration bounds are provided in Section 3. We then proposed the tensor decomposition algorithm in Section 4.1, which is robust to noise. Employing these techniques and results, we finally provide learning results for different latent variable models, including multiview linear mixtures, ICA, and sparse ICA, in the subsequent sections. We consider two settings, viz., the semisupervised setting, where a small amount of label information is available, and the unsupervised setting, where no such information is available. In the former setting, we can handle overcomplete mixtures with number of components , where is the observed dimension and is the order of the observed moment. In the latter case, our analysis only works when for any constant . See the following two sections for the learning guarantees.
5 Learning Multiview Linear Mixtures Model
In this section, we provide the semisupervised and unsupervised learning results for the multiview linear mixtures model described in Section 2.1.
5.1 Semisupervised learning
In the semisupervised setting, label information is exploited to build good initialization vectors for the tensor decomposition Algorithm 1 as follows. For the multiview linear mixtures model in Figure 1, let
denote samples of vectors corresponding to different labels, where the samples with subscript have label . Then, for any , the empirical estimate of the mixture components is
(19) 
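The label-averaging step in (19) can be sketched as follows; all variable names and sizes are illustrative.

```python
import numpy as np

# A sketch of the label-averaging step in (19): the labeled samples of one
# view are averaged per component to build initialization vectors for the
# tensor decomposition. Variable names and sizes are illustrative.
def init_from_labels(X, labels, k):
    """X: (m, d) labeled samples of one view; labels take values in {0, ..., k-1}."""
    d = X.shape[1]
    A_hat = np.zeros((d, k))
    for j in range(k):
        A_hat[:, j] = X[labels == j].mean(axis=0)   # empirical mean, component j
    return A_hat

rng = np.random.default_rng(0)
k, d, m = 3, 10, 600
A = rng.standard_normal((d, k))                          # true component means
labels = rng.integers(0, k, size=m)
X = A[:, labels].T + 0.1 * rng.standard_normal((m, d))   # noisy labeled samples

A_hat = init_from_labels(X, labels, k)
print(np.max(np.abs(A_hat - A)))   # small when enough labeled samples per label
```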
Given unlabeled samples, let
(20) 
denote the recovery error. We first provide the settings of Algorithm 1, which include the input tensor , the number of iterations, and the initialization.
Settings of Algorithm 1 in Theorem 4:
Conditions for Theorem 4:


Rank condition: