Tensor decompositions and their applications in mining multi-aspect datasets are ubiquitous and ever increasing in popularity. Data mining applications of these techniques were largely pioneered by the work of Kolda et al. , where the authors introduce a topical aspect to a graph between webpages and extend the popular HITS algorithm to that scenario. Since then, the field of multi-aspect/tensor data mining has witnessed rich growth, with prime examples of applications being citation networks , computer networks [2, 3, 4], Knowledge Base data [5, 4, 6, 7], and social networks [8, 2, 9, 4, 10], to name a few.
Tensor decompositions are undoubtedly a very powerful analytical tool with a rich variety of applications. However, there exist research challenges in the field of data mining that need to be addressed in order for tensor decompositions to claim their position as a de-facto tool for practitioners.
One challenge, which has received considerable attention, is that of making tensor decompositions scalable to today's web scale. For instance, Facebook has around 2 billion users at the time of writing and is ever growing, and enabling tensor decompositions to work on even small portions of the entire Facebook network is imperative for the adoption of these techniques by such big players. Very frequently, data that fall under the aforementioned category turn out to be highly sparse; the reason is that, e.g., each person on Facebook interacts with only a few hundred of its users. Computing tensor decompositions for highly sparse scenarios is a game changer, and exploiting sparsity is key to scalability. The work of Kolda et al. [1, 11] introduced the first such approach of exploiting sparsity for scalability. Later on, distributed approaches based on the latter formulation , or other scalable approaches [4, 12, 13, 14], have emerged. By no means do we claim that scalability is a solved problem; however, we point out that it has received significant attention.
The main focus of this work, however, is on another, relatively less explored territory: that of assessing the quality of a tensor decomposition. In a great portion of tensor data mining, the task is exploratory and unsupervised: we are given a dataset, usually without any sort of ground truth, and we seek to extract interesting patterns or concepts from the data. It is crucial, therefore, to know whether a pattern that we extract actually models the data at hand, or whether it is merely modelling noise in the data. Especially in the age of Big Data, where feature spaces can be vast, it is imperative to have a measure of quality and avoid interpreting noisy, random variation that always exists in the data. Determining the "right" number of components in a tensor is a very hard problem . This is why many seminal exploratory tensor mining papers, understandably, set the number of components manually [16, 17, 8, 2]. When there is a specific task at hand, e.g. link prediction , recommendation , or supervised learning [20, 21], that entails some measure of success, then there is a procedure (e.g. cross-validation) for selecting a good number of latent components; unfortunately, this cannot generalize to the case where labels or ground truth are absent.
However, not all hope is lost. There have been very recent approaches following the Minimum Description Length (MDL) principle [22, 23], where the MDL cost function usually depends heavily on the application at hand (e.g. community detection or boolean tensor clustering, respectively). Additionally, there have been Bayesian approaches  that, as in the MDL case, do not require the number of components as input. These approaches are extremely interesting, and we reserve their deeper investigation for future work. In this work, however, we choose to operate on top of a different, very intuitive approach, which takes into account properties of the PARAFAC decomposition  and is application independent, requiring no prior knowledge about the data: there exists highly influential work in the Chemometrics literature  that introduces heuristics for determining a good rank for tensor decompositions. Inspired by and drawing from , we provide a comprehensive method for mining multi-aspect datasets using tensor decompositions.
Our contributions are:
Algorithms We propose AutoTen, a comprehensive methodology for mining multi-aspect datasets using tensors, which minimizes manual trial-and-error intervention and provides quality characterization of the solution (Section 3.2). Furthermore, we extend the quality-assessment heuristic of  to the KL-divergence loss, which has been shown to be more effective for highly sparse count data  (Section 3.1).
Evaluation & Discovery We conduct a large-scale study on 10 real datasets, exploring the structure of hidden patterns within these datasets; to the best of our knowledge, this is the first such broad study (Section 5.1). As a data mining case study, we apply AutoTen to two real datasets, discovering meaningful patterns (Section 5.2). Finally, we extensively evaluate our proposed method on synthetic data (Section 4).
In order to encourage reproducibility, most of the datasets used are public, and we make our code publicly available at
Table 1 provides an overview of the notation used in this and subsequent sections.
|Tensor, matrix, column vector, scalar|
|Element-wise multiplication and division|
|Moore-Penrose pseudoinverse of|
|KronMatVec||Efficient computation of a Kronecker product times a vector|
|-th entry of (same for matrices and tensors)|
|Spans the entire -th column of (same for tensors)|
|Value at the -th iteration|
|CP_NMU||Non-negative, Frobenius norm PARAFAC|
|CP_APR||KL-divergence PARAFAC|
2.1 Brief Introduction to Tensor Decompositions
Given a three-mode tensor $\underline{\mathbf{X}}$, we can decompose it according to the CP/PARAFAC decomposition  (henceforth referred to as PARAFAC) as a sum of rank-one tensors:
$$\underline{\mathbf{X}} \approx \sum_{r=1}^{F} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r,$$
where the $(i,j,k)$ entry of $\mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r$ is $\mathbf{a}_r(i)\,\mathbf{b}_r(j)\,\mathbf{c}_r(k)$. Usually, PARAFAC is represented in its matrix form $[\mathbf{A}, \mathbf{B}, \mathbf{C}]$, where the columns of matrix $\mathbf{A}$ are the vectors $\mathbf{a}_r$ (and accordingly for $\mathbf{B}, \mathbf{C}$). The PARAFAC decomposition is especially useful when we are interested in extracting the true latent factors that generate the tensor. In this work, we choose the PARAFAC decomposition as our tool, since it admits a very intuitive interpretation of its latent factors; each component can be seen as a soft co-clustering of the tensor, using the high values of the vectors $\mathbf{a}_r, \mathbf{b}_r, \mathbf{c}_r$ as the membership values to co-clusters.
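To make this interpretation concrete, here is a minimal numpy sketch (ours, not the paper's Matlab code) that reconstructs a three-mode tensor from PARAFAC factor matrices and checks a single entry against the rank-one definition above:

```python
import numpy as np

# Reconstruct a 3-mode tensor from PARAFAC factors A (I x R), B (J x R),
# C (K x R): X(i,j,k) = sum_r A(i,r) * B(j,r) * C(k,r).
def cp_reconstruct(A, B, C):
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
A, B, C = rng.random((4, 2)), rng.random((5, 2)), rng.random((6, 2))
X = cp_reconstruct(A, B, C)

# Check one entry against the element-wise definition
i, j, k = 1, 2, 3
assert np.isclose(X[i, j, k], np.sum(A[i] * B[j] * C[k]))
```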
Another very popular tensor decomposition is Tucker3 , where a tensor is decomposed into rank-one factors times a core tensor:
$$\underline{\mathbf{X}} \approx \sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{s=1}^{S} \underline{\mathbf{G}}(p,q,s)\, \mathbf{u}_p \circ \mathbf{v}_q \circ \mathbf{w}_s,$$
where the factor matrices $\mathbf{U}, \mathbf{V}, \mathbf{W}$ are orthogonal. The Tucker3 model is especially used for compression. Furthermore, PARAFAC can be seen as a restricted Tucker3 model, where the core tensor $\underline{\mathbf{G}}$ is super-diagonal, i.e. non-zero values appear only in the entries where $p = q = s$. This observation will be useful in motivating the CORCONDIA diagnostic.
Finally, there also exist more expressive, but harder to interpret, models, such as the Block Term Decomposition (BTD) ; however, we reserve their investigation for future work.
2.2 Brief Introduction to CORCONDIA
As outlined in the Introduction, in the chemometrics literature, there exists a very intuitive heuristic by the name of CORCONDIA , which can serve as a guide in judging how well a given PARAFAC decomposition is modelling a tensor.
In a nutshell, the idea behind CORCONDIA is the following: given a tensor $\underline{\mathbf{X}}$ and its PARAFAC decomposition $[\mathbf{A}, \mathbf{B}, \mathbf{C}]$, one could imagine fitting a Tucker3 model where $\mathbf{A}, \mathbf{B}, \mathbf{C}$ are the factors of the Tucker3 decomposition and $\underline{\mathbf{G}}$ is the core tensor (which we need to solve for). Since, as we already mentioned, PARAFAC can be seen as a restricted Tucker3 decomposition with super-diagonal core tensor, if our PARAFAC modelling of $\underline{\mathbf{X}}$ is modelling the data well, the core tensor $\underline{\mathbf{G}}$ should be as close to super-diagonal as possible. If there are deviations from the super-diagonal, then this is a good indication that our PARAFAC model is somehow flawed (either the decomposition rank is not appropriate, or the data do not have the appropriate structure). We can pose the problem as the following least squares problem:
$$\min_{\underline{\mathbf{G}}} \big\|\operatorname{vec}(\underline{\mathbf{X}}) - (\mathbf{A} \otimes \mathbf{B} \otimes \mathbf{C})\operatorname{vec}(\underline{\mathbf{G}})\big\|_F^2,$$
with the least squares solution:
$$\operatorname{vec}(\underline{\mathbf{G}}) = (\mathbf{A} \otimes \mathbf{B} \otimes \mathbf{C})^\dagger \operatorname{vec}(\underline{\mathbf{X}}).$$
After computing $\underline{\mathbf{G}}$, the CORCONDIA diagnostic can be computed as
$$c = 100\left(1 - \frac{\sum_{i,j,k}\big(\underline{\mathbf{G}}(i,j,k) - \underline{\mathbf{I}}(i,j,k)\big)^2}{F}\right),$$
where $\underline{\mathbf{I}}$ is a super-diagonal tensor with ones on the $(i,i,i)$ entries. For a perfectly super-diagonal $\underline{\mathbf{G}}$ (i.e. perfect modelling), $c$ will be 100. One can see that for rank-one models the metric will always be 100, because a rank-one component can trivially produce a single-element "super-diagonal" core; thus, CORCONDIA is applicable for rank two or higher. According to , values below 50 show some imperfection in the modelling or the rank selection; the value can also be negative, showing severe problems with the modelling. In , some of the chemical data analyzed have perfect, low-rank PARAFAC structure, and thus expecting $c$ to be near 100 is reasonable. In many data mining applications, however, due to high data sparsity, data cannot have such perfect structure, but an approximation thereof using a low-rank model is still very valuable. Thus, in our case, we expand the spectrum of acceptable solutions with reasonable quality to include smaller, positive values of $c$ (e.g. 20 or higher).
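For small, dense tensors the diagnostic can be computed directly. The following is a naive numpy sketch (ours; the paper's scalable algorithm is the subject of Section 2.3), which solves for the Tucker3 core with the PARAFAC factors held fixed and scores its deviation from a super-diagonal core:

```python
import numpy as np

# Naive, dense sketch of CORCONDIA: fit the Tucker3 core G with the
# PARAFAC factors A, B, C fixed, then measure how far G is from a
# super-diagonal tensor of ones.
def corcondia_naive(X, A, B, C):
    R = A.shape[1]
    F = np.kron(A, np.kron(B, C))                 # (I*J*K) x R^3
    g, *_ = np.linalg.lstsq(F, X.ravel(), rcond=None)
    G = g.reshape(R, R, R)
    Id = np.zeros((R, R, R))
    for r in range(R):
        Id[r, r, r] = 1.0
    return 100.0 * (1.0 - np.sum((G - Id) ** 2) / R)

rng = np.random.default_rng(0)
A, B, C = rng.random((4, 2)), rng.random((5, 2)), rng.random((6, 2))
X = np.einsum('ir,jr,kr->ijk', A, B, C)           # exactly rank-2 tensor
print(round(corcondia_naive(X, A, B, C)))         # -> 100 (perfect structure)
```

Materializing the Kronecker product here is exactly what makes this version unusable at scale, motivating the reformulation that follows.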
2.3 Scaling Up CORCONDIA
As we mentioned in the Introduction, CORCONDIA, as it is introduced in , is suitable for small and dense data. However, this contradicts the area of interest of the vast majority of data mining applications. To that end, very recently  we extended CORCONDIA to the case where our data are large but sparse, deriving a fast and efficient algorithm. The key idea behind  is avoiding the materialization and pseudoinversion of the (potentially enormous) matrix $\mathbf{A} \otimes \mathbf{B} \otimes \mathbf{C}$.
In order to achieve the above, we need to reformulate the computation of CORCONDIA. The pseudoinverse $(\mathbf{A} \otimes \mathbf{B} \otimes \mathbf{C})^\dagger$ can be rewritten as
$$(\mathbf{V}_a \otimes \mathbf{V}_b \otimes \mathbf{V}_c)\,(\boldsymbol{\Sigma}_a^{-1} \otimes \boldsymbol{\Sigma}_b^{-1} \otimes \boldsymbol{\Sigma}_c^{-1})\,(\mathbf{U}_a^\top \otimes \mathbf{U}_b^\top \otimes \mathbf{U}_c^\top),$$
where $\mathbf{A} = \mathbf{U}_a \boldsymbol{\Sigma}_a \mathbf{V}_a^\top$, $\mathbf{B} = \mathbf{U}_b \boldsymbol{\Sigma}_b \mathbf{V}_b^\top$, and $\mathbf{C} = \mathbf{U}_c \boldsymbol{\Sigma}_c \mathbf{V}_c^\top$ (i.e. the respective Singular Value Decompositions).
After we rewrite the least squares problem in the above form, we can carry out a series of Kronecker products times a vector very efficiently, without materializing the (potentially big) Kronecker product. In  we use the algorithm proposed in  to do this, which we will henceforth refer to as the KronMatVec operation:
$$\mathbf{y} = (\mathbf{A}_1 \otimes \mathbf{A}_2 \otimes \cdots \otimes \mathbf{A}_n)\,\mathbf{x}.$$
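The essence of a KronMatVec-style operation is applying each Kronecker factor through reshapes instead of forming the full product. Below is a recursive numpy sketch (the function name and structure are ours, not the cited algorithm's exact formulation), checked against the materialized product on a small example:

```python
import numpy as np

# Compute (A1 kron A2 kron ... kron AN) @ x without materializing the
# Kronecker product: apply the first factor to a reshaped x, then
# recurse on the remaining factors row by row.
def kron_mat_vec(mats, x):
    if len(mats) == 1:
        return mats[0] @ x
    head, rest = mats[0], mats[1:]
    q = int(np.prod([M.shape[1] for M in rest]))
    Y = head @ x.reshape(head.shape[1], q)        # apply the first factor
    return np.concatenate([kron_mat_vec(rest, row) for row in Y])

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 4)), rng.random((2, 5)), rng.random((3, 2))
x = rng.random(4 * 5 * 2)
dense = np.kron(np.kron(A, B), C) @ x             # materialized, for checking
assert np.allclose(kron_mat_vec([A, B, C], x), dense)
```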
What we have achieved thus far is extending CORCONDIA to large and sparse data, assuming Frobenius norm loss. This assumption postulates that the underlying data distribution is Gaussian. However, recent work  showed that for sparse data that capture counts (e.g. number of messages exchanged), it is more beneficial to postulate a Poisson distribution, and therefore to use the KL-divergence as a loss function. This has been more recently adopted in , showing very promising results in biomedical applications. Therefore, one natural direction, which we follow in the first part of the next section, is to extend CORCONDIA to this scenario.
3 Proposed Methods
In exploratory data mining applications, the case is very frequently the following: we are given a piece of (usually very large) data that is of interest to a domain expert, and we are asked to identify regular and irregular patterns that are potentially useful to the expert who is providing the data. During this process, very often, the analysis is carried out in a completely unsupervised way, since ground truth and labels are either very expensive or impossible to obtain. In our context of tensor data mining, here is the problem at hand:
Informal Problem 1
Given a tensor without ground truth or labelled data, how can we analyze it using the PARAFAC decomposition so that we can also:
Determine automatically a good number of components for the decomposition
Provide quality guarantees for the resulting decomposition
Minimize human intervention and trial-and-error testing
In order to attack the above problem, we first describe in Section 3.1 how we can derive a fast and efficient metric of the quality of a decomposition, assuming the KL-divergence. Then, in Section 3.2, we introduce AutoTen, our unified algorithm for automatic tensor mining with minimal user intervention and quality characterization of the solution.
3.1 Quality Assessment with KL-Divergence
As we saw in the description of CORCONDIA with Frobenius norm loss, its computation requires solving the least squares problem
$$\min_{\underline{\mathbf{G}}} \big\|\operatorname{vec}(\underline{\mathbf{X}}) - \boldsymbol{\Pi}\operatorname{vec}(\underline{\mathbf{G}})\big\|_F^2.$$
In the case of the CP_APR modelling, where the loss function is the KL-divergence, the minimization problem that we need to solve is:
$$\min_{\mathbf{x} \ge 0}\; D_{KL}\big(\mathbf{y} \,\|\, \boldsymbol{\Pi}\mathbf{x}\big), \quad (1)$$
where in our case $\boldsymbol{\Pi} = \mathbf{A} \otimes \mathbf{B} \otimes \mathbf{C}$ and $\mathbf{y} = \operatorname{vec}(\underline{\mathbf{X}})$.
Unlike the Frobenius norm case, where the solution to the problem is the Least Squares estimate, in the KL-divergence case the problem does not have a closed form solution; instead, iterative solutions apply. The most prominent approach to this problem is via an optimization technique called Majorization-Minimization (MM) or Iterative Majorization . In a nutshell, in MM, given a function that is hard to minimize directly, we derive a "majorizing" function, which is always greater than the function to be minimized, except at a support point where they are equal; we minimize the majorizing function, and iteratively update the support point using the minimizer of that function. This procedure converges to a local minimum. For the problem of Eq. 1,  and subsequently  employ the following update rule, which is applied iteratively until convergence to a stationary point:
$$\mathbf{x}(j)^{(k+1)} = \mathbf{x}(j)^{(k)}\, \frac{\sum_i \boldsymbol{\Pi}(i,j)\, \mathbf{y}(i) / \big(\boldsymbol{\Pi}\mathbf{x}^{(k)}\big)(i)}{\sum_i \boldsymbol{\Pi}(i,j)}, \quad (2)$$
where $(k)$ denotes the $k$-th iteration index.
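As a concrete illustration, here is a minimal numpy sketch (all names are ours) of the generic multiplicative MM update for the KL objective, verified only by checking that the objective decreases from its starting point:

```python
import numpy as np

# Multiplicative (MM) updates for min_x KL(y || Pi @ x), x >= 0.
# Generic version: does not yet exploit any Kronecker structure in Pi.
def kl_mm_solve(Pi, y, iters=500, eps=1e-12):
    x = np.ones(Pi.shape[1])
    col_sums = Pi.sum(axis=0)                     # normalization factor
    for _ in range(iters):
        ratio = y / np.maximum(Pi @ x, eps)       # y(i) / (Pi x)(i)
        x = x * (Pi.T @ ratio) / np.maximum(col_sums, eps)
    return x

def kl_div(y, z, eps=1e-12):
    return np.sum(y * np.log(np.maximum(y, eps) / np.maximum(z, eps)) - y + z)

rng = np.random.default_rng(0)
Pi = rng.random((30, 5))
y = Pi @ rng.random(5)                            # consistent count-like data
x = kl_mm_solve(Pi, y)
assert kl_div(y, Pi @ x) < kl_div(y, Pi @ np.ones(5))  # objective decreased
```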
The above solution is generic for any structure of $\boldsymbol{\Pi}$. Remember, however, that $\boldsymbol{\Pi}$ has very specific Kronecker structure, which we should exploit. Additionally, suppose that we have an $I \times J \times K$ tensor; then the large dimension of $\boldsymbol{\Pi}$ will be $I \cdot J \cdot K$. If we attempt to materialize, store, and use $\boldsymbol{\Pi}$ throughout the algorithm, that can prove catastrophic to the algorithm's performance. We can exploit the Kronecker structure of $\boldsymbol{\Pi}$ so that we break down Eq. 2 into pieces, each one of which can be computed efficiently, given the structure of $\boldsymbol{\Pi}$. The first step is to decompose the expression in the numerator of Eq. 2. In particular, we can equivalently write it as $\boldsymbol{\Pi}^\top \mathbf{x}_1$ applied entry-wise, where $\mathbf{x}_1(i) = \mathbf{y}(i) / \big(\boldsymbol{\Pi}\mathbf{x}^{(k)}\big)(i)$. Due to the Kronecker structure of $\boldsymbol{\Pi}$, both $\boldsymbol{\Pi}\mathbf{x}^{(k)}$ and $\boldsymbol{\Pi}^\top \mathbf{x}_1$ can be computed as KronMatVec operations, without materializing $\boldsymbol{\Pi}$.
Therefore, the update to $\mathbf{x}^{(k)}$ is efficiently calculated in the above steps. The normalization factor of the update (the denominator of Eq. 2) is equal to the vector of column sums of $\boldsymbol{\Pi}$, i.e. the row sums of $\boldsymbol{\Pi}^\top$. Given the Kronecker structure of $\boldsymbol{\Pi}$, however, the following holds:
The row sums of a Kronecker product matrix can be rewritten as the Kronecker product of the individual row sums. Concretely, we can write the row sums of $\mathbf{A}$ and $\mathbf{B}$ as $\mathbf{A}\mathbf{1}_m$ and $\mathbf{B}\mathbf{1}_n$, where $\mathbf{1}_m$ and $\mathbf{1}_n$ are all-ones column vectors of size $m$ and $n$ respectively. For the Kronecker product of the row sums, by using the mixed-product property of the Kronecker product, we have
$$(\mathbf{A}\mathbf{1}_m) \otimes (\mathbf{B}\mathbf{1}_n) = (\mathbf{A} \otimes \mathbf{B})(\mathbf{1}_m \otimes \mathbf{1}_n) = (\mathbf{A} \otimes \mathbf{B})\,\mathbf{1}_{mn},$$
which is exactly the vector of row sums of $\mathbf{A} \otimes \mathbf{B}$; this concludes the proof. Thus, the normalization factor can also be computed without ever materializing $\boldsymbol{\Pi}$.
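The property is easy to verify numerically; a quick numpy check (ours):

```python
import numpy as np

# Row sums of A kron B equal the Kronecker product of the row sums.
rng = np.random.default_rng(1)
A, B = rng.random((3, 4)), rng.random((2, 5))
lhs = np.kron(A, B).sum(axis=1)                   # row sums of the product
rhs = np.kron(A.sum(axis=1), B.sum(axis=1))       # Kronecker of row sums
assert np.allclose(lhs, rhs)
```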
Putting everything together, we end up with Algorithm 2, which is an efficient solution to the minimization problem of Equation 1. As in the naive case, we also use Iterative Majorization in the efficient algorithm; we iteratively update $\mathbf{x}^{(k)}$ until we converge to a local optimum. Finally, Algorithm 1 shows the steps to compute CORCONDIA under KL-divergence efficiently.
3.2 AutoTen: Automated Unsupervised Tensor Mining
At this stage, we have the tools we need in order to design an automated tensor mining algorithm that minimizes human intervention and provides quality characterization of the solution. We call our proposed method AutoTen, and we view this as a step towards making tensor mining a fully automated tool, used as a black box by academic and industrial practitioners.
AutoTen is a two-step algorithm: we first search through the solution space, and in the second step we automatically select a good solution based on its quality and the number of components it offers. A sketch of AutoTen follows, and it is also outlined in Algorithm 3.
The user provides a data tensor, as well as a maximum rank that reflects the budget she is willing to devote to AutoTen's search. We neither have nor require any prior knowledge of whether the tensor is highly sparse or dense, or whether it contains real values or counts, which would hint at whether we should use, say, CP_NMU (postulating Frobenius norm loss) or CP_APR (postulating KL-divergence loss).
Fortunately, our work in this paper, as well as our previous work , have equipped us with tools for handling all of the above cases. Thus, we follow a data-driven approach, where we let the data show us whether CP_NMU or CP_APR captures better structure. For a grid of values for the decomposition rank (bounded by the user-provided maximum rank), we run both CP_NMU and CP_APR, and we record the quality of each result, as measured by the CORCONDIA diagnostic (using the algorithm in  and Algorithm 1 respectively), truncating negative values to zero.
At this point, for both CP_NMU and CP_APR, we have a set of points in two-dimensional space, reflecting the quality and the corresponding number of components. Informally, our problem here is the following:
Informal Problem 2
Given a set of points, each consisting of a quality score and a number of components, find one that maximizes the quality of the decomposition, while also finding as many hidden components in the data as possible.
Intuitively, we are seeking a decomposition that discovers as many latent components as possible, without sacrificing the quality of those components. Essentially, we have a multi-objective optimization problem, where we need to maximize both the quality and the number of components. However, if we, say, compute the Pareto front of those points (i.e. the subset of all non-dominated points), we end up with a family of solutions without a clear guideline on how to select one. We propose instead the following effective, two-step maximization algorithm that gives an intuitive, data-driven solution:
Max c step: Given the vector of quality scores, run 2-means clustering on its values. This essentially divides the vector into a set of good/high values and a set of low/bad ones. We then select the cluster with the larger mean.
Max F step: Within the cluster of points with the maximum mean, we select the point that maximizes the number of components.
Another alternative is to formally define a function of that we wish to maximize, and select the maximum via enumeration. Coming up with the particular function to maximize, considering the intuitive objective of maximizing the number of components that we can extract with reasonably high quality (), is a hard problem, and we risk biasing the selection with a specific choice of a function. Nevertheless, an example such function can be for , and ; this function essentially measures the area of the rectangle formed by the lines connecting with the axes (in the log-log space) and intuitively seeks to find a good compromise between maximizing and . This function performs closely to the proposed data-driven approach and we defer a detailed discussion and investigation to future work.
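The two-step max-c/max-F selection above can be sketched as follows (a simple 1-D 2-means; the function name and the toy inputs are ours):

```python
import numpy as np

# Two-step selection sketch: F[i] is the number of components of the
# i-th candidate solution, c[i] its quality score. Cluster c into a
# high and a low group, keep the high group, then pick max F inside it.
def select_solution(F, c, iters=50):
    c = np.asarray(c, dtype=float)
    centers = np.array([c.min(), c.max()])        # init 1-D 2-means
    labels = np.zeros(len(c), dtype=int)
    for _ in range(iters):
        labels = np.abs(c[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = c[labels == k].mean()
    good = np.flatnonzero(labels == centers.argmax())   # high-quality cluster
    return good[np.argmax(np.asarray(F)[good])]         # most components in it

F = [2, 3, 4, 5, 6]
c = [90, 85, 80, 10, 5]
print(select_solution(F, c))                      # -> 2 (F=4, c=80)
```

On this toy input, the quality scores split into {90, 85, 80} and {10, 5}; within the high cluster, the point with the most components (F=4) wins, even though it does not have the single highest quality.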
After choosing the "best" point for each loss function, at the final step of AutoTen we have to select between the results of CP_NMU and CP_APR. In order to do so, we can use the following strategies:
Calculate the sum of quality scores across all candidate ranks for each loss function, and select the method that gives the largest sum. The intuition behind this data-driven strategy is choosing the loss function that is able to discover results with higher quality on aggregate, for more potential ranks.
Select the result with the maximum quality score between the two chosen points. This strategy is conservative and aims for the highest quality of results, possibly at the expense of components of lesser quality that could still be acceptable for exploratory analysis.
Select the result with the maximum number of components between the two chosen points. Contrary to the previous strategy, this one is more aggressive, aiming for the highest number of components that can be extracted with acceptable quality.
Empirically, the last strategy seems to give better results; however, they all perform very closely on synthetic data. The particular choice of strategy depends on the application needs; e.g. if it is imperative for the quality of the components to be high, then strategy 2 should be preferred over strategy 3.
We point out that lines 1-5 of Algorithm 3 are embarrassingly parallel. Finally, it is important to note that AutoTen not only seeks a good number of components for the decomposition, combining the best of both worlds of CP_NMU and CP_APR, but is furthermore able to provide quality assessment for the decomposition: if, for a given dataset and budget, none of the solutions that AutoTen sifts through yields a satisfactory result, the user will be able to tell from the very low (or, in extreme cases, zero) quality score.
4 Experimental Evaluation
We implemented AutoTen in Matlab, using the Tensor Toolbox , which provides efficient manipulation and storage of sparse tensors. We make our code publicly available at http://www.cs.cmu.edu/~epapalex/src/AutoTen.zip; the online version of our code contains a test case that uses the same code as the following evaluation. All experiments were run on a workstation with 4 Intel(R) Xeon(R) E7-8837 processors and 1TB of RAM.
4.1 Evaluation on Synthetic Data
In this section, we empirically measure AutoTen's ability to uncover the true number of components hidden in a tensor. The experimental setting is as follows: using the function create_problem of the Tensor Toolbox for Matlab as a standardized means of generating synthetic tensors, we create two different test cases: 1) sparse factors, with total number of non-zeros equal to 500, and 2) dense factors. In both cases, we generate random factors with integer values, for a true rank ranging from 2 to 5. For both test cases, we distinguish a noisy case (where Gaussian noise is added by default by create_problem) and a noiseless case.
We compare AutoTen against three baselines:
Baseline 1: A Bayesian tensor decomposition approach, as introduced very recently in , which automatically determines the rank.
Baseline 2: This is a very simple heuristic approach where, for a grid of values of the rank, we run CP_NMU and record the Frobenius norm loss for each solution. If for two consecutive iterations the loss does not improve by more than a small positive threshold, we declare as output the result of the previous iteration.
Baseline 3: Same as Baseline 2, with the sole difference that we use CP_APR and, accordingly, instead of the Frobenius norm reconstruction error we measure the log-likelihood, stopping when it no longer improves by more than the threshold. We expect Baseline 3 to be more effective than Baseline 2 on sparse data, due to the more delicate and effective treatment of sparse count data by CP_APR.
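The stopping rule shared by Baselines 2 & 3 can be sketched as follows (the function name and toy numbers are ours; "loss" stands for either the Frobenius error or the negative log-likelihood, so that it is decreasing in both cases):

```python
# Baseline-style stopping rule: given the loss recorded at each
# candidate rank, stop as soon as the improvement over the previous
# rank falls below a small threshold, and report the previous rank.
def pick_rank(ranks, losses, eps=1e-6):
    for i in range(1, len(losses)):
        if losses[i - 1] - losses[i] <= eps:
            return ranks[i - 1]
    return ranks[-1]                              # loss kept improving

print(pick_rank([2, 3, 4, 5], [10.0, 6.0, 5.9999999, 5.0]))  # -> 3
```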
AutoTen, as well as Baselines 2 & 3, requires a maximum bound on the rank; for fairness, we use the same bound for all three methods. In Figures 2 and 3 we show the results for both test cases, for noisy and noiseless data respectively. The error is measured as the deviation of the number of components estimated by each method from the true one. Due to the randomized nature of the synthetic data generation, we ran 1000 iterations and show the average results. In the noisy case (Fig. 2) we observe that in both scenarios and for all chosen ranks, AutoTen outperforms the baselines, having lower error. In the noiseless case of Fig. 3, we observe consistent behavior, with all methods experiencing a small boost to their performance due to the absence of noise. We calculated the statistical significance of our results using a two-sided sign test.
Overall, we conclude that AutoTen largely outperforms the baselines. The problem at hand is an extremely hard one, and we are not expecting any tractable method to solve it perfectly. Thus, the results we obtain here are very encouraging.
5 Data Mining Case Study
After establishing that AutoTen is able to perform well in a controlled, synthetically generated setting, the next step is to see its results "in the wild". To that end, we conduct two case studies. Section 5.1 takes the 10 diverse real datasets shown in Table 2 and investigates their rank structure. In Section 5.2 we apply AutoTen to two of the datasets of Table 2 and analyze the results, as part of an exploratory data mining study.
|Name||Description||Dimensions||Number of nonzeros|
|ENRON||(sender, recipient, month)||9838|
|Reality Mining ||(person, person, means of communication)||5022|
|Facebook ||(wall owner, poster, day)||737778|
|Taxi [39, 40]||(latitude, longitude, minute)||17762489|
|DBLP ||(paper, paper, view)||274106|
|Netflix||(movie, user, date)||50244707|
|Amazon co-purchase ||(product, product, product group)||5726|
|Amazon metadata ||(product, customer, product group)||441301|
|Yelp||(user, business, term)||10009860|
|Airport||(airport, airport, airline)||58443|
5.1 Rank Structure of Real Datasets
Since exploration of the rank structure of a dataset, using the CORCONDIA diagnostic, is an integral part of AutoTen, we deem it necessary to dive deeper into that process. In this case study we analyze the rank structure of 10 real datasets, as captured by CORCONDIA with Frobenius norm loss (using our algorithm from ), as well as by CORCONDIA with KL-divergence loss (introduced here). Most of the datasets we use are publicly available and can be obtained by either following the link within the original work that introduced them, or (whenever applicable) a direct link. ENRON (http://www.cs.cmu.edu/~enron/) is a social network dataset, recording the number of emails exchanged between employees of the company for a period of time, during the company crisis. Reality Mining  is a multi-view social network dataset, recording relations between MIT students (who calls whom, who messages whom, who is close to whom, and so on). Facebook  is a time-evolving snapshot of Facebook, recording people posting on other people's Walls. Taxi (http://research.microsoft.com/apps/pubs/?id=152883) is a dataset of taxi trajectories in Beijing; we discretize latitude and longitude to a grid. DBLP is a dataset recording which researcher published which paper, under three different views (the first view shows co-authorship, the second view shows citation, and the third view shows whether two authors share at least three keywords in the title or abstract of their papers). Netflix comes from the Netflix prize dataset and records movie ratings by users over time. Amazon co-purchase data records items bought together, and the category of the first of the two products. Amazon metadata records customers who reviewed a product, and the corresponding product category. Yelp contains reviews of Yelp users for various businesses (from the data challenge: https://www.yelp.com/dataset_challenge/dataset).
Finally, Airport (http://openflights.org/data.html) contains records of flights between different airports, and the operating airline.
We ran our algorithms for ranks up to 50, truncating negative values to zero. For KL-divergence and the datasets Facebook, Netflix, Yelp, and Airport, we used smaller versions (the first 500 rows for Netflix and Yelp, and the first 1000 rows for Facebook and Airport), due to the high memory requirements of Matlab; this means that the corresponding figures describe the rank structure of a smaller dataset, which might differ from that of the full one. Figure 4 shows CORCONDIA when using the Frobenius norm as a loss, and Fig. 5 when using the KL-divergence. The way to interpret these figures is the following: assuming a CP_NMU (Fig. 4) or a CP_APR (Fig. 5) model, each figure shows the modelling quality of the data for a given rank. This sheds light on the rank structure of a particular dataset (although that is not to say that it provides a definitive answer about its true rank). For the given datasets, we observe a few interesting differences in structure: for instance, ENRON and Taxi in Fig. 4 seem to have good quality for a few components. On the other hand, Reality Mining, DBLP, and Amazon metadata have reasonably acceptable quality for a larger range of components, with the quality decreasing as the number gets higher. Another interesting observation, confirming recent results in , is that Yelp seems to be modelled better using a high number of components. Figures that are all-zero merely show that no good structure was detected for up to 50 components; however, this might indicate that such datasets (e.g. Netflix) have an even higher number of components. Finally, contrasting Fig. 5 to Fig. 4, we observe that in many cases the KL-divergence is able to discover better structure than the Frobenius norm (e.g. ENRON and Amazon co-purchase).
5.2 AutoTen in practice
We used AutoTen to analyze two of the datasets shown in Table 2. In the following lines we show our results.
5.2.1 Analyzing Taxi
The data we have span an entire week's worth of measurements, with temporal granularity of minutes. First, we tried quantizing the latitude and longitude into a fine grid; however, AutoTen warned us that the decomposition was not able to detect good and coherent structure in the data, perhaps due to the extremely sparse variable space of our grid. Subsequently, we modelled the data using a coarser grid, and AutoTen was able to detect good structure. In particular, AutoTen output 8 rank-one components, choosing the Frobenius norm as a loss function.
In Figure 6 we show 4 representative components of the decomposition. In each sub-figure, we overlay the map of Beijing with the coordinates that appear to have high activity in the particular component; every sub-figure also shows the temporal profile of the component. The first two components (Fig. 6(a), (b)) spatially refer to a similar area, roughly corresponding to the tourist and business center in the central rings of the city. The difference is that Fig. 6(a) shows high activity during the weekdays and declining activity over the weekend (indicated by five peaks of equal height, followed by two smaller peaks), whereas Fig. 6(b) shows a slightly inverted temporal profile, where the activity peaks over the weekend; we conclude that Fig. 6(a) most likely captures business traffic that peaks during the week, whereas Fig. 6(b) captures tourist and leisure traffic that peaks over the weekend. The third component (Fig. 6(c)) is highly active around the Olympic Center and Convention Center area, with peaking activity in the middle of the week. Finally, the last component (Fig. 6(d)) shows high activity only outside of Beijing's international airport, where taxis gather to pick up customers; on the temporal side, we see daily peaks of activity, with the collective activity dropping during the weekend, when there is significantly less traffic of people coming to the city for business. By being able to analyze such trajectory data into highly interpretable results, we can help policymakers better understand the traffic patterns of taxis in big cities, estimate high- and low-demand areas and times, and optimize city planning in that respect. There has been very recent work  towards the same direction, and we view our results as complementary.
5.2.2 Analyzing Amazon co-purchase
|Cluster type||Products||Product Types|
|#1 Self Improvement||Resolving Conflicts At Work : A Complete Guide for Everyone on the Job||Book|
|How to Kill a Monster (Goosebumps)||Book|
|Mensa Visual Brainteasers||Book|
|Learning in Overdrive: Designing Curriculum, Instruction, and Assessment from Standards : A Manual for Teachers||Book|
|#2 Psychology, Self Improvement||Physicians of the Soul: The Psychologies of the World’s Greatest Spiritual Leaders||Book|
|The Rise of the Creative Class: And How It’s Transforming Work, Leisure, Community and Everyday Life||Book|
|#3 Technical Books||Beginning ASP.NET Databases using C#||Book|
|BizPricer Business Valuation Manual wSoftware||Book|
|Desde Que Samba E Samba||Music|
|#4 History||War at Sea: A Naval History of World War II||Book|
|Jailed for Freedom: American Women Win the Vote||Book|
|The Perfect Plan (7th Heaven)||Book|
This dataset records pairs of products that were purchased together by the same customer on Amazon, as well as the category of the first product in each pair. As shown in Figures 4(g) and 5, this dataset does not have perfect trilinear structure; however, a low-rank trilinear approximation still offers reasonably good insights for product recommendation and market basket analysis. By analyzing this dataset, we seek to find coherent groups of products that people tend to purchase together, aiming for better product recommendations and suggestions. For the purposes of this study, we extracted a small piece of the co-purchase network of 256 products. AutoTen extracted 24 components, choosing KL-Divergence as the loss. In Table 3 we show a representative subset of the resulting components (which were remarkably sparse, due to the KL-Divergence fitting by CP_APR). We observe that products of similar genre and theme tend to naturally cluster together. For instance, cluster #1 contains mostly self-improvement books. We also observe a few topical outliers, such as the book How to Kill a Monster (Goosebumps) in cluster #1, and the CD Desde Que Samba E Samba in cluster #3, which otherwise contains technical/software development books.
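The clusters in Table 3 are read off the product-mode factor matrix: each component’s largest nonzero entries name the products in that cluster. Below is a minimal sketch with a synthetic stand-in for the factor matrix; the name `factor`, the sizes, and the thresholding used to mimic the sparsity of KL-fitted factors are hypothetical, not the actual CP_APR output.

```python
import numpy as np

# Hypothetical product-mode factor matrix: 256 products x 24 components, with
# hard thresholding standing in for the sparsity that KL-divergence fitting induces.
rng = np.random.default_rng(1)
n_products, rank = 256, 24
factor = np.abs(rng.standard_normal((n_products, rank)))
factor[factor < 1.5] = 0.0

def cluster_members(r, top_k=5):
    """Indices of the products with the largest nonzero weight in component r,
    i.e. the members of one 'cluster' as listed in Table 3."""
    weights = factor[:, r]
    nonzero = np.flatnonzero(weights)
    order = nonzero[np.argsort(weights[nonzero])[::-1]]
    return order[:top_k].tolist()

members = cluster_members(0)
```

Mapping the returned indices back to product titles and categories yields rows of Table 3.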
6 Related Work
Tensors and their data mining applications
One of the first applications was web mining, extending the popular HITS algorithm . There has been work on analyzing citation networks (such as DBLP) , detecting anomalies in computer networks [2, 3, 4], extracting patterns from and completing Knowledge Bases [5, 4, 7], and analyzing time-evolving or multi-view social networks [8, 2, 9, 4, 22, 10]. The long list of applications continues, with extensions of Latent Semantic Analysis [44, 6], extensions of Subspace Clustering to higher orders , Crime Forecasting , Image Processing , mining Brain data [48, 21, 49], trajectory and mobility data [50, 40], and bioinformatics .
Choosing the right number of components
As we’ve mentioned throughout the text, CORCONDIA  uses properties of the PARAFAC decomposition to hint at the right number of components. In , we introduce a scalable algorithm for computing CORCONDIA (under the Frobenius norm). Moving away from the PARAFAC decomposition, Kiers and Kinderen  introduce a method for choosing the number of components in Tucker3. There has been recent work using the Minimum Description Length (MDL) principle: in , the authors use MDL in the context of community detection in time-evolving social network tensors, whereas in , Metzler and Miettinen use MDL to score the quality of components of a binary tensor factorization. Finally, there have also been recent advances using Bayesian methods  to automatically decide the number of components.
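For a dense third-order tensor, the diagnostic itself is compact: fit an unconstrained Tucker core G to the data given the CP factors, and measure how far G is from the superdiagonal core that a perfect PARAFAC model implies. The following is a minimal numpy sketch of this computation (a naive dense version, not the scalable sparse algorithm referenced above):

```python
import numpy as np

def corcondia(X, A, B, C):
    """Core Consistency Diagnostic for CP factors A, B, C of a 3-way tensor X.

    Fits the least-squares Tucker core G = X x1 pinv(A) x2 pinv(B) x3 pinv(C)
    and scores (as a percentage) how close G is to the superdiagonal core.
    """
    R = A.shape[1]
    # G[a,b,c] = sum_{i,j,k} pinv(A)[a,i] * pinv(B)[b,j] * pinv(C)[c,k] * X[i,j,k]
    G = np.einsum('ai,bj,ck,ijk->abc',
                  np.linalg.pinv(A), np.linalg.pinv(B), np.linalg.pinv(C), X)
    T = np.zeros((R, R, R))
    T[np.arange(R), np.arange(R), np.arange(R)] = 1.0  # superdiagonal "identity" core
    return 100.0 * (1.0 - ((G - T) ** 2).sum() / R)
```

A noiseless rank-R trilinear tensor scores (numerically) 100, and the score drops as the trilinear structure degrades or the number of components is overestimated, which is what makes the diagnostic useful for choosing R.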
In this paper, we work towards an automatic, unsupervised tensor mining algorithm that minimizes user intervention. Our main contributions are:
Algorithms We propose AutoTen, a novel automatic and unsupervised tensor mining algorithm, which can provide a quality characterization of the solution. We extend the highly intuitive heuristic of  to the KL-Divergence loss, providing an efficient algorithm.
Evaluation & Discovery We evaluate our methods on synthetic data, showing their superiority over the baselines, as well as on a wide variety of real datasets. Finally, we apply AutoTen to two real datasets, discovering meaningful patterns (Section 5.2).
Research was supported by the National Science Foundation Grant No. IIS-1247489. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding parties. The author would like to thank Professors Christos Faloutsos, Nicholas Sidiropoulos, and Rasmus Bro for valuable conversations, and Miguel Araujo for sharing the Airport dataset.
-  Tamara G Kolda, Brett W Bader, and Joseph P Kenny, “Higher-order web link analysis using multilinear algebra,” in Data Mining, Fifth IEEE International Conference on. IEEE, 2005, 8 pp.
-  Tamara G Kolda and Jimeng Sun, “Scalable tensor decompositions for multi-aspect data mining,” in Data Mining, 2008. ICDM’08. Eighth IEEE International Conference on. IEEE, 2008, pp. 363–372.
-  K. Maruhashi, F. Guo, and C. Faloutsos, “Multiaspectforensics: Pattern mining on large-scale heterogeneous networks with tensor analysis,” in Proceedings of the Third International Conference on Advances in Social Network Analysis and Mining, 2011.
-  Evangelos E Papalexakis, Christos Faloutsos, and Nicholas D Sidiropoulos, “Parcube: Sparse parallelizable tensor decompositions,” in Machine Learning and Knowledge Discovery in Databases, pp. 521–536. Springer, 2012.
-  U Kang, Evangelos Papalexakis, Abhay Harpale, and Christos Faloutsos, “Gigatensor: scaling tensor analysis up by 100 times-algorithms and discoveries,” in Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2012, pp. 316–324.
-  Kai-Wei Chang, Wen-tau Yih, and Christopher Meek, “Multi-relational latent semantic analysis,” in EMNLP, 2013, pp. 1602–1612.
-  Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek, “Typed tensor decomposition of knowledge bases for relation extraction,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1568–1579.
-  Brett W Bader, Richard A Harshman, and Tamara G Kolda, “Temporal analysis of semantic graphs using asalsan,” in Data Mining, 2007. ICDM 2007. Seventh IEEE International Conference on. IEEE, 2007, pp. 33–42.
-  Yu-Ru Lin, Jimeng Sun, Paul Castro, Ravi Konuru, Hari Sundaram, and Aisling Kelliher, “Metafac: community discovery via relational hypergraph factorization,” in Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2009, pp. 527–536.
-  Meng Jiang, Peng Cui, Fei Wang, Xinran Xu, Wenwu Zhu, and Shiqiang Yang, “Fema: flexible evolutionary multi-faceted analysis for dynamic behavioral pattern discovery,” in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2014, pp. 1186–1195.
-  Brett W. Bader and Tamara G. Kolda, “Efficient MATLAB computations with sparse and factored tensors,” SIAM Journal on Scientific Computing, vol. 30, no. 1, pp. 205–231, December 2007.
-  Dóra Erdos and Pauli Miettinen, “Walk’n’merge: A scalable algorithm for boolean tensor factorization,” in Data Mining (ICDM), 2013 IEEE 13th International Conference on. IEEE, 2013, pp. 1037–1042.
-  Alex Beutel, Abhimanu Kumar, Evangelos Papalexakis, Partha Pratim Talukdar, Christos Faloutsos, and Eric P Xing, “Flexifact: Scalable flexible factorization of coupled tensors on hadoop,” in SIAM SDM, 2014.
-  André LF De Almeida, Alain Y Kibangou, et al., “Distributed large-scale tensor decomposition,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014.
-  Christopher J Hillar and Lek-Heng Lim, “Most tensor problems are np-hard,” Journal of the ACM (JACM), vol. 60, no. 6, pp. 45, 2013.
-  T.G. Kolda and B.W. Bader, “The tophits model for higher-order web link analysis,” in Workshop on Link Analysis, Counterterrorism and Security, 2006, vol. 7, pp. 26–29.
-  J. Sun, D. Tao, and C. Faloutsos, “Beyond streams and graphs: dynamic tensor analysis,” in Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2006, pp. 374–383.
-  Daniel M Dunlavy, Tamara G Kolda, and Evrim Acar, “Temporal link prediction using matrix and tensor factorizations,” ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 5, no. 2, pp. 10, 2011.
-  Steffen Rendle and Lars Schmidt-Thieme, “Pairwise interaction tensor factorization for personalized tag recommendation,” in Proceedings of the third ACM international conference on Web search and data mining. ACM, 2010, pp. 81–90.
-  Dacheng Tao, Xuelong Li, Weiming Hu, Stephen Maybank, and Xindong Wu, “Supervised tensor learning,” in Data Mining, Fifth IEEE International Conference on. IEEE, 2005, 8 pp.
-  Lifang He, Xiangnan Kong, S Yu Philip, Ann B Ragin, Zhifeng Hao, and Xiaowei Yang, “Dusk: A dual structure-preserving kernel for supervised tensor learning with applications to neuroimages,” matrix, vol. 3, no. 1, pp. 2, 2014.
-  Miguel Araujo, Spiros Papadimitriou, Stephan Günnemann, Christos Faloutsos, Prithwish Basu, Ananthram Swami, Evangelos E Papalexakis, and Danai Koutra, “Com2: Fast automatic discovery of temporal (’comet’) communities,” in Advances in Knowledge Discovery and Data Mining, pp. 271–283. Springer, 2014.
-  Saskia Metzler and Pauli Miettinen, “Clustering boolean tensors,” arXiv preprint arXiv:1501.00696, 2015.
-  Q Zhao, L Zhang, and A Cichocki, “Bayesian cp factorization of incomplete tensors with automatic rank determination.”
-  R.A. Harshman, “Foundations of the parafac procedure: Models and conditions for an “explanatory” multimodal factor analysis,” 1970.
-  Rasmus Bro, Multi-way analysis in the food industry: models, algorithms, and applications, Ph.D. thesis, 1998.
-  Eric C Chi and Tamara G Kolda, “On tensors, sparsity, and nonnegative factorizations,” SIAM Journal on Matrix Analysis and Applications, vol. 33, no. 4, pp. 1272–1299, 2012.
-  Paul E Buis and Wayne R Dyksen, “Efficient vector and parallel manipulation of tensor products,” ACM Transactions on Mathematical Software (TOMS), vol. 22, no. 1, pp. 18–23, 1996.
-  B.W. Bader and T.G. Kolda, “Matlab tensor toolbox version 2.2,” Albuquerque, NM, USA: Sandia National Laboratories, 2007.
-  Pieter M Kroonenberg and Jan De Leeuw, “Principal component analysis of three-mode data by means of alternating least squares algorithms,” Psychometrika, vol. 45, no. 1, pp. 69–97, 1980.
-  Lieven De Lathauwer and Dimitri Nion, “Decompositions of a higher-order tensor in block terms-part iii: Alternating least squares algorithms,” SIAM journal on Matrix Analysis and Applications, vol. 30, no. 3, pp. 1067–1083, 2008.
-  Rasmus Bro and Henk AL Kiers, “A new efficient method for determining the number of components in parafac models,” Journal of chemometrics, vol. 17, no. 5, pp. 274–286, 2003.
-  E.E. Papalexakis and C. Faloutsos, “Fast efficient and scalable core consistency diagnostic for the parafac decomposition for big sparse tensors,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE, 2015.
-  Joyce C Ho, Joydeep Ghosh, and Jimeng Sun, “Marble: high-throughput phenotyping from electronic health records via sparse nonnegative tensor factorization,” in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2014, pp. 115–124.
-  Willem J Heiser, “Convergent computation by iterative majorization: theory and applications in multidimensional data analysis,” Recent advances in descriptive multivariate analysis, pp. 157–189, 1995.
-  Cédric Févotte and Jérôme Idier, “Algorithms for nonnegative matrix factorization with the β-divergence,” Neural Computation, vol. 23, no. 9, pp. 2421–2456, 2011.
-  Nathan Eagle, Alex Sandy Pentland, and David Lazer, “Inferring friendship network structure by using mobile phone data,” Proceedings of the National Academy of Sciences, vol. 106, no. 36, pp. 15274–15278, 2009.
-  Bimal Viswanath, Alan Mislove, Meeyoung Cha, and Krishna P. Gummadi, “On the evolution of user interaction in facebook,” in Proceedings of the 2nd ACM SIGCOMM Workshop on Social Networks (WOSN’09), August 2009.
-  Jing Yuan, Yu Zheng, Xing Xie, and Guangzhong Sun, “Driving with knowledge from the physical world,” in Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2011, pp. 316–324.
-  Yilun Wang, Yu Zheng, and Yexiang Xue, “Travel time estimation of a path using sparse trajectories,” in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 2014, KDD ’14, pp. 25–34, ACM.
-  Evangelos E Papalexakis, Leman Akoglu, and Dino Ienco, “Do more views of a graph help? community detection and clustering in multi-graphs,” in Information Fusion (FUSION), 2013 16th International Conference on. IEEE, 2013, pp. 899–905.
-  Jure Leskovec and Andrej Krevl, “SNAP Datasets: Stanford large network dataset collection,” http://snap.stanford.edu/data, June 2014.
-  Yongfeng Zhang, Min Zhang, Yi Zhang, Yiqun Liu, and Shaoping Ma, “Understanding the sparsity: Augmented matrix factorization with sampled constraints on unobservables,” in Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. ACM, 2014, pp. 1189–1198.
-  Deng Cai, Xiaofei He, and Jiawei Han, “Tensor space model for document analysis,” in Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 2006, pp. 625–626.
-  Heng Huang, Chris Ding, Dijun Luo, and Tao Li, “Simultaneous tensor subspace selection and clustering: the equivalence of high order svd and k-means clustering,” in Proceedings of the 14th ACM SIGKDD international conference on Knowledge Discovery and Data mining. ACM, 2008, pp. 327–335.
-  Yang Mu, Wei Ding, Melissa Morabito, and Dacheng Tao, “Empirical discriminative tensor analysis for crime forecasting,” in Knowledge Science, Engineering and Management, pp. 293–304. Springer, 2011.
-  Ji Liu, Przemyslaw Musialski, Peter Wonka, and Jieping Ye, “Tensor completion for estimating missing values in visual data,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, no. 1, pp. 208–220, 2013.
-  Ian Davidson, Sean Gilpin, Owen Carmichael, and Peter Walker, “Network discovery via constrained tensor analysis of fmri data,” in Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2013, pp. 194–202.
-  Evangelos E Papalexakis, Tom M Mitchell, Nicholas D Sidiropoulos, Christos Faloutsos, Partha Pratim Talukdar, and Brian Murphy, “Turbo-smt: Accelerating coupled sparse matrix-tensor factorizations by 200x,” in SIAM SDM, 2014.
-  Vincent Wenchen Zheng, Bin Cao, Yu Zheng, Xing Xie, and Qiang Yang, “Collaborative filtering meets mobile recommendation: A user-centered approach.,” in AAAI, 2010, vol. 10, pp. 236–241.
-  Henk AL Kiers and Albert Kinderen, “A fast method for choosing the numbers of components in tucker3 analysis,” British Journal of Mathematical and Statistical Psychology, vol. 56, no. 1, pp. 119–125, 2003.