# Extension of Correspondence Analysis to multiway data-sets through High Order SVD: a geometric framework

This paper presents an extension of Correspondence Analysis (CA) to tensors through High Order Singular Value Decomposition (HOSVD) from a geometric viewpoint. Correspondence analysis is a well-known tool, developed from principal component analysis, for studying contingency tables. Several algebraic extensions of CA to multi-way tables have been proposed over the years, but they neglect its geometric meaning. Relying on the Tucker model and the HOSVD, we propose a direct way to associate a point cloud with each tensor mode. We prove that the point clouds are related to each other: specifically, using the CA metrics, we show that the barycentric relation still holds in the tensor framework. Finally, two data sets are used to highlight the advantages and the drawbacks of our strategy with respect to the classical matrix approaches.


## Preliminaries

### Notations

Here real numbers and integers are denoted by small Latin letters, vectors by boldface Latin letters, matrices by capital Latin letters and tensors by boldface capital Latin letters. denotes the set of all real orthogonal matrices with rows and columns. Let us have finite dimensional vector spaces over the same field . Let be a tensor in with for . The integer is called the order of the tensor , each vector space is called a mode and is the dimension of the tensor for mode . We denote as well . The matricization, or unfolding, of with respect to mode , denoted by , is a matrix of rows and columns with , see [Lathauwer2000a, Definition 1]. Notice that the matricization with respect to mode maps the -th tensor element to the -th matrix element with

$$\overline{i_1,\ldots,i_{\mu-1},i_{\mu+1},\ldots,i_d} \;=\; 1+\sum_{\substack{\alpha=1\\ \alpha\neq\mu}}^{d}(i_\alpha-1)\,m_\alpha \qquad\text{with}\qquad m_\alpha=\prod_{\substack{\beta=1\\ \beta\neq\mu}}^{\alpha-1}n_\beta, \qquad ()$$
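As a concrete illustration, the index map above can be implemented in a few lines; the sketch below (assuming NumPy and 0-based indexing, with `unfold` a helper name introduced here) moves mode $\mu$ first and flattens the remaining modes in Fortran order, so that the first remaining index varies fastest, as in the formula.

```python
import numpy as np

def unfold(A, mu):
    """Mode-mu matricization A_(mu): row index i_mu, column index given by the
    remaining indices flattened with i_1 varying fastest (Fortran order),
    matching j = 1 + sum_{alpha != mu} (i_alpha - 1) m_alpha (0-based here)."""
    return np.reshape(np.moveaxis(A, mu, 0), (A.shape[mu], -1), order="F")

A = np.arange(24).reshape(2, 3, 4)   # a small 3-order tensor
A1 = unfold(A, 0)                    # 2 x 12 matrix
A2 = unfold(A, 1)                    # 3 x 8 matrix; column index = i1 + n1*i3
```

For instance, `A2[i2, i1 + 2*i3]` recovers `A[i1, i2, i3]`, which is exactly the column ordering used in the matricization identities below.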

see [Kolda2009]. Finding an approximation in the Tucker model is facilitated by an elementary operation on tensors: the Tensor-Times-Matrix product (TTM). Let , and for any mode . Let be an elementary tensor with . Then, the TTM product of with , denoted , is the tensor . This is extended to any tensor by linearity, see, e.g., [Lathauwer2000a]. Let be a tensor of and let be a -tuple of matrices of compatible dimensions with . If , then the -matricization of is

$$B_{(\mu)} = M_\mu A_{(\mu)}\left(M_d\otimes_K\cdots\otimes_K M_{\mu+1}\otimes_K M_{\mu-1}\otimes_K\cdots\otimes_K M_1\right)^{\top} \qquad ()$$

for every , where denotes the Kronecker product; we refer to [SAND2006-2081, Proposition 3.7] for further details. Let be a Symmetric Positive Definite (SPD) matrix defining an inner product on by . It is extended to the whole space by linearity. This induces an inner product on , defined on elementary tensors by . Let us denote by the tensor space endowed with the standard inner product with

being the identity matrix on

, and by the same tensor space when endowed with inner product induced by the matrices . We will use the observation that the map

$$\nu\left(\bigotimes_{\mu}\vec{a}_\mu\right)=\bigotimes_{\mu}M_\mu\vec{a}_\mu$$

is an isometry, since , where denotes the Frobenius norm. This can be extended to the whole tensor space by linearity (see [Franc1992] for details). The isometry can be written as .
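The matricization identity for the TTM product can be checked numerically; the following sketch (assuming NumPy; `unfold` and `ttm` are helper names introduced here) verifies the mode-1 instance of the identity on a random 3-order tensor.

```python
import numpy as np

def unfold(A, mu):
    # mode-mu matricization, remaining indices flattened with i_1 fastest
    return np.reshape(np.moveaxis(A, mu, 0), (A.shape[mu], -1), order="F")

def ttm(A, mats):
    # multiply tensor A by the matrix mats[mu] along mode mu, for every mode
    B = A
    for mu, M in enumerate(mats):
        B = np.moveaxis(np.tensordot(M, B, axes=(1, mu)), 0, mu)
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4))
Ms = [rng.standard_normal((5, 2)),
      rng.standard_normal((6, 3)),
      rng.standard_normal((7, 4))]
B = ttm(A, Ms)
# mode-1 instance of the identity: B_(1) = M_1 A_(1) (M_3 (x)_K M_2)^T
lhs = unfold(B, 0)
rhs = Ms[0] @ unfold(A, 0) @ np.kron(Ms[2], Ms[1]).T
```

The two sides agree to machine precision, confirming that the Kronecker factor ordering matches the column ordering of the unfolding.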

### Tucker model

The Tucker decomposition [Tucker1966, Kroonenberg1983, Kapteyn1986, Kroonenberg2008, Kolda2009] of tensor is

$$\vec{A}=\sum_{i_1=1}^{r_1}\cdots\sum_{i_d=1}^{r_d}C_{i_1\ldots i_d}\,\vec{u}^{\,1}_{i_1}\otimes\cdots\otimes\vec{u}^{\,d}_{i_d} \qquad ()$$

where the array of the is the core tensor of and is an orthonormal basis of with minimal for and . So if belongs to with and , then is the multilinear rank of  [Lathauwer2000a]. In a synthetic way, the Tucker model is expressed with the TTM product as . We identify each Tucker subspace with its orthonormal basis, denoting for every mode . Starting from this Tucker model, we formulate an approximation problem as follows. Given a tensor , its best Tucker approximation at multilinear rank is the tensor of multilinear rank such that is minimal. This is a natural extension of PCA, because the unknowns are the subspaces under dimension constraints. Historically speaking, the search for a solution to the best Tucker approximation has a long history, which can be traced, e.g., in [Tucker1966, Franc1992, Kolda2009, Grasedick2010]. There is no known algorithm yielding the best solution of the approximation problem, although several algorithms provide good quality results, known nowadays as High Order Singular Value Decomposition (HOSVD) [Lathauwer2000a], its Truncated version (T-HOSVD) [StHOSVD] and High Order Orthogonal Iterations (HOOI) [Lathauwer2000b, Kolda2009]. The seminal paper for what is now called HOOI was published as Tuckals3 for 3-order tensors [Kroonenberg1980] (see as well [Kroonenberg2008]). The extension to four-mode tensors was made by Lastovicka in [Lastovicka1981], and more generally to d-order multi-arrays by a group in Groningen in 1986 [Kapteyn1986]. These works used the Kronecker product as an algebraic framework, and those results were put into a common framework of tensor algebra in [Franc1992]. Both approaches (decomposition and best approximation) were popularized by two papers by De Lathauwer et al. in 2000, who derived HOSVD [Lathauwer2000a] and HOOI [Lathauwer2000b] relying on matricization and matrix algebra. Matricization, also called unfolding, builds a matrix with one mode in rows and a combination of the remaining ones in columns.
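A minimal sketch of the (truncated) HOSVD described above, assuming NumPy: each factor is taken from the leading left singular vectors of the corresponding unfolding, and the core follows by TTM products with the transposed factors. With full multilinear rank the decomposition is exact.

```python
import numpy as np

def unfold(A, mu):
    return np.reshape(np.moveaxis(A, mu, 0), (A.shape[mu], -1), order="F")

def ttm_mode(A, M, mu):
    # tensor-times-matrix product along a single mode mu
    return np.moveaxis(np.tensordot(M, A, axes=(1, mu)), 0, mu)

def hosvd(A, ranks):
    """Truncated HOSVD: U_mu = leading left singular vectors of A_(mu);
    core C = A x_1 U_1^T x_2 U_2^T ... x_d U_d^T."""
    Us = [np.linalg.svd(unfold(A, mu), full_matrices=False)[0][:, :r]
          for mu, r in enumerate(ranks)]
    C = A
    for mu, U in enumerate(Us):
        C = ttm_mode(C, U.T, mu)
    return C, Us

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5, 6))
C, Us = hosvd(A, (4, 5, 6))      # full multilinear rank
R = C
for mu, U in enumerate(Us):      # reconstruct: R = C x_1 U_1 x_2 U_2 x_3 U_3
    R = ttm_mode(R, U, mu)
```

At truncated ranks the same code yields the quasi-optimal T-HOSVD approximation rather than the best one.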

### Multiway correspondence analysis

PCA solves the dimension reduction problem for a matrix in , with and as natural choices in data science. In the geometric context, a cloud of points in is associated with a matrix , where point is row of (points are in ). PCA of builds a new orthonormal basis in , called principal axes, and computes the coordinates of the points in this new basis, called principal components. Classically PCA is realized with an SVD of the given matrix, i.e., . Then the principal axes are defined as the columns of , and the array of coordinates is given by . It can be developed mutatis mutandis by selecting some inner products associated with SPD matrices in and/or . Often in data analysis those SPD matrices are diagonal, and the metrics are defined by weights. Indeed, given in and in , we define the inner product . Then, PCA of at rank with this inner product amounts to finding a rank matrix such that is minimal. In other words, we compute the SVD at rank of . If is the set of principal components of and the principal axes, then the principal components and axes of the PCA of with the metrics so defined are . In particular, as Figure Document shows, CA is a PCA on a contingency table with metrics associated with the inverses of the marginals as weights [LMF82].
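The weighted PCA just described can be sketched as follows (assuming NumPy; the diagonal weights and variable names are illustrative): the SVD is computed on the matrix transported by the square-root weight matrices, and principal components and axes are transported back with the inverse isometries.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
# hypothetical positive row/column weights defining the metrics
wn = rng.uniform(0.5, 2.0, 6)
wp = rng.uniform(0.5, 2.0, 4)
N = np.diag(np.sqrt(wn))          # isometry on the row space
P = np.diag(np.sqrt(wp))          # isometry on the column space
# PCA with metrics: SVD of the transported matrix N A P
U, s, Vt = np.linalg.svd(N @ A @ P, full_matrices=False)
components = np.linalg.inv(N) @ (U * s)   # principal components of A
axes = np.linalg.inv(P) @ Vt.T            # principal axes of A
```

At full rank the transported quantities reproduce A exactly: `components @ axes.T` equals `A`, since the two isometries cancel.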

This is extended naturally to a multiway contingency table T [Kroonenberg2008], and formalized through HOSVD. In Figure Document we sketch this approach, which we call MWCA since it extends CA through HOSVD. As a preprocessing step, we compute the relative frequency tensor by dividing each tensor entry by the sum of all entries. The first step is to compute all marginals of for all indices and all modes; this yields a vector of weights for mode . In the second step, the isometry is defined by the diagonal matrix whose diagonal elements are the inverses of the weights. The third step is to perform the HOSVD of . Finally, the HOSVD decomposition is transported back with the inverse isometry .
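The preprocessing and weighting steps of MWCA can be sketched on a toy 3-way table (assuming NumPy; the table values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.integers(1, 20, size=(3, 4, 5)).astype(float)  # toy contingency table
F = T / T.sum()                                        # relative frequency tensor
# step 1: marginals of each mode (sum over all the other modes)
f = [F.sum(axis=tuple(a for a in range(F.ndim) if a != mu)) for mu in range(F.ndim)]
# step 2: isometry induced by the diagonal weight matrices,
# i.e. X_{ijk} = F_{ijk} / sqrt(f1_i * f2_j * f3_k)
X = F / np.sqrt(np.einsum("i,j,k->ijk", f[0], f[1], f[2]))
# step 3 would perform the HOSVD of X; step 4 transports the
# decomposition back with the inverse isometry
```

Each marginal vector sums to one, so the weights are well defined whenever the table has no empty slice.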

## Multiway principal components analysis

Starting from the extension of principal component analysis to tensors with HOSVD, we associate a point cloud with each mode, and show the existence of an algebraic link between them. The aim of this section is to interpret this relation from a geometric point of view. For the sake of simplicity in verifying the results explicitly, we first prove a link between the point clouds in the standard Euclidean 3-order tensor space, and then we generalize to d-order tensors. This structural choice is kept throughout the document. We focus especially on the geometric interpretation of these results. We naturally attach a point cloud to each matricization of a d-order tensor. Each point cloud is the optimal projection of the mode matricization in a low dimensional space. Finally, we show how their coordinates are linked and we extend this result to general metric spaces of d-order tensors. Let be a tensor with and let be the rank Tucker decomposition basis obtained from the HOSVD algorithm. The tensor is expressed as

$$\vec{X}=\sum_{i_1,\ldots,i_d=1}^{r_1,\ldots,r_d}C_{i_1\ldots i_d}\,\vec{u}^{\,1}_{i_1}\otimes\cdots\otimes\vec{u}^{\,d}_{i_d} \qquad ()$$

with the HOSVD core tensor and the -th column of . The condition for Equation eq3:0 to be a decomposition is for . Let be the diagonal singular value matrix of the matricization of with respect to mode . For simplicity the -th diagonal element of is denoted by for every and for every . The principal component of mode is defined as for each .

### The 3-order tensor case in the Euclidean space

For the sake of simplicity and clarity, we assume d equal to 3. The following proposition states a relation linking the three sets of principal coordinates. Let be a tensor of with and let be its HOSVD core at multi-linear rank . Let be the principal components of mode . If for , then

with . [Proof:] We start the proof with the first mode principal components. Let be expressed in the HOSVD basis as in Equation eq3:0, i.e.,

$$\vec{X}=\sum_{i,j,k=1}^{r_1,r_2,r_3}C_{ijk}\,\vec{u}^{\,1}_{i}\otimes\vec{u}^{\,2}_{j}\otimes\vec{u}^{\,3}_{k}.$$

Then the matricization of with respect to mode in the Tucker basis is expressed with the Kronecker product as

$$X_{(1)}=\sum_{i=1}^{r_1}\vec{u}^{\,1}_{i}\otimes\Big(\sum_{j,k=1}^{r_2,r_3}C_{ijk}\,\vec{u}^{\,3}_{k}\otimes_K\vec{u}^{\,2}_{j}\Big). \qquad ()$$

The PCA of is

$$X_{(1)}=Y_1V_1^{\top}=\sum_{i=1}^{r_1}\sigma^{(1)}_{i}\,\vec{u}^{\,1}_{i}\otimes\vec{v}^{\,1}_{i} \qquad ()$$

with the -th column of and -th column of . By comparing Equations eq3:1 and eq3:2 for a fixed index , we get

$$\sigma^{(1)}_{i}\,\vec{u}^{\,1}_{i}\otimes\vec{v}^{\,1}_{i}=\vec{u}^{\,1}_{i}\otimes\Big(\sum_{j,k=1}^{r_2,r_3}C_{ijk}\,\vec{u}^{\,3}_{k}\otimes_K\vec{u}^{\,2}_{j}\Big).$$

Remarking that is invertible, we identify with a linear combination of the Kronecker product of and scaled by as

$$\vec{v}^{\,1}_{i}=\frac{1}{\sigma^{(1)}_{i}}\sum_{j,k=1}^{r_2,r_3}C_{ijk}\,\vec{u}^{\,3}_{k}\otimes_K\vec{u}^{\,2}_{j}. \qquad ()$$

Notice that the -th and -th columns of and are and respectively. So introducing in Equation eq3:3 the singular values and , we express the -th column of as a linear combination of the Kronecker products of the -th and -th columns of and , i.e.

$$\vec{v}^{\,1}_{i}=\sum_{j,k=1}^{r_2,r_3}\frac{C_{ijk}}{\sigma^{(1)}_{i}\sigma^{(2)}_{j}\sigma^{(3)}_{k}}\;\vec{y}^{\,3}_{k}\otimes_K\vec{y}^{\,2}_{j} \qquad ()$$

with $B_{ijk}=C_{ijk}/\big(\sigma^{(1)}_{i}\sigma^{(2)}_{j}\sigma^{(3)}_{k}\big)$, and $\vec{y}^{\,2}_{j}$ and $\vec{y}^{\,3}_{k}$ the $j$-th and $k$-th columns of $Y_2$ and $Y_3$ respectively.
Remark that is a matrix of rows and columns whose -th column is with for every , and , as defined in Equation eq2:1. The tensor matricized with respect to mode is a matrix of rows and columns, whose -th element is for all , . So the sum in the right-hand side of Equation eq3:4 can be expressed as the matrix product between and the tensor matricized with respect to mode , as

$$V_1=\big(Y_3\otimes_K Y_2\big)\big(B_{(1)}\big)^{\top}. \qquad ()$$

Multiplying Equation eq3:2 on the right by yields . Therefore, multiplying Equation eq3:5 by the matricization of with respect to mode , the principal component is expressed as a linear combination of the Kronecker products of the principal components and , i.e.

$$Y_1=X_{(1)}V_1=X_{(1)}\big(Y_3\otimes_K Y_2\big)\big(B_{(1)}\big)^{\top}.$$

The other relations follow straightforwardly from this one, permuting the indices coherently. From an algebraic viewpoint, this first proposition shows that the principal components of each mode can be expressed as a linear combination of the principal components of the two other modes. However, as pointed out in the preliminary section, principal components can be seen from different viewpoints. From a geometric viewpoint, a point cloud is attached to each mode matricization of tensor for . Indeed, the -th row of represents the coordinates of the -th element of the mode point cloud living in the space where . Given a multi-linear rank , we reformulate the Tucker approximation problem as a dimension reduction problem. Indeed, we look for the subspace of of dimension which minimizes in norm the projection of the point cloud on it. This problem is solved with the HOSVD algorithm, which provides three orthonormal bases of the corresponding subspaces. Therefore the -th row of represents the coordinates of the -th element of projected into the subspace of of dimension for . The result of Proposition Document is interpreted geometrically as each point cloud living in the linear subspace built from the Kronecker product of the other two.
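Proposition Document can be checked numerically; the sketch below (assuming NumPy and a random tensor, which has full multilinear rank almost surely, so all singular values are nonzero) verifies the mode-1 relation.

```python
import numpy as np

def unfold(A, mu):
    return np.reshape(np.moveaxis(A, mu, 0), (A.shape[mu], -1), order="F")

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 4, 5))
U, S = [], []
for mu in range(3):
    u, s, _ = np.linalg.svd(unfold(X, mu), full_matrices=False)
    U.append(u)
    S.append(s)
C = X                                    # HOSVD core: C = X x_mu U_mu^T over all modes
for mu in range(3):
    C = np.moveaxis(np.tensordot(U[mu].T, C, axes=(1, mu)), 0, mu)
Y = [U[mu] * S[mu] for mu in range(3)]   # principal components Y_mu = U_mu Sigma_mu
# scaled core B_ijk = C_ijk / (sigma1_i * sigma2_j * sigma3_k)
B = C / (S[0][:, None, None] * S[1][None, :, None] * S[2][None, None, :])
# the claimed relation: Y_1 = X_(1) (Y_3 (x)_K Y_2) B_(1)^T
Y1 = unfold(X, 0) @ np.kron(Y[2], Y[1]) @ unfold(B, 0).T
```

The recomputed `Y1` coincides with the mode-1 principal components, which also confirms that the Kronecker ordering matches the unfolding convention.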

### Generalization to d-order tensors

Now, we generalize Proposition Document to the d-order case as follows. Let be a tensor of with and let be its HOSVD core at multi-linear rank . Let be the principal components of mode . If for , then

$$Y_\mu=X_{(\mu)}\big(Y_d\otimes_K\cdots\otimes_K Y_{\mu+1}\otimes_K Y_{\mu-1}\otimes_K\cdots\otimes_K Y_1\big)\big(B_{(\mu)}\big)^{\top}$$

with for every . [Proof:] The proof is very similar to that of Proposition Document, so we give only the main steps. Let us start with the first mode principal components. The -mode matricization of in the Tucker basis is expressed with the Kronecker product, getting

$$X_{(1)}=\sum_{i_1=1}^{r_1}\vec{u}^{\,1}_{i_1}\otimes\Big(\sum_{i_2,\ldots,i_d=1}^{r_2,\ldots,r_d}C_{i_1\ldots i_d}\,\vec{u}^{\,d}_{i_d}\otimes_K\cdots\otimes_K\vec{u}^{\,2}_{i_2}\Big). \qquad ()$$

$$X_{(1)}=Y_1V_1^{\top}=\sum_{i_1=1}^{r_1}\sigma^{(1)}_{i_1}\,\vec{u}^{\,1}_{i_1}\otimes\vec{v}^{\,1}_{i_1} \qquad ()$$

with the -th unitary column of where and the -th column of . Comparing Equations eq3:14 and eq3:15 for a fixed index , we identify with a linear combination of the Kronecker product of for scaled by as

$$\vec{v}^{\,1}_{i_1}=\frac{1}{\sigma^{(1)}_{i_1}}\sum_{i_2,\ldots,i_d=1}^{r_2,\ldots,r_d}C_{i_1\ldots i_d}\,\vec{u}^{\,d}_{i_d}\otimes_K\cdots\otimes_K\vec{u}^{\,2}_{i_2}. \qquad ()$$

Introducing in Equation eq3:16 the singular values , we express the -th column of as a linear combination of the Kronecker product of the -th column of for every as

$$\vec{v}^{\,1}_{i_1}=\sum_{i_2,\ldots,i_d=1}^{r_2,\ldots,r_d}\frac{C_{i_1\ldots i_d}}{\sigma^{(1)}_{i_1}\sigma^{(2)}_{i_2}\cdots\sigma^{(d)}_{i_d}}\;\vec{y}^{\,d}_{i_d}\otimes_K\cdots\otimes_K\vec{y}^{\,2}_{i_2} \qquad ()$$

with and the -th column of for . Thanks to the correspondence between the Kronecker product and matricization, the right-hand side of Equation eq3:17 is expressed as the matrix product between and the tensor matricized with respect to mode and transposed, as

$$V_1=\big(Y_d\otimes_K\cdots\otimes_K Y_2\big)\big(B_{(1)}\big)^{\top}. \qquad ()$$

Multiplying Equation eq3:15 on the right by yields and, replacing by its expression in Equation eq3:18, it finally follows

$$Y_1=X_{(1)}V_1=X_{(1)}\big(Y_d\otimes_K\cdots\otimes_K Y_2\big)\big(B_{(1)}\big)^{\top}.$$

The other relations follow straightforwardly from this one, permuting the indices coherently.
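The d-order relation can be verified in the same way; the sketch below (assuming NumPy) checks the mode-2 instance on a random 4-order tensor, which exercises the skipped-mode Kronecker ordering.

```python
import numpy as np

def unfold(A, mu):
    return np.reshape(np.moveaxis(A, mu, 0), (A.shape[mu], -1), order="F")

rng = np.random.default_rng(5)
X = rng.standard_normal((2, 3, 4, 5))
U, S = [], []
for mu in range(4):
    u, s, _ = np.linalg.svd(unfold(X, mu), full_matrices=False)
    U.append(u)
    S.append(s)
C = X                                    # HOSVD core at full multilinear rank
for mu in range(4):
    C = np.moveaxis(np.tensordot(U[mu].T, C, axes=(1, mu)), 0, mu)
Y = [U[mu] * S[mu] for mu in range(4)]
B = C / (S[0][:, None, None, None] * S[1][None, :, None, None]
         * S[2][None, None, :, None] * S[3][None, None, None, :])
# mode-2 instance: Y_2 = X_(2) (Y_4 (x)_K Y_3 (x)_K Y_1) B_(2)^T
K = np.kron(np.kron(Y[3], Y[2]), Y[0])
Y2 = unfold(X, 1) @ K @ unfold(B, 1).T
```

Note that mode 2 is skipped in the Kronecker chain, while the remaining factors keep their decreasing mode order, exactly as in the proposition.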

### Extension to generic metric space for d-order tensors

As discussed in Section Document, the minimization problem addressed by the HOSVD algorithm is expressed through the Frobenius norm, induced by an inner product. The standard inner product is defined by the identity matrix. However, any SPD matrix induces an inner product and the associated norm on a vector space, which is therefore isomorphic to the standard Euclidean space. We emphasize in this section the role of the metric in the relationships between point clouds, using this isomorphism between Euclidean spaces with different inner products. Let where is the tensor space with endowed with the inner product induced by SPD matrices of size . Let be the Euclidean tensor space endowed with the standard inner product. As already mentioned, we can move back to thanks to the isometry defined as , such that . Let now be the HOSVD approximation of at multi-linear rank and let be the associated basis. is the singular value matrix of and defines the principal components of mode for tensor for every . Let be the principal components of in the tensor space . In the following proposition, we link the sets of principal components in the metric space . As previously, for the sake of clarity, the result is first proved for 3-order tensors and afterwards generalized to arbitrary order d. Let be a tensor in and let be its image through the isometry in the standard tensor space . Let be the HOSVD core tensor of at multi-linear rank such that for . Then for the principal components of mode of in the metric tensor space it holds

with . [Proof:] This result comes straightforwardly from Proposition Document proof by introducing the metrics matrices. For completeness, we illustrate the proof focusing on the link for the principal components of the first mode. Let be the HOSVD basis of at multi-linear rank with . Let be the principal components of mode for tensor , where is the singular values matrix of . Proposition Document yields

$$Y_1=X_{(1)}\big(Y_3\otimes_K Y_2\big)\big(B_{(1)}\big)^{\top} \qquad ()$$

with . Notice that . So, thanks to Equation rk:1, we express as a function of as

$$X_{(1)}=M_1F_{(1)}\big(M_3^{\top}\otimes_K M_2^{\top}\big)$$

and replacing it in Equation eq3:6, we get

$$Y_1=M_1F_{(1)}\big(M_3\otimes_K M_2\big)\big(Y_3\otimes_K Y_2\big)\big(B_{(1)}\big)^{\top} \qquad ()$$

since the are SPD matrices. Remarking that by definition, and substituting it into Equation eq3:7, we obtain

$$M_1W_1=M_1F_{(1)}\big(M_3\otimes_K M_2\big)\big(M_3W_3\otimes_K M_2W_2\big)\big(B_{(1)}\big)^{\top}. \qquad ()$$

Since is SPD and consequently invertible, the thesis follows from the previous equation. The other relations follow straightforwardly from this proof, permuting the indices coherently. This result is easily generalized to d-order tensors as follows. Let be a tensor in and let be its image through the isometry in the standard tensor space . Let be the HOSVD core tensor of at multi-linear rank such that for . Then for the principal components of mode of in the metric tensor space it holds

$$W_\mu=F_{(\mu)}\big(M_d^{2}W_d\otimes_K\cdots\otimes_K M_{\mu+1}^{2}W_{\mu+1}\otimes_K M_{\mu-1}^{2}W_{\mu-1}\otimes_K\cdots\otimes_K M_1^{2}W_1\big)\big(B_{(\mu)}\big)^{\top}$$

with for every . [Proof:] The proof is a direct consequence of Proposition Document and Proposition Document, so the details are omitted here.
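The role of the metric matrices can also be checked numerically; the sketch below (assuming NumPy, with arbitrary diagonal SPD matrices standing in for the metrics) verifies the 3-order, mode-1 instance of the relation.

```python
import numpy as np

def unfold(A, mu):
    return np.reshape(np.moveaxis(A, mu, 0), (A.shape[mu], -1), order="F")

def ttm(A, mats):
    B = A
    for mu, M in enumerate(mats):
        B = np.moveaxis(np.tensordot(M, B, axes=(1, mu)), 0, mu)
    return B

rng = np.random.default_rng(6)
F = rng.standard_normal((3, 4, 5))
Ms = [np.diag(rng.uniform(0.5, 2.0, n)) for n in F.shape]  # diagonal SPD metrics
X = ttm(F, Ms)                       # image of F under the isometry nu
U, S = [], []
for mu in range(3):
    u, s, _ = np.linalg.svd(unfold(X, mu), full_matrices=False)
    U.append(u)
    S.append(s)
C = ttm(X, [u.T for u in U])         # HOSVD core of X
B = C / (S[0][:, None, None] * S[1][None, :, None] * S[2][None, None, :])
Y = [U[mu] * S[mu] for mu in range(3)]
W = [np.linalg.inv(Ms[mu]) @ Y[mu] for mu in range(3)]  # components in the metric space
# claimed relation: W_1 = F_(1) (M_3^2 W_3 (x)_K M_2^2 W_2) B_(1)^T
W1 = unfold(F, 0) @ np.kron(Ms[2] @ Ms[2] @ W[2], Ms[1] @ Ms[1] @ W[1]) @ unfold(B, 0).T
```

The squared metric matrices appear because the isometry is applied once in transporting F and once in transporting the components back.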

## Geometric view for multiway correspondence analysis

In this last section, we transport the previous results into the correspondence analysis framework. We first clarify the Euclidean space and its metric in which we set our problem. Then we make the point cloud relation explicit in this particular context. As a final outcome, we are able to prove the correspondence between the point clouds attached to each mode. In accordance with the correspondence analysis framework, we consider a d-way contingency table . The first step for performing CA is scaling by the sum of all its components, setting a new frequency tensor

$$\vec{F}=\frac{1}{\sum_{i_1,\ldots,i_d=1}^{n_1,\ldots,n_d}T_{i_1\ldots i_d}}\;\vec{T}.$$

We first clarify the tensor space we will work with. Let be the marginal of mode , i.e., the vector whose components are the sums of the slices in mode for all . For example the -th element of is

$$f^{\,1}_{i_1}=\sum_{i_2,\ldots,i_d=1}^{n_2,\ldots,n_d}F_{i_1\ldots i_d}\qquad\text{for all } i_1\in\{1,\ldots,n_1\}.$$

We assume that has no zero component and, by construction, for every . Let us define for each and assume that belongs to , endowed with the metric induced by the matrices , since is SPD for every . We denote by this metric space and by the tensor space endowed with the standard inner product. Under this assumption, let be the isometry between the spaces and , and let . The general element of tensor is written

$$X_{i_1\ldots i_d}=\frac{F_{i_1\ldots i_d}}{\sqrt{f^{\,1}_{i_1}\cdots f^{\,d}_{i_d}}}.$$

Performing the HOSVD on tensor at multi-linear rank leads to a new orthonormal basis , a core tensor and the principal components for every in the standard tensor space. Focusing on the principal components of tensor in , Proposition  entails

$$W_\mu=F_{(\mu)}\big(M_d^{2}W_d\otimes_K\cdots\otimes_K M_{\mu+1}^{2}W_{\mu+1}\otimes_K M_{\mu-1}^{2}W_{\mu-1}\otimes_K\cdots\otimes_K M_1^{2}W_1\big)\big(B_{(\mu)}\big)^{\top}$$

where is the matricization of for . Let be the principal components scaled by the inverses of the singular values for in the tensor space . Henceforth, we denote by the -th row of . Now we prove that each component of vector can be expressed as a scaling factor times the barycenter of the linear combinations of the other two scaled principal component rows. We assume d equal to 3 to facilitate the comprehension of the following proof. Let be a tensor in the tensor space endowed with the norm induced by the inner product matrices , with the mode marginal of for . Let be the scaled principal components for tensor of mode in . If for every , then

where from Proposition Document. [Proof:] We describe the proof for the -th row of with . From Proposition Document, under the CA metric choice, it follows that the principal components of in the tensor space satisfy the relation

$$W_1=F_{(1)}\big(D_3^{-2}W_3\otimes_K D_2^{-2}W_2\big)\big(B_{(1)}\big)^{\top}=F_{(1)}\big(Z_3\otimes_K Z_2\big)\big(B_{(1)}\big)^{\top}.$$

Multiplying this last equation on the left by , we obtain

$$Z_1=D_1^{-2}W_1=D_1^{-2}F_{(1)}\big(Z_3\otimes_K Z_2\big)\big(B_{(1)}\big)^{\top}$$

and since , we get

$$Z_1=D_1^{-2}F_{(1)}\big(Z_3\otimes_K Z_2\big)\big(\Sigma_1^{-1}B_{(1)}\big)^{\top}. \qquad ()$$

Making explicit the -th component of , the -th row of , from Equation eq3:10, we have
