1.1 Reshaping or Folding
The simplest form of tensorization is the reshaping or folding operation, also known as segmentation [Debals and De Lathauwer, 2015; Boussé et al., 2015]. This type of tensorization preserves the number of original data entries and their sequential ordering, since it only rearranges a vector into a matrix or tensor. Hence, folding does not require additional memory space.
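As a minimal illustration, folding can be performed with a simple reshape; the sketch below (in NumPy, assuming the column-major linear-index convention) confirms that the entries are only rearranged:

```python
import numpy as np

# A length-24 vector folded into a 2 x 3 x 4 tensor: the entries are only
# rearranged (column-major / Fortran order matches a little-endian linear
# index), so no extra memory is required for the folding itself.
y = np.arange(1, 25)
Y = y.reshape((2, 3, 4), order="F")   # folding (tensorization)

# Vectorization recovers the original vector exactly.
assert np.array_equal(Y.reshape(-1, order="F"), y)
```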
Folding. A tensor $\underline{\mathbf{Y}}$ of size $I_1 \times I_2 \times \cdots \times I_N$ is considered a folding of a vector $\mathbf{y}$ of length $I = I_1 I_2 \cdots I_N$, if

(1.1) $\underline{\mathbf{Y}}(i_1, i_2, \ldots, i_N) = \mathbf{y}(i)$

for all $i_n = 1, 2, \ldots, I_n$, where $i = \overline{i_1 i_2 \cdots i_N}$ is a linear index of $(i_1, i_2, \ldots, i_N)$.
In other words, the vector $\mathbf{y}$ is a vectorization of the tensor $\underline{\mathbf{Y}}$, while $\underline{\mathbf{Y}}$ is a tensorization of $\mathbf{y}$.
As an example, the arrangement of elements in a matrix $\mathbf{Y}$ of size $I_1 \times I_2$, which is folded from a vector $\mathbf{y}$ of length $I_1 I_2$, is given by

(1.2) $\mathbf{Y} = \begin{bmatrix} y_1 & y_{I_1+1} & \cdots & y_{(I_2-1)I_1+1} \\ y_2 & y_{I_1+2} & \cdots & y_{(I_2-1)I_1+2} \\ \vdots & \vdots & \ddots & \vdots \\ y_{I_1} & y_{2I_1} & \cdots & y_{I_1 I_2} \end{bmatrix}.$
Higher-order folding/reshaping refers to the application of the folding procedure several times, whereby a vector of length $I_1 I_2 \cdots I_N$ is converted into an $N$th-order tensor of size $I_1 \times I_2 \times \cdots \times I_N$.
Application to BSS. It is important to notice that a higher-order folding (quantization) of a vector of length $2^N$, sampled from an exponential function $x(k) = a z^k$, yields an $N$th-order tensor of rank 1. Moreover, wide classes of functions formed by products and/or sums of trigonometric, polynomial and rational functions can be quantized in this way to yield (approximate) low-rank tensor train (TT) network formats [Khoromskij, 2011a,b; Oseledets, 2012]. Exploitation of such low-rank representations allows us to separate the signals from a single mixture, or a few mixtures, as outlined below.
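The rank-1 property of quantized exponentials is easy to verify numerically; the following sketch (NumPy, 0-based indices, with illustrative values for $a$ and $z$) folds $2^4$ samples of $x(k) = a z^k$ and checks that a matricization has rank 1:

```python
import numpy as np

# Samples of an exponential x(k) = a * z**k, k = 0, ..., 2**4 - 1
# (a and z are illustrative values, not from the text).
a, z = 2.0, 0.9
x = a * z ** np.arange(2 ** 4)

# Quantized folding into a 4th-order tensor of size 2 x 2 x 2 x 2.
# Since x(i1 + 2*i2 + 4*i3 + 8*i4) = a * z**i1 * z**(2*i2) * z**(4*i3) * z**(8*i4),
# the folded tensor separates into an outer product of four vectors: rank 1.
X = x.reshape((2, 2, 2, 2), order="F")

# Any matricization of a rank-1 tensor has matrix rank 1.
M = X.reshape(4, 4, order="F")
assert np.linalg.matrix_rank(M, tol=1e-10) == 1
```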
Consider a single mixture, $\mathbf{y} \in \mathbb{R}^{I}$, which is composed of $R$ component signals, $\mathbf{x}_r \in \mathbb{R}^{I}$, $r = 1, 2, \ldots, R$, and corrupted by additive Gaussian noise, $\mathbf{n} \in \mathbb{R}^{I}$, to give

(1.3) $\mathbf{y} = \sum_{r=1}^{R} \mathbf{x}_r + \mathbf{n}.$
The aim is to extract the unknown sources (components) $\mathbf{x}_r$ from the observed signal $\mathbf{y}$. Assume that the higher-order foldings, $\underline{\mathbf{X}}_r$, of the component signals, $\mathbf{x}_r$, have low-rank representations in, e.g., the CP or Tucker format, given by

$\underline{\mathbf{X}}_r = \sum_{p=1}^{P_r} \mathbf{a}_{r,p}^{(1)} \circ \mathbf{a}_{r,p}^{(2)} \circ \cdots \circ \mathbf{a}_{r,p}^{(N)},$

or in the TT format

$\underline{\mathbf{X}}_r(i_1, i_2, \ldots, i_N) = \mathbf{X}_r^{(1)}(i_1)\, \mathbf{X}_r^{(2)}(i_2) \cdots \mathbf{X}_r^{(N)}(i_N),$

or in any other tensor network format. Because of the multilinearity of this tensorization, the following relation holds between the tensorization, $\underline{\mathbf{Y}}$, of the mixture and the tensorizations, $\underline{\mathbf{X}}_r$, of the hidden components
(1.4) $\underline{\mathbf{Y}} = \sum_{r=1}^{R} \underline{\mathbf{X}}_r + \underline{\mathbf{N}},$

where $\underline{\mathbf{N}}$ is the tensorization of the noise $\mathbf{n}$.
Now, by decomposing $\underline{\mathbf{Y}}$ into blocks of tensor networks, each corresponding to a tensor network (TN) representation of a hidden component signal, we can find approximations of $\underline{\mathbf{X}}_r$, and thereby the separate component signals, up to a scaling ambiguity. The separation method can also be used in conjunction with the Toeplitz and Hankel foldings. Example 1.10.1 illustrates the separation of damped sinusoid signals.
1.2 Tensorization through a Toeplitz/Hankel Tensor
1.2.1 Toeplitz Folding
The Toeplitz matrix is a structured matrix with constant entries along each diagonal. Toeplitz matrices appear in many signal processing applications, e.g., through covariance matrices in prediction, estimation, detection, classification, regression, harmonic analysis, speech enhancement, interference cancellation, image restoration, adaptive filtering, blind deconvolution and blind equalization [Bini, 1995; Gray, 2006].
Before introducing the generalization of a Toeplitz matrix to a Toeplitz tensor, we shall first consider the discrete convolution $\mathbf{y} = \mathbf{a} \ast \mathbf{b}$ between two vectors $\mathbf{a}$ and $\mathbf{b}$ of respective lengths $I$ and $J$, given by

(1.5) $y(k) = \sum_{j} a(k - j + 1)\, b(j), \qquad k = 1, 2, \ldots, I + J - 1.$

Now, we can write the entries $y(k)$ in a linear algebraic form as

$y(k) = \mathbf{a}_k^{\mathsf{T}} \mathbf{b},$

where $\mathbf{a}_k = [a(k), a(k-1), \ldots, a(k-J+1)]^{\mathsf{T}}$, with $a(m) = 0$ for $m < 1$ or $m > I$. With this representation, the convolution can be computed through a linear matrix operator, $\mathbf{T}$, which is called the Toeplitz matrix of the generating vector $\mathbf{a}$.
Toeplitz matrix. A Toeplitz matrix of size $I \times J$, which is constructed from a generating vector $\mathbf{a}$ of length $I + J - 1$, is defined as

(1.6) $\mathbf{T} = \begin{bmatrix} a_J & a_{J-1} & \cdots & a_1 \\ a_{J+1} & a_J & \cdots & a_2 \\ \vdots & \vdots & \ddots & \vdots \\ a_{I+J-1} & a_{I+J-2} & \cdots & a_I \end{bmatrix},$

i.e., $\mathbf{T}(i, j) = a_{J + i - j}$. The first column and the first row of the Toeplitz matrix together comprise its entire generating vector.
Indeed, all entries of $\mathbf{y}$ in the above convolution (1.5) can be expressed either: (i) through an $(I + J - 1) \times J$ Toeplitz matrix $\mathbf{T}_{\tilde{\mathbf{a}}}$ formed from the zero-padded generating vector $\tilde{\mathbf{a}} = [\mathbf{0}_{J-1}^{\mathsf{T}}, \mathbf{a}^{\mathsf{T}}, \mathbf{0}_{J-1}^{\mathsf{T}}]^{\mathsf{T}}$, to give

(1.7) $\mathbf{y} = \mathbf{T}_{\tilde{\mathbf{a}}}\, \mathbf{b},$

or (ii) through an $(I + J - 1) \times I$ Toeplitz matrix $\mathbf{T}_{\tilde{\mathbf{b}}}$ of the zero-padded generating vector $\tilde{\mathbf{b}} = [\mathbf{0}_{I-1}^{\mathsf{T}}, \mathbf{b}^{\mathsf{T}}, \mathbf{0}_{I-1}^{\mathsf{T}}]^{\mathsf{T}}$, to yield

(1.8) $\mathbf{y} = \mathbf{T}_{\tilde{\mathbf{b}}}\, \mathbf{a}.$

When further expanded to size $(I + J - 1) \times (I + J - 1)$, such a Toeplitz matrix of a zero-padded generating vector becomes a circulant matrix.
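The two representations above can be checked numerically; the sketch below (NumPy, 0-based indexing) builds the Toeplitz matrix of the zero-padded generating vector $\mathbf{a}$ and verifies that multiplying it by $\mathbf{b}$ reproduces the full convolution:

```python
import numpy as np

# Convolution y = a * b computed as a Toeplitz matrix-vector product.
# With 0-based indices, the Toeplitz matrix of the zero-padded generating
# vector a satisfies T[k, j] = a[k - j] (zero outside the valid range),
# so that y = T @ b.  The vectors are illustrative.
a = np.array([1.0, 2.0, 3.0, 4.0])    # length I = 4
b = np.array([1.0, -1.0, 2.0])        # length J = 3
I, J = len(a), len(b)

T = np.zeros((I + J - 1, J))
for k in range(I + J - 1):
    for j in range(J):
        if 0 <= k - j < I:
            T[k, j] = a[k - j]

assert np.allclose(T @ b, np.convolve(a, b))
```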
Consider now the convolution $\mathbf{y} = \mathbf{a} \ast \mathbf{b} \ast \mathbf{c}$ of three vectors, $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$, of respective lengths $I$, $J$ and $K$, given by

$y(k) = \sum_{j_2, j_3} a(k - j_2 - j_3 + 2)\, b(j_2)\, c(j_3).$

For its implementation, we first construct a Toeplitz matrix from the generating vector $\mathbf{a}$. Then, we use its rows as generating vectors of Toeplitz matrices of size $J \times K$. Finally, all these Toeplitz matrices are stacked as horizontal slices $\underline{\mathbf{T}}(k, :, :)$ of a third-order tensor $\underline{\mathbf{T}}$. It can be verified that the entries of $\mathbf{y}$ can be computed as

$\mathbf{y} = \underline{\mathbf{T}}\, \bar{\times}_2\, \mathbf{b}\, \bar{\times}_3\, \mathbf{c}.$

The tensor $\underline{\mathbf{T}}$ is referred to as the Toeplitz tensor of the generating vector $\mathbf{a}$.
Toeplitz tensor. An $N$th-order Toeplitz tensor of size $J_1 \times J_2 \times \cdots \times J_N$, which is represented by $\underline{\mathbf{T}}$, is constructed from a generating vector $\mathbf{a}$ of length $J_1 + J_2 + \cdots + J_N - N + 1$, such that its entries are defined as

(1.9) $\underline{\mathbf{T}}(j_1, j_2, \ldots, j_N) = a_{\,j_1 - j_2 - \cdots - j_N + J_2 + J_3 + \cdots + J_N},$

where $j_n = 1, 2, \ldots, J_n$ for $n = 1, 2, \ldots, N$. An example of the Toeplitz tensor is illustrated in Figure 1.1.
Example 1 Given a third-order Toeplitz tensor of a sequence, its horizontal slices are Toeplitz matrices generated by subsequences of that sequence.
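A numerical sketch of the Toeplitz tensor acting as a convolution operator (NumPy, 0-based indexing, using the zero-padded variant that yields the full convolution; the vectors are illustrative):

```python
import numpy as np

# Third-order Toeplitz tensor of the (zero-padded) vector a:
# T[k, i, j] = a[k - i - j] with a taken as zero outside its range, so that
# the triple convolution a * b * c equals the tensor-vector product of T
# with b and c over modes 2 and 3.
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, -1.0])
c = np.array([2.0, 1.0, 0.5])
K = len(a) + len(b) + len(c) - 2      # length of a * b * c

T = np.zeros((K, len(b), len(c)))
for k in range(K):
    for i in range(len(b)):
        for j in range(len(c)):
            if 0 <= k - i - j < len(a):
                T[k, i, j] = a[k - i - j]

y = np.einsum('kij,i,j->k', T, b, c)  # contract modes 2 and 3
assert np.allclose(y, np.convolve(np.convolve(a, b), c))
```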
Recursive generation. An $N$th-order Toeplitz tensor of size $J_1 \times J_2 \times \cdots \times J_N$ of a generating vector $\mathbf{a}$ can be constructed from an $(N-1)$th-order Toeplitz tensor of size $J_1 \times \cdots \times J_{N-2} \times (J_{N-1} + J_N - 1)$ of the same generating vector, by a conversion of its mode-$(N-1)$ fibers to Toeplitz matrices of size $J_{N-1} \times J_N$.
Following the definition of the Toeplitz tensor, the convolution of $N-1$ vectors $\mathbf{b}_2, \ldots, \mathbf{b}_N$, of respective lengths $J_2, \ldots, J_N$, with a vector $\mathbf{a}$ of length $I$, can be represented as a tensor-vector product of an $N$th-order Toeplitz tensor and the vectors $\mathbf{b}_n$, that is,

$\mathbf{y} = \underline{\mathbf{T}}\, \bar{\times}_2\, \mathbf{b}_2\, \bar{\times}_3\, \mathbf{b}_3 \cdots \bar{\times}_N\, \mathbf{b}_N,$

where $\underline{\mathbf{T}}$ is a Toeplitz tensor of size $J_1 \times J_2 \times \cdots \times J_N$, with $J_1 = I - (J_2 + \cdots + J_N) + N - 1$, generated from $\mathbf{a}$, or

$\mathbf{y} = \underline{\tilde{\mathbf{T}}}\, \bar{\times}_2\, \mathbf{b}_2\, \bar{\times}_3\, \mathbf{b}_3 \cdots \bar{\times}_N\, \mathbf{b}_N,$

where $\underline{\tilde{\mathbf{T}}}$, the Toeplitz tensor of a zero-padded version of $\mathbf{a}$, is of size $(I + J_2 + \cdots + J_N - N + 1) \times J_2 \times \cdots \times J_N$ and yields the full convolution.
1.2.2 Hankel Folding
The Hankel matrix and Hankel tensor have similar structures to the Toeplitz matrix and tensor and can also be used as linear operators in the convolution.
Hankel matrix. An $I \times J$ Hankel matrix of a generating vector $\mathbf{v}$ of length $I + J - 1$ is defined as

(1.10) $\mathbf{H} = \begin{bmatrix} v_1 & v_2 & \cdots & v_J \\ v_2 & v_3 & \cdots & v_{J+1} \\ \vdots & \vdots & \ddots & \vdots \\ v_I & v_{I+1} & \cdots & v_{I+J-1} \end{bmatrix},$

i.e., $\mathbf{H}(i, j) = v_{i + j - 1}$.
Hankel tensor. [Papy et al., 2005] An $N$th-order Hankel tensor of size $I_1 \times I_2 \times \cdots \times I_N$, which is represented by $\underline{\mathbf{H}}$, is constructed from a generating vector $\mathbf{v}$ of length $I_1 + I_2 + \cdots + I_N - N + 1$, such that its entries are defined as

(1.11) $\underline{\mathbf{H}}(i_1, i_2, \ldots, i_N) = v_{\,i_1 + i_2 + \cdots + i_N - N + 1}.$
Remark 1
(Properties of a Hankel tensor)

The generating vector $\mathbf{v}$ can be reconstructed by a concatenation of fibers of the Hankel tensor $\underline{\mathbf{H}}$; more generally, by (1.11), each entry of $\mathbf{v}$ can be read off as

(1.12) $v_n = \underline{\mathbf{H}}(i_1, i_2, \ldots, i_N), \qquad \text{for any indices such that } i_1 + i_2 + \cdots + i_N = n + N - 1.$
Slices of a Hankel tensor $\underline{\mathbf{H}}$, i.e., any subsets of the tensor produced by fixing $N - 2$ of its indices and varying the two remaining indices, are also Hankel matrices.

An $N$th-order Hankel tensor of size $I_1 \times I_2 \times \cdots \times I_N$ can be constructed from an $(N-1)$th-order Hankel tensor of size $I_1 \times \cdots \times I_{N-2} \times (I_{N-1} + I_N - 1)$ by converting its mode-$(N-1)$ fibers to Hankel matrices of size $I_{N-1} \times I_N$.

Similarly to the Toeplitz tensor, the convolution of $N-1$ vectors $\mathbf{b}_2, \ldots, \mathbf{b}_N$, of lengths $J_2, \ldots, J_N$, with a vector $\mathbf{a}$ of length $I$, can be represented as

$\mathbf{y} = \underline{\mathbf{H}}\, \bar{\times}_2\, \check{\mathbf{b}}_2\, \bar{\times}_3\, \check{\mathbf{b}}_3 \cdots \bar{\times}_N\, \check{\mathbf{b}}_N,$

or

$\mathbf{y} = \underline{\tilde{\mathbf{H}}}\, \bar{\times}_2\, \check{\mathbf{b}}_2 \cdots \bar{\times}_N\, \check{\mathbf{b}}_N,$

where $\check{\mathbf{b}}_n$ denotes the vector $\mathbf{b}_n$ with its entries in reversed order, $\underline{\mathbf{H}}$, of size $J_1 \times J_2 \times \cdots \times J_N$ with $J_1 = I - (J_2 + \cdots + J_N) + N - 1$, is the $N$th-order Hankel tensor of $\mathbf{a}$, whereas $\underline{\tilde{\mathbf{H}}}$ is the Hankel tensor of a zero-padded version of $\mathbf{a}$.

A Hankel tensor with identical dimensions, $I_n = I$ for all $n = 1, 2, \ldots, N$, is a symmetric tensor.
Example 2 A third-order Hankel tensor of a sequence, with identical dimensions in all modes, is a symmetric tensor.
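The definition (1.11) and the symmetry property can be verified directly; a minimal sketch in NumPy (0-based indexing):

```python
import numpy as np

# Third-order Hankel tensor of a vector v: H[i, j, k] = v[i + j + k].
# With equal dimensions in all modes, the tensor is symmetric under any
# permutation of its indices.
v = np.arange(1.0, 8.0)      # length 7 = 3*(3 - 1) + 1
n = 3
H = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            H[i, j, k] = v[i + j + k]

# Symmetry: swapping any pair of modes leaves the tensor unchanged.
assert np.allclose(H, H.transpose(1, 0, 2))
assert np.allclose(H, H.transpose(2, 1, 0))
```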
1.2.3 Quantized Tensorization
It is important to notice that tensorization into Toeplitz and Hankel tensors typically enlarges the number of data samples, in the sense that the number of entries of the resulting tensor exceeds the number of original samples. For example, when the dimensions $I_n = 2$ for all $n$, the generated tensor is a quantized tensor of order $N$, and the number of its entries increases from the original $N + 1$ samples to $2^N$. Therefore, quantized tensorizations are suited to the analysis of signals of short length, especially in multivariate autoregressive modelling.
1.2.4 Convolution Tensor
Consider again the convolution $\mathbf{y} = \mathbf{a} \ast \mathbf{b}$ of two vectors of respective lengths $I$ and $J$. We can then rewrite the expression for the entries of $\mathbf{y}$ as

$\mathbf{y} = \underline{\mathbf{C}}\, \bar{\times}_2\, \mathbf{a}\, \bar{\times}_3\, \mathbf{b},$

where $\underline{\mathbf{C}}$ is a third-order tensor of size $(I + J - 1) \times I \times J$ whose entries are all zero, except that $\underline{\mathbf{C}}(i + j - 1, i, j) = 1$ for $i = 1, \ldots, I$ and $j = 1, \ldots, J$; that is, within the $j$th lateral slice $\underline{\mathbf{C}}(:, :, j)$, the ones lie along the $j$th shifted diagonal. The tensor $\underline{\mathbf{C}}$ is called the convolution tensor. An illustration of a convolution tensor is given in Figure 1.2.
Note that the product of this tensor with the vector $\mathbf{b}$ yields the $(I + J - 1) \times I$ Toeplitz matrix of the zero-padded generating vector $\tilde{\mathbf{b}}$, in the form

$\mathbf{y} = (\underline{\mathbf{C}}\, \bar{\times}_3\, \mathbf{b})\, \mathbf{a},$

while the tensor-vector product $\underline{\mathbf{C}}\, \bar{\times}_2\, \mathbf{a}$ yields the $(I + J - 1) \times J$ Toeplitz matrix of the zero-padded generating vector $\tilde{\mathbf{a}}$ (when expanded to square size, a circulant matrix of $\tilde{\mathbf{a}}$), so that

$\mathbf{y} = (\underline{\mathbf{C}}\, \bar{\times}_2\, \mathbf{a})\, \mathbf{b}.$
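A small numerical sketch of the convolution tensor (NumPy, 0-based indexing): contracting it with one vector produces a Toeplitz matrix, and contracting it with both vectors produces the convolution:

```python
import numpy as np

# Convolution tensor C of size (I+J-1) x I x J: C[k, i, j] = 1 iff k = i + j
# (0-based).  Contracting C with a along mode 2 yields the Toeplitz matrix
# T[k, j] = a[k - j] of the zero-padded a; contracting with both a and b
# yields the full convolution a * b.  The vectors are illustrative.
I, J = 4, 3
C = np.zeros((I + J - 1, I, J))
for i in range(I):
    for j in range(J):
        C[i + j, i, j] = 1.0

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, -1.0, 2.0])

T = np.einsum('kij,i->kj', C, a)          # Toeplitz matrix of zero-padded a
assert np.allclose(T @ b, np.convolve(a, b))
assert np.allclose(np.einsum('kij,i,j->k', C, a, b), np.convolve(a, b))
```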
In general, for the convolution of $N-1$ vectors, $\mathbf{b}_2, \ldots, \mathbf{b}_N$, of respective lengths $J_2, \ldots, J_N$, with a vector $\mathbf{a}$ of length $J_1$,

(1.13) $\mathbf{y} = \mathbf{a} \ast \mathbf{b}_2 \ast \cdots \ast \mathbf{b}_N,$

the entries of $\mathbf{y}$ can be expressed through a multilinear product of a convolution tensor, $\underline{\mathbf{C}}$, of $(N+1)$th-order and of size $(J_1 + J_2 + \cdots + J_N - N + 1) \times J_1 \times J_2 \times \cdots \times J_N$, and the input vectors

(1.14) $\mathbf{y} = \underline{\mathbf{C}}\, \bar{\times}_2\, \mathbf{a}\, \bar{\times}_3\, \mathbf{b}_2 \cdots \bar{\times}_{N+1}\, \mathbf{b}_N.$

Most entries of $\underline{\mathbf{C}}$ are zeros, except for those located at $(k, j_1, j_2, \ldots, j_N)$, such that

(1.15) $\underline{\mathbf{C}}(k, j_1, j_2, \ldots, j_N) = 1, \qquad k = j_1 + j_2 + \cdots + j_N - N + 1,$

where $j_n = 1, 2, \ldots, J_n$, $n = 1, 2, \ldots, N$.
The tensor-vector product $\underline{\mathbf{C}}\, \bar{\times}_2\, \mathbf{a}$ yields the Toeplitz tensor of the zero-padded generating vector $\tilde{\mathbf{a}}$, i.e.,

(1.16) $\underline{\mathbf{T}}_{\tilde{\mathbf{a}}} = \underline{\mathbf{C}}\, \bar{\times}_2\, \mathbf{a}.$
1.2.5 QTT Representation of the Convolution Tensor
An important property of the convolution tensor is that it admits a QTT representation with rank no larger than the number of input vectors, $N$. To illustrate this property, for simplicity, we consider an $N$th-order Toeplitz tensor of size $2^D \times 2^D \times \cdots \times 2^D$, generated from a vector of length $N(2^D - 1) + 1$. The convolution tensor of this Toeplitz tensor is of $(N+1)$th-order and of size $\big(N(2^D - 1) + 1\big) \times 2^D \times \cdots \times 2^D$.
Zero-padded convolution tensor. By appending zero tensors before the convolution tensor along its first mode, we obtain an $(N+1)$th-order convolution tensor, $\underline{\tilde{\mathbf{C}}}$, of size $N 2^D \times 2^D \times \cdots \times 2^D$.
QTT representation. The zero-padded convolution tensor can be represented in the following QTT format

(1.17) $\underline{\tilde{\mathbf{C}}} = \underline{\mathbf{C}}^{(1)} \bowtie \underline{\mathbf{C}}^{(2)} \bowtie \cdots \bowtie \underline{\mathbf{C}}^{(D)},$

where “$\bowtie$” represents the strong Kronecker product between block tensors defined from the core tensors $\underline{\mathbf{C}}^{(d)}$. (A “block tensor” represents a multilevel matrix, the entries of which are matrices or tensors.)
The last core tensor, $\underline{\mathbf{C}}^{(D)}$, represents an exchange (backward identity) matrix, reshaped into a core tensor of appropriate size. The first core tensors, $\underline{\mathbf{C}}^{(1)}, \underline{\mathbf{C}}^{(2)}, \ldots, \underline{\mathbf{C}}^{(D-1)}$, are all expressed in terms of the so-called elementary core tensor.
The rigorous definition of the elementary core tensor is provided in Appendix 3.
Table 1.1 provides the ranks of the QTT representation for various orders of convolution tensors. The elementary core tensor can be further re-expressed in a tensor train (TT) format with sparse TT cores.
Table 1.1: QTT ranks of convolution tensors of various orders.

Order | QTT ranks           Order | QTT ranks
  2   | 2, 2, 2, …, 2        10   | 6, 8, 9, …, 9
  3   | 2, 3, 3, …, 3        11   | 6, 9, 10, …, 10
  4   | 3, 4, 4, …, 4        12   | 7, 10, 11, …, 11
  5   | 3, 4, 5, …, 5        13   | 7, 10, 12, …, 12
  6   | 4, 5, 6, …, 6        14   | 8, 11, 13, …, 13
  7   | 4, 6, 7, …, 7        15   | 8, 12, 14, …, 14
  8   | 5, 7, 8, …, 8        16   | 9, 13, 15, …, 15
  9   | 5, 7, 8, …, 8        17   | 9, 13, 15, …, 15
Example 3 (Convolution tensor of third order.)
For two vectors $\mathbf{a}$ and $\mathbf{b}$, the expanded convolution tensor is of third order. The elementary core tensor is then given in a block form over its last two indices through four matrices.
The convolution tensor can then be represented in a QTT format of rank 2 [Kazeev et al., 2013], with the last core tensor being of appropriate size. This QTT representation is useful for generating a Toeplitz matrix when its generating vector is given in the QTT format. An illustration of the convolution tensor is provided in Figure 1.3.
Example 4 (Convolution tensor of fourth order.)
For the convolution tensor of fourth order, i.e., a Toeplitz tensor of order $N = 3$, the elementary core tensor is given in a block form over the last two indices, in which some of the blocks are zero tensors.
Finally, the zero-padded convolution tensor has a QTT representation as in (1.17), with the last core tensor being of appropriate size.
1.2.6 Lowrank Representation of Hankel and Toeplitz Matrices/Tensors
The Hankel and Toeplitz foldings are multilinear tensorizations and can be applied to the BSS problem, as in (1.4). When the Hankel and Toeplitz tensors of the hidden sources are of low rank in some tensor network representation, the tensor of the mixture can be expressed as a sum of low-rank tensor terms.
For example, the Hankel and Toeplitz matrices/tensors of an exponential function, $x(k) = a z^k$, are rank-1 matrices/tensors, and consequently the Hankel matrices/tensors of sums and/or products of exponentials, sinusoids and polynomials will also be of low rank, equal to the degree of the function being considered.
Hadamard Product. More importantly, when the Hankel/Toeplitz tensors of two vectors $\mathbf{x}$ and $\mathbf{y}$ have low-rank CP/TT representations, the Hankel/Toeplitz tensor of their element-wise (Hadamard) product, $\mathbf{x} \circledast \mathbf{y}$, can also be represented in the same CP/TT format, since it is the Hadamard product of the Hankel/Toeplitz tensors of $\mathbf{x}$ and $\mathbf{y}$. The CP/TT rank of this tensor is therefore not larger than the product of the CP/TT ranks of the tensors of $\mathbf{x}$ and $\mathbf{y}$.
Example 5
If the third-order Hankel tensor of a vector $\mathbf{x}$ is a rank-3 tensor, and the third-order Hankel tensor of a vector $\mathbf{y}$ is of rank 2, then the Hankel tensor of the Hadamard product $\mathbf{x} \circledast \mathbf{y}$ has at most rank 6.
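These rank properties can be illustrated numerically for Hankel matrices; in the sketch below (NumPy, with illustrative signals), an exponential gives rank 1, a sinusoid gives rank 2, and the Hankel matrix of their element-wise product has rank bounded by the product of the two:

```python
import numpy as np

def hankel_matrix(v, m):
    # m x (len(v) - m + 1) Hankel matrix with H[i, j] = v[i + j] (0-based).
    n = len(v) - m + 1
    return np.array([[v[i + j] for j in range(n)] for i in range(m)])

k = np.arange(16.0)
x = 0.8 ** k                 # exponential -> Hankel matrix of rank 1
y = np.cos(0.5 * k)          # sinusoid    -> Hankel matrix of rank 2

Hx = hankel_matrix(x, 8)
Hy = hankel_matrix(y, 8)
Hxy = hankel_matrix(x * y, 8)   # Hadamard (element-wise) product

assert np.linalg.matrix_rank(Hx, tol=1e-8) == 1
assert np.linalg.matrix_rank(Hy, tol=1e-8) == 2
# Rank of the Hankel matrix of x .* y is bounded by the product 1 * 2.
assert np.linalg.matrix_rank(Hxy, tol=1e-8) <= 2
```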
Symmetric CP and Vandermonde decompositions. It is important to notice that a Hankel tensor of size $I \times I \times \cdots \times I$ can always be represented by a symmetric CP decomposition

$\underline{\mathbf{H}} = \sum_{r=1}^{R} \lambda_r\, \mathbf{u}_r \circ \mathbf{u}_r \circ \cdots \circ \mathbf{u}_r.$

Moreover, the tensor also admits a symmetric CP decomposition with a Vandermonde-structured factor matrix [Qi, 2015]

(1.19) $\underline{\mathbf{H}} = \sum_{r=1}^{R} \lambda_r\, \mathbf{v}_r \circ \mathbf{v}_r \circ \cdots \circ \mathbf{v}_r, \qquad \mathbf{v}_r = [1, z_r, z_r^2, \ldots, z_r^{I-1}]^{\mathsf{T}},$

where $\boldsymbol{\lambda} = [\lambda_1, \lambda_2, \ldots, \lambda_R]^{\mathsf{T}}$ comprises nonzero coefficients, and $\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_R]$ is a Vandermonde matrix generated from $R$ distinct values $z_1, z_2, \ldots, z_R$,

(1.20) $\mathbf{V} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_R \\ \vdots & \vdots & & \vdots \\ z_1^{I-1} & z_2^{I-1} & \cdots & z_R^{I-1} \end{bmatrix}.$
By writing the decomposition in (1.19) for the entries $v_n$ of the generating vector (see (1.12)), the Vandermonde decomposition of the Hankel tensor becomes a Vandermonde factorization of $\mathbf{v}$ [Chen, 2016], given by

$v_n = \sum_{r=1}^{R} \lambda_r\, z_r^{\,n-1}.$

Observe that the Vandermonde decompositions of the Hankel tensors of the same vector $\mathbf{v}$, but of different tensor orders $N$, share the same generating values $z_1, \ldots, z_R$.
Moreover, the Vandermonde rank, i.e., the minimum $R$ in the decomposition (1.19), cannot exceed the length of the generating vector $\mathbf{v}$.
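The Vandermonde decomposition (1.19) can be verified numerically; the sketch below (NumPy, with $R = 2$ illustrative poles) builds a vector $v(n) = \sum_r \lambda_r z_r^{n}$ (0-based $n$) and checks that its third-order Hankel tensor equals the symmetric CP form with Vandermonde factors:

```python
import numpy as np

# A vector built from R = 2 distinct poles, v(n) = sum_r lam_r * z_r**n,
# has a 3rd-order Hankel tensor with the symmetric CP / Vandermonde form
# H = sum_r lam_r * u_r (outer) u_r (outer) u_r, where u_r = (z_r**i).
z = np.array([0.9, -0.5])     # illustrative distinct poles
lam = np.array([1.0, 2.0])    # illustrative nonzero coefficients
n = 4                         # tensor dimensions n x n x n
v = (lam[None, :] * z[None, :] ** np.arange(3 * n - 2)[:, None]).sum(axis=1)

i = np.arange(n)
U = z[None, :] ** i[:, None]  # n x R Vandermonde factor matrix
H_cp = np.einsum('r,ir,jr,kr->ijk', lam, U, U, U)

# Direct Hankel construction: H[a, b, c] = v[a + b + c] (0-based).
H = np.zeros((n, n, n))
for a in range(n):
    for b in range(n):
        for c in range(n):
            H[a, b, c] = v[a + b + c]

assert np.allclose(H, H_cp)
```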
QTT representation of the Toeplitz/Hankel tensor. As mentioned previously, the zero-padded convolution tensor of $(N+1)$th-order can be represented in a QTT format of rank at most $N$. Hence, if a vector $\mathbf{a}$ of length $2^D$ has a QTT representation of low rank, given by

(1.21) $\mathbf{a} = \mathbf{A}^{(1)} \bowtie \mathbf{A}^{(2)} \bowtie \cdots \bowtie \mathbf{A}^{(D)},$

where each $\mathbf{A}^{(d)}$ is a block matrix built from a core tensor, then, by the relation (1.16) between the convolution tensor and the Toeplitz tensor of the generating vector, the Toeplitz tensor of $\mathbf{a}$ also admits a QTT representation, whose ranks do not exceed the products of the corresponding QTT ranks of the convolution tensor and of $\mathbf{a}$.