Tensor Matched Subspace Detection

The problem of testing whether an incomplete tensor lies in a given tensor subspace, called tensor matched subspace detection, is important when missing entries are unavoidable. Compared with the matrix case, the tensor matched subspace detection problem is much more challenging due to the curse of dimensionality and the intertwining of the sampling operator with the tensor product operation. In this paper, we investigate the subspace detection problem for transform-based tensor models. Under this framework, tensor subspaces and the orthogonal projection onto a given subspace are defined, and the energy of a tensor outside the given subspace (also called the residual energy in statistics) is derived for both tubal-sampling and elementwise-sampling. We prove that the residual energy of the sampled signal is bounded with high probability, and based on this bound, reliable detection is feasible.

I Introduction

In signal processing and big data analysis, testing whether a signal lies in a subspace is an important problem, which arises in a variety of applications, such as learning the column subspace of a matrix from incomplete data [1], subspace clustering or identification with missing data [2, 3], shape detection and reconstruction from raw light detection and ranging (LiDAR) data [4], image subspace representation [5], low-complexity MIMO detection [6, 7], tensor subspace modeling under adaptive sampling [8, 9], and so on.

The problem of matched subspace detection is challenging due to three factors: 1) in settings such as Internet of Things (IoT) systems [14], we can only obtain data with a high loss rate; 2) there is measurement noise in an observed signal; and 3) existing representations of signals have limitations. Missing data increase the difficulty of tensor matched subspace detection, and the presence of measurement noise may lead to erroneous decisions. Moreover, existing mathematical models used to represent a signal, such as vectors, may lose information, since the original structure of the signal is destroyed when it is flattened into such a model. Prior work on matched subspace detection [10, 11, 12, 13, 15, 16, 17, 18] modeled a signal as a vector. However, with the development of big data, signals are often naturally represented as multi-dimensional data arrays, or tensors. When a multi-dimensional data array is represented as a vector, some information, such as the structural relations between entries, is lost. Therefore, a new method for matched subspace detection that operates directly on multi-dimensional data arrays or tensors is needed.

Tensors, as multi-dimensional modeling tools, have wide applications in signal processing [19, 20, 21], and representing a signal as a tensor preserves more information of the original signal than representing it as a vector, since a second-order or higher-order tensor has more dimensions with which to describe the signal. Based on the recently proposed transform-based tensor model [22, 23], a third-order tensor can be viewed as a matrix with tubes as its entries, and can be treated as a linear operator over a set of second-order tensors. Moreover, the transform-based tensor model admits analogous definitions of a tensor subspace and the corresponding orthogonal projection. Hence, the methods in [15, 16, 17, 18] can be extended to tensor subspaces.

In this paper, we propose a method for the problem of matched subspace detection based on the transform-based tensor model, called tensor matched subspace detection, which can utilize more of the signal's information than conventional methods. First, we construct estimators for tubal-sampling and elementwise-sampling respectively, aiming at estimating the energy of a signal outside a subspace (also called the residual energy in statistics) from the samples. When a signal lies in the subspace, its energy outside the subspace is zero, and then the energy estimated from the samples is also zero, but not vice versa. Secondly, bounds on our estimators are given, which show that the estimators work efficiently when the sample size is only slightly larger than the dimension of the subspace, for both tubal-sampling and elementwise-sampling. Then, the problem of tensor matched subspace detection is modeled as a binary hypothesis test: under the null hypothesis the signal lies in the subspace, while under the alternative it does not. With the residual energy as the test statistic, the detector is given directly in the noiseless case, and for the noisy case, a constant false alarm rate (CFAR) test is constructed. Finally, based on the discrete Fourier transform (DFT) and the discrete cosine transform (DCT), our estimators and methods for tensor matched subspace detection are evaluated by corresponding experiments.

The remainder of this paper is organized as follows. In Section II, the transform-based tensor model and the problem statement are given. Then, we construct the estimators and present two theorems that give quantitative bounds on them in Section III. The detectors both with noise and without noise are given in Section IV. Section V presents numerical experiments. Finally, Section VI concludes the paper.

II Notations and Problem Statement

We first introduce the notations and the transform-based tensor model. Then, we formulate the problem of tensor matched subspace detection.

II-A Notations

Scalars are denoted by lowercase letters, e.g., a; vectors are denoted by boldface lowercase letters, e.g., 𝐚; matrices are denoted by boldface capital letters, e.g., 𝐀; and third-order tensors are denoted by calligraphic letters, e.g., 𝒜. The transpose of a vector, a matrix, or a third-order tensor is denoted with a superscript H. We use [n] to denote the index set {1, 2, …, n}, and similarly for the other index sets that appear below.

The i-th element of a vector 𝐚 is a_i, the (i,j)-th element of a matrix 𝐀 is A_{ij} or 𝐀(i,j), and similarly, for a third-order tensor A, the (i,j,k)-th element is a_{ijk} or A(i,j,k). For a third-order tensor A, a tube of A is defined by fixing all indices but one, while a slice of A is defined by fixing all but two indices. We use A(:,j,k), A(i,:,k), and A(i,j,:) to denote the mode-1, mode-2, and mode-3 tubes of A, and A(:,:,k), A(:,j,:), and A(i,:,:) to denote the frontal, lateral, and horizontal slices of A. A(i,:,:) and A(:,j,:) are also called a tensor row and a tensor column. For ease of representation, we use A^{(k)} to denote A(:,:,k), and A_j to denote A(:,j,:).

For a vector 𝐯, the ℓ2-norm is ‖𝐯‖₂ = (Σ_i v_i²)^{1/2}; for a matrix 𝐀, the Frobenius norm is ‖𝐀‖_F = (Σ_{i,j} A_{ij}²)^{1/2}, and the spectral norm ‖𝐀‖ is the largest singular value of 𝐀. For a tensor A, the Frobenius norm is ‖A‖_F = (Σ_{i,j,k} A(i,j,k)²)^{1/2}. For a tensor column, the corresponding norms are defined analogously through its vectorization.

For a tube 𝐚 ∈ ℝ^{1×1×n₃} and a given linear transform L,

$$L(\mathbf{a})(1,1,i)=\big(M\,\mathrm{vec}(\mathbf{a})\big)_i, \qquad(1)$$

where vec(𝐚) is the vector representation of the tube and M is the matrix determined by the transform L. For a tube 𝐚, we have ‖L(𝐚)‖²_F = c²‖𝐚‖²_F, where c is a constant with c > 0.

II-B Transform-based Tensor Model

In order to introduce the definition of the L-product, we first introduce the tube multiplication. Given an invertible discrete transform L, the elementwise multiplication ∘, and tubes 𝐚 and 𝐛 of length n₃, the tube multiplication of 𝐚 and 𝐛 is defined as

$$\mathbf{a}\bullet\mathbf{b}=L^{-1}\big(L(\mathbf{a})\circ L(\mathbf{b})\big),$$

where L⁻¹ is the inverse of L [22].
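As a minimal sketch of the tube multiplication above, take L to be the (unnormalized) DFT; with that choice the tube product is a circular convolution. The function name is illustrative, not from the paper:

```python
import numpy as np

def tube_mul(a, b):
    """Tube multiplication a • b = L^{-1}(L(a) ∘ L(b)) with L = DFT.
    a, b: 1-D arrays of length n3 (a tube flattened to a vector)."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
```

With the DFT, the multiplicative identity tube is the delta tube [1, 0, …, 0].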

Definition 1 (Tensor product: L-product [22]).

The L-product C = A∙B of A ∈ ℝ^{n₁×n₂×n₃} and B ∈ ℝ^{n₂×n₄×n₃} is a tensor of size n₁×n₄×n₃, with C(i,j,:) = Σ_{k=1}^{n₂} A(i,k,:)∙B(k,j,:) for i ∈ [n₁] and j ∈ [n₄].

Transform domain representation [22]: For an invertible discrete transform L, let L(A) denote the tensor obtained by taking the transform of all the tubes along the third dimension of A, i.e., L(A)(i,j,:) = L(A(i,j,:)) for i ∈ [n₁] and j ∈ [n₂]. Furthermore, we use Ā to denote the block diagonal matrix of the tensor in the transform domain, i.e.,

$$\bar A=\begin{bmatrix}L(A)^{(1)}&&&\\ &L(A)^{(2)}&&\\ &&\ddots&\\ &&&L(A)^{(n_3)}\end{bmatrix}.$$

Under the transform-based tensor model, an n₁×n₂×n₃ tensor can be viewed as an n₁×n₂ matrix whose entries are tubes lying in the third dimension; therefore the L-product of two tensors can be regarded as the multiplication of two matrices, except that the multiplication of two numbers is replaced by the multiplication of two tubes. Owing to the definition of the L-product based on the discrete transform, we have the following remark, which is used throughout the paper.

Remark 1 ([22]).

The -product can be calculated in the following way:

$$C=L^{-1}\big(\bar A\,\bar B\big).$$
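
Remark 1 can be sketched directly in code, again assuming L is the DFT: multiplying the block diagonal matrices Ā and B̄ amounts to multiplying matching frontal slices in the transform domain. The function name is illustrative:

```python
import numpy as np

def l_product(A, B):
    """L-product C = L^{-1}(Ā B̄) with L = DFT.
    A: (n1, n2, n3), B: (n2, n4, n3) -> C: (n1, n4, n3)."""
    Ahat = np.fft.fft(A, axis=2)                   # L(A): transform every tube
    Bhat = np.fft.fft(B, axis=2)
    Chat = np.einsum('ikt,kjt->ijt', Ahat, Bhat)   # slice-wise products = Ā B̄
    return np.fft.ifft(Chat, axis=2).real          # C = L^{-1}(·)
```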

Motivated by the definition of the t-product in [24] and the cosine transform based product in [23], we introduce the L-product based on block matrix tools. For a tensor A, a specially structured block matrix determined by the frontal slices of A can be associated with A, such that the L-product C = A∙B can be represented as the product of this block matrix with unfold(B), where

$$\mathrm{unfold}(A)=\big[A^{(1)H}\;A^{(2)H}\;\cdots\;A^{(n_3)H}\big]^{H}.$$

The form of the block matrix varies with the discrete transform [24, 25, 26]. When the transform is the discrete Fourier transform, the block matrix is bcirc(A) [24], where bcirc(·) is the operation that converts a third-order tensor into a block circulant matrix, i.e.,

$$\mathrm{bcirc}(A)=\begin{bmatrix}A^{(1)} & A^{(k)} & \cdots & A^{(2)}\\ A^{(2)} & A^{(1)} & \cdots & A^{(3)}\\ \vdots & \vdots & \ddots & \vdots\\ A^{(k)} & A^{(k-1)} & \cdots & A^{(1)}\end{bmatrix}, \qquad(2)$$

where k = n₃.
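The identity unfold(A∙B) = bcirc(A) unfold(B) can be checked numerically; the sketch below builds Eq. (2) explicitly and compares it against the DFT-domain product. Function names are illustrative:

```python
import numpy as np

def bcirc(A):
    """Block circulant matrix of Eq. (2) from the frontal slices of A."""
    n1, n2, n3 = A.shape
    M = np.zeros((n1 * n3, n2 * n3))
    for r in range(n3):
        for c in range(n3):
            # block (r, c) is the frontal slice A^{((r - c) mod n3) + 1}
            M[r * n1:(r + 1) * n1, c * n2:(c + 1) * n2] = A[:, :, (r - c) % n3]
    return M

def unfold(A):
    """Stack the frontal slices of A vertically."""
    return np.concatenate([A[:, :, k] for k in range(A.shape[2])], axis=0)
```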

When the transform is the discrete cosine transform, the block matrix is a block Toeplitz-plus-Hankel matrix built from the frontal slices of A, where ⊗ denotes the Kronecker product [19, 23], Z_{n₃} is the circulant upshift matrix

$$Z_{n_3}=\begin{bmatrix}0&1&0&\cdots&0\\ 0&0&1&\ddots&\vdots\\ \vdots&\vdots&\ddots&\ddots&0\\ 0&0&\cdots&0&1\\ 0&0&\cdots&0&0\end{bmatrix}, \qquad(3)$$

and T+H is the following block Toeplitz-plus-Hankel matrix [25, 26, 23]

$$T{+}H=\begin{bmatrix}A^{(1)}&A^{(2)}&\cdots&A^{(k)}\\ A^{(2)}&A^{(1)}&\cdots&A^{(k-1)}\\ \vdots&\vdots&\ddots&\vdots\\ A^{(k)}&A^{(k-1)}&\cdots&A^{(1)}\end{bmatrix}+\begin{bmatrix}A^{(2)}&\cdots&A^{(k)}&0\\ \vdots&\iddots&\iddots&\vdots\\ A^{(k)}&\iddots&\iddots&A^{(k)}\\ 0&A^{(k)}&\cdots&A^{(2)}\end{bmatrix}. \qquad(4)$$
Definition 2 (Tensor transpose [22, 23]).

Let A ∈ ℝ^{n₁×n₂×n₃}; then the transpose A^H is the n₂×n₁×n₃ tensor satisfying L(A^H)^{(i)} = (L(A)^{(i)})^H for i ∈ [n₃].

The transpose of A can be obtained by taking the inverse transform of the tensor whose i-th frontal slice is the conjugate transpose of the i-th frontal slice of L(A), i ∈ [n₃], and the multiplication reversal property of the transpose holds [22, 23], i.e., (A∙B)^H = B^H∙A^H.

Definition 3 (L-diagonal tensor [8]).

A tensor is called an L-diagonal tensor if each frontal slice of the tensor is a diagonal matrix.

Let 𝐞 = L⁻¹(𝟏), where 𝟏 denotes a tube of length n₃ with all entries equal to 1; 𝐞 is the multiplicative unity for the tube multiplication [22]. The multiplicative unity plays a similar role in tensor space as the number 1 in vector space.

Definition 4 (Identity tensor [22]).

The identity tensor I is an L-diagonal square tensor with 𝐞's on the main diagonal and zeros elsewhere, i.e., I(i,i,:) = 𝐞 for i ∈ [n], while all other tubes are zero.

A square tensor A is invertible if there exists a tensor B such that A∙B = B∙A = I [22]. Moreover, Q is L-orthogonal if Q^H∙Q = Q∙Q^H = I [22].

Definition 5 (L-Svd [22]).

The L-SVD of A ∈ ℝ^{n₁×n₂×n₃} is given by A = U∙S∙V^H, where U and V are L-orthogonal tensors of size n₁×n₁×n₃ and n₂×n₂×n₃ respectively, and S is an L-diagonal tensor of size n₁×n₂×n₃.

The L-SVD of A can be derived from individual matrix SVDs in the transform space, that is, Ā = Ū S̄ V̄^H. The number of non-zero tubes of S is called the L-rank of A.
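The slice-by-slice computation just described can be sketched as follows, again assuming L is the DFT; for simplicity this illustrative function returns the factors in the transform domain (Ū, S̄, V̄) rather than inverting them back:

```python
import numpy as np

def l_svd(A):
    """L-SVD with L = DFT: one matrix SVD per frontal slice of L(A).
    Returns transform-domain factors with Ā = Ū S̄ V̄^H slice by slice."""
    n1, n2, n3 = A.shape
    Ahat = np.fft.fft(A, axis=2)
    U = np.zeros((n1, n1, n3), dtype=complex)
    S = np.zeros((n1, n2, n3), dtype=complex)
    V = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3):
        u, s, vh = np.linalg.svd(Ahat[:, :, k])
        U[:, :, k], V[:, :, k] = u, vh.conj().T
        S[:len(s), :len(s), k] = np.diag(s)  # embed singular values
    return U, S, V
```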

Definition 6 (Tensor-column subspace [22]).

Let A be an n₁×n₂×n₃ tensor with L-rank n₂; then the n₂-dimensional tensor-column subspace spanned by the columns of A is defined as

$$S=\big\{X \,\big|\, X=A_1\bullet \mathbf{c}_1+A_2\bullet \mathbf{c}_2+\cdots+A_{n_2}\bullet \mathbf{c}_{n_2}\big\},$$

where 𝐜_j, j ∈ [n₂], are arbitrary tubes of length n₃.

Remark 2.

Let S be spanned by the columns of A; then P = A∙(A^H∙A)⁻¹∙A^H is an orthogonal projection onto S if A^H∙A is invertible.

Definition 7.

Let P be the orthogonal projection onto an r-dimensional subspace S; then the coherence of S is defined as

$$\mu(S)\triangleq \frac{n_1}{r}\,\max_j\big\|P\bullet E_j\big\|_F^2,$$

where E_j is the basis tensor with E_j(j,1,:) = 𝐞 and zeros elsewhere.

Assume the subspace S is spanned by the columns of U. When μ(S) is low, each tube of U carries approximately the same amount of information [21].
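The coherence idea is easiest to see for an ordinary (vector) subspace, where μ(S) = (n/r)·max_j ‖P𝐞_j‖² for P = UU^H with orthonormal U; the tensor version replaces the standard basis vectors with the basis tensors E_j. A minimal sketch with an illustrative function name:

```python
import numpy as np

def coherence(U):
    """Coherence of the column span of an orthonormal-column matrix U (n x r):
    mu(S) = (n / r) * max_j ||P e_j||^2, where ||P e_j||^2 is the j-th
    leverage score, i.e. the squared norm of the j-th row of U."""
    n, r = U.shape
    leverage = np.sum(np.abs(U) ** 2, axis=1)
    return (n / r) * leverage.max()
```

Coherence equals 1 for maximally spread subspaces (e.g., spans of DFT columns) and n/r for axis-aligned ones.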

II-C Problem Formulation

Let S be a given r-dimensional subspace spanned by the columns of a third-order tensor U, and let T denote a signal whose entries are sampled with replacement. The problem of tensor matched subspace detection can be modeled as a binary hypothesis test with hypotheses:

$$\begin{cases}H_0: T\in S;\\ H_1: T\notin S.\end{cases} \qquad(5)$$

Here, we consider two types of sampling, tubal-sampling and elementwise-sampling, as shown in Fig. 1. We use Ω to denote the set of sampled indices, m to denote the cardinality of Ω, and T_Ω to denote the corresponding sampled signal of T. The definitions of tubal-sampling and elementwise-sampling are:

Tubal-sampling: Ω ⊂ [n₁], and T_Ω is a tensor of size m×1×n₃ whose tubes are T_Ω(i,1,:) = T(Ω(i),1,:), i ∈ [m].

Elementwise-sampling: Ω ⊂ [n₁]×[n₃], and T_Ω is a tensor of size n₁×1×n₃ with entries T_Ω(i,1,j) = T(i,1,j) if (i,j) ∈ Ω and zero if (i,j) ∉ Ω.

Let P be the orthogonal projection onto S, and let t(T) = ‖T−P∙T‖²_F denote the energy of a signal outside the given subspace. Then, when the entries of T are fully observed, the test statistic can be constructed as

$$t(T)=\|T-P\bullet T\|_F^2 \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta. \qquad(6)$$

When T ∈ S, we have t(T) = 0, and t(T) > 0 when T ∉ S. In the noiseless case, we can take η = 0.
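A sketch of the fully-observed test statistic (6), assuming L is the DFT: project T onto the span of U slice by slice in the transform domain and measure the leftover energy. The function name is illustrative:

```python
import numpy as np

def residual_energy(T, U):
    """t(T) = ||T - P • T||_F^2 for the tensor-column span of U (L = DFT)."""
    That = np.fft.fft(T, axis=2)
    Uhat = np.fft.fft(U, axis=2)
    n3 = T.shape[2]
    energy = 0.0
    for k in range(n3):
        tk, Uk = That[:, :, k], Uhat[:, :, k]
        coeff = np.linalg.lstsq(Uk, tk, rcond=None)[0]   # least-squares projection
        energy += np.linalg.norm(tk - Uk @ coeff) ** 2
    return energy / n3  # Parseval for the unnormalized DFT: ||A||_F^2 = ||Ā||_F^2 / n3
```

In the noiseless case, H0 is declared when this value is (numerically) zero.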

In practice, for high-dimensional applications, it is prohibitive or impossible to measure T completely, and we can only obtain a sampled signal T_Ω, so we cannot calculate the energy of T outside the subspace directly. Therefore, we construct a new estimator that estimates the energy of T outside the subspace based on T_Ω and the corresponding projection P_Ω. A good estimator should satisfy the following conditions (noiseless case):

• When T ∈ S, t(T) = 0, and then the estimated residual energy is zero for an arbitrary sample size.

• When T ∉ S, t(T) > 0, and then, as long as the sample size is greater than a constant but much smaller than the size of T, the estimated residual energy is positive with high probability.

III Energy Estimation and Main Theorems

In this section, the estimators for tubal-sampling and elementwise-sampling are constructed respectively. Then, two theorems are given that bound the estimators, showing that they work effectively when the sample size is only slightly larger than the dimension of the subspace. Without loss of generality, we assume that U, whose columns span the subspace S, is orthogonal, which means the dimension of S is n₂. For convenience in the following presentation, we let m denote the sample size.

III-A Energy Estimation

For tubal-sampling, the estimator can be constructed as follows. Recall that U is an n₁×n₂×n₃ tensor whose columns span the n₂-dimensional subspace S. We let U_Ω be the tensor organized by the horizontal slices of U indicated by Ω, that is, U_Ω(i,:,:) = U(Ω(i),:,:) for i ∈ [m]. Then we define the projection P_Ω = U_Ω∙(U_Ω^H∙U_Ω)⁻¹∙U_Ω^H. It follows immediately that ‖T_Ω−P_Ω∙T_Ω‖²_F = 0 if T ∈ S. However, it is possible that this estimate is zero even if T ∉ S when the sample size m is too small. One of our main theorems shows that if m is just slightly greater than n₂, then with high probability ‖T_Ω−P_Ω∙T_Ω‖²_F is comparable to (m/n₁)‖T−P∙T‖²_F.

For elementwise-sampling, the subspace S should be mapped into a vector subspace S_v, with unfold(X) ∈ S_v for all X ∈ S. Let the vector subspace S_v be spanned by the columns of a matrix U; then unfold(X) lies in the column span of U for all X ∈ S. However, when Y ∈ S^⊥, unfold(Y) need not lie in S_v^⊥, where S^⊥ is the orthogonal complement of S and S_v^⊥ is the orthogonal complement of S_v. Let T = X + Y, where X ∈ S and Y ∈ S^⊥. Then we use θ to denote the principal angle between unfold(Y) and S_v^⊥, which is defined as follows

$$\cos(\theta)=\frac{\big|\big\langle \mathrm{unfold}(Y),\,P_{S^{\perp}}\mathrm{unfold}(Y)\big\rangle\big|}{\|\mathrm{unfold}(Y)\|_2\,\big\|P_{S^{\perp}}\mathrm{unfold}(Y)\big\|_2}=\frac{\big\|P_{S^{\perp}}\mathrm{unfold}(Y)\big\|_2}{\|\mathrm{unfold}(Y)\|_2}, \qquad(7)$$

where P_{S^⊥} is the orthogonal projection onto S_v^⊥, ⟨·,·⟩ denotes the inner product of two vectors, and |·| denotes the absolute value.

For elementwise-sampling, the estimator can be constructed as follows. As defined in Section II-C, the sampling signal satisfies

$$T_\Omega(i,1,j)=\begin{cases}T(i,1,j), & (i,j)\in\Omega,\\ 0, & \text{otherwise.}\end{cases} \qquad(8)$$

Let 𝐭 = unfold(T), 𝐭_Ω = unfold(T_Ω), and let P_Ω = U_Ω(U_Ω^H U_Ω)⁻¹U_Ω^H be the projection, where U_Ω satisfies

$$U_\Omega\big((j-1)n_1+i,\,:\big)=\begin{cases}U\big((j-1)n_1+i,\,:\big), & (i,j)\in\Omega,\\ 0, & \text{otherwise.}\end{cases} \qquad(9)$$

Then ‖𝐭_Ω−P_Ω 𝐭_Ω‖²₂ = 0 if T ∈ S. However, it is possible that this estimate is zero even if T ∉ S when the sample size m is too small. One of our main theorems shows that if m is just slightly greater than the subspace dimension, then with high probability ‖𝐭_Ω−P_Ω 𝐭_Ω‖²₂ is comparable to (m/(n₁n₃))cos²(θ)‖T−P∙T‖²_F.
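The elementwise estimator of (8)-(9) can be sketched as follows: vectorize the n₁×1×n₃ signal, keep the sampled coordinates, and project onto the correspondingly subsampled basis. The function name and arguments are illustrative; U_v is assumed to have orthonormal columns spanning the vectorized subspace:

```python
import numpy as np

def elementwise_residual(t_vec, U_v, omega):
    """Residual energy estimate ||t_om - P_om t_om||^2 from elementwise samples.
    t_vec: length n1*n3 vector (unfold of the signal);
    omega: indices of the observed coordinates."""
    t_om = t_vec[omega]
    U_om = U_v[omega, :]                                # subsampled basis rows
    coeff = np.linalg.lstsq(U_om, t_om, rcond=None)[0]  # project onto span(U_om)
    return np.linalg.norm(t_om - U_om @ coeff) ** 2
```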

III-B Main Theorem with Tubal-sampling

Rewrite T = X + Y, where X ∈ S and Y ∈ S^⊥. Hence T_Ω = X_Ω + Y_Ω and ‖T_Ω−P_Ω∙T_Ω‖²_F = ‖Y_Ω−P_Ω∙Y_Ω‖²_F under tubal-sampling. Then we have the following theorem.

Theorem 1.

Let δ > 0. Then, with high probability,

$$\frac{m(1-\alpha)-c^2 n_2\,\mu(S)\,\frac{\beta}{1-\gamma}}{n_1}\,\|T-P\bullet T\|_F^2 \;\le\; \|T_\Omega-P_\Omega\bullet T_\Omega\|_F^2 \;\le\; (1+\alpha)\,\frac{m}{n_1}\,\|T-P\bullet T\|_F^2 \qquad(10)$$

holds, where α, β, and γ are positive parameters depending on δ and the sample size m.

In order to prove Theorem 1, the following three lemmas are needed; their proofs are provided in the Appendix.

Lemma 1.

With the same α and δ as given in Theorem 1,

$$(1-\alpha)\,\frac{m}{n_1}\,\|Y\|_F^2 \;\le\; \|Y_\Omega\|_F^2 \;\le\; (1+\alpha)\,\frac{m}{n_1}\,\|Y\|_F^2 \qquad(11)$$

holds with high probability.

Lemma 2.

With the same β and δ as given in Theorem 1,

$$\big\|U_\Omega^{\dagger}\bullet Y_\Omega\big\|_F^2 \;\le\; \frac{\beta\, m\, n_2\,\mu(S)}{n_1^2}\,\|Y\|_F^2 \qquad(12)$$

holds with high probability.

Lemma 3.

With the same γ and δ as given in Theorem 1,

$$\Big\|\big(\bar U_\Omega^H\,\bar U_\Omega\big)^{-1}\Big\| \;\le\; \frac{n_1}{(1-\gamma)\,m} \qquad(13)$$

holds with high probability, provided that γ < 1.

Proof of Theorem 1.

Consider ‖Y_Ω−P_Ω∙Y_Ω‖²_F; we split it into three terms:

$$\begin{aligned}\|Y_\Omega-P_\Omega\bullet Y_\Omega\|_F^2 &= \frac{1}{c^2}\big\|\bar Y_\Omega-\bar P_\Omega\,\bar Y_\Omega\big\|_F^2\\ &= \frac{1}{c^2}\,\mathrm{trace}\Big(\big(\bar Y_\Omega-\bar P_\Omega\bar Y_\Omega\big)^H\big(\bar Y_\Omega-\bar P_\Omega\bar Y_\Omega\big)\Big)\\ &= \frac{1}{c^2}\,\mathrm{trace}\Big(\bar Y_\Omega^H\bar Y_\Omega-\bar Y_\Omega^H\bar P_\Omega\,\bar Y_\Omega\Big)\\ &= \|Y_\Omega\|_F^2-\frac{1}{c^2}\,\mathrm{trace}\Big(\bar Y_\Omega^H\,\bar U_\Omega\big(\bar U_\Omega^H\bar U_\Omega\big)^{-1}\bar U_\Omega^H\,\bar Y_\Omega\Big)\\ &\ge \|Y_\Omega\|_F^2-\Big\|\big(\bar U_\Omega^H\,\bar U_\Omega\big)^{-1}\Big\|\,\big\|U_\Omega^{\dagger}\bullet Y_\Omega\big\|_F^2,\end{aligned} \qquad(14)$$

where trace(·) denotes the trace of a matrix. Taking the union bound over Lemma 1, Lemma 2, and Lemma 3, we have

$$\frac{m(1-\alpha)-c^2 n_2\,\mu(S)\,\frac{\beta}{1-\gamma}}{n_1}\,\|Y\|_F^2 \;\le\; \|Y_\Omega-P_\Omega\bullet Y_\Omega\|_F^2 \;\le\; (1+\alpha)\,\frac{m}{n_1}\,\|Y\|_F^2 \qquad(15)$$

with high probability. ∎

III-C Main Theorem with Elementwise-sampling

As described in Section III-A, the subspace S is mapped into the vector subspace S_v for elementwise-sampling, so the coherence of S_v is needed. The coherence of S_v is defined as

$$\mu(S)\triangleq \frac{n_1}{n_2}\,\max_j\big\|P_{S}\,\mathbf{e}_j\big\|_2^2,$$

where 𝐞_j is a standard basis vector of the ambient vector space and n₂n₃ is the dimension of S_v. Recall T = X + Y, where X ∈ S and Y ∈ S^⊥. Let 𝐭 = unfold(T), and rewrite 𝐭 = 𝐱 + 𝐲, where 𝐱 = unfold(X) ∈ S_v, but 𝐲 = unfold(Y) ∉ S_v^⊥ in general. Furthermore, ‖P_{S^⊥}𝐲‖₂ = cos(θ)‖𝐲‖₂. Let 𝐭_Ω be the sample of 𝐭, and let 𝐱_Ω and 𝐲_Ω be the corresponding samples of 𝐱 and 𝐲 for elementwise-sampling. Then we have the following theorem.

Theorem 2.

Let δ > 0. Then, with high probability,

$$\frac{m(1-\alpha)-n_2 n_3\,\mu(S)\,\frac{\beta}{1-\gamma}}{n_1 n_3}\,\cos^2(\theta)\,\|T-P\bullet T\|_F^2 \;\le\; \|\mathbf{t}_\Omega-P_\Omega \mathbf{t}_\Omega\|_2^2 \;\le\; (1+\alpha)\,\frac{m}{n_1 n_3}\,\cos^2(\theta)\,\|T-P\bullet T\|_F^2 \qquad(16)$$

holds, where α, β, and γ are positive parameters depending on δ and the sample size m.

We need the following three lemmas to prove Theorem 2; the proof of Lemma 5 is provided in the Appendix.

Lemma 4 ([1]).

With the same α and δ as given in Theorem 2,

$$(1-\alpha)\,\frac{m}{n_1 n_3}\,\|\mathbf{y}\|_2^2 \;\le\; \|\mathbf{y}_\Omega\|_2^2 \;\le\; (1+\alpha)\,\frac{m}{n_1 n_3}\,\|\mathbf{y}\|_2^2 \qquad(17)$$

holds with high probability.

Lemma 5.

With the same β and δ as given in Theorem 2,

$$\big\|U_\Omega^H\,\mathbf{y}_\Omega\big\|_2^2 \;\le\; \frac{\beta\, m\, n_2}{n_1^2\, n_3}\,\mu(S)\,\|\mathbf{y}\|_2^2 \qquad(18)$$

holds with high probability.

Lemma 6 ([15]).

With the same γ and δ as given in Theorem 2,

$$\Big\|\big(U_\Omega^H U_\Omega\big)^{-1}\Big\| \;\le\; \frac{n_1 n_3}{(1-\gamma)\,m} \qquad(19)$$

holds with high probability, provided that γ < 1.

Proof of Theorem 2.

Consider ‖𝐲_Ω−P_Ω 𝐲_Ω‖²₂. In order to apply the three lemmas above, we split it into three terms and bound each with high probability.

Assume U_Ω^H U_Ω is invertible; then, according to [15], we have

$$\|\mathbf{y}_\Omega-P_\Omega \mathbf{y}_\Omega\|_2^2 \;=\; \|\mathbf{y}_\Omega\|_2^2 - \mathbf{y}_\Omega^H U_\Omega\big(U_\Omega^H U_\Omega\big)^{-1}U_\Omega^H \mathbf{y}_\Omega \;\ge\; \|\mathbf{y}_\Omega\|_2^2 - \Big\|\big(U_\Omega^H U_\Omega\big)^{-1}\Big\|\,\big\|U_\Omega^H \mathbf{y}_\Omega\big\|_2^2.$$

Combining Lemma 4, Lemma 5, and Lemma 6 via the union bound then yields (16) with high probability. ∎

III-D Main Results with DFT and DCT

When the transform is the DFT, M is the DFT matrix F_{n₃}. For a tube 𝐚, we have ‖L(𝐚)‖²_F = n₃‖𝐚‖²_F, i.e., c² = n₃. Moreover, cos(θ) = 1, which means unfold(Y) ∈ S_v^⊥ for all Y ∈ S^⊥. Furthermore, the coherences of S and S_v are equivalent, that is, μ(S) = μ(S_v). Thus, for the DFT, we have Corollary 1 for tubal-sampling and Corollary 2 for elementwise-sampling.

Corollary 1.

Let δ > 0. Then, with high probability,

 m(1−α)−n3n2μ(S)β(1−γ)n1∥T−P∙T∥2F≤∥TΩ−PΩ∙TΩ∥2F≤(1+α)mn