I Introduction
In signal processing and big data analysis, testing whether a signal lies in a subspace is an important problem that arises in a variety of applications, such as learning the column subspace of a matrix from incomplete data [1], subspace clustering or identification with missing data [2, 3], shape detection and reconstruction from raw light detection and ranging (LiDAR) data [4], image subspace representation [5], low-complexity MIMO detection [6, 7], and tensor subspace modeling under adaptive sampling [8, 9].
The problem of matched subspace detection is challenging due to three factors: 1) in scenarios such as Internet of Things (IoT) systems [14], we can often obtain data only at a high loss rate; 2) an observed signal contains measurement noise; 3) existing representations of signals have limitations. Missing data increases the difficulty of tensor matched subspace detection, and the presence of measurement noise may lead to erroneous decisions. Moreover, existing mathematical models of signals, such as vectors, may lose information, since the original structure of the signal is destroyed when the signal is forced into such a model. The works on matched subspace detection in [10, 11, 12, 13, 15, 16, 17, 18] model a signal as a vector. However, with the development of big data, signals can often be naturally represented as multidimensional data arrays, i.e., tensors. When a multidimensional data array is flattened into a vector, some information, such as the structural relations between entries, is lost. Therefore, a new method for matched subspace detection that operates directly on multidimensional data arrays, or tensors, is needed.
Tensors, as multidimensional modeling tools, have wide applications in signal processing [19, 20, 21], and representing a signal as a tensor preserves more information of the original signal than representing it as a vector, since a second-order or higher-order tensor has more dimensions with which to describe the signal. Based on the recently proposed transform-based tensor model [22, 23], a third-order tensor can be viewed as a matrix with tubes as its entries, and can be treated as a linear operator over the set of second-order tensors. Moreover, the transform-based tensor model admits analogous definitions of a tensor subspace and the corresponding orthogonal projection. Hence, the methods in [15, 16, 17, 18] can be extended to tensor subspaces.
In this paper, we propose a method for matched subspace detection based on the transform-based tensor model, called tensor matched subspace detection, which utilizes more information of the signal than conventional methods. First, we construct estimators under tubal-sampling and elementwise-sampling respectively, aiming to estimate the energy of a signal outside a subspace (also called the residual energy in statistics) from the sample. When a signal lies in the subspace, its energy outside the subspace is zero, and then the energy estimated from the sample is also zero, but not vice versa. Secondly, bounds on our estimators are given, which show that our estimators work efficiently once the sample size is slightly larger than $r$ for tubal-sampling and $rn_3$ for elementwise-sampling, where $r$ is the dimension of the subspace. Then, the problem of tensor matched subspace detection is modeled as a binary hypothesis test with the hypotheses that the signal lies in the subspace versus that the signal does not lie in the subspace. With the residual energy as the test statistic, the detector is given directly in the noiseless case, while for the noisy case a constant false alarm rate (CFAR) test is constructed. Finally, based on the discrete Fourier transform (DFT) and the discrete cosine transform (DCT), our estimators and detection methods are evaluated by corresponding experiments.
The remainder of this paper is organized as follows. In Section II, the transform-based tensor model and the problem statement are given. Then, we construct the estimators and present two theorems that give quantitative bounds on our estimators in Section III. The detectors with and without noise are given in Section IV. Section V presents numerical experiments. Finally, Section VI concludes the paper.
II Notations and Problem Statement
We first introduce the notations and the transformbased tensor model. Then, we formulate the problem of tensor matched subspace detection.
II-A Notations
Scalars are denoted by lowercase letters, e.g., $a$; vectors are denoted by boldface lowercase letters, e.g., $\mathbf{a}$; matrices are denoted by boldface capital letters, e.g., $\mathbf{A}$; and third-order tensors are denoted by calligraphic letters, e.g., $\mathcal{A}$. The transpose of a vector or a matrix is denoted with a superscript $T$, and the transpose of a third-order tensor is denoted with a superscript $\top$. We use $[n]$ to denote the index set $\{1, 2, \ldots, n\}$, $\mathbb{K}_{n_3}$ to denote the set of tubes of length $n_3$, and $\mathbb{K}_{n_3}^{n_1 \times n_2}$ to denote the set of $n_1 \times n_2 \times n_3$ tensors.
The $i$-th element of a vector $\mathbf{a}$ is $a_i$, the $(i,j)$-th element of a matrix $\mathbf{A}$ is $A_{ij}$ or $\mathbf{A}(i,j)$, and similarly, for a third-order tensor $\mathcal{A}$, the $(i,j,k)$-th element is $A_{ijk}$ or $\mathcal{A}(i,j,k)$. For a third-order tensor $\mathcal{A}$, a tube of $\mathcal{A}$ is defined by fixing all indices but one, while a slice of $\mathcal{A}$ is defined by fixing all but two indices. We use $\mathcal{A}(:,j,k)$, $\mathcal{A}(i,:,k)$, $\mathcal{A}(i,j,:)$ to denote the mode-1, mode-2, mode-3 tubes of $\mathcal{A}$, and $\mathcal{A}(:,:,k)$, $\mathcal{A}(:,j,:)$, $\mathcal{A}(i,:,:)$ to denote the frontal, lateral, and horizontal slices of $\mathcal{A}$. The lateral and horizontal slices are also called tensor columns and tensor rows. For ease of representation, we use $\mathcal{A}_{ij}$ to denote the tube $\mathcal{A}(i,j,:)$, and $\vec{\mathcal{A}}_j$ to denote the tensor column $\mathcal{A}(:,j,:)$.
For a vector $\mathbf{a}$, the $\ell_2$ norm is $\|\mathbf{a}\|_2 = \sqrt{\sum_i a_i^2}$, while for a matrix $\mathbf{A}$, the Frobenius norm is $\|\mathbf{A}\|_F = \sqrt{\sum_{i,j} A_{ij}^2}$, and the spectral norm $\|\mathbf{A}\|$ is the largest singular value of $\mathbf{A}$. For a tensor $\mathcal{A}$, the Frobenius norm is $\|\mathcal{A}\|_F = \sqrt{\sum_{i,j,k} A_{ijk}^2}$. For a tensor column $\vec{\mathcal{A}}$, we define the $\ell_2$ norm as $\|\vec{\mathcal{A}}\|_2 = \sqrt{\sum_{i,k} A_{i1k}^2}$, and the $\ell_\infty$ norm as $\|\vec{\mathcal{A}}\|_\infty = \max_{i,k} |A_{i1k}|$. For a tube $\mathbf{a} \in \mathbb{K}_{n_3}$ and a given linear transform $L$,
$$L(\mathbf{a}) = \mathbf{M}\,\mathrm{vec}(\mathbf{a}), \qquad (1)$$
where $\mathrm{vec}(\mathbf{a})$ is the vector representation of the tube, and $\mathbf{M}$ is the matrix determined by the transform $L$. For a tube $\mathbf{a}$, we have $\|L(\mathbf{a})\|_2^2 = c\,\|\mathbf{a}\|_2^2$, where $c > 0$ is a constant.
II-B Transform-based Tensor Model
In order to introduce the definition of the $\star_L$ product, we first introduce tube multiplication. Given an invertible discrete transform $L$, the elementwise multiplication $\odot$, and tubes $\mathbf{a}, \mathbf{b} \in \mathbb{K}_{n_3}$, the tube multiplication of $\mathbf{a}$ and $\mathbf{b}$ is defined as
$$\mathbf{a} \bullet \mathbf{b} = L^{-1}\big(L(\mathbf{a}) \odot L(\mathbf{b})\big),$$
where $L^{-1}$ is the inverse of $L$ [22].
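With the DFT as the transform $L$, tube multiplication reduces to circular convolution, which the following minimal numpy sketch makes concrete (the function name and the choice of the DFT are ours, for illustration only):

```python
import numpy as np

def tube_mult(a, b):
    """Tube multiplication: transform each tube, multiply elementwise in
    the transform domain, then apply the inverse transform. With the DFT
    as the transform L, this equals circular convolution."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.0])    # a unit delay under circular convolution
c = tube_mult(a, b).real         # imaginary parts vanish for real tubes
# c is the circular shift of a: [3., 1., 2.]
```

For the DCT or any other invertible transform, only the forward/inverse transform pair changes.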
Definition 1 (Tensor product: $\star_L$ product [22]).
The $\star_L$ product of $\mathcal{A} \in \mathbb{K}_{n_3}^{n_1 \times n_2}$ and $\mathcal{B} \in \mathbb{K}_{n_3}^{n_2 \times n_4}$ is a tensor $\mathcal{C} = \mathcal{A} \star_L \mathcal{B}$ of size $n_1 \times n_4 \times n_3$, with $\mathcal{C}(i,j,:) = \sum_{k=1}^{n_2} \mathcal{A}(i,k,:) \bullet \mathcal{B}(k,j,:)$, for $i \in [n_1]$ and $j \in [n_4]$.
Transform domain representation [22]: For an invertible discrete transform $L$, let $\widehat{\mathcal{A}} = L(\mathcal{A})$ denote the tensor obtained by taking the transform of all the tubes along the third dimension of $\mathcal{A}$, i.e., $\widehat{\mathcal{A}}(i,j,:) = L(\mathcal{A}(i,j,:))$ for $i \in [n_1]$ and $j \in [n_2]$. Furthermore, we use $\widehat{\mathbf{A}}$ to denote the block-diagonal matrix of the tensor in the transform domain, i.e., $\widehat{\mathbf{A}} = \mathrm{diag}\big(\widehat{\mathcal{A}}^{(1)}, \widehat{\mathcal{A}}^{(2)}, \ldots, \widehat{\mathcal{A}}^{(n_3)}\big)$, where $\widehat{\mathcal{A}}^{(k)}$ is the $k$-th frontal slice of $\widehat{\mathcal{A}}$.
Under the transform-based tensor model, an $n_1 \times n_2 \times n_3$ tensor can be viewed as an $n_1 \times n_2$ matrix of tubes lying in the third dimension; therefore the $\star_L$ product of two tensors can be regarded as the multiplication of two matrices, except that the multiplication of two numbers is replaced by the multiplication of two tubes. Owing to the definition of the $\star_L$ product based on the discrete transform, we have the following remark, which is used throughout the paper.
Remark 1 ([22]).
The $\star_L$ product $\mathcal{C} = \mathcal{A} \star_L \mathcal{B}$ can be calculated in the transform domain as $\widehat{\mathbf{C}} = \widehat{\mathbf{A}}\,\widehat{\mathbf{B}}$.
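Remark 1 translates directly into an algorithm: transform along the tubes, take ordinary matrix products slice by slice, and invert the transform. A small numpy sketch, with the DFT as the (illustrative) transform and a function name of our choosing:

```python
import numpy as np

def l_product(A, B):
    """Tensor product of A (n1 x n2 x n3) and B (n2 x n4 x n3), computed
    in the transform domain per Remark 1; the DFT along the third mode
    is used as the transform L for illustration."""
    Ah = np.fft.fft(A, axis=2)                 # transform every tube
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)     # slice-wise matrix product
    return np.fft.ifft(Ch, axis=2).real        # real for real inputs

# Multiplying by the identity tensor (an identity matrix in the first
# frontal slice, zeros elsewhere) returns A unchanged under the DFT.
A = np.random.default_rng(0).standard_normal((2, 3, 4))
I = np.zeros((3, 3, 4))
I[:, :, 0] = np.eye(3)
```
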
Motivated by the definition of the t-product in [24] and the cosine transform based product in [23], we introduce the $\star_L$ product based on block matrix tools. For a tensor $\mathcal{A}$, we use $\mathrm{mat}(\mathcal{A})$ to denote a special structured block matrix determined by the frontal slices of $\mathcal{A}$, such that the product $\mathcal{C} = \mathcal{A} \star_L \mathcal{B}$, where $\mathcal{A} \in \mathbb{K}_{n_3}^{n_1 \times n_2}$ and $\mathcal{B} \in \mathbb{K}_{n_3}^{n_2 \times n_4}$, can be represented as $\mathrm{unfold}(\mathcal{C}) = \mathrm{mat}(\mathcal{A})\,\mathrm{unfold}(\mathcal{B})$, where $\mathrm{unfold}(\cdot)$ stacks the frontal slices of a tensor vertically.
The form of the block matrix varies with the discrete transform [24, 25, 26]. When the transform is the discrete Fourier transform, $\mathrm{mat}(\mathcal{A}) = \mathrm{bcirc}(\mathcal{A})$ [24], where $\mathrm{bcirc}(\cdot)$ is the operation that converts a third-order tensor into a block circulant matrix, i.e.,
$$\mathrm{bcirc}(\mathcal{A}) = \begin{bmatrix} \mathcal{A}^{(1)} & \mathcal{A}^{(n_3)} & \cdots & \mathcal{A}^{(2)} \\ \mathcal{A}^{(2)} & \mathcal{A}^{(1)} & \cdots & \mathcal{A}^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{A}^{(n_3)} & \mathcal{A}^{(n_3-1)} & \cdots & \mathcal{A}^{(1)} \end{bmatrix}. \qquad (2)$$
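The equivalence between the transform-domain product and the block circulant matrix can be checked numerically. The sketch below is ours; the helper names `bcirc` and `unfold` mirror the operations described above:

```python
import numpy as np

def bcirc(A):
    """Block circulant matrix built from the frontal slices of A."""
    n1, n2, n3 = A.shape
    M = np.zeros((n1 * n3, n2 * n3))
    for i in range(n3):
        for j in range(n3):
            M[i*n1:(i+1)*n1, j*n2:(j+1)*n2] = A[:, :, (i - j) % n3]
    return M

def unfold(A):
    """Stack the frontal slices of A vertically into an (n1*n3) x n2 matrix."""
    return A.transpose(2, 0, 1).reshape(-1, A.shape[1])

# The product computed in the DFT domain matches bcirc(A) @ unfold(B).
rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3, 4)), rng.standard_normal((3, 2, 4))
Ch = np.einsum('ijk,jlk->ilk', np.fft.fft(A, axis=2), np.fft.fft(B, axis=2))
C = np.fft.ifft(Ch, axis=2).real
```
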
When the transform is the discrete cosine transform, $\mathrm{mat}(\mathcal{A})$ has a different structure, in which $\otimes$ denotes the Kronecker product [19, 23]: it is built from the circular upshift matrix
(3)
and the following block Toeplitz-plus-Hankel matrix [25, 26, 23]
(4)
The transpose of $\mathcal{A}$ can be obtained by taking the inverse transform of the tensor whose $k$-th frontal slice is $(\widehat{\mathcal{A}}^{(k)})^T$, $k \in [n_3]$, and the multiplication reversal property of the transpose holds [22, 23], i.e., $(\mathcal{A} \star_L \mathcal{B})^\top = \mathcal{B}^\top \star_L \mathcal{A}^\top$.
Definition 3 (Diagonal tensor [8]).
A tensor is called a diagonal tensor if each frontal slice of the tensor is a diagonal matrix.
Let $\mathbf{e} = L^{-1}(\mathbf{1})$, where $\mathbf{1}$ denotes a tube of length $n_3$ with all entries equal to $1$; then $\mathbf{e}$ is the multiplicative unity for the tube multiplication [22]. The multiplicative unity $\mathbf{e}$ plays a role in tensor space similar to that of $1$ in vector space.
Definition 4 (Identity tensor [22]).
The identity tensor $\mathcal{I}$ is an $n \times n \times n_3$ diagonal square tensor with $\mathbf{e}$'s on the main diagonal and zeros elsewhere, i.e., $\mathcal{I}(i,i,:) = \mathbf{e}$ for $i \in [n]$, while all other tubes are zero tubes.
A square tensor $\mathcal{A}$ is invertible if there exists a tensor $\mathcal{B}$ such that $\mathcal{A} \star_L \mathcal{B} = \mathcal{B} \star_L \mathcal{A} = \mathcal{I}$ [22]. Moreover, $\mathcal{Q}$ is orthogonal if $\mathcal{Q}^\top \star_L \mathcal{Q} = \mathcal{Q} \star_L \mathcal{Q}^\top = \mathcal{I}$ [22].
Definition 5 (SVD [22]).
The SVD of $\mathcal{A} \in \mathbb{K}_{n_3}^{n_1 \times n_2}$ is given by $\mathcal{A} = \mathcal{U} \star_L \mathcal{S} \star_L \mathcal{V}^\top$, where $\mathcal{U}$ and $\mathcal{V}$ are orthogonal tensors of size $n_1 \times n_1 \times n_3$ and $n_2 \times n_2 \times n_3$ respectively, and $\mathcal{S}$ is a diagonal tensor of size $n_1 \times n_2 \times n_3$.
The SVD of $\mathcal{A}$ can be derived from individual matrix SVDs in the transform domain, that is, $\widehat{\mathcal{A}}^{(k)} = \widehat{\mathcal{U}}^{(k)} \widehat{\mathcal{S}}^{(k)} (\widehat{\mathcal{V}}^{(k)})^T$ for $k \in [n_3]$. The number of nonzero tubes of $\mathcal{S}$ is called the rank of $\mathcal{A}$.
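This slice-wise construction can be sketched in numpy (reduced factors, DFT as the illustrative transform; the helper name `t_svd` is ours):

```python
import numpy as np

def t_svd(A):
    """Reduced SVD of the tensor A (n1 x n2 x n3): take matrix SVDs of the
    frontal slices in the transform domain, then invert the transform.
    The DFT is used as the transform here, purely for illustration."""
    n1, n2, n3 = A.shape
    r = min(n1, n2)
    Ah = np.fft.fft(A, axis=2)
    Uh = np.empty((n1, r, n3), dtype=complex)
    Sh = np.zeros((r, r, n3), dtype=complex)
    Vh = np.empty((n2, r, n3), dtype=complex)
    for k in range(n3):
        u, s, vt = np.linalg.svd(Ah[:, :, k], full_matrices=False)
        Uh[:, :, k], Sh[:, :, k], Vh[:, :, k] = u, np.diag(s), vt.conj().T
    # Every frontal slice of Sh is diagonal, so S is a diagonal tensor.
    U = np.fft.ifft(Uh, axis=2)
    S = np.fft.ifft(Sh, axis=2)
    V = np.fft.ifft(Vh, axis=2)
    return U, S, V

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3, 5))
U, S, V = t_svd(A)
# Reconstruction check in the transform domain: slice-wise U_k S_k V_k^H.
Gu, Gs, Gv = (np.fft.fft(X, axis=2) for X in (U, S, V))
recon = np.fft.ifft(np.einsum('irk,rsk,jsk->ijk', Gu, Gs, Gv.conj()),
                    axis=2).real
```
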
Definition 6 (Tensor-column subspace [22]).
Let $\mathcal{U}$ be an $n_1 \times r \times n_3$ tensor with rank $r$; then the $r$-dimensional tensor-column subspace spanned by the columns of $\mathcal{U}$ is defined as
$$\mathcal{S} = \Big\{ \sum_{j=1}^{r} \vec{\mathcal{U}}_j \bullet \mathbf{c}_j \Big\},$$
where $\mathbf{c}_j$, $j \in [r]$, are arbitrary tubes of length $n_3$.
Remark 2.
Let $\mathcal{S}$ be spanned by the columns of $\mathcal{U}$; then $\mathcal{P}_{\mathcal{S}} = \mathcal{U} \star_L (\mathcal{U}^\top \star_L \mathcal{U})^{-1} \star_L \mathcal{U}^\top$ is an orthogonal projection onto $\mathcal{S}$ if $\mathcal{U}^\top \star_L \mathcal{U}$ is invertible.
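Remark 2's projection can be formed slice by slice in the transform domain, where the tensor inverse reduces to ordinary matrix inverses. A sketch under the DFT assumption (function name ours):

```python
import numpy as np

def tensor_column_projection(U):
    """Orthogonal projection onto the tensor-column subspace spanned by
    U (n1 x r x n3), formed slice-wise in the transform domain as
    U_k (U_k^H U_k)^{-1} U_k^H; the DFT is the illustrative transform."""
    n1, r, n3 = U.shape
    Uh = np.fft.fft(U, axis=2)
    Ph = np.empty((n1, n1, n3), dtype=complex)
    for k in range(n3):
        Uk = Uh[:, :, k]
        G = Uk.conj().T @ Uk            # assumed invertible, per Remark 2
        Ph[:, :, k] = Uk @ np.linalg.solve(G, Uk.conj().T)
    return np.fft.ifft(Ph, axis=2).real

rng = np.random.default_rng(2)
U = rng.standard_normal((5, 2, 3))
P = tensor_column_projection(U)
```

A quick sanity check for any projection is idempotence: applying it twice changes nothing.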
Definition 7.
Let $\mathcal{P}_{\mathcal{S}}$ be the orthogonal projection onto an $r$-dimensional subspace $\mathcal{S}$; then the coherence of $\mathcal{S}$ is defined as
$$\mu(\mathcal{S}) = \frac{n_1}{r} \max_{i} \big\| \mathcal{P}_{\mathcal{S}} \star_L \vec{\mathcal{E}}_i \big\|_F^2,$$
where $\vec{\mathcal{E}}_i$ is the tensor basis with $\mathcal{E}_i(i,1,:) = \mathbf{e}$ and zeros elsewhere.
Assume the subspace $\mathcal{S}$ is spanned by the columns of $\mathcal{U}$. Then, with low coherence $\mu(\mathcal{S})$, each tube of $\mathcal{U}$ carries approximately the same amount of information [21].
II-C Problem Formulation
Let $\mathcal{S}$ be a given $r$-dimensional subspace spanned by the columns of a third-order tensor $\mathcal{U}$, and let $\mathcal{T}$ denote a signal whose entries are sampled with replacement. The problem of tensor matched subspace detection can be modeled as a binary hypothesis test with hypotheses:
$$\mathcal{H}_0: \mathcal{T} \in \mathcal{S} \quad \text{versus} \quad \mathcal{H}_1: \mathcal{T} \notin \mathcal{S}. \qquad (5)$$
Here, we consider two types of sampling: tubal-sampling and elementwise-sampling, as shown in Fig. 1. We use $\Omega$ to denote the set of sample indices, $|\Omega|$ to denote the cardinality of $\Omega$, and $\mathcal{T}_\Omega$ to denote the corresponding sampled signal of $\mathcal{T}$. The definitions of tubal-sampling and elementwise-sampling are:
Tubal-sampling: $\Omega \subset [n_1]$, and $\mathcal{T}_\Omega$ is a tensor of size $|\Omega| \times 1 \times n_3$ whose tubes are the mode-3 tubes of $\mathcal{T}$ indexed by $\Omega$.
Elementwise-sampling: $\Omega \subset [n_1] \times [n_3]$, and $\mathcal{T}_\Omega$ is a tensor of size $n_1 \times 1 \times n_3$ with entries $\mathcal{T}_\Omega(i,1,k) = \mathcal{T}(i,1,k)$ if $(i,k) \in \Omega$ and zero if $(i,k) \notin \Omega$.
Let $\mathcal{P}_{\mathcal{S}}$ be the orthogonal projection onto $\mathcal{S}$, and let $\mathcal{P}_{\mathcal{S}^\perp} = \mathcal{I} - \mathcal{P}_{\mathcal{S}}$. We use $t(\mathcal{T})$ to denote the energy of a signal outside a given subspace. When the entries of $\mathcal{T}$ are fully observed, the test statistic can be constructed as
$$t(\mathcal{T}) = \big\| \mathcal{P}_{\mathcal{S}^\perp} \star_L \mathcal{T} \big\|_F^2. \qquad (6)$$
In the noiseless case, $t(\mathcal{T}) = 0$ when $\mathcal{T} \in \mathcal{S}$, and $t(\mathcal{T}) > 0$ when $\mathcal{T} \notin \mathcal{S}$.
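The test statistic can be computed slice-wise in the transform domain, where it equals a sum of per-slice least-squares residuals. A numpy sketch assuming the DFT as the transform (function and variable names ours):

```python
import numpy as np

def residual_energy(T, U):
    """t(T): energy of the signal T (n1 x 1 x n3) outside the tensor-column
    subspace spanned by U (n1 x r x n3), computed slice-wise in the DFT
    domain; the 1/n3 factor is Parseval's scaling for the unnormalized DFT."""
    Th, Uh = np.fft.fft(T, axis=2), np.fft.fft(U, axis=2)
    n3 = T.shape[2]
    energy = 0.0
    for k in range(n3):
        tk, Uk = Th[:, 0, k], Uh[:, :, k]
        coeff = np.linalg.lstsq(Uk, tk, rcond=None)[0]
        energy += np.linalg.norm(tk - Uk @ coeff) ** 2
    return energy / n3

# A signal synthesized inside the subspace has zero residual energy.
rng = np.random.default_rng(3)
U = rng.standard_normal((6, 2, 4))
C = rng.standard_normal((2, 1, 4))
T_in = np.fft.ifft(np.einsum('irk,rjk->ijk', np.fft.fft(U, axis=2),
                             np.fft.fft(C, axis=2)), axis=2).real
```
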
In practice, for high-dimensional applications, it is prohibitive or impossible to measure $\mathcal{T}$ completely, and we can only obtain a sampled signal $\mathcal{T}_\Omega$, so we cannot calculate the energy of $\mathcal{T}$ outside the subspace directly. Therefore, we should construct a new estimator that estimates the energy of $\mathcal{T}$ outside the subspace based on $\mathcal{T}_\Omega$ and the corresponding projection. A good estimator should satisfy the following conditions (noiseless case):

When $\mathcal{T} \in \mathcal{S}$, $t(\mathcal{T}) = 0$, and the estimate is zero for an arbitrary sample size.

When $\mathcal{T} \notin \mathcal{S}$, $t(\mathcal{T}) > 0$, and, as long as the sample size is greater than a constant but much smaller than the size of $\mathcal{T}$, the estimate is positive with high probability.
III Energy Estimation and Main Theorems
In this section, the estimators are constructed for tubal-sampling and elementwise-sampling respectively. Then, two theorems are given that bound the estimators, showing that our estimators work effectively when the sample size is slightly larger than $r$ for tubal-sampling and $rn_3$ for elementwise-sampling. Without loss of generality, we assume that $\mathcal{U}$, whose columns span the subspace $\mathcal{S}$, is orthogonal, which means the dimension of $\mathcal{S}$ is $r$. For convenience in the following presentation, we set $m$ to be the sample size.
III-A Energy Estimation
For tubal-sampling, the estimator can be constructed as follows. Note that $\mathcal{U}$ is an $n_1 \times r \times n_3$ tensor whose columns span the $r$-dimensional subspace $\mathcal{S}$. We let $\mathcal{U}_\Omega$ be the tensor organized by the horizontal slices of $\mathcal{U}$ indicated by $\Omega$. Then we define the projection $\mathcal{P}_{\mathcal{S}_\Omega} = \mathcal{U}_\Omega \star_L (\mathcal{U}_\Omega^\top \star_L \mathcal{U}_\Omega)^{-1} \star_L \mathcal{U}_\Omega^\top$ and the estimator $t(\mathcal{T}_\Omega)$ as the energy of $\mathcal{T}_\Omega$ outside the subsampled subspace. It follows immediately that $t(\mathcal{T}_\Omega) = 0$ if $\mathcal{T} \in \mathcal{S}$. However, it is possible that $t(\mathcal{T}_\Omega) = 0$ even if $\mathcal{T} \notin \mathcal{S}$ when the sample size is too small. One of our main theorems shows that if $m$ is just slightly greater than $r$, then with high probability $t(\mathcal{T}_\Omega)$ closely tracks $t(\mathcal{T})$.
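The tubal-sampling estimator amounts to the same residual computation restricted to the retained horizontal slices. The sketch below (DFT as the illustrative transform, names ours) also exhibits the failure mode just mentioned: with too few sampled slices, the estimate is zero even for a signal outside the subspace.

```python
import numpy as np

def subsampled_residual(T, U, omega):
    """Tubal-sampling estimator sketch: keep only the horizontal slices of
    the signal T (n1 x 1 x n3) and of the basis U (n1 x r x n3) indexed by
    omega, then measure the energy of the subsampled signal outside the
    subsampled subspace, slice-wise in the DFT domain."""
    Th = np.fft.fft(T[omega, :, :], axis=2)
    Uh = np.fft.fft(U[omega, :, :], axis=2)
    n3 = T.shape[2]
    energy = 0.0
    for k in range(n3):
        tk, Uk = Th[:, 0, k], Uh[:, :, k]
        coeff = np.linalg.lstsq(Uk, tk, rcond=None)[0]
        energy += np.linalg.norm(tk - Uk @ coeff) ** 2
    return energy / n3

rng = np.random.default_rng(4)
U = rng.standard_normal((8, 2, 3))
T_out = rng.standard_normal((8, 1, 3))  # generic signal, not in the subspace
```

With `omega = [0, 1]` (only $r = 2$ slices) the least-squares fit is exact, so the estimate is zero even though `T_out` is generic; with six sampled slices the estimate becomes strictly positive.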
For elementwise-sampling, the subspace $\mathcal{S}$ should be mapped into a vector subspace $L$, so that the vectorization of every element of $\mathcal{S}$ lies in $L$. Let the vector subspace $L$ be spanned by the columns of a matrix; then $\mathrm{vec}(\mathcal{X}) \in L$ for all $\mathcal{X} \in \mathcal{S}$. However, when $\mathcal{T} \notin \mathcal{S}$, part of $\mathrm{vec}(\mathcal{T})$ lies in the orthogonal complement $L^\perp$ of $L$. We write $\mathrm{vec}(\mathcal{T}) = \mathbf{x} + \mathbf{y}$, where $\mathbf{x} \in L$ and $\mathbf{y} \in L^\perp$. Then we use $\theta$ to denote the principal angle between the subspaces, which is defined as follows
(7)
where $\mathcal{P}$ denotes the corresponding orthogonal projection, $\langle \cdot, \cdot \rangle$ denotes the inner product of two vectors, and $|\cdot|$ denotes the absolute value.
For elementwise-sampling, the estimator can be constructed as follows. As defined in Section II-C, the sampled signal $\mathcal{T}_\Omega$ satisfies
(8)
Let the subsampled quantities be defined accordingly, with the projection satisfying
(9)
Then the estimate is zero if $\mathcal{T} \in \mathcal{S}$. However, it is possible that the estimate is zero even if $\mathcal{T} \notin \mathcal{S}$ when the sample size is too small. One of our main theorems shows that if $m$ is just slightly greater than $rn_3$, then with high probability the estimate is very close to the true residual energy.
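For elementwise-sampling the same idea applies after vectorization: keep the observed entries and fit against the subsampled rows of a basis of the vector subspace $L$. A self-contained numpy sketch; the basis `B` and all names here are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def elementwise_residual(T, B, omega):
    """Elementwise-sampling estimator sketch: vectorize the signal
    T (n1 x 1 x n3), keep only the observed entries omega, and measure the
    least-squares residual against the correspondingly subsampled rows of
    a matrix B whose columns span the vector subspace L."""
    t = T.reshape(-1)                   # vectorized signal, length n1*n3
    t_o, B_o = t[omega], B[omega, :]
    coeff = np.linalg.lstsq(B_o, t_o, rcond=None)[0]
    return np.linalg.norm(t_o - B_o @ coeff) ** 2

rng = np.random.default_rng(5)
n1, n3, d = 5, 3, 4                     # vectorized dimension n1*n3 = 15
B = rng.standard_normal((n1 * n3, d))   # columns span the vector subspace L
T_in = (B @ rng.standard_normal(d)).reshape(n1, 1, n3)
omega = [0, 2, 3, 5, 7, 8, 10, 12]      # 8 observed entries, more than d = 4
```
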
III-B Main Theorem with Tubal-sampling
Rewrite $\mathcal{T} = \mathcal{X} + \mathcal{Y}$, where $\mathcal{X} \in \mathcal{S}$ and $\mathcal{Y} \in \mathcal{S}^\perp$. Hence $\mathcal{T}_\Omega = \mathcal{X}_\Omega + \mathcal{Y}_\Omega$ under tubal-sampling, and we have the following theorem.
Theorem 1.
Let and . Then with probability at least ,
(10) 
holds, where , , and .
In order to prove Theorem 1, the following three lemmas are needed; their proofs are provided in the Appendix.
Lemma 1.
Lemma 2.
Lemma 3.
III-C Main Theorem with Elementwise-sampling
As described in Section III-A, the subspace $\mathcal{S}$ is mapped into the vector subspace $L$ for elementwise-sampling, so the coherence of $L$ is needed. The coherence of $L$ is defined as
$$\mu(L) = \frac{n_1 n_3}{\dim(L)} \max_{i} \big\| \mathcal{P}_{L}\, \mathbf{e}_i \big\|_2^2,$$
where $\mathbf{e}_i$ is a standard basis vector of the ambient space and $\dim(L)$ is the dimension of $L$. Recall $\mathrm{vec}(\mathcal{T}) = \mathbf{x} + \mathbf{y}$, where $\mathbf{x} \in L$ and $\mathbf{y} \in L^\perp$, and let the subsampled vectors under elementwise-sampling be defined accordingly. Then we have the following theorem.
Theorem 2.
Let , , then with probability at least
(16) 
holds, where , , .
We need the following three lemmas to prove Theorem 2; the proof of Lemma 5 is provided in the Appendix.
Lemma 5.
Lemma 6 ([15]).
III-D Main Results with DFT and DCT
When the transform is the DFT, $\mathbf{M}$ is the DFT matrix. For a tube $\mathbf{a}$, we then have $\|L(\mathbf{a})\|_2^2 = n_3 \|\mathbf{a}\|_2^2$, that is, $c = n_3$. Furthermore, the coherences of $\mathcal{S}$ and $L$ are equivalent, that is, $\mu(\mathcal{S}) = \mu(L)$. Thus, for the transform with DFT, we have Corollary 1 for tubal-sampling and Corollary 2 for elementwise-sampling.
Corollary 1.
Let and . Then with probability at least ,