
# Multidimensional Data Tensor Sensing for RF Tomographic Imaging

Radio-frequency (RF) tomographic imaging is a promising technique for inferring multi-dimensional physical space by processing RF signals that traverse a region of interest. However, conventional RF tomography schemes are generally based on vector compressed sensing, which ignores the geometric structures of the target spaces and leads to low recovery precision. The recently proposed transform-based tensor model is more appropriate for sensory data processing, as it helps exploit the geometric structures of the three-dimensional target and improves the recovery precision. In this paper, we propose a novel tensor sensing approach that achieves highly accurate estimation for real-world three-dimensional spaces. First, we use the transform-based tensor model to formulate a tensor sensing problem and propose a fast alternating minimization algorithm called Alt-Min. Second, we derive an optimized implementation that reduces memory and computation requirements. Finally, we present an evaluation of our Alt-Min approach on IKEA 3D data and demonstrate significant improvements in recovery error and convergence speed compared to prior tensor-based compressed sensing.

12/13/2017


## 1 Introduction

RF tomographic imaging [1, 2] is an inference technique that can be used to remotely learn the locations and shapes of objects, as depicted in Fig. 1. It can be applied in smart buildings and spaces [3], as well as in emergencies, rescue operations, and security breaches [4], since the objects being imaged need not carry an electronic device such as a cellphone.

There has been considerable work on reconstruction methods for RF tomographic imaging, which can be categorized into vector-based and tensor-based methods. In [1, 5], vector-based compressed sensing approaches for RF tomographic imaging are proposed. These vector-based methods are designed to infer two-dimensional spaces, and they cannot estimate three-dimensional spaces since spatial structures of the data are inherently ignored. Reference [4] proposes a tensor-based compressed sensing algorithm using the tensor nuclear norm (TNN) [6], which extends the RF tomographic imaging problem to the three-dimensional case. However, it requires computing the tensor singular value decomposition (t-SVD), which leads to high computational complexity, especially for large-scale tensors. Moreover, the recovery error of this approach is relatively high, and the data size that can be handled is too small for realistic scenarios. In this paper, we aim to exploit the spatial structures of three-dimensional spaces for more efficient and accurate inference.

In order to represent three-dimensional spatial structures effectively, we adopt the recently proposed transform-based tensor model [7], which has the following advantages: (i) compared with other tensor models, only real-valued fast transforms are involved, so it is appropriate for received signal strength (RSS) data; (ii) unlike other existing tensor models, it allows for diverse sampling strategies. In addition, we decompose the loss field tensor into the product of two small tensors, so we only need to iteratively update two small tensors, which is much more efficient than prior tensor-based compressed sensing.

In this paper, we propose a novel RF tomographic imaging scheme using tensor sensing to estimate the three-dimensional spaces. We formulate the RF tomographic imaging as a tensor sensing problem in a transform domain, and propose a novel Alternating-Minimization (Alt-Min) algorithm, whose implementation is optimized for memory consumption and computation speed. We apply our algorithm to the IKEA 3D datasets [8], and demonstrate significantly lower recovery error and required number of iterations, compared with the approach in [4].

The remainder of the paper is organized as follows: the system model and problem formulation are given in Section 2. Section 3 presents the solution algorithm, its implementation, and algorithm optimization. Evaluation results are presented in Section 4. Finally, we conclude the paper in Section 5.

## 2 Problem Statement and Formulation

We first review the RF tomographic imaging problem [1, 3, 9], then model the shadowing loss data as a transform-based tensor, and formulate the RF tomographic imaging task as a tensor sensing problem.

Notation: We use lowercase boldface letters to denote vectors, uppercase boldface letters to denote matrices, and calligraphic letters to denote tensors. Let $[n]$ denote the set $\{1, 2, \ldots, n\}$.

### 2.1 RF Tomographic Channel Model

We consider RF tomographic imaging with the space of interest represented as a 3D tensor in Cartesian coordinates. A set of RF signal nodes is uniformly deployed around the sides of the “tensor”, forming a complete tomography network. Any pair of nodes can establish a unique link. The RF signal on a given link suffers from path loss, which consists of three parts: (i) shadowing loss due to obstructions; (ii) distance-dependent large-scale path loss; (iii) non-shadowing loss due to multipath [1]. We use $\mathcal{S}$ to represent the node set, and assume that node $i \in \mathcal{S}$ is a transmitter and node $j \in \mathcal{S}$ is a receiver. Let $P_{ij}$ be the received power at node $j$; then we can obtain the following power equation:

$$P_{ij} = P_t - \bar{P}(d_{ij}) - Z_{ij}, \qquad Z_{ij} = Z^{(1)}_{ij} + Z^{(2)}_{ij}, \tag{1}$$

where $P_t$ is the transmitted power, $d_{ij}$ is the distance between nodes $i$ and $j$, and $\bar{P}(d_{ij})$ represents the corresponding large-scale path loss of the link. $Z_{ij}$ is the total fading loss, which involves the shadowing loss $Z^{(1)}_{ij}$ and the non-shadowing fading loss $Z^{(2)}_{ij}$.

Assume that the space of interest is divided into a set of three-dimensional voxels of the same size, and use $(n_1, n_2, n_3)$ to denote a voxel at the corresponding coordinate. When the RF signal propagates through the voxels, the total shadowing loss is the sum of the attenuation arising in each voxel it crosses. Moreover, we assume that the internal medium of each voxel is homogeneous, so the power attenuation coefficient $\mathcal{X}(n_1, n_2, n_3)$ is a constant value within each voxel [4]. Given these definitions, the shadowing loss of a unique link between nodes $i$ and $j$ can be formulated as:

$$Z^{(1)}_{ij} = \sum_{n_1, n_2, n_3} \mathcal{D}_{ij}(n_1, n_2, n_3)\, \mathcal{X}(n_1, n_2, n_3), \tag{2}$$

where $\mathcal{D}_{ij}(n_1, n_2, n_3)$ represents the overlapped distance between the link $(i, j)$ and voxel $(n_1, n_2, n_3)$. The non-shadowing fading loss $Z^{(2)}_{ij}$ is assumed to be a stationary Gaussian process with zero mean and fixed variance [4].

### 2.2 Transform-based Tensor Model

Let $\mathcal{X} \in \mathbb{R}^{N_1 \times N_2 \times N_3}$ denote a third-order tensor. $\mathcal{X}(:, n_2, n_3)$, $\mathcal{X}(n_1, :, n_3)$, and $\mathcal{X}(n_1, n_2, :)$ denote the mode-1, mode-2, and mode-3 tubes of $\mathcal{X}$, and $\mathcal{X}(:, :, n_3)$, $\mathcal{X}(:, n_2, :)$, $\mathcal{X}(n_1, :, :)$ denote the frontal, lateral, and horizontal slices. The Frobenius norm of $\mathcal{X}$ is defined as $\|\mathcal{X}\|_F = \sqrt{\sum_{n_1, n_2, n_3} \mathcal{X}(n_1, n_2, n_3)^2}$. The operator vec($\cdot$) transforms tensors and matrices into vectors. Matrix and tensor transposes are both denoted by the superscript $\top$.

###### Definition 1.

[7] Given an invertible discrete transform $L$, the elementwise multiplication $\circ$, and tubes $a, b \in \mathbb{R}^{N_3}$, the tubal-scalar multiplication $\bullet$ is defined as

$$a \bullet b = L^{-1}(L(a) \circ L(b)), \tag{3}$$

where $L^{-1}$ is the inverse of $L$.
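As a concrete illustration of Definition 1, here is a minimal numpy sketch with $L$ chosen as the FFT (an illustrative choice of transform; the function name `tubal_mul` and the tube length are ours). With this choice, the tubal-scalar multiplication reduces to circular convolution of the two tubes:

```python
import numpy as np

def tubal_mul(a, b):
    """Tubal-scalar multiplication a • b = L^{-1}(L(a) ∘ L(b)), with L = FFT.
    For real tubes the imaginary residue is numerical noise, so we drop it."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# With L = FFT, a • b is the circular convolution of a and b:
a = np.array([1.0, 2.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0])  # circular shift by one position
print(tubal_mul(a, b))              # → [0. 1. 2. 0.]
```

Multiplying by the shift tube `b` cyclically rotates `a`, which matches the circulant-algebra view of tubes introduced in Section 3.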

###### Definition 2.

[7] The $L$-product $\mathcal{C} = \mathcal{A} \bullet \mathcal{B}$ of $\mathcal{A} \in \mathbb{R}^{N_1 \times N_2 \times N_3}$ and $\mathcal{B} \in \mathbb{R}^{N_2 \times N_4 \times N_3}$ is a tensor of size $N_1 \times N_4 \times N_3$, with $\mathcal{C}(n_1, n_4, :) = \sum_{n_2=1}^{N_2} \mathcal{A}(n_1, n_2, :) \bullet \mathcal{B}(n_2, n_4, :)$ for $n_1 \in [N_1]$ and $n_4 \in [N_4]$.

###### Definition 3.

[7] The transpose $\mathcal{A}^\top \in \mathbb{R}^{N_2 \times N_1 \times N_3}$ of $\mathcal{A} \in \mathbb{R}^{N_1 \times N_2 \times N_3}$ satisfies $\mathcal{A}^\top(n_2, n_1, :) = \mathcal{A}(n_1, n_2, :)$, $n_1 \in [N_1]$, $n_2 \in [N_2]$.

###### Definition 4.

[7] The identity tensor $\mathcal{I} \in \mathbb{R}^{N \times N \times N_3}$ based on the $L$-product is defined as the tensor such that $L(\mathcal{I})(:, :, n_3)$, $n_3 \in [N_3]$, are identity matrices.

###### Definition 5.

[7] $\mathcal{Q}$ is $L$-orthogonal if $\mathcal{Q}^\top \bullet \mathcal{Q} = \mathcal{Q} \bullet \mathcal{Q}^\top = \mathcal{I}$.

###### Definition 6.

[7] The transform-domain singular value decomposition ($L$-SVD) of $\mathcal{X} \in \mathbb{R}^{N_1 \times N_2 \times N_3}$ is given by $\mathcal{X} = \mathcal{U} \bullet \Theta \bullet \mathcal{V}^\top$, where $\mathcal{U}$ and $\mathcal{V}$ are $L$-orthogonal tensors of size $N_1 \times N_1 \times N_3$ and $N_2 \times N_2 \times N_3$, respectively, and $\Theta$ is a diagonal tensor of size $N_1 \times N_2 \times N_3$. The entries of $\Theta$ are called the singular values of $\mathcal{X}$, and the number of non-zero ones is called the $L$-rank of $\mathcal{X}$.

In order to verify the validity of the model, we use an IKEA 3D chair model to generate a ground truth tensor (details are given in Section 4.1); the tensor is then transformed to its frequency domain by the fast Fourier transform (FFT) and the discrete cosine transform (DCT). Fig. 2 shows the empirical cumulative distribution function (CDF) of the tensor singular values. For $L$-SVD with FFT, 3 out of 60 singular values capture 95% of the energy, while the corresponding numbers of singular values are 38 and 54 for matrix SVD and traditional tensor CP decomposition, respectively. For $L$-SVD with DCT, 4 singular values capture 95% of the energy. Therefore, the low $L$-rank property of the transform-based tensor model makes it more appropriate for the RF tomographic imaging problem than the alternatives; a similar conclusion is given in the context of fingerprint localization [10].
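The energy-concentration check behind Fig. 2 can be reproduced in a few lines of numpy (a sketch under our own naming: transform along the third mode with the FFT, take the SVD of every frontal slice in the transform domain, and count how many singular values capture 95% of the energy):

```python
import numpy as np

def num_sv_for_energy(X, frac=0.95):
    """Count how many transform-domain singular values of X capture
    `frac` of the total energy (L chosen here as the FFT)."""
    Xt = np.fft.fft(X, axis=2)                 # transform along the third mode
    s = np.concatenate([np.linalg.svd(Xt[:, :, k], compute_uv=False)
                        for k in range(X.shape[2])])
    s = np.sort(s)[::-1]                       # largest singular values first
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, frac) + 1)

# A tensor whose tubes are constant has L-rank 1 under the FFT,
# so a single singular value captures essentially all the energy:
u, v = np.random.randn(20), np.random.randn(20)
X = np.tile(np.outer(u, v)[:, :, None], (1, 1, 20))
print(num_sv_for_energy(X))   # → 1
```

Running the same check with a ground-truth tensor in place of the synthetic `X` reproduces the kind of CDF comparison shown in Fig. 2.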

### 2.3 Problem Formulation

Let $K$ be the number of nodes in the network; then the total number of two-way links we can obtain is $K(K-1)/2$. Assume that we carry out $M$ measurements, each involving a unique link, and index the corresponding node pairs as $(i_m, j_m)$, $m \in [M]$. We obtain the following linear measurement:

$$y_m = Z_{i_m j_m} = \langle \mathcal{D}_{i_m j_m}, \mathcal{X} \rangle + Z^{(2)}_{i_m j_m}, \quad m \in [M], \tag{4}$$

where $y_m$ is referred to as the $m$-th measured total fading loss. We stack these RF signal measurements into a measurement vector $y \in \mathbb{R}^M$ [11]. We are thus given $M$ linear measurements of a loss field tensor $\mathcal{X}$ with $L$-rank $r$ and the sensing tensors $\mathcal{A}_m = \mathcal{D}_{i_m j_m}$, $m \in [M]$. With a linear map $H: \mathbb{R}^{N_1 \times N_2 \times N_3} \to \mathbb{R}^M$ [12], (4) is rewritten as follows:

$$y = H(\mathcal{X}) + w, \tag{5}$$

where $w \in \mathbb{R}^M$ denotes the noise vector.

The goal of RF tomographic imaging is to recover the loss field tensor $\mathcal{X}$ from the measurement vector $y$. We formulate this problem as a low $L$-rank tensor sensing problem:

$$\hat{\mathcal{X}} = \underset{\mathcal{X} \in \mathbb{R}^{N_1 \times N_2 \times N_3}}{\arg\min} \ \|y - H(\mathcal{X})\|_F^2, \quad \text{s.t. } \operatorname{rank}(\mathcal{X}) \le r. \tag{6}$$

## 3 Solution Algorithm Via Alt-Min

In this section, we present a novel iterative algorithm, called Alt-Min, and describe its implementation and optimization.

### 3.1 The Alt-Min Algorithm

To enable alternating minimization, we represent the loss field tensor as the $L$-product of two smaller tensors [13], i.e., $\mathcal{X} = \mathcal{U} \bullet \mathcal{V}$, with $\mathcal{U} \in \mathbb{R}^{N_1 \times r \times N_3}$ and $\mathcal{V} \in \mathbb{R}^{r \times N_2 \times N_3}$. Then we reformulate (6) as the following non-convex optimization problem:

$$\hat{\mathcal{X}} = \underset{\mathcal{U} \in \mathbb{R}^{N_1 \times r \times N_3},\, \mathcal{V} \in \mathbb{R}^{r \times N_2 \times N_3}}{\arg\min} \ \|y - H(\mathcal{U} \bullet \mathcal{V})\|_F^2. \tag{7}$$

The main idea of Alt-Min is to iteratively estimate the two tensors $\mathcal{U}$ and $\mathcal{V}$, each of $L$-rank $r$. The key step is least squares (LS) minimization (see Alg. 2). The detailed implementation of the LS minimization is given below.

We adopt circulant algebra [13, 14] to extend matrix algebra to third-order tensors. A tubal scalar $\underline{\alpha}$ represents a vector of length $N_3$, and the corresponding space is denoted $\mathbb{K}$. Let $\mathbb{K}^{N_1 \times N_2}$ denote the space of tubal matrices, where each element is a tubal scalar in $\mathbb{K}$. Let $\underline{\alpha} \in \mathbb{K}$ be a tubal scalar, and $\underline{A} \in \mathbb{K}^{N_1 \times N_2}$ be a tubal matrix. We use the operator circ($\cdot$) [14] to map circulants to their corresponding circular matrices, which are tagged with the superscript $c$, i.e., $\underline{\alpha}^c = \operatorname{circ}(\underline{\alpha})$:

$$\underline{\alpha}^c = \operatorname{circ}(\underline{\alpha}) = \begin{bmatrix} \alpha_1 & \alpha_{N_3} & \cdots & \alpha_2 \\ \alpha_2 & \alpha_1 & \cdots & \alpha_3 \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{N_3} & \alpha_{N_3-1} & \cdots & \alpha_1 \end{bmatrix},$$

$$\underline{A}^c = \operatorname{circ}(\underline{A}) = \begin{bmatrix} \operatorname{circ}(\underline{A}_{1,1}) & \cdots & \operatorname{circ}(\underline{A}_{1,N_2}) \\ \vdots & \ddots & \vdots \\ \operatorname{circ}(\underline{A}_{N_1,1}) & \cdots & \operatorname{circ}(\underline{A}_{N_1,N_2}) \end{bmatrix}.$$
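A minimal numpy version of circ($\cdot$) for tubal scalars (a sketch; the tubal-matrix version simply tiles these blocks):

```python
import numpy as np

def circ(alpha):
    """Map a tubal scalar (vector of length N3) to its N3 x N3 circulant
    matrix, whose first column is alpha itself and whose later columns
    are cyclic downward shifts of it."""
    n = len(alpha)
    return np.array([[alpha[(i - j) % n] for j in range(n)] for i in range(n)])

a = np.array([1, 2, 3])
print(circ(a))
# → [[1 3 2]
#    [2 1 3]
#    [3 2 1]]
```

Note that the first column already contains every entry of the tube, which is exactly the redundancy exploited by the memory optimization in Section 3.2.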

For simplicity, we use $X^c$ to represent the circular matrix of tensor $\mathcal{X}$. Then the $L$-product has an equivalent matrix product:

$$X^c = U^c V^c, \tag{8}$$

where $X^c \in \mathbb{R}^{N_1 N_3 \times N_2 N_3}$, $U^c \in \mathbb{R}^{N_1 N_3 \times r N_3}$, and $V^c \in \mathbb{R}^{r N_3 \times N_2 N_3}$. We can transform the LS minimization in Alg. 2 to the corresponding circular matrix representation:

$$\hat{V}^c = \underset{V^c \in \mathbb{R}^{r N_3 \times N_2 N_3}}{\arg\min} \ \|y - H^c(U^c V^c)\|_F^2, \tag{9}$$

where $H^c$ is the corresponding linear map in the circular matrix representation, with $H^c(X^c)_m = \langle A^c_m, X^c \rangle$. Each sensing tensor $\mathcal{A}_m$ is transformed into its circular matrix $A^c_m$, $m \in [M]$. Similarly, we can estimate $U^c$ in the following way:

$$\hat{U}^c = \underset{U^c \in \mathbb{R}^{N_1 N_3 \times r N_3}}{\arg\min} \ \big\|y - H^{c\top}(V^{c\top} U^{c\top})\big\|_F^2. \tag{10}$$

We perform the following steps to solve this non-convex optimization problem:

Step 1) $U^c$ is used to form a block diagonal matrix $B_1$ of size $N_1 N_2 N_3^2 \times r N_2 N_3^2$, in which the number of diagonal blocks $U^c$ is $N_2 N_3$:

$$B_1 = \begin{bmatrix} U^c & & & \\ & U^c & & \\ & & \ddots & \\ & & & U^c \end{bmatrix}. \tag{11}$$

Step 2) Stack all the columns of $V^c$; that is, $V^c$ is vectorized into a vector $b$ of size $r N_2 N_3^2 \times 1$ as follows:

$$b = \operatorname{vec}(V^c) = [V^c(:,1)^\top, V^c(:,2)^\top, \ldots, V^c(:,N_2 N_3)^\top]^\top. \tag{12}$$

Step 3) Each $A^c_m$ is represented as a vector $c_m$ of size $N_1 N_2 N_3^2 \times 1$ in the following way:

$$c_m = \operatorname{vec}(A^c_m) = [A^c_m(:,1)^\top, A^c_m(:,2)^\top, \ldots, A^c_m(:,N_2 N_3)^\top]^\top, \tag{13}$$

and then all the $c_m$ are collected into a matrix $B_2$ of size $M \times N_1 N_2 N_3^2$:

$$B_2 = [c_1, c_2, \ldots, c_M]^\top. \tag{14}$$

Therefore, the estimation of $V^c$ is transformed into the following standard least squares minimization problem:

$$\hat{b} = \underset{b \in \mathbb{R}^{r N_2 N_3^2 \times 1}}{\arg\min} \ \|y - B_2 B_1 b\|_F^2. \tag{15}$$
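Steps 1)–3) can be transcribed directly into numpy (a sketch with small illustrative shapes of our choosing; $B_1$ is built as a Kronecker product with the identity, which is exactly a block diagonal of $N_2 N_3$ copies of $U^c$):

```python
import numpy as np

# Small illustrative shapes; M is chosen >= r*N2*N3^2 unknowns.
N1, N2, N3, r, M = 3, 4, 2, 2, 40
rng = np.random.default_rng(1)

Uc = rng.standard_normal((N1 * N3, r * N3))
Vc_true = rng.standard_normal((r * N3, N2 * N3))
A = [rng.standard_normal((N1 * N3, N2 * N3)) for _ in range(M)]
y = np.array([np.sum(Am * (Uc @ Vc_true)) for Am in A])  # y_m = <A_m^c, Uc Vc>

# Step 1): block-diagonal B1 with N2*N3 copies of Uc.
B1 = np.kron(np.eye(N2 * N3), Uc)
# Steps 2)-3): column-stacked vectorization ('F' order stacks columns).
B2 = np.stack([Am.reshape(-1, order='F') for Am in A])
# Solve (15) and fold the solution back into Vc.
b_hat = np.linalg.lstsq(B2 @ B1, y, rcond=None)[0]
Vc_hat = b_hat.reshape(r * N3, N2 * N3, order='F')
print(np.allclose(Vc_hat, Vc_true))   # → True
```

The identity $\operatorname{vec}(U^c V^c) = (I \otimes U^c)\operatorname{vec}(V^c)$ is what makes Step 1) work; the $U^c$-update in (10) is the symmetric construction on the transposed system.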

### 3.2 Algorithm Optimization

The proposed Alt-Min algorithm, as described, requires large memory and high computational complexity. We propose improvements to resolve these problems.

#### 3.2.1 Optimization of Alt-Min

As stated above, the loss field tensor and the sensing tensors are transformed into an unknown circular matrix $X^c$ and sensing circular matrices $A^c_m$, respectively. The unknown circular matrix consists of $N_1 N_2 N_3^2$ entries, and if we set the sampling rate to 50%, the total number of sensing circular matrices is $0.5\, N_1 N_2 N_3$. In this case, the space complexity of all sensing matrices is $O(N_1^2 N_2^2 N_3^3)$, so the memory requirement grows rapidly with the size of the tensor. To alleviate this problem, we propose a modified version of the implementation.

In circulant algebra, the first column of $\operatorname{circ}(\underline{\alpha})$ already contains all the entries of $\underline{\alpha}$ itself, so there is no need to recover the redundant information. To recover the loss field tensor $\mathcal{X}$, we only need to recover the first column of each circulant block, i.e., the $n_1$-th tube of the $n_2$-th lateral slice, $\mathcal{X}(n_1, n_2, :)$. We use the Matlab function squeeze($\cdot$) to get a new definition:

$$X^s = \begin{bmatrix} \operatorname{squeeze}(\mathcal{X}(1,1,:)) & \cdots & \operatorname{squeeze}(\mathcal{X}(1,N_2,:)) \\ \vdots & \ddots & \vdots \\ \operatorname{squeeze}(\mathcal{X}(N_1,1,:)) & \cdots & \operatorname{squeeze}(\mathcal{X}(N_1,N_2,:)) \end{bmatrix},$$

where squeeze($\cdot$) transforms the $n_1$-th tube of the $n_2$-th lateral slice of $\mathcal{X}$ into a vector of size $N_3 \times 1$, so that $X^s \in \mathbb{R}^{N_1 N_3 \times N_2}$.
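In numpy, the squeeze-based rearrangement amounts to a single transpose-and-reshape (a sketch; the name `to_Xs` is ours):

```python
import numpy as np

def to_Xs(X):
    """Stack the (n1, n2) tube X[n1, n2, :] as the n1-th length-N3 block
    of column n2, giving the N1*N3 x N2 matrix X^s."""
    N1, N2, N3 = X.shape
    return X.transpose(0, 2, 1).reshape(N1 * N3, N2)

X = np.arange(24).reshape(2, 3, 4)       # N1=2, N2=3, N3=4
Xs = to_Xs(X)
assert Xs.shape == (8, 3)                # N1*N3 x N2, vs. N1*N3 x N2*N3 for X^c
assert (Xs[0:4, 1] == X[0, 1, :]).all()  # tube (0, 1) is the top block of column 1
```

Compared with the full circular matrix $X^c$, the column dimension shrinks from $N_2 N_3$ to $N_2$, which is the source of the $1/N_3$ memory saving analyzed below.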

With this definition, we obtain a new mapping for the $L$-product as follows:

$$\mathcal{X} = \mathcal{U} \bullet \mathcal{V} \iff X^s = U^c V^s, \tag{16}$$

where $V^s \in \mathbb{R}^{r N_3 \times N_2}$. We can transform the LS minimization in Alg. 2 to the following representation:

$$\hat{V}^s = \underset{V^s \in \mathbb{R}^{r N_3 \times N_2}}{\arg\min} \ \|y - H^s(U^c V^s)\|_F^2, \tag{17}$$

where $H^s$ is the corresponding linear map in this representation. Similarly, we can estimate $U^c$ in the following way:

$$\hat{U}^c = \underset{U^c \in \mathbb{R}^{N_1 N_3 \times r N_3}}{\arg\min} \ \big\|y - H^{s\top}(V^{s\top} U^{c\top})\big\|_F^2. \tag{18}$$

#### 3.2.2 Complexity Analysis

As stated above, with a 50% sampling rate the original space complexity of the sensing matrices is $O(N_1^2 N_2^2 N_3^3)$. In the modified version, each $A^c_m$ is replaced by its reduced counterpart $A^s_m$, and the space complexity decreases to $O(N_1^2 N_2^2 N_3^2)$, i.e., $1/N_3$ of the former value. Note that we only calculate the reduction in space complexity for the sensing matrices; there is additionally a large reduction in intermediate variables. Therefore, the above algorithm optimization is a key enabler in our approach for inferring large three-dimensional physical spaces.

## 4 Performance Evaluation

### 4.1 Data Sets And Model Verification

We compare the proposed Alt-Min algorithm with tensor-based compressed sensing [4] on the IKEA 3D datasets. We adopt an IKEA 3D chair model and a table model to generate two ground truth tensors of the same size. The $L$-rank of the chair model is 3, and that of the table model is 4. Each 3D model is placed in the middle of the “tensor” and occupies a part of the space. In this task, we mainly focus on the location and outline information, while the texture and color information are ignored.

### 4.2 Algorithm Comparison Metrics

We compare Alt-Min against the recently proposed tensor-based compressed sensing [4] on the IKEA 3D datasets. Note that we evaluate two versions of Alt-Min: Alt-Min with FFT and Alt-Min with DCT. Tensor-based compressed sensing uses TNN as the regularizer, so a t-SVD is conducted in every iteration, whereas our algorithm is based on bilinear factorization and only needs to iteratively update the two smaller tensors. For quantitative comparison, we adopt two metrics: recovery error and convergence speed.

• For recovery error, we use the relative square error (RSE) metric, defined as $\text{RSE} = \|\hat{\mathcal{X}} - \mathcal{X}\|_F / \|\mathcal{X}\|_F$.

• For convergence speed, we linearly fit the measured RSEs across iterations, and then compare the decreasing rate of each method.
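The recovery-error metric above is a one-liner in numpy (a sketch; `rse` is our name, taking the standard definition $\|\hat{\mathcal{X}} - \mathcal{X}\|_F / \|\mathcal{X}\|_F$):

```python
import numpy as np

def rse(X_hat, X):
    """Relative square error ||X_hat - X||_F / ||X||_F.
    Figures typically report log10 of this value."""
    return np.linalg.norm(X_hat - X) / np.linalg.norm(X)

X = np.ones((4, 4, 4))
print(rse(1.1 * X, X))   # ≈ 0.1, i.e., a 10% relative error
```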

### 4.3 Performance Results

Fig. 3 shows 3D visualizations of an IKEA chair, an IKEA table, and our corresponding recovery results using Alt-Min with DCT. For all methods, we use a 50% sampling rate, and the maximum iteration number is set to 20. The final recovery errors (RSE in log scale) of the chair model and the table model are both small in magnitude, and we can observe that Alt-Min with DCT successfully recovers the outlines of the two models. Note that we focus on the outlines of the models instead of the whole space, and the recovery results are artificially colored for better visualization. The following analysis is based on the experiments with the chair model.

To examine the recovery error performance, we fix the maximum iteration number of the three methods at 20. Then we vary the sampling rate from 20% to 80% by selecting wireless links randomly [15]. Each sampling rate is measured 5 times, and the average recovery errors are computed. Fig. 4 depicts the RSEs of Alt-Min with FFT, Alt-Min with DCT, and tensor-based compressed sensing for varying sampling rates. For low sampling rates (20%–25%), tensor-based compressed sensing performs better than Alt-Min with FFT and Alt-Min with DCT. For sampling rates from 30% to 80%, the RSEs of the two Alt-Min methods decrease significantly, while that of tensor-based compressed sensing decreases very slowly. For an 80% sampling rate, the RSE of Alt-Min with DCT is a little higher than that of Alt-Min with FFT, while the RSE of tensor-based compressed sensing remains considerably higher than both.

Fig. 5 and Fig. 6 show the convergence rates of Alt-Min with FFT and with DCT, respectively. The sampling rate is fixed, and the maximum iteration number is set to 30; a linear fit to the data is also shown. Note that for the RSE threshold considered, approximately 27 iterations suffice for both methods.

## 5 Conclusion

In this paper, we use the transform-based tensor model to formulate RF tomographic imaging as a tensor sensing problem, which fully exploits the geometric structures of the three-dimensional loss field tensor. We then propose a fast iterative algorithm, Alt-Min, for low $L$-rank tensor sensing: the loss field tensor is factorized as the $L$-product of two smaller tensors, and Alt-Min alternately estimates these two tensors by LS minimization. Evaluation results on the IKEA 3D datasets demonstrate that Alt-Min significantly improves the recovery error and convergence speed compared to prior tensor-based compressed sensing.

## References

• [1] M.A. Kanso and M.G. Rabbat, “Compressed RF tomography for wireless sensor networks: Centralized and decentralized approaches,” in Int. Conf. on Distributed Computing in Sensor Systems. Springer, 2009, pp. 173–186.
• [2] A. Liutkus, D. Martina, S. Popoff, et al., “Imaging with nature: compressive imaging using a multiply scattering medium,” Scientific Reports, vol. 4, 2014.
• [3] J. Wilson and N. Patwari, “Radio tomographic imaging with wireless networks,” IEEE Transactions on Mobile Computing, vol. 9, no. 5, pp. 621–632, 2010.
• [4] T. Matsuda, K. Yokota, K. Takemoto, S. Hara, F. Ono, K. Takizawa, and R. Miura, “Multi-dimensional wireless tomography using tensor-based compressed sensing,” Wireless Personal Communications, vol. 96, no. 3, pp. 3361–3384, 2017.
• [5] Y. Mostofi, “Compressive cooperative sensing and mapping in mobile networks,” IEEE Transactions on Mobile Computing, vol. 10, no. 12, pp. 1769–1784, 2011.
• [6] Q. Li, D. Schonfeld, and S. Friedland, “Generalized tensor compressive sensing,” in IEEE Int. Conf. on Multimedia and Expo (ICME). IEEE, 2013, pp. 1–6.
• [7] X.-Y. Liu and X. Wang, “Fourth-order tensors with multidimensional discrete transforms,” arXiv preprint arXiv:1705.01576, 2017.
• [8] J.J. Lim, H. Pirsiavash, and A. Torralba, “Parsing IKEA objects: fine pose estimation,” in International Conference on Computer Vision, 2013.
• [9] F. Adib, Z. Kabelac, D. Katabi, and R.C. Miller, “3D tracking via body radio reflections,” in Networked systems Design and Implementation, 2014, vol. 14, pp. 317–329.
• [10] X.-Y. Liu, S. Aeron, V. Aggarwal, X. Wang, and M.-Y. Wu, “Adaptive sampling of RF fingerprints for fine-grained indoor localization,” IEEE Transactions on Mobile Computing, vol. 15, no. 10, pp. 2411–2423, 2016.
• [11] L. Kong, M. Xia, X.-Y. Liu, G. Chen, Y. Gu, M.-Y. Wu, and X. Liu, “Data loss and reconstruction in wireless sensor networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 11, pp. 2818–2828, 2014.
• [12] P. Jain, P. Netrapalli, and S. Sanghavi, “Low-rank matrix completion using alternating minimization,” in Proc. ACM Symposium on Theory of Computing. ACM, 2013, pp. 665–674.
• [13] X.-Y. Liu, S. Aeron, V. Aggarwal, and X. Wang, “Low-tubal-rank tensor completion using alternating minimization,” arXiv preprint arXiv:1610.01690, 2016.
• [14] D.F. Gleich, C. Greif, and J.M. Varah, “The power and Arnoldi methods in an algebra of circulants,” Numerical Linear Algebra with Applications, vol. 20, no. 5, pp. 809–831, 2013.
• [15] X.-Y. Liu, Y. Zhu, L. Kong, C. Liu, Y. Gu, A.V. Vasilakos, and M.-Y. Wu, “CDC: Compressive data collection for wireless sensor networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 8, pp. 2188–2197, 2015.