1 Introduction
RF tomographic imaging [1, 2] is an inference technique for remotely learning the locations and shapes of objects, as depicted in Fig. 1. It can be applied in smart buildings and spaces [3], as well as in emergencies, rescue operations, and security breaches [4], since the objects being imaged need not carry an electronic device or a cellphone.
There has been considerable work on reconstruction methods for RF tomographic imaging, which can be categorized into vector-based and tensor-based methods. In [1, 5], vector-based compressed sensing approaches for RF tomographic imaging are proposed. These vector-based methods are designed to infer two-dimensional spaces; they cannot estimate three-dimensional spaces, since spatial structures of the data are inherently ignored. Reference [4] proposes a tensor-based compressed sensing algorithm using the tensor nuclear norm (TNN) [6], which extends the RF tomographic imaging problem to the three-dimensional case. However, it requires computing the tensor singular value decomposition (t-SVD), which leads to high computational complexity, especially for large-scale tensors. Moreover, the recovery error of this approach is relatively high, and the data size that can be handled is too small for realistic scenarios. In this paper, we aim to exploit the spatial structures of three-dimensional spaces for more efficient and accurate inference.
In order to represent three-dimensional spatial structures effectively, we adopt the recently proposed transform-based tensor model [7], which has the following advantages: (i) compared with other tensor models, only real-valued fast transforms are involved, so it is appropriate for received signal strength (RSS) data; (ii) unlike other existing tensor models, the transform-based tensor model allows for diverse sampling strategies. In addition, we decompose the loss field tensor into the product of two tensors of small sizes, so that we only need to iteratively update two small tensors, which is much more efficient than prior tensor-based compressed sensing.
In this paper, we propose a novel RF tomographic imaging scheme using tensor sensing to estimate three-dimensional spaces. We formulate RF tomographic imaging as a tensor sensing problem in a transform domain, and propose a novel Alternating Minimization (AltMin) algorithm, whose implementation is optimized for memory consumption and computation speed. We apply our algorithm to the IKEA 3D datasets [8], and demonstrate significantly lower recovery error and fewer required iterations compared with the approach in [4].
The remainder of the paper is organized as follows: the system model and problem formulation are given in Section 2. Section 3 presents the solution algorithm, its implementation, and the algorithm optimization. The evaluation results are presented in Section 4. Finally, we conclude the paper in Section 5.
2 Problem Statement and Formulation
We first review the RF tomographic imaging problem [1, 3, 9], then model the shadowing loss data as a transform-based tensor, and formulate the RF tomographic imaging task as a tensor sensing problem.
Notations: We use lowercase boldface letters (e.g., $\mathbf{x}$) to denote vectors, uppercase boldface letters (e.g., $\mathbf{X}$) to denote matrices, and calligraphic letters (e.g., $\mathcal{X}$) to denote tensors. Let $[n]$ denote the set $\{1, 2, \ldots, n\}$.
2.1 RF Tomographic Channel Model
We consider RF tomographic imaging with the space of interest represented as a 3D tensor in Cartesian coordinates. A set of RF signal nodes is uniformly deployed around the sides of the “tensor”, forming a complete tomography network. Any pair of nodes can establish a unique link. The RF signal on a given link suffers from path loss, which consists of three parts: (i) shadowing loss due to obstructions; (ii) distance-dependent large-scale path loss; (iii) non-shadowing loss due to multipath [1]. We use $\mathcal{K}$ to represent the node set, and assume that node $i \in \mathcal{K}$ is a transmitter and node $j \in \mathcal{K}$ is a receiver. Let $P_{i,j}$ be the received power at node $j$; then we obtain the following power equation:
$P_{i,j} = P^{t} - L(d_{i,j}) - W_{i,j}$,  (1)
where $P^{t}$ is the transmitted power, $d_{i,j}$ is the distance between nodes $i$ and $j$, and $L(d_{i,j})$ represents the corresponding large-scale path loss of the link. Let $W_{i,j} = F_{i,j} + S_{i,j}$ be the total fading loss, which involves the non-shadowing fading loss $F_{i,j}$ and the shadowing loss $S_{i,j}$.
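As a small numerical sketch of this link budget (all function and variable names below are our own, not from the paper; the shadowing term anticipates the voxel sum defined next):

```python
import numpy as np

def shadowing_loss(theta, weights):
    """Shadowing loss of one link: sum over voxels of the link's overlap
    distance with each voxel times the voxel's attenuation coefficient."""
    return float(np.sum(np.asarray(weights) * np.asarray(theta)))

def received_power(p_t, large_scale_loss, fading, shadowing):
    """Power equation: received power equals transmitted power minus
    large-scale path loss, non-shadowing fading, and shadowing loss."""
    return p_t - large_scale_loss - fading - shadowing
```

A link crossing two voxels of attenuation 0.5 for one unit each, for example, contributes a shadowing loss of 1.0 to the budget.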
Assume that the space of interest is divided into a set of three-dimensional voxels of the same size. We use $v_{x,y,z}$, $x \in [n_1]$, $y \in [n_2]$, $z \in [n_3]$, to denote the voxel at the corresponding coordinate. When the RF signal propagates through the voxels, the total shadowing loss is equivalent to the total attenuation that arises in each single voxel. Moreover, we assume that the internal medium of each voxel is homogeneous, and the power attenuation coefficient is a constant value $\theta_{x,y,z}$ within $v_{x,y,z}$ [4]. Given these definitions, the shadowing loss of a unique link between nodes $i$ and $j$ can be formulated as:
$S_{i,j} = \sum_{x,y,z} w_{i,j}(x,y,z)\, \theta_{x,y,z}$,  (2)
where $w_{i,j}(x,y,z)$ represents the overlapped distance between the link $(i,j)$ and voxel $v_{x,y,z}$. The non-shadowing fading loss $F_{i,j}$ is assumed to be a stationary Gaussian process with zero mean and variance $\sigma^2$ [4].
2.2 Transform-based Tensor Model
Let $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ denote a third-order tensor. $\mathcal{T}(:, j, k)$, $\mathcal{T}(i, :, k)$, and $\mathcal{T}(i, j, :)$ denote the mode-1, mode-2, and mode-3 tubes of $\mathcal{T}$, and $\mathcal{T}(:, :, k)$, $\mathcal{T}(:, j, :)$, and $\mathcal{T}(i, :, :)$ denote the frontal, lateral, and horizontal slices. The Frobenius norm of $\mathcal{T}$ is defined as $\|\mathcal{T}\|_F = \sqrt{\sum_{i,j,k} \mathcal{T}(i,j,k)^2}$. The operator vec(·) transforms tensors and matrices into vectors. Let $\mathbf{X}^{\top}$ and $\mathcal{T}^{\top}$ denote the transposes of a matrix and a tensor, respectively.
Definition 1.
[7] Given an invertible discrete transform $L: \mathbb{R}^{n_3} \rightarrow \mathbb{R}^{n_3}$, the element-wise multiplication $\circ$, and tubal scalars $a, b \in \mathbb{R}^{n_3}$, the tubal-scalar multiplication $\bullet$ is defined as
$a \bullet b = L^{-1}\big(L(a) \circ L(b)\big)$,  (3)
where $L^{-1}$ is the inverse of $L$.
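Definition 1 can be sketched directly in code. Assuming the transform $L$ is the FFT (one of the two transforms used later in the paper; with the DCT the same three-step recipe applies), the tubal-scalar product reduces to circular convolution of the two tubes; the function name is our own:

```python
import numpy as np

def tubal_mult(a, b):
    """Tubal-scalar multiplication of Definition 1 with L chosen as the FFT:
    transform both tubes, multiply element-wise, and transform back.
    With the FFT this is exactly circular convolution of the two tubes."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
```

For the FFT product, the tube $(1, 0, \ldots, 0)$ acts as the multiplicative identity, consistent with Definition 4 below.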
Definition 2.
[7] The product $\mathcal{C} = \mathcal{A} \bullet \mathcal{B}$ of $\mathcal{A} \in \mathbb{R}^{n_1 \times n' \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n' \times n_2 \times n_3}$ is a tensor of size $n_1 \times n_2 \times n_3$, with $\mathcal{C}(i, l, :) = \sum_{j=1}^{n'} \mathcal{A}(i, j, :) \bullet \mathcal{B}(j, l, :)$ for $i \in [n_1]$ and $l \in [n_2]$.
Definition 3.
[7] The transpose $\mathcal{A}^{\top} \in \mathbb{R}^{n_2 \times n_1 \times n_3}$ of $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ satisfies $L(\mathcal{A}^{\top})(:, :, k) = \big(L(\mathcal{A})(:, :, k)\big)^{\top}$ for $k \in [n_3]$, where $L$ is applied along the third mode.
Definition 4.
[7] The identity tensor $\mathcal{I} \in \mathbb{R}^{n \times n \times n_3}$ based on the $\bullet$ product is defined via the transform domain: with $\hat{\mathcal{I}} = L(\mathcal{I})$, the frontal slices $\hat{\mathcal{I}}(:, :, k)$, $k \in [n_3]$, are identity matrices.
Definition 5.
[7] $\mathcal{Q} \in \mathbb{R}^{n \times n \times n_3}$ is orthogonal if $\mathcal{Q}^{\top} \bullet \mathcal{Q} = \mathcal{Q} \bullet \mathcal{Q}^{\top} = \mathcal{I}$.
Definition 6.
[7] The transform-domain singular value decomposition (SVD) of $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is given by $\mathcal{T} = \mathcal{U} \bullet \mathcal{S} \bullet \mathcal{V}^{\top}$, where $\mathcal{U}$ and $\mathcal{V}$ are orthogonal tensors of size $n_1 \times n_1 \times n_3$ and $n_2 \times n_2 \times n_3$, respectively, and $\mathcal{S}$ is a diagonal tensor of size $n_1 \times n_2 \times n_3$. The diagonal entries of $\mathcal{S}$ are called the singular values of $\mathcal{T}$, and the number of nonzero ones is called the rank of $\mathcal{T}$.
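As a concrete illustration of Definition 6, the following sketch takes $L$ to be the FFT along the third mode (function names are our own): it computes an ordinary matrix SVD on each frontal slice in the transform domain, maps back, and also counts how many singular values capture a given fraction of the energy, the quantity reported below for Fig. 2:

```python
import numpy as np

def t_svd(T):
    """Transform-domain SVD (Definition 6) with the FFT as the transform L:
    FFT along mode 3, a matrix SVD per frontal slice in the transform
    domain, then the inverse FFT back to the original domain."""
    n1, n2, n3 = T.shape
    That = np.fft.fft(T, axis=2)
    U = np.zeros((n1, n1, n3), dtype=complex)
    S = np.zeros((n1, n2, n3), dtype=complex)
    V = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3 // 2 + 1):              # remaining slices follow from
        u, s, vh = np.linalg.svd(That[:, :, k])  # conjugate symmetry of the FFT
        Sk = np.zeros((n1, n2))
        np.fill_diagonal(Sk, s)
        U[:, :, k], S[:, :, k], V[:, :, k] = u, Sk, vh.conj().T
        if 0 < k < n3 - k:
            U[:, :, n3 - k] = u.conj()
            S[:, :, n3 - k] = Sk
            V[:, :, n3 - k] = V[:, :, k].conj()
    inv = lambda X: np.real(np.fft.ifft(X, axis=2))
    return inv(U), inv(S), inv(V)

def n_values_for_energy(s, frac=0.95):
    """Smallest number of singular values whose squared sum reaches
    `frac` of the total energy."""
    s = np.sort(np.abs(np.asarray(s, dtype=float)))[::-1]
    e = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(e, frac) + 1)
```

The per-slice SVDs in the transform domain reconstruct the original tensor exactly, which is the low-rank structure the next paragraph exploits.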
In order to verify the validity of the model, we use an IKEA 3D chair model to generate a ground truth tensor (details are given in Section 4.1), and then transform the tensor to the frequency domain by the fast Fourier transform (FFT) and the discrete cosine transform (DCT). Fig. 2 shows the empirical cumulative distribution function (CDF) of the tensor singular values. For the transform-domain SVD with FFT, 3 out of 60 singular values capture 95% of the energy, while the corresponding numbers of singular values are 38 and 54 for the matrix SVD and the traditional tensor CP decomposition, respectively. For the transform-domain SVD with DCT, 4 singular values capture 95% of the energy. Therefore, the low-rank property of the transform-based tensor model makes it more appropriate for the RF tomographic imaging problem than the alternatives; a similar conclusion is reached in the context of fingerprint localization [10].
2.3 Problem Formulation
Let $K$ be the number of nodes in the network; then the total number of two-way links we can obtain is $K(K-1)/2$. Assume that we perform $m$ measurements, each involving a unique link. The node pairs are indexed as $(i_s, j_s)$, $s \in [m]$. We obtain the following linear measurement:
$y_s = \langle \mathcal{A}_s, \mathcal{X} \rangle + F_{i_s, j_s}, \quad s \in [m]$,  (4)
where $y_s$ is referred to as the $s$-th measured total fading loss. Then we stack these RF signal measurements into a measurement vector $\mathbf{y} \in \mathbb{R}^{m}$ [11]. We are thus given $m$ linear measurements of a loss field tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ with rank $r$ and the sensing tensors $\mathcal{A}_s$, $s \in [m]$. With a linear map $\mathcal{A}: \mathbb{R}^{n_1 \times n_2 \times n_3} \rightarrow \mathbb{R}^{m}$ [12], (4) is rewritten as follows:
$\mathbf{y} = \mathcal{A}(\mathcal{X}) + \mathbf{e}$,  (5)
where $\mathbf{e} \in \mathbb{R}^{m}$ denotes the noise vector.
The goal of RF tomographic imaging is to recover the loss field tensor $\mathcal{X}$ from the measurement vector $\mathbf{y}$. We formulate this problem as a low-rank tensor sensing problem:
$\min_{\mathcal{X}} \ \|\mathbf{y} - \mathcal{A}(\mathcal{X})\|_2^2 \quad \text{s.t.} \quad \operatorname{rank}(\mathcal{X}) \le r$  (6)
3 Solution Algorithm Via AltMin
In this section, we present a novel iterative algorithm, called AltMin, and describe its implementation and optimization.
3.1 The AltMin Algorithm
To enable alternating minimization, we represent the loss field tensor as the product of two smaller tensors [13], i.e., $\mathcal{X} = \mathcal{Y} \bullet \mathcal{Z}$ with $\mathcal{Y} \in \mathbb{R}^{n_1 \times r \times n_3}$ and $\mathcal{Z} \in \mathbb{R}^{r \times n_2 \times n_3}$. Then we reformulate (6) as the following nonconvex optimization problem:
$\min_{\mathcal{Y}, \mathcal{Z}} \ \|\mathbf{y} - \mathcal{A}(\mathcal{Y} \bullet \mathcal{Z})\|_2^2$  (7)
The main idea of AltMin is to iteratively estimate the two low-rank tensors $\mathcal{Y}$ and $\mathcal{Z}$, each of rank $r$. The key step is the least squares (LS) minimization (see Alg. 2). The detailed implementation of the LS minimization is given below.
We adopt circulant algebra [13, 14] to extend matrix algebra to third-order tensors. A tubal scalar represents a vector of length $n_3$, and the corresponding space is denoted as $\mathbb{K}_{n_3}$. Let $\mathbb{K}_{n_3}^{n_1 \times n_2}$ denote the space of tubal matrices where each element is a tubal scalar in $\mathbb{K}_{n_3}$. Let $a \in \mathbb{K}_{n_3}$ be a tubal scalar, and $\mathcal{X} \in \mathbb{K}_{n_3}^{n_1 \times n_2}$ be a tubal matrix. We use the operator $\operatorname{circ}(\cdot)$ [14] to map circulants to their corresponding circular matrices, which are tagged with the superscript $c$; i.e., $a^c = \operatorname{circ}(a)$ is the $n_3 \times n_3$ circulant matrix whose first column is $a$.
For simplicity, we use $\mathcal{X}^c$ to represent the circular matrix of tensor $\mathcal{X}$. Then the product $\mathcal{X} = \mathcal{Y} \bullet \mathcal{Z}$ has an equivalent matrix product:
$\mathcal{X}^c = \mathcal{Y}^c \mathcal{Z}^c$,  (8)
where $\mathcal{X}^c \in \mathbb{R}^{n_1 n_3 \times n_2 n_3}$. We can transform the LS minimization in Alg. 2 to the corresponding circular matrix representation:
$\min_{\mathcal{Z}^c} \ \|\mathbf{y} - \mathcal{A}^c(\mathcal{Y}^c \mathcal{Z}^c)\|_2^2$,  (9)
where $\mathcal{A}^c$ is the corresponding linear map in the circular matrix representation, with $[\mathcal{A}^c(\mathcal{X}^c)]_s = \langle \mathcal{A}_s^c, \mathcal{X}^c \rangle$. Each sensing tensor $\mathcal{A}_s$ is transformed into its circular matrix $\mathcal{A}_s^c \in \mathbb{R}^{n_1 n_3 \times n_2 n_3}$. Similarly, we can estimate $\mathcal{Y}^c$ in the following way:
$\min_{\mathcal{Y}^c} \ \|\mathbf{y} - \mathcal{A}^c(\mathcal{Y}^c \mathcal{Z}^c)\|_2^2$  (10)
We perform the following steps to solve this nonconvex optimization problem:
Step 1). $\mathcal{Y}^c$ is used to form a block diagonal matrix, where the number of $\mathcal{Y}^c$ blocks is $n_2 n_3$:
$\mathbf{B} = \operatorname{blkdiag}(\mathcal{Y}^c, \ldots, \mathcal{Y}^c) \in \mathbb{R}^{n_1 n_2 n_3^2 \times r n_2 n_3^2}$  (11)
Step 2). Stack all the columns of $\mathcal{Z}^c$; i.e., $\mathcal{Z}^c$ is vectorized to a vector of size $r n_2 n_3^2$ as follows:
$\mathbf{z} = \operatorname{vec}(\mathcal{Z}^c)$  (12)
Step 3). Each $\mathcal{A}_s^c$ is represented as a vector of size $n_1 n_2 n_3^2$ in the following way:
$\mathbf{a}_s = \operatorname{vec}(\mathcal{A}_s^c)$,  (13)
and then all the $\mathbf{a}_s$ are transformed into a matrix of size $m \times n_1 n_2 n_3^2$:
$\mathbf{A}^c = [\mathbf{a}_1, \ldots, \mathbf{a}_m]^{\top}$  (14)
Therefore, the estimation of $\mathcal{Z}^c$ is transformed into the following standard least squares minimization problem:
$\min_{\mathbf{z}} \ \|\mathbf{y} - \mathbf{A}^c \mathbf{B} \mathbf{z}\|_2^2$  (15)
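The alternating LS scheme above can be sketched on its matrix analogue (an illustrative simplification of our own: with the FFT, the tensor problem decouples in the transform domain into matrix problems of exactly this form; all names are ours, not the paper's):

```python
import numpy as np

def altmin_sensing(G, y, n1, n2, r, iters=50, seed=0):
    """Alternating least squares for measurements y_s = <G_s, X @ Y>:
    fix one factor, solve an ordinary LS for the other, and repeat."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n1, r))
    Y = rng.standard_normal((r, n2))
    for _ in range(iters):
        # <G_s, X Y> = <X^T G_s, Y>, linear in vec(Y) (cf. Steps 1-3)
        A = np.stack([(X.T @ Gs).ravel() for Gs in G])
        Y = np.linalg.lstsq(A, y, rcond=None)[0].reshape(r, n2)
        # <G_s, X Y> = <G_s Y^T, X>, linear in vec(X)
        B = np.stack([(Gs @ Y.T).ravel() for Gs in G])
        X = np.linalg.lstsq(B, y, rcond=None)[0].reshape(n1, r)
    return X, Y
```

Each half-step solves its LS subproblem exactly, so the residual is non-increasing over iterations, mirroring the monotone behaviour of the tensor algorithm.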
3.2 Algorithm Optimization
The proposed AltMin has large memory consumption and high computational complexity. We propose improvements to resolve both problems.
3.2.1 Optimization of AltMin
As stated above, the loss field tensor $\mathcal{X}$ and the sensing tensors $\mathcal{A}_s$ are transformed to an unknown circular matrix $\mathcal{X}^c$ and sensing circular matrices $\mathcal{A}_s^c$, respectively. The unknown circular matrix consists of $n_1 n_2 n_3^2$ entries, and if we set the sampling rate to 50%, the total number of sensing circular matrices is $m = 0.5\, n_1 n_2 n_3$. In this case, the space complexity of all sensing matrices is $O(n_1^2 n_2^2 n_3^3)$, so the memory requirement grows rapidly with the size of the tensor. To alleviate this problem, we propose a modified version of the implementation.
In circulant algebra, the first column of a circulant matrix $a^c = \operatorname{circ}(a)$ already contains all the entries of $a^c$, so there is no need to recover the redundant information. For recovering the loss field tensor $\mathcal{X}$, we only need to recover the first column of each circulant block of $\mathcal{X}^c$, which we set as the $j$-th tube of the $i$-th lateral slice: $\mathcal{X}(i, j, :)$. We use the vectorization operator $\operatorname{vec}(\cdot)$ to get a new definition, where $\operatorname{vec}(\mathcal{X}(i, j, :))$ transforms the $j$-th tube of the $i$-th lateral slice of $\mathcal{X}$ into a vector of size $n_3$.
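The first-column redundancy is easy to check numerically (an illustrative sketch; `circ` is our own helper, not from the paper's code):

```python
import numpy as np

def circ(a):
    """Circulant matrix of a length-n3 tube: column j is `a` cyclically
    shifted down by j positions, so the first column alone already
    determines every entry of the matrix."""
    a = np.asarray(a)
    return np.stack([np.roll(a, j) for j in range(len(a))], axis=1)
```

A circulant matrix-vector product is a circular convolution, which ties this representation back to the FFT-based tubal product of Definition 1.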
We use the superscript $\sim$ to denote a new mapping for the $\bullet$ product as follows:
$\tilde{\mathcal{X}} = \mathcal{Y}^c \tilde{\mathcal{Z}}$,  (16)
where $\tilde{\mathcal{X}} \in \mathbb{R}^{n_1 n_3 \times n_2}$ and $\tilde{\mathcal{Z}} \in \mathbb{R}^{r n_3 \times n_2}$ collect the first columns of the circulant blocks of $\mathcal{X}^c$ and $\mathcal{Z}^c$, respectively. We can transform the LS minimization in Alg. 2 to the following representation:
$\min_{\tilde{\mathcal{Z}}} \ \|\mathbf{y} - \tilde{\mathcal{A}}(\mathcal{Y}^c \tilde{\mathcal{Z}})\|_2^2$,  (17)
where $\tilde{\mathcal{A}}$ is the corresponding linear map, with $[\tilde{\mathcal{A}}(\tilde{\mathcal{X}})]_s = \langle \tilde{\mathcal{A}}_s, \tilde{\mathcal{X}} \rangle$. Similarly, using $\mathcal{X}^{\top} = \mathcal{Z}^{\top} \bullet \mathcal{Y}^{\top}$, we can estimate $\mathcal{Y}$ in the same way:
$\min_{\tilde{\mathcal{Y}}^{\top}} \ \|\mathbf{y} - \tilde{\mathcal{A}}^{\top}\big((\mathcal{Z}^{\top})^c\, \tilde{\mathcal{Y}}^{\top}\big)\|_2^2$  (18)
3.2.2 Complexity Analysis
As stated above, with a 50% sampling rate the original space complexity is $O(n_1^2 n_2^2 n_3^3)$. In the modified version, we transform $\mathcal{X}^c$ to $\tilde{\mathcal{X}}$, and the space complexity decreases to $O(n_1^2 n_2^2 n_3^2)$, which is $1/n_3$ of the former value. Note that we only calculate the reduction of space complexity for the sensing matrices; there is additionally a large reduction in intermediate variables. Therefore, the above algorithm optimization is a key enabler in our approach for inferring large three-dimensional physical spaces.
4 Performance Evaluation
4.1 Data Sets And Model Verification
We compare the proposed AltMin algorithm with tensor-based compressed sensing [4] on the IKEA 3D datasets. We adopt an IKEA 3D chair model and table model to generate two ground truth tensors of the same size. The rank of the chair model is 3, and that of the table model is 4. Each 3D model is placed in the middle of the “tensor” and occupies part of the space. In this task, we mainly focus on location and outline information, while texture and color information are ignored.
4.2 Algorithm Comparison Metrics
We compare AltMin against the recently proposed tensor-based compressed sensing [4], and we carry out two versions of AltMin: AltMin with FFT and AltMin with DCT. Tensor-based compressed sensing uses the TNN as its regularizer, conducting a t-SVD in every iteration, whereas our algorithm is based on a bilinear factorization and only needs to iteratively update two smaller tensors. For quantitative comparison, we adopt two metrics: recovery error and convergence speed.

For recovery error, we use the relative square error (RSE), defined as $\mathrm{RSE} = \|\hat{\mathcal{X}} - \mathcal{X}\|_F / \|\mathcal{X}\|_F$, where $\hat{\mathcal{X}}$ is the recovered tensor.

For convergence speed, we linearly fit the measured RSEs (in log scale) across iterations, and then compare the decreasing rate of each method.
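Both metrics can be sketched as follows, under two assumptions of ours: RSE is the relative Frobenius error as defined above, and the linear fit is taken on log10 RSE (matching the log-scale plots):

```python
import numpy as np

def rse(est, truth):
    """Relative square error: ||est - truth||_F / ||truth||_F."""
    return np.linalg.norm(est - truth) / np.linalg.norm(truth)

def decrease_rate(rse_per_iter):
    """Slope of a linear fit of log10(RSE) against the iteration index;
    a more negative slope means faster convergence."""
    it = np.arange(len(rse_per_iter))
    slope, _ = np.polyfit(it, np.log10(rse_per_iter), 1)
    return float(slope)
```

For example, an RSE sequence decaying by a factor of 10 per iteration yields a slope of exactly -1.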
4.3 Performance Results
Fig. 3 shows the 3D visualizations of an IKEA chair, an IKEA table, and the corresponding recovery results using AltMin with DCT. For all methods, we use a 50% sampling rate, and the maximum iteration number is set to 20. The final recovery error (RSE in log scale) of the chair model is in magnitude, and that of the table model is . We can observe that AltMin with DCT successfully recovers the outlines of both models. Note that we focus on the outlines of the models instead of the whole space, and the recovery results are artificially coloured for better visualization. The following analysis is based on the experiments with the chair model.
To examine recovery error performance, we fix the maximum iteration number of all three methods to 20. Then we vary the sampling rate from 20% to 80% by selecting wireless links randomly [15]. Each sampling rate is measured 5 times, and the average recovery errors are computed. Fig. 4 depicts the RSEs of AltMin with FFT, AltMin with DCT, and tensor-based compressed sensing for varying sampling rates. For low sampling rates (20%–25%), tensor-based compressed sensing performs better than AltMin with FFT and AltMin with DCT. For sampling rates from 30% to 80%, the RSEs of the two AltMin methods decrease significantly, while that of tensor-based compressed sensing decreases very slowly. For an 80% sampling rate, the RSE of AltMin with FFT is around in magnitude, and that of AltMin with DCT is a little higher, while the RSE of tensor-based compressed sensing is at around .
Fig. 5 and Fig. 6 show the convergence rates of AltMin with FFT and with DCT, respectively. The sampling rate is fixed, and the maximum iteration number is set to 30. A linear fit to the data is also shown. Note that with the chosen RSE threshold, we only need approximately 27 iterations for both methods.
5 Conclusion
In this paper, we use the transform-based tensor model to formulate RF tomographic imaging as a tensor sensing problem, which fully exploits the geometric structure of the three-dimensional loss field tensor. We then propose a fast iterative algorithm, AltMin, for low-rank tensor sensing. The loss field tensor is factorized as the product of two smaller tensors, and AltMin alternately estimates those two tensors by LS minimization. Evaluation results on the IKEA 3D datasets demonstrate that AltMin significantly improves recovery error and convergence speed compared to prior tensor-based compressed sensing.
References
[1] M.A. Kanso and M.G. Rabbat, “Compressed RF tomography for wireless sensor networks: Centralized and decentralized approaches,” in Int. Conf. on Distributed Computing in Sensor Systems. Springer, 2009, pp. 173–186.
[2] A. Liutkus, D. Martina, S. Popoff, et al., “Imaging with nature: Compressive imaging using a multiply scattering medium,” Scientific Reports, vol. 4, 2014.
 [3] J. Wilson and N. Patwari, “Radio tomographic imaging with wireless networks,” IEEE Transactions on Mobile Computing, vol. 9, no. 5, pp. 621–632, 2010.
[4] T. Matsuda, K. Yokota, K. Takemoto, S. Hara, F. Ono, K. Takizawa, and R. Miura, “Multidimensional wireless tomography using tensor-based compressed sensing,” Wireless Personal Communications, vol. 96, no. 3, pp. 3361–3384, 2017.
 [5] Y. Mostofi, “Compressive cooperative sensing and mapping in mobile networks,” IEEE Transactions on Mobile Computing, vol. 10, no. 12, pp. 1769–1784, 2011.
[6] Q. Li, D. Schonfeld, and S. Friedland, “Generalized tensor compressive sensing,” in IEEE Int. Conf. on Multimedia and Expo (ICME). IEEE, 2013, pp. 1–6.
[7] X.Y. Liu and X. Wang, “Fourth-order tensors with multidimensional discrete transforms,” arXiv preprint arXiv:1705.01576, 2017.
[8] J.J. Lim, H. Pirsiavash, and A. Torralba, “Parsing IKEA objects: Fine pose estimation,” in Int. Conf. on Computer Vision, 2013.
[9] F. Adib, Z. Kabelac, D. Katabi, and R.C. Miller, “3D tracking via body radio reflections,” in Networked Systems Design and Implementation, 2014, vol. 14, pp. 317–329.
[10] X.Y. Liu, S. Aeron, V. Aggarwal, X. Wang, and M.Y. Wu, “Adaptive sampling of RF fingerprints for fine-grained indoor localization,” IEEE Transactions on Mobile Computing, vol. 15, no. 10, pp. 2411–2423, 2016.
 [11] L. Kong, M. Xia, X.Y. Liu, G. Chen, Y. Gu, M.Y. Wu, and X. Liu, “Data loss and reconstruction in wireless sensor networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 11, pp. 2818–2828, 2014.

[12] P. Jain, P. Netrapalli, and S. Sanghavi, “Low-rank matrix completion using alternating minimization,” in Proc. ACM Symposium on Theory of Computing. ACM, 2013, pp. 665–674.
[13] X.Y. Liu, S. Aeron, V. Aggarwal, and X. Wang, “Low-tubal-rank tensor completion using alternating minimization,” arXiv preprint arXiv:1610.01690, 2016.
[14] D.F. Gleich, C. Greif, and J.M. Varah, “The power and Arnoldi methods in an algebra of circulants,” Numerical Linear Algebra with Applications, vol. 20, no. 5, pp. 809–831, 2013.
[15] X.Y. Liu, Y. Zhu, L. Kong, C. Liu, Y. Gu, A.V. Vasilakos, and M.Y. Wu, “CDC: Compressive data collection for wireless sensor networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 8, pp. 2188–2197, 2015.