Application of backpropagation neural networks to both stages of fingerprinting based WIPS

03/14/2017 ∙ by Caifa Zhou, et al. ∙ ETH Zurich

We propose a scheme to employ backpropagation neural networks (BPNNs) for both stages of fingerprinting-based indoor positioning using WLAN/WiFi signal strengths (FWIPS): radio map construction during the offline stage, and localization during the online stage. Given a training radio map (TRM), i.e., a set of coordinate vectors and associated WLAN/WiFi signal strengths of the available access points, a BPNN can be trained to output the expected signal strengths for any input position within the region of interest (BPNN-RM). This can be used to provide a continuous representation of the radio map and to filter, densify or decimate a discrete radio map. Correspondingly, the TRM can also be used to train another BPNN to output the expected position within the region of interest for any input vector of recorded signal strengths and thus carry out localization (BPNN-LA). Key aspects of the design of such artificial neural networks for a specific application are the selection of design parameters like the number of hidden layers and nodes within the network, and the training procedure. Summarizing extensive numerical simulations, based on real measurements in a testbed, we analyze the impact of these design choices on the performance of the BPNN and compare the results in particular to those obtained using the k nearest neighbors (kNN) and weighted k nearest neighbors (WkNN) approaches to FWIPS.







I Introduction

As key requirements of context awareness and pervasive computing, indoor location based services (ILBSs) as well as the systems to provide indoor positioning have attracted much attention from both academia and industry over the last two decades [1]. Their predicted market value is up to 2.5 billion dollars by 2020 [2]. Various indoor positioning systems (IPSs) based on different signals, for instance WLAN/WiFi [3], Bluetooth, radio frequency identification (RFID) [4], light [5], magnetic field [6], ultra-wide band (UWB) [7] and ultrasound/acoustic sound [8, 9], have been investigated as alternatives to global navigation satellite systems (GNSSs), which are unavailable or too inaccurate in indoor environments [10].

The application of WLAN/WiFi signals has attracted continuous attention due to the widespread deployment of WLANs and the availability of WiFi enabled mobile devices. From this perspective, WLAN/WiFi based IPSs (WIPSs) are cost-effective because they often require neither additional infrastructure nor dedicated hardware for the purpose of positioning. Fingerprinting based localization is a very promising positioning approach for IPSs because it also works if there is no line-of-sight (LoS) signal propagation between the access points (APs) and the receivers. Methods based on trilateration and triangulation depend on the availability of LoS signals and are negatively affected by non-LoS signal propagation, which is common within buildings. In this paper, we thus focus on fingerprinting based WIPS (FWIPS).

Generally, an FWIPS involves two stages: a site survey offline stage and a user positioning online stage. In the offline stage, the site survey is conducted to create the radio map (RM), which represents the expected WLAN/WiFi signal strength for all locations within the region of interest (RoI). Often the survey consists of sampling the received signal strength (RSS) from all visible APs at given reference points (RPs) with known locations within the RoI. The collection of all RPs and the corresponding RSS vectors is stored in a fingerprinting database. The raw data in the fingerprinting database are then converted into the radio map which is used for online positioning. In this paper the original RM is just a set of RP coordinates and the respective measured RSS values without any mapping or filtering. During the online stage, a user measures the RSS vector and matches it to the RM using some defined similarity metric in the signal space (e.g., Euclidean distance) under the general assumption that the location of the user is embedded in the readings of RSS. In a simple approach, the points whose RSS within the RM are the most "similar" to the user's RSS values are utilized to estimate the user location (e.g., using the k nearest neighbor (kNN) algorithm). The main bottleneck which constrains the widespread commercial application of FWIPS is the heavy workload to build the RM for a large area (e.g., an entire airport or a big mall) and to keep the RM up-to-date [10].

Apart from the separate and time-consuming manual collection of RSS values at known positions, two other methods for obtaining the RM are available: unsupervised fingerprinting and partial fingerprinting [11]. For unsupervised fingerprinting the RM is created by employing an indoor propagation model of radio waves to predict the RSS values within the RoI. This requires accurate information about the structure (e.g., floors, walls, windows and doors) of the building, the materials (e.g., concrete, wood, metal and glass) used for the respective structures, as well as the position and configuration of the APs (e.g., power, gain of the antenna and protocols). This approach thus requires a labor intensive site survey or detailed building plans and assumptions, and it performs poorly if the assumptions are invalid or any of the parameters change.

Partial fingerprinting utilizes crowdsourcing to improve the efficiency of RM construction and updating [10, 12, 13]. Depending on the degree of user participation there are three types: explicit crowdsourcing-based RSS collection, implicit crowdsourcing-based RSS collection, and partially-labeled fingerprinting. The first two require users to report their locations by marking them manually on a digital map whenever a vector of sampled RSS values is stored or uploaded for RM generation. The crowdsourced RSS readings are used directly in the first method, while they are filtered or combined with other resources in the second. The third method involves less active participation: the users only need to agree that RSS values and location information are shared by their mobile device, but do not need to manually identify their location on a map. For example, [13] proposed an approach that reports the RSS values of APs along with the sampling position estimated automatically from the data of the built-in inertial sensors of the respective mobile device. These crowdsourcing based approaches are less labor intensive (for the provider of the positioning service), but their performance is limited by uncertainties introduced through (i) the use of different devices, (ii) manual position indication by the users, or (iii) location estimation with the built-in inertial sensors.

Another popular approach is to construct the radio map quickly by densifying or adapting a sparse radio map comprising only few originally measured RPs and their associated RSS values. In [14] and [15] the authors investigated infrastructure based approaches which require deploying specific hardware for RSS monitoring. The RM is then constructed or updated to match the RSS values observed at stationary monitoring points. The requirement of extra installations for RSS monitoring is in contrast to the potential advantage of FWIPS that the existing WLAN/WiFi infrastructure can be used without need for additional deployment. Non-infrastructure based methods may be used to infer the updated radio map via transfer learning algorithms (e.g., compressive sensing, ℓ1-minimization, manifold alignment) using the sparse radio map or crowdsourced RSS readings and the assumption that nearby positions have more similar RSS readings than those far away [16, 17]. Both approaches have been investigated in the literature. The focus, however, was only on building the RM during the offline stage; they were rarely applied to the online stage for location estimation at the same time [10].

In this paper, we propose a scheme to employ backpropagation neural networks (BPNNs) to learn the mapping between RP coordinates and RSS vectors for both stages of FWIPS. BPNNs, widely used in machine learning, have so far been applied to indoor location estimation in optical, RFID, WLAN/WiFi and dead reckoning based IPSs [18, 19, 20, 21, 22]. In this paper, the BPNN is not only applied to indoor localization (BPNN based localization, BPNN-LA), but also to fast radio map construction starting from a sparse training radio map (TRM) (BPNN based radio map construction, BPNN-RM). Employing BPNNs for both stages of FWIPS, especially for RM construction with low workload, has hardly been investigated in the literature so far. We investigate herein the performance of the proposed scheme compared to two popular fingerprinting localization algorithms (FLAs), kNN and weighted kNN (WkNN). Additionally, we analyze the impact of various choices of BPNN design parameters via numerical simulations and derive proposals regarding these choices.

The structure of the remainder of the paper is as follows: the principles of an FWIPS are described in Section II. In Section III the definitions of a BPNN as well as the proposed scheme of employing BPNNs for FWIPS are presented. An experimental analysis of the performance of the proposed scheme is presented in Section IV.

II Principles of FWIPS

In this section, we give more details on the definitions of an FWIPS, including the deployment of RPs, RSS collection and performance evaluation. A typical FWIPS consists of two stages, offline and online, as shown in Fig. 1. During the offline stage, the data required to construct the RM are collected within the RoI covered by WLAN/WiFi signals. The radio map is then employed together with RSS measurements recorded by the user device to estimate the user's location via FLAs within the online stage.

II-A Offline Stage

If the RoI is covered by a sufficient number of APs, distributed spatially such that several of them are available at any position the user device may occupy within the RoI, no modifications are necessary. Should there be too few APs for positioning, additional APs have to be installed as radio sources for the WIPS. In this paper we assume that the signals of the APs can be received throughout the RoI. For the sake of simplicity we assume herein that the RoI is rectangular and the APs are regularly distributed across the RoI, as visualized in the schematic map given in Fig. 2.

Fig. 1: Overview of an FWIPS
Fig. 2: Schematic arrangement of points involved in the deployment, validation and use of an FWIPS

A number N_RP of known locations in the area are selected as RPs. We collect their coordinates in the matrix P, whose j-th column p_j is the column vector of coordinates of the j-th RP (e.g., p_j = [x_j, y_j]^T in a 2D scenario). In this paper, we use the grid size (see Fig. 2) of the rectangular arrangement of RPs as a measure of the amount of reference data to be provided during the offline stage and thus as a measure of workload and cost. The smaller the grid size, the higher the workload to construct the RM, but the better the anticipated positioning accuracy.

At all the RPs the RSS are sampled and associated with the respective APs using the data extracted from the beacon frames. The results are stored in the matrix S, where each of the N_RP columns contains the recorded RSS values of the L visible APs, i.e., one fingerprint, and each column is associated with one RP. If the coordinates of the RPs are not known and these points are not (yet) marked visibly in the physical space, the coordinates need to be determined along with the recording of the RSS measurements. This can be achieved by employing a suitable positioning technology (e.g., a multi-sensor system involving inertial sensors, or a total station). The coordinates are then again assumed as known. The sampling results and the known RP coordinates can then be combined to represent the original radio map (ORM) with the defined grid size.

II-B Online Stage

During the online stage, an L-dimensional RSS vector s is measured at an unknown location u by a user who requests the positioning service. The aim is to calculate an estimate of u from s and the RM using an FLA. More details on kNN and WkNN, the two FLAs selected for the performance analysis, are presented in the following subsections. Herein we will carry out a performance analysis by actually measuring s at known or independently measured locations such that the error of the positions derived from s can be assessed. We assume that such measurements are carried out at M locations. We collect the measured RSS in a matrix and the corresponding positions in another matrix. These two matrices represent the validation dataset (VDS).

II-B1 kNN

kNN is a method which is widely used in the field of machine learning for classification and clustering. With a selected number k of nearest neighbors, kNN works in two steps:

  • Step 1: find the k nearest neighbors in the RSS space by computing the Euclidean distance between s and the RSS vectors within the RM. Mathematically, the subset N_k of nearest neighbors is determined by the condition:

    $$\mathcal{N}_k = \operatorname*{arg\,min}_{\mathcal{N}:\,|\mathcal{N}|=k}\ \sum_{i\in\mathcal{N}} \|\mathbf{s}-\mathbf{s}_i\|_2 \qquad (1)$$

    where |N| indicates the number of elements of the set. The corresponding RP locations are collected in the matrix P_k.

  • Step 2: estimate the user location as the average of these k locations:

    $$\hat{\mathbf{u}} = \frac{1}{k}\sum_{i\in\mathcal{N}_k} \mathbf{p}_i \qquad (2)$$
To evaluate the performance of positioning, the error radius r, shown in Fig. 2, is defined as the Euclidean distance between the estimated location and the ground truth location of the user:

$$r = \|\hat{\mathbf{u}} - \mathbf{u}\|_2$$
For a statistical analysis we will later also use the mean and standard deviation of the error radii, i.e., r̄ and σ_r, derived from all M testing points in the VDS:

$$\bar{r} = \frac{1}{M}\sum_{m=1}^{M} r_m, \qquad \sigma_r = \sqrt{\frac{1}{M-1}\sum_{m=1}^{M}\left(r_m - \bar{r}\right)^2}$$
II-B2 WkNN

WkNN differs from kNN only with respect to (2). Instead of the arithmetic mean, WkNN uses a weighted mean with the respective inverse of the Euclidean distance in the signal space as weight:

$$\hat{\mathbf{u}} = \frac{\sum_{i\in\mathcal{N}_k} w_i\,\mathbf{p}_i}{\sum_{i\in\mathcal{N}_k} w_i}, \qquad w_i = \frac{1}{\|\mathbf{s}-\mathbf{s}_i\|_2}$$
To determine an appropriate number k we employ the method typically used in the field of machine learning as given in [23]. Correspondingly, the upper bound of k is ⌊√N_RP⌋ (where ⌊·⌋ returns the maximum integer less than or equal to its argument). The concrete choice of k will be discussed later in Section IV-B.
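The two steps of kNN/WkNN and the error radius can be sketched as follows. This is an illustrative implementation, not the authors' code; the radio map is represented as a list of (position, fingerprint) pairs, and the small epsilon guarding against a zero signal-space distance is an added assumption:

```python
import math

def knn_locate(rss, radio_map, k, weighted=False):
    """Estimate a position from an online RSS vector.

    radio_map: list of (position, rss_vector) pairs, i.e., the RM.
    weighted=False -> kNN  (arithmetic mean of the k RP positions),
    weighted=True  -> WkNN (inverse signal-space distance as weight).
    """
    # Step 1: k nearest neighbors by Euclidean distance in signal space.
    nearest = sorted((math.dist(rss, fp), pos) for pos, fp in radio_map)[:k]
    # Step 2: (weighted) average of the corresponding RP coordinates.
    if weighted:
        # epsilon guards against an exact fingerprint match (zero distance)
        weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    else:
        weights = [1.0] * k
    total = sum(weights)
    dim = len(nearest[0][1])
    return tuple(sum(w * pos[i] for w, (_, pos) in zip(weights, nearest)) / total
                 for i in range(dim))

def error_radius(estimate, truth):
    """Euclidean distance between estimated and ground-truth location."""
    return math.dist(estimate, truth)
```

With a toy four-point radio map, the unweighted estimate is the centroid of the two closest RPs, while the weighted estimate is pulled towards the RP whose fingerprint is closest in signal space.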

III BPNN and their application to FWIPS

An artificial neural network (ANN) mimics the learning process of biological neurons. Technically, it transforms input data to output data via interconnected neurons. The key aspects of ANN design and operation are (i) the structure in terms of the nodes, layers and activation functions, and (ii) the learning algorithm. We first present these two concepts herein. Then we discuss the particular training of the ANN that makes it a BPNN. Finally, we present a general scenario for applying BPNNs to WIPS, including RSS sampling, BPNN-LA and BPNN-RM.

III-A Design elements of an ANN

(a) A node of an ANN
(b) The basic structure of an ANN
Fig. 3: Schematic view of a single node and the layered structure of a BPNN

III-A1 Nodes of the ANN

A node is the elementary unit of an ANN. The node works as shown in Fig. 3(a). The node takes a column vector x (from the input data or the preceding layer) as input, multiplies it with the vector of weights w, and adds a scalar bias b. The result of these operations is used as the argument of a so-called activation function f. The evaluation of f, i.e., f(w^T x + b), is the output of the node and represents, together with the outputs of the other nodes of the same layer, the input of the subsequent layer or the output of the ANN. The properties of each node are thus determined by its activation function, weights and bias.
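A single node can be sketched in a few lines; the function names are illustrative, and the logistic sigmoid is used as an example activation:

```python
import math

def sigmoid(z):
    """Logistic sigmoid, one common choice of activation function."""
    return 1.0 / (1.0 + math.exp(-z))

def node_output(x, w, b, activation=sigmoid):
    """One node: the weighted input w^T x plus bias b, passed through the activation."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return activation(z)
```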

III-A2 Layers of the ANN

The nodes are arranged into three types of layers: the input layer, hidden layers and the output layer. The input layer transforms the general input into the space spanned by its nodes. This dimension transformation depends on the specific application and the design of the ANN; there is no dimension transformation in the input layer in this paper. With a given activation function f_0 (e.g., sigmoid or linear) of the input layer, the useful input domain of f_0 is often limited, so we apply a normalizer to transform any range of the input vector components to the domain of f_0. With the activation functions chosen herein this domain is the interval [-1, 1], such that the normalizer maps each component of the input x into that interval via an affine transformation consisting of a scaling A and a translation t. The elements of the diagonal matrix A and of the vector t are determined by the range of the input data. The input to the first hidden layer of the ANN is thus calculated by:

$$\mathbf{x}^{(1)} = f_0\left(\mathbf{A}\mathbf{x} + \mathbf{t}\right)$$
All nodes of a specific layer share the same activation function but have different weights and biases. The output of the input layer is the input to each node of the first hidden layer.

The required number of hidden layers depends on the application, especially on the non-linearity of the relation between input and output [24]. Generally, there can be several hidden layers with a different number of nodes in each of them. From the training and convergence perspective, the number of hidden layers should not be too big, especially in the application to FWIPS [24]. Usually there is just one hidden layer [25]. We will later analyze the performance with up to 3 hidden layers with up to 30 nodes each.

As for the output layer, the number of nodes equals the dimension of the output. Excluding the input layer, there are T layers in total. We denote the activation function, the weights and the biases of layer t (t = 1, ..., T) as f_t, W^(t) and b^(t), respectively. In the cases of positioning and of radio map construction, the number of output nodes equals the dimension of the coordinates and the number of available APs, respectively, i.e., 2 and 8 in this paper. The basic structure of the ANN is presented in Fig. 3(b). The design parameters that influence the performance of the ANN are the type of activation function, the number of hidden layers, the numbers of nodes in the hidden layers, and the weights and biases of the nodes. In this paper, we use the sigmoid function and a linear function as the activation functions for the hidden layers and the output layer, respectively. Formally, the output y as shown in Fig. 3(b) is:

$$\mathbf{y} = f_T\left(\mathbf{W}^{(T)} f_{T-1}\left(\cdots f_1\left(\mathbf{W}^{(1)}\mathbf{x}^{(1)} + \mathbf{b}^{(1)}\right)\cdots\right) + \mathbf{b}^{(T)}\right)$$
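The layered evaluation with the affine normalizer, sigmoid hidden layers and a linear output layer can be sketched as below; layer sizes, ranges and the nested-list weight representation are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def normalize(x, lo, hi):
    """Affine map (scaling + translation) of each component from [lo_i, hi_i] to [-1, 1]."""
    return [2.0 * (xi - l) / (h - l) - 1.0 for xi, l, h in zip(x, lo, hi)]

def forward(x, layers):
    """Evaluate the network; each layer is (W, b, activation) with W as a list of rows."""
    a = x
    for W, b, act in layers:
        a = [act(sum(wij * aj for wij, aj in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a
```

For instance, a 1-hidden-layer net with one sigmoid node and one linear output node maps the normalized midpoint of the input range through sigmoid(0) = 0.5.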
III-A3 Training of the ANN

The purpose of the training is to determine the weights and biases such that the error is minimized using the training data set while the activation functions, number of hidden layers and numbers of nodes within each layer are fixed. Backward error propagation [26] is an established approach to efficiently carry out this optimization. As for the implementation of training the ANN, a given radio map will be divided arbitrarily into three datasets: training, validation and testing dataset. The training dataset is used to update the weights and biases, the validation set is employed to check the mean square error (MSE) of the output with the updated weights and biases, and the testing data set is used for quality control after completion of the training.

The training process stops when certain conditions are fulfilled. In the ANN implementation used herein, three conditions are checked as shown in Table I, and training stops if any of them is fulfilled: (i) the MSE calculated from the validation dataset is no more than the maximum admissible error; (ii) the number of training epochs¹ reaches the maximum admissible number of epochs; (iii) the MSE calculated from the validation dataset increases continuously over more than the maximum admissible number of epochs with failed validation. The weights and biases as of the stopping epoch are selected as the training result in the first two cases, and those of the epoch at which the MSE starts increasing in the third case.

¹ Here we use the term 'epoch' instead of 'iteration' to indicate the training steps because each step typically includes a batch of training points, and each training point requires one iteration.

After stopping the training process, the testing dataset is applied. If the MSE of the resulting output is comparable to the one from the validation dataset, the training process of the ANN is finished. Otherwise the trained ANN is treated as unreasonable because of the inconsistency of the MSE between validation and testing dataset. Such an inconsistency is caused by an inappropriate division of the available data into the three datasets. In this case the training of the ANN is repeated with a different partitioning of the data until the MSE of the trained ANN is consistent for both the validation and the testing dataset. The weights and biases of all neurons are initialized randomly when starting the training; this is the established method [26]. We will later evaluate the influence of this random initialization on the performance of the ANN in Section IV.
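The three stopping conditions can be sketched as follows; the Table I values are used as defaults, while the history-based bookkeeping and the returned "epoch to keep" are illustrative assumptions:

```python
def should_stop(epoch, val_mse_history, max_epochs=1000, max_error=0.25, max_fails=6):
    """Check the three stopping conditions after each training epoch.

    val_mse_history: validation MSE for epochs 0..epoch.
    Returns (stop, keep_epoch): keep_epoch is the epoch whose weights are kept --
    the current one for conditions (i)/(ii), the epoch before the validation
    MSE started rising for condition (iii).
    """
    if val_mse_history[-1] <= max_error:   # (i) validation MSE small enough
        return True, epoch
    if epoch >= max_epochs:                # (ii) epoch budget exhausted
        return True, epoch
    # (iii) validation MSE rose over more than max_fails consecutive epochs
    fails = 0
    for prev, cur in zip(val_mse_history, val_mse_history[1:]):
        fails = fails + 1 if cur > prev else 0
    if fails > max_fails:
        return True, epoch - fails
    return False, epoch
```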

Max. # epochs               1000   1000
Max. error                  0.25      1
Max. # failed validations      6      6
(#: number of)
TABLE I: Training conditions

III-B Chain rule for gradient descent optimization of BPNN

To compute the optimal weights and biases of the nodes, gradient descent is applied to minimize the squared training error E. This optimization is carried out iteratively. The weights and biases of epoch e+1 depend on those of the previous epoch e and on the gradient of the error:

$$\mathbf{W}^{(t)}_{e+1} = \mathbf{W}^{(t)}_{e} - \eta\,\nabla_{\mathbf{W}^{(t)}} E, \qquad \mathbf{b}^{(t)}_{e+1} = \mathbf{b}^{(t)}_{e} - \eta\,\nabla_{\mathbf{b}^{(t)}} E$$

where t is the index of the layer, η the learning rate, and ∇_W E and ∇_b E are the gradients of E in the weight space and the bias space of the t-th layer, respectively. Assuming that the inputs to the activation functions of the t-th layer and their outputs are

$$\mathbf{z}^{(t)} = \mathbf{W}^{(t)}\mathbf{a}^{(t-1)} + \mathbf{b}^{(t)}, \qquad \mathbf{a}^{(t)} = f_t\!\left(\mathbf{z}^{(t)}\right)$$

respectively, the chain rule yields the gradients w.r.t. W^(t) and b^(t):

$$\nabla_{\mathbf{W}^{(t)}} E = \frac{\partial E}{\partial \mathbf{z}^{(t)}}\left(\mathbf{a}^{(t-1)}\right)^{\mathsf T}, \qquad \nabla_{\mathbf{b}^{(t)}} E = \frac{\partial E}{\partial \mathbf{z}^{(t)}}$$

The common first term on the right side, ∂E/∂z^(t), is redefined as δ^(t). The weights and biases of the t-th layer are then updated according to:

$$\mathbf{W}^{(t)}_{e+1} = \mathbf{W}^{(t)}_{e} - \eta\,\boldsymbol{\delta}^{(t)}\left(\mathbf{a}^{(t-1)}\right)^{\mathsf T}, \qquad \mathbf{b}^{(t)}_{e+1} = \mathbf{b}^{(t)}_{e} - \eta\,\boldsymbol{\delta}^{(t)}$$

According to the chain rule, δ^(t) can be calculated from the corresponding vector of the subsequent layer:

$$\boldsymbol{\delta}^{(t)} = \mathbf{D}^{(t)}\left(\mathbf{W}^{(t+1)}\right)^{\mathsf T}\boldsymbol{\delta}^{(t+1)}$$

where D^(t) is the diagonal matrix of the derivatives of the activation functions with respect to z^(t), i.e., D^(t) = diag(f_t'(z^(t))). In this way, gradient descent learning works by backpropagation: the error terms propagate backwards from the output layer, δ^(T) → δ^(T-1) → ... → δ^(1). To sum up, each epoch of BPNN training works via forward propagation of the inputs,

$$\mathbf{z}^{(t)} = \mathbf{W}^{(t)}\mathbf{a}^{(t-1)} + \mathbf{b}^{(t)}, \qquad \mathbf{a}^{(t)} = f_t\!\left(\mathbf{z}^{(t)}\right), \qquad t = 1,\dots,T,$$

followed by backward propagation of the error terms δ^(t) and the corresponding weight and bias updates.
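The forward and backward propagation, specialized to one hidden layer with sigmoid activation and a linear output (matching the activation choices above), can be sketched as follows. This is an illustrative per-sample gradient-descent sketch, not the authors' implementation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_epoch(W1, b1, W2, b2, data, lr):
    """One gradient-descent epoch for a 1-hidden-layer net (sigmoid + linear).

    Forward:  z1 = W1 x + b1, a1 = sigmoid(z1), y = W2 a1 + b2
    Backward: delta2 = y - t                       (linear output, squared error)
              delta1 = diag(sigmoid'(z1)) W2^T delta2
    Weights and biases are updated in place with learning rate lr.
    """
    for x, t in data:
        # forward propagation
        z1 = [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(W1, b1)]
        a1 = [sigmoid(z) for z in z1]
        y  = [sum(w * ai for w, ai in zip(row, a1)) + b for row, b in zip(W2, b2)]
        # backward propagation of the error terms
        d2 = [yi - ti for yi, ti in zip(y, t)]
        d1 = [a * (1 - a) * sum(W2[j][i] * d2[j] for j in range(len(d2)))
              for i, a in enumerate(a1)]
        # gradient-descent updates: w <- w - lr * delta * input, b <- b - lr * delta
        for j in range(len(W2)):
            for i in range(len(a1)):
                W2[j][i] -= lr * d2[j] * a1[i]
            b2[j] -= lr * d2[j]
        for i in range(len(W1)):
            for m in range(len(x)):
                W1[i][m] -= lr * d1[i] * x[m]
            b1[i] -= lr * d1[i]
```

Running a few hundred epochs on a small synthetic regression problem reduces the training MSE, which is the behavior the stopping criteria of Table I monitor on the validation set.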
III-C BPNN based radio map construction & localization

On the basis of BPNN we propose an algorithm for radio map construction and indoor localization. The systematic view of the proposed approach is presented in Fig. 4.

Fig. 4: Systematic view of the proposed algorithms

III-C1 ORM generation module

At given RPs a surveyor uses the sampling device (e.g. a mobile phone) to collect the RSS from all available APs within the RoI. In this process, the grid size is relatively large to keep the workload low. The coordinates of the RPs and the corresponding RSS vectors are stored in a table which represents the ORM. In order to mitigate the measurement noise, the measurements can be filtered before storing the discrete representation of the spatially continuous signal strength fields as ORM. In the later experiments we will only reduce the impact of noise by averaging multiple RSS measurements taken at each RP within a short time interval.

III-C2 BPNN-RM training & generalization module

This module consists of two parts: training of the BPNN and RM generation using the trained BPNN. The module is invoked if (i) a denser discrete representation of the RM is required than the one available as ORM, or (ii) a continuous representation of the RM is required such that the (expected) signal strength of any AP and, if need be, also the corresponding spatial derivatives can be calculated for any location within the RoI. We subsequently denote such an RM, derived using the BPNN, as the reconstructed radio map (RRM). The coordinates of the RPs stored as part of the ORM are normalized and then used as the input data for the estimation of the optimal weights and biases according to the algorithm described in the previous section. The signal strengths corresponding to these inputs in the ORM are the training targets for this BPNN. The BPNN-RM is trained according to the process presented in Section III-A3.

The RRM generation is the process of generalizing the trained BPNN. Given a specific desired grid size of the RRM a set of corresponding coordinates within the RoI is generated and normalized. The normalized coordinates are the input to the trained BPNN-RM. Combining the output vectors with the above input coordinates yields the RRM.
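The generalization step can be sketched as a grid query of the trained network. Here `trained_bpnn_rm` and `normalizer` are placeholder callables standing in for the trained BPNN-RM and the offline normalizer; the rectangular-grid construction is an assumption matching the schematic RoI of Fig. 2:

```python
def generate_rrm(trained_bpnn_rm, normalizer, x_range, y_range, grid_size):
    """Query the trained BPNN-RM on a regular grid of the desired grid size.

    Returns the RRM as a list of (position, predicted_rss_vector) pairs.
    """
    rrm = []
    nx = int(round((x_range[1] - x_range[0]) / grid_size)) + 1
    ny = int(round((y_range[1] - y_range[0]) / grid_size)) + 1
    for i in range(nx):
        for j in range(ny):
            pos = (x_range[0] + i * grid_size, y_range[0] + j * grid_size)
            # normalized coordinates in, predicted RSS vector out
            rrm.append((pos, trained_bpnn_rm(normalizer(pos))))
    return rrm
```

Halving the grid size roughly quadruples the number of RRM points, which is the densification effect exploited to keep the measured TRM sparse.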

III-C3 BPNN-LA training & generalization module

This module includes two parts: training and generalization (i.e., applying the trained BPNN to localization). During the training, the normalizer transforms the RSS to the interval [-1, 1] since the activation functions of the hidden layers are sigmoid in this paper. The normalized RSS vectors and the corresponding RP locations are the training input and training target, respectively. The constraints shown in Table I are used as criteria during the training. With a given training dataset, the training of BPNN-LA follows the procedures in Section III-A3. The trained BPNN is saved for the generalization within the online stage.

IV Experimental Performance Analysis

IV-A Testbed

In this section, we test our proposed approach using real measurements from the floor of a building at Harbin Institute of Technology, depicted in Fig. 5². There are 8 APs in the experimental area, which are attached stably to the wall at a height of 2 m above the floor. RSS were recorded at points arranged in a regular grid, yielding an ORM with a grid size of 0.25 m; it is subsequently referred to simply as the ORM. Some of these points were later used as RPs for positioning or RM generation, others for testing only. For the former purpose a training radio map (TRM) with larger grid size was then obtained by down-sampling from the ORM. The sampling and preprocessing of the RSS values are described in [27].

² The dataset was created while the first author was a master student with the Communication Research Center, Harbin Institute of Technology, Harbin, P.R. China.

IV-B BPNN based indoor localization

In this section an experimental analysis of the quality of localization using BPNN and of the related design parameters is presented. All the following simulations are carried out using MATLAB R2015a on Euler, a high performance computing cluster of ETH Zurich. First, an example is given to show how to determine the locally optimal number of layers and nodes with a given TRM (grid size). Then a table shows all the locally optimal parameters w.r.t. the mean error radius for various TRM grid sizes as well as for different numbers of hidden layers. A detailed analysis of parameter selection, computational complexity and cumulative positioning error is presented afterwards. Furthermore, since the weights and biases of the BPNN are initialized randomly, the simulations are carried out 100 times using the same design parameters in order to also take the influence of the random initialization into account. For this purpose we collect the means and standard deviations of the error radii resulting from each of the 100 simulations in the vectors r̄ and σ. We define the uncertainty due to random initialization as the standard deviation of r̄ and of σ:

$$u_{\bar r} = \sqrt{\tfrac{1}{99}\sum_{i=1}^{100}\left(\bar r_i - \mathrm{mean}(\bar{\mathbf{r}})\right)^2}, \qquad u_{\sigma} = \sqrt{\tfrac{1}{99}\sum_{i=1}^{100}\left(\sigma_i - \mathrm{mean}(\boldsymbol{\sigma})\right)^2}$$

where mean(·) returns the mean value of its argument. As for the number k of nearest neighbors for kNN and WkNN, according to the rule cited in Section II-B we select it according to the number of RPs in the TRM. In this paper, there are 139 RPs within the ORM. Therefore, the maximal value of k is 11 for the grid size of 0.25 m and 3 in the case of a grid size of 9 m.
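The rule for the upper bound of k reads as follows in code, assuming the bound is the floor of the square root of the number of RPs (consistent with the numbers quoted above, since ⌊√139⌋ = 11):

```python
import math

def k_upper_bound(n_rps):
    """Rule of thumb for the number k of nearest neighbors: floor(sqrt(N_RP))."""
    return math.isqrt(n_rps)
```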

Fig. 5: Overview of the testbed

IV-B1 Locally optimal parameters of BPNN-LA for one TRM

To present how we determine the locally optimal parameters, i.e., the numbers of hidden layers and neurons, we take the TRM with grid size 0.25 m (i.e., the full ORM) as an example. With a given TRM, a BPNN with a specific number of hidden layers and neurons is employed to learn the mapping between the RSS vectors and the corresponding RP locations. The trained BPNN is then generalized to the VDS for performance evaluation. The parameters which achieve the minimal mean error radius (MER) are treated as the locally optimal ones according to Section III-C.

(a) Mean error of BPNN-LA with 1 hidden layer (HL1)
(b) Ratio of MER of BPNN-LA with 2 hidden layers (HL2)
Fig. 6: Error of BPNN-LA for the example TRM

In Fig. 6(a) we show the ratio of the MER of BPNN-LA to that of kNN and WkNN for different numbers of neurons. Since kNN and WkNN do not have neurons, the variation of the ratio in the figure is exclusively due to the variation of the quality of the BPNN solution. A ratio below 1 indicates that the BPNN solution is more accurate than the kNN or WkNN one. As shown in Fig. 6(a), the MER decreases quickly as the number of neurons increases from 1 to 11 for a BPNN with 1 hidden layer, and hardly changes any more if the number of neurons is increased further. The locally optimal number of neurons in this case of 1 hidden layer (HL1), i.e., the one yielding the minimal MER, is 21 in this example. However, taking the uncertainty of the MER into account, the plot shows that the MER is stable beyond about 11 neurons. In that range, the MER of BPNN-LA is also slightly smaller than the one obtained using kNN and WkNN, and the standard deviation of the error radius is almost independent of the number of neurons.

With the same process, we can obtain the locally optimal parameters for multiple hidden layers (MHLs). In Fig. 6(b) we present the ratio of the MER of BPNN-LA with 2 hidden layers (HL2) to that with 1 hidden layer (HL1), depending on the numbers of neurons of the two layers (white color indicates that the value is larger than 1.5). The MER is larger for HL2 than for HL1 for most combinations of numbers of neurons. Furthermore, we see that the MER of HL2 is almost independent of the number of neurons in the second hidden layer. We found similar results for even higher numbers of hidden layers and for other grid sizes. This indicates that a BPNN with 1 hidden layer is preferable for the application of BPNN to the online stage of FWIPS.

Fig. 7: Comparison of MER with locally optimal number of nodes of BPNN-LA for different TRM grid sizes

IV-B2 Locally optimal parameters of BPNN-LA for all TRMs

GS     ER of kNN (m)   ER of WkNN (m)   ER of HL1 (m)                   ER of HL2 (m)              ER of HL3 (m)
(m)    Mean    Std     Mean    Std      n    Mean        Std            n1,n2   Mean   Std         n1,n2,n3   Mean   Std
0.25   2.58    1.49    2.59    1.49     21   2.53±0.04   1.49±0.13      18,15   2.56   1.59        23,12,16   2.57   1.58
0.50   2.64    1.56    2.64    1.57     13   2.66±0.06   1.60±0.12      30,16   2.69   1.67        11,6,16    2.67   1.67
1.00   2.81    1.68    2.81    1.69      6   2.86±0.09   1.71±0.30       3,16   2.85   1.80        7,24,27    2.97   1.97
1.50   3.01    1.89    3.02    1.89      5   2.96±0.10   1.73±0.14        4,7   2.96   1.86        12,26,19   3.13   2.05
2.00   3.00    1.85    3.01    1.85      6   3.02±0.10   1.89±0.31       4,13   3.03   2.00        4,9,29     3.03   1.98
2.25   3.01    1.87    3.02    1.88      3   3.05±0.09   1.65±0.11      20,12   3.21   2.16        7,14,12    3.18   2.17
3.00   3.04    1.88    3.05    1.90      4   3.04±0.11   1.76±0.19       5,14   3.07   2.06        6,25,23    3.18   2.09
4.00   3.20    1.99    3.20    1.99      3   3.23±0.11   1.84±0.25       2,14   3.22   1.97        2,5,3      3.17   1.84
5.00   3.16    2.04    3.16    2.04      2   3.21±0.14   1.74±0.11        2,8   3.21   2.03        3,23,26    3.34   2.17
6.00   3.50    2.19    3.51    2.19      2   3.31±0.12   1.86±0.12        2,8   3.28   2.16        3,27,26    3.51   2.41
6.25   3.29    2.03    3.30    2.04      3   3.26±0.15   1.84±0.19       2,23   3.39   2.19        7,6,23     3.43   2.31
7.50   3.37    2.05    3.38    2.05      2   3.34±0.13   1.79±0.16        7,7   3.51   2.31        3,5,26     3.40   2.23
9.00   3.53    2.13    3.54    2.14      2   3.49±0.16   1.90±0.16        2,4   3.47   2.03        30,15,19   3.73   2.37
TABLE II: Locally optimal parameters of BPNN-LA for several TRMs (GS: grid size; ER: error radius; n: locally optimal numbers of neurons)
Fig. 8: Cumulative probability of positioning error for BPNN-LA

We report the locally optimal parameters of BPNN-LA with several different TRMs and 3 different numbers of hidden layers in Table II. For this analysis, we extracted 13 different TRMs from the ORM with grid sizes varying from 0.25 m to 9 m. In the table, we find several patterns: (i) with a given BPNN-LA, for example HL1, the locally optimal number of neurons grows with the number of points in the TRM: the larger the number of RPs in the TRM, the bigger the locally optimal number of neurons. This pattern is also shared by HL2 and HL3, especially for the locally optimal number of neurons of the last hidden layer. One explanation for this pattern is that a smaller grid size of the TRM preserves more information about the nonlinearity of the underlying RM, which is a key factor determining the required number of neurons [24]. (ii) With a specific TRM, HL1 achieves a smaller MER as well as standard deviation than HL2 and HL3. This pattern is caused by the increasing influence of the random initialization of the weights and biases with increasing number of hidden layers [24].

IV-B3 Influence of the grid size of the TRM

Comparing BPNN-LA with 3 different numbers of hidden layers, each using the locally optimal number of neurons, to NN and WNN in terms of the mean error radius, as shown in Fig. 7, we can conclude: (i) HL1 outperforms HL2 and HL3 for almost all analyzed grid sizes of the TRM. This tendency is consistent with the results reported in [25]. (ii) NN achieves a slightly smaller MER than WNN in this example. (iii) HL1 yields performance comparable to WNN for all grid sizes, and for particularly large grid sizes (larger than 6.25 m) HL1 performs slightly better than WNN.

IV-B4 Cumulative positioning accuracy of BPNN-LA

In Fig. 8, we present the positioning accuracy of BPNN-LA with 1 hidden layer and the respective locally optimal number of neurons for all tested TRMs, as well as that of NN and WNN, as empirical distribution functions. With the smallest grid size (0.25 m), BPNN-LA outperforms the other solutions and is even better than NN and WNN using the same TRM. For BPNN-LA HL1, about 68% of the errors are below 2.5 m. Over 99% of the estimated locations are within an error radius of 8 m, which is accurate enough for room-level positioning and ILBSs.
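The curves in Fig. 8 are empirical cumulative distribution functions of the error radii. A minimal sketch of how such a curve is obtained from a set of positioning errors (the error values below are made up for illustration):

```python
import numpy as np

def ecdf(errors, thresholds):
    """Fraction of positioning errors at or below each threshold (in metres)."""
    errors = np.asarray(errors)
    return np.array([(errors <= t).mean() for t in thresholds])

# Illustrative error radii (m); real values come from evaluating on the VDS.
errors = np.array([0.5, 1.2, 1.8, 2.4, 2.6, 3.1, 4.0, 5.5, 7.2, 9.0])
p = ecdf(errors, [2.5, 4.0, 8.0])   # cumulative probability at 2.5 m, 4 m, 8 m
```

Plotting `p` over a dense grid of thresholds reproduces the staircase-shaped curves of Fig. 8.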

IV-B5 Computational complexity of BPNN-LA

Using the dimension of the RSS vectors (L), the number of RPs (N), the number of hidden layers (H) and the number of neurons in hidden layer h (n_h), we can assess the computational complexity of the location estimation per request (i.e., one position required) during the online stage. For NN, the computational complexity is O(NL), since the query fingerprint has to be compared to every RP. For the generalization step of BPNN-LA, the computational complexity of one forward pass is O(L·n_1 + n_1·n_2 + … + n_{H-1}·n_H + 2·n_H), under the assumption that the evaluation of the activation functions is negligible. These two computational complexities are comparable; the latter is smaller in the case of a large number of RPs (i.e., N much larger than the number of neurons). Therefore, BPNN-LA also gains online computational efficiency in this case.
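The comparison above can be checked by counting multiplications; the example numbers (8 APs, 2000 RPs, 10 hidden neurons) are illustrative assumptions, not our testbed values.

```python
def nn_ops(N, L):
    # NN matching: distance evaluation against every reference point, O(N * L).
    return N * L

def bpnn_ops(L, hidden, out=2):
    # One forward pass: chain of matrix-vector products; activations ignored.
    sizes = [L] + list(hidden) + [out]
    return sum(a * b for a, b in zip(sizes, sizes[1:]))

# Example: 8 APs, a dense radio map with 2000 RPs, vs. a 1-hidden-layer BPNN-LA.
knn_cost = nn_ops(2000, 8)      # 2000 * 8 = 16000 multiplications
net_cost = bpnn_ops(8, [10])    # 8*10 + 10*2 = 100 multiplications
```

The BPNN cost is independent of N, which is why the advantage grows with the density of the radio map.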

IV-C BPNN based radio map construction

Now we assume that the BPNN is used to construct a reconstructed radio map (RRM) of a chosen grid size, starting from a given training radio map (TRM) of possibly different grid size, with the purpose of using the RRM for subsequent position estimation within an FWIPS. The underlying idea is that the TRM could result directly from sampling the RSS at a certain (moderate) number of RPs and could be converted into a denser radio map RRM which ideally yields higher accuracy of the estimated positions than the TRM. Higher accuracy could potentially even be obtained if the grid size of the RRM equals or exceeds that of the TRM. When using BPNN-RM for this radio map reconstruction, the accuracy of the finally obtained positions depends on the FLA, the quality of the measurements, the two grid sizes, the number of hidden layers, and the numbers of nodes within the hidden layers. In this section we investigate this relationship for NN and WNN, analyzing whether BPNN-RM can be used to increase the quality of the position estimation, in particular for the densification case, which would be attractive because it would help reduce the workload associated with radio map generation. Of course it is also possible to use BPNN instead of NN or WNN for location estimation, as discussed in the previous section; however, we do not focus on that combination here.
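The densification step can be sketched as follows: once a BPNN-RM (position in, expected RSS out) has been trained on the TRM, it is simply queried at every node of a finer grid. The helper names (`grid_points`, `reconstruct_rrm`) and the closed-form "trained model" standing in for a real BPNN-RM are hypothetical, chosen only to keep the sketch self-contained.

```python
import numpy as np

def grid_points(width, height, g):
    """Coordinates of a regular reference-point grid with spacing g (metres)."""
    xs = np.arange(0.0, width + 1e-9, g)
    ys = np.arange(0.0, height + 1e-9, g)
    gx, gy = np.meshgrid(xs, ys)
    return np.column_stack([gx.ravel(), gy.ravel()])

def reconstruct_rrm(model, width, height, g_r):
    """Query the trained BPNN-RM at every node of the grid with spacing g_r."""
    pts = grid_points(width, height, g_r)
    return pts, model(pts)               # (coordinates, predicted RSS vectors)

# Toy stand-in for a trained BPNN-RM: a smooth RSS field of 3 APs, with RSS
# decaying linearly with distance from each AP (purely illustrative).
model = lambda p: np.column_stack([-40.0 - 2.0 * np.hypot(p[:, 0] - ax, p[:, 1] - ay)
                                   for ax, ay in [(0, 0), (10, 0), (5, 10)]])

pts, rss = reconstruct_rrm(model, 10, 10, 2.0)   # densify to a 2 m grid
```

Replacing `2.0` by a smaller spacing yields a denser RRM from the same trained network at negligible extra surveying cost; that is the workload argument made below.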

We first analyze the situation for a small subset of free parameters and then generalize by calculating and discussing the locally optimal parameters for a variety of cases.

IV-C1 Locally optimal parameters for a TRM with 7.5 m grid size

Fig. 9: Mean error of NN using BPNN-RM with a 7.5 m grid size TRM (1 hidden layer)
| TRM GS (m) | RRM GS (m) | ER of NN: Mean | Std | n | ER with RRM (1 layer): Mean | Std | n1,n2 | (2 layers): Mean | Std | n1,n2,n3 | (3 layers): Mean | Std |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.25 | 0.25 | 2.58 | 1.49 | 18 | 3.19±0.15 | 2.16±0.19 | 29,28 | 2.92 | 1.81 | 28,27,23 | 2.88 | 1.76 |
| 1.00 | 1.00 | 2.81 | 1.68 | 28 | 2.83±0.08 | 1.56±0.11 | 26,12 | 2.79 | 1.55 | 16,28,11 | 2.80 | 1.56 |
| 2.00 | 2.00 | 3.00 | 1.85 | 26 | 2.90±0.08 | 1.62±0.08 | 9,7 | 2.93 | 1.72 | 6,25,23 | 2.90 | 1.67 |
| 4.00 | 4.00 | 3.20 | 1.99 | 10 | 3.08±0.13 | 1.63±0.09 | 7,20 | 3.23 | 1.66 | 2,17,9 | 3.30 | 1.76 |
| 6.00 | 6.00 | 3.50 | 2.19 | 15 | 3.29±0.16 | 1.72±0.07 | 3,23 | 3.35 | 1.75 | 2,29,6 | 3.43 | 1.84 |
| 9.00 | 9.00 | 3.53 | 2.13 | 11 | 3.38±0.21 | 1.74±0.10 | 2,30 | 3.61 | 1.89 | 20,22,2 | 3.55 | 1.79 |
| 0.25 | 4.00 | 2.58 | 1.49 | 19 | 3.03±0.06 | 1.76±0.11 | 30,16 | 2.95 | 1.61 | 12,26,16 | 2.94 | 1.59 |
| 1.00 | 4.00 | 2.81 | 1.68 | 19 | 3.00±0.08 | 1.64±0.08 | 30,11 | 2.94 | 1.63 | 12,10,16 | 2.93 | 1.62 |
| 2.00 | 4.00 | 3.00 | 1.85 | 25 | 3.04±0.11 | 1.61±0.07 | 14,8 | 3.00 | 1.64 | 17,7,10 | 3.02 | 1.66 |
| 6.00 | 4.00 | 3.50 | 2.19 | 6 | 3.37±0.27 | 1.77±0.15 | 3,23 | 3.56 | 1.87 | 2,29,6 | 3.53 | 1.85 |
| 9.00 | 4.00 | 3.53 | 2.13 | 2 | 3.62±0.50 | 1.92±0.35 | 2,6 | 3.42 | 1.80 | 5,11,13 | 3.61 | 1.91 |

TABLE III: Locally optimal parameters of BPNN-RM for selected combinations of TRM and RRM grid sizes (GS: grid size; ER: error radius; all errors in m)

The locally optimal parameters of BPNN-RM are determined using the same approach as for BPNN-LA above: for a given pair of grid sizes of TRM and RRM, a BPNN is trained with a given number of hidden layers and nodes; then the VDS (testing points and the corresponding RSS vectors) is used to calculate the positioning error at each testing point. Repeating this for a variety of parameters, the ones yielding the minimum MER are determined as the locally optimal ones. We first present an example with a TRM grid size of 7.5 m and 13 different reconstruction grid sizes, using BPNN-RM with 1 hidden layer whose number of neurons varies from 1 to 30. The results are visualized in Fig. 9.
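The selection step above is a plain grid search. A minimal sketch, in which the evaluation function is a made-up stand-in for "train BPNN-RM with n neurons, reconstruct the RRM, and compute the MER on the VDS":

```python
def locally_optimal(candidates, evaluate_mer):
    """Return the neuron count minimizing the mean error radius, and its MER."""
    results = {n: evaluate_mer(n) for n in candidates}
    best = min(results, key=results.get)
    return best, results[best]

# Hypothetical smooth MER landscape with its minimum at n = 11 neurons;
# in practice evaluate_mer would involve training and VDS evaluation.
mock_mer = lambda n: 3.0 + 0.002 * (n - 11) ** 2
best_n, best_mer = locally_optimal(range(1, 31), mock_mer)
```

Because training is stochastic, in our experiments each candidate is evaluated over repeated trainings and the mean MER is minimized.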

The two plots in Fig. 9 depict the MER (left) and the standard deviation (right) of the error radius obtained using NN (the results obtained using WNN instead of NN are virtually identical and are therefore not shown), as a function of the number of neurons and of the reconstruction grid size. They show that the optimal number of neurons to achieve the minimum MER is mostly consistent with the number yielding the minimum standard deviation. According to the MER in the figure, the locally optimal number of neurons for the shown densification case is 2 when using a BPNN-RM with 1 hidden layer.

IV-C2 Locally optimal parameters of BPNN-RM for a variety of grid sizes

Using extensive numerical simulations, as before, we have determined the locally optimal number of neurons for RRM generation, as judged by the MER after positioning with NN using the RRM. The results are given in Table III for selected pairs of grid sizes of the radio maps and for selected numbers of hidden layers. In the case that TRM and RRM have the same grid size, BPNN-RM with 1 hidden layer outperforms BPNN-RM with 2 or 3 hidden layers in terms of the MER for grid sizes from 2 m to 9 m. The location accuracy obtained using the RRM is also comparable to the one obtained using the TRM directly. In a few cases, the results are slightly better with a higher number of hidden layers; however, factoring in the uncertainty of the empirical results, the benefit is not significant. We therefore conclude that 1 hidden layer with an optimized number of nodes is sufficient.

IV-C3 Comparison of the performance related to TRM and RRM

Fig. 10: Comparison between ORM and various TRMs w.r.t. MER using NN

With the proposed BPNN-RM we expect to reduce the workload of the radio map construction while maintaining the positioning accuracy. We therefore present a comparison of the MER for several different grid sizes of TRM and RRM using NN in Fig. 10. We can draw several conclusions from the figure: (i) NN with RRM grid sizes from 0.5 to 5 m, trained from a TRM with 2 m grid size, achieves performance comparable to NN with an ORM of 0.25 m grid size. The maximal grid size of the TRM with which we obtained a comparable MER to an ORM of 0.25 m grid size is 2 m. This means that only 1/8 of the workload for radio map generation is required when reconstructing the radio map for NN using BPNN-RM instead of using the ORM directly for NN. (ii) Comparing the BPNN-RM results to directly using coarser TRMs, the reduction of MER is up to 10%, 20% and 40%, respectively. BPNN-RM with WNN leads to similar conclusions.

IV-C4 Cumulative error probability of NN and WNN using RRM

(a) Cumulative error probability for NN with RRM
(b) Cumulative error probability for WNN with RRM
Fig. 11: Cumulative probability of NN and WNN with RRM

In this section, we compare the cumulative error probability of the positions estimated using NN and WNN for 4 different TRMs and 6 RRMs which are reconstructed from them using the trained BPNN-RM. As shown in Fig. 11, we find that: (i) From 75% to over 85% of the errors are within 4 m. (ii) The positioning accuracy of both NN and WNN using the selected RRMs is higher than using the respective TRM with equal grid size when considering errors larger than 2.5 m. (iii) For the RRMs yielding the best performance with NN and WNN, 92% and 95% of the errors, respectively, are smaller than 4.5 m. This positioning accuracy is better than the one obtained using the densest radio map directly, i.e., a much denser radio map associated with a much higher workload for construction. (iv) BPNN-RM thus reduces the workload for creating the radio map by almost 90%, since it suffices to collect the data required for a much coarser TRM instead of the densest one while still obtaining better results by converting it into an RRM with a grid size of, e.g., 2.25 m. This improvement results from the capability of the BPNN to filter the noise in the measured signal strengths used for RM generation.

V Conclusion

We have proposed a scheme to apply BPNNs to both stages of FWIPS: BPNN-LA for localization in the online stage and BPNN-RM for radio map reconstruction in the offline stage. BPNN-LA with 1 hidden layer (HL1) outperforms NN, WNN and BPNNs with multiple hidden layers in terms of the mean error radius; 90% of the positioning errors are within 4 m using HL1 trained with the 0.25 m grid size radio map. BPNN-LA with multiple hidden layers (2 and 3 hidden layers analyzed herein) yielded a higher mean error radius than HL1. A trained BPNN-LA with one hidden layer is also computationally more efficient during the online stage than NN and WNN, especially in the case of a large number of reference points in the radio map.

We have also tested the benefit of BPNN-RM for converting an originally sampled radio map into a reconstructed radio map of possibly different grid size. In particular, the positioning errors after application of both NN and WNN have been analyzed. The reduction of the mean error radius attributable to RM reconstruction was found to be up to 40%. As for the workload required to build the RM, BPNN-RM reduces it by almost 90%, since it allows using a much coarser TRM instead of a dense ORM while still obtaining equal or even slightly better performance.

We expect that the results can be generalized to other fingerprinting based IPSs (e.g., IPSs based on Bluetooth or the magnetic field) and to WIPSs deployed in heterogeneous RoIs (e.g., airports and large malls). We will investigate this further by exploring deep learning extensions of BPNN-LA/RM and by assessing the performance in more general real world settings where the RPs are not arranged in a regular grid and the RoI is not dominated by free space, such that the RSS fields are more complex than in our examples. We expect BPNNs to be even more beneficial in such cases, while likely requiring more neurons in the hidden layers than in the cases presented herein.


Acknowledgments

The authors thank Konrad Schindler for granting access to a high performance computing cluster at ETH Zurich for the purpose of the extensive numerical simulations used herein. The doctoral research of the first author is financed by the Chinese Scholarship Council (CSC).


  • [1] G. D. Abowd, “What next, ubicomp?: Celebrating an intellectual disappearing act,” in Proceedings of the 2012 ACM Conference on Ubiquitous Computing, ser. UbiComp ’12. New York, NY, USA: ACM, 2012, pp. 31–40.
  • [2] “Wi-Fi Indoor Location in Retail Worth $2.5 Billion by 2020,” 2016. [Online; Accessed: 2016-06-06].
  • [3] P. Bahl and V. N. Padmanabhan, “RADAR: An in-building RF-based user location and tracking system,” in Proceedings IEEE INFOCOM 2000, vol. 2, 2000, pp. 775–784.
  • [4] E. Lohan, K. Koski, J. Talvitie, and L. Ukkonen, “Wlan and rfid propagation channels for hybrid indoor positioning,” in Localization and GNSS (ICL-GNSS), 2014 International Conference on, June 2014, pp. 1–6.
  • [5] H. Liu, H. Darabi, P. Banerjee, and J. Liu, “Survey of wireless indoor positioning techniques and systems,” Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, vol. 37, no. 6, pp. 1067–1080, 2007.
  • [6] B. Li, T. Gallagher, A. G. Dempster, and C. Rizos, “How feasible is the use of magnetic field alone for indoor positioning?” in Indoor Positioning and Indoor Navigation (IPIN), 2012 International Conference on, Nov 2012, pp. 1–9.
  • [7] T. Gigl, G. J. Janssen, V. Dizdarević, K. Witrisal, and Z. Irahhauten, “Analysis of a uwb indoor positioning system based on received signal strength,” in Positioning, Navigation and Communication, 2007. WPNC’07. 4th Workshop on.   IEEE, 2007, pp. 97–101.
  • [8] M. Hazas and A. Hopper, “Broadband ultrasonic location systems for improved indoor positioning,” Mobile Computing, IEEE Transactions on, vol. 5, no. 5, pp. 536–547, 2006.
  • [9] A. Mandal, C. V. Lopes, T. Givargis, A. Haghighat, R. Jurdak, and P. Baldi, “Beep: 3d indoor positioning using audible sound,” in Consumer communications and networking conference, 2005. CCNC. 2005 Second IEEE.   IEEE, 2005, pp. 348–353.
  • [10] S. He and S. H. G. Chan, “Wi-Fi fingerprint-based indoor positioning: Recent advances and comparisons,” IEEE Communications Surveys and Tutorials, vol. 18, no. 1, pp. 466–490, 2016.
  • [11] K. Majeed, S. Sorour, T. Al-Naffouri, and S. Valaee, “Indoor localization and radio map estimation using unsupervised manifold alignment with geometry perturbation,” IEEE Transactions on Mobile Computing, vol. PP, no. 99, pp. 1–1, 2015.
  • [12] J.-g. Park, B. Charrow, D. Curtis, J. Battat, E. Minkov, J. Hicks, S. Teller, and J. Ledlie, “Growing an organic indoor location system,” in Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services (MobiSys ’10), 2010, p. 271.
  • [13] C. Wu, Z. Yang, Y. Liu, and W. Xi, “WILL: Wireless indoor localization without site survey,” IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 4, pp. 839–848, 2013.
  • [14] A. M. Bernardos, J. R. Casar, and P. Tarrío, “Real time calibration for RSS indoor positioning systems,” 2010 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2010 - Conference Proceedings, no. September, pp. 15–17, 2010.
  • [15] M. M. Atia, A. Noureldin, and M. J. Korenberg, “Dynamic online-calibrated radio maps for indoor positioning in wireless local area networks,” IEEE Transactions on Mobile Computing, vol. 12, no. 9, pp. 1774–1787, 2013.
  • [16] S. Pan, J. Kwok, Q. Yang, and J. Pan, “Adaptive localization in a dynamic WiFi environment through multi-view learning,” in Proceedings of the National Conference on Artificial Intelligence, 2007, pp. 1108–1113.
  • [17] S. Sorour, Y. Lostanlen, S. Valaee, and K. Majeed, “Joint indoor localization and radio map construction with limited deployment load,” Mobile Computing, IEEE Transactions on, vol. 14, no. 5, pp. 1031–1043, 2015.
  • [18] B. P. Statistik, “Neural network based indoor positioning technique in optical camera communication system,” Katalog BPS, vol. XXXIII, no. 2, pp. 81–87, 2014.
  • [19] B. Wagner, D. Timmermann, G. Ruscher, and T. Kirste, “Device-free user localization utilizing artificial neural networks and passive RFID,” 2012 Ubiquitous Positioning, Indoor Navigation, and Location Based Service, UPINLBS 2012, 2012.
  • [20] J. Xu, H. Dai, and W.-h. Ying, “Multi-layer neural network for received signal strength-based indoor localisation,” IET Communications, vol. 10, no. 6, pp. 717–723, 2016.
  • [21] M. M. Soltani, A. Motamedi, and A. Hammad, “Enhancing cluster-based RFID tag localization using artificial neural networks and virtual reference tags,” in International Conference on Indoor Positioning and Indoor Navigation, 2013, pp. 1–10.
  • [22] M. Edel and E. Koppe, “An advanced method for pedestrian dead reckoning using BLSTM-RNNs,” 2015 International Conference on Indoor Positioning and Indoor Navigation, IPIN 2015, no. October, pp. 13–16, 2015.
  • [23] S. R. Kulkarni and G. Harman, “Statistical learning theory: A tutorial,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 3, no. 6, pp. 543–556, 2011.
  • [24] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin, “Exploring Strategies for Training Deep Neural Networks,” Journal of Machine Learning Research, vol. 1, pp. 1–40, 2009.
  • [25] M. Brunato and R. Battiti, “Statistical learning theory for location fingerprinting in wireless LANs,” Computer Networks, vol. 47, no. 6, pp. 825–845, 2005.
  • [26] M. T. Hagan, H. B. Demuth, and M. H. Beale, Neural Network Design, 1995, pp. 1–1012.
  • [27] C. Zhou, L. Ma, and X. Tan, “Joint semi-supervised rss dimensionality reduction and fingerprint based algorithm for indoor localization,” in Institute of Navigation (ION GNSS+2014), 27th International Technical Meeting of The Satellite Division Conference on, September 2014, pp. 3201–3211.