Iterative Bayesian-based Localization Mechanism for Industry Verticals

01/14/2020 ∙ by Henrique Hilleshein, et al. ∙ University of Oulu

We propose and evaluate an iterative localization mechanism employing Bayesian inference to estimate the position of a target using received signal strength measurements. The probability density functions of the target's coordinates are estimated through a Bayesian network. Herein, we consider an iterative procedure whereby our predictor (posterior distribution) is updated sequentially whenever new measurements become available. The performance of the mechanism is assessed in terms of the respective root mean square error and kernel density estimation of the target coordinates. Our numerical results show that the proposed iterative mechanism achieves increasingly better estimates of the target node position with each updating round of the Bayesian network on new input measurements.


I Introduction

Over the last few years, the explosion of Machine Type Communications (MTC) has imposed stringent requirements on the operation of 5G NR and beyond systems. In dense deployment scenarios with thousands of machines with low computational power and limited energy availability, it becomes crucial to share scarce radio resources efficiently so as to enable Ultra-Reliable Low-Latency Communications (URLLC). The position of the devices connected to the wireless network is useful information for meeting such requirements. Various technologies exist for positioning systems. For instance, GPS is the most popular technology for locating a device outdoors, but it does not work properly indoors; for this reason, Indoor Positioning Systems (IPS) are needed [10].

Physical features of wave propagation can be used to find the position of a target, such as the Received Signal Strength Indicator (RSSI), Angle of Arrival (AOA), and Time of Arrival (TOA). Many studies on IPS have already been carried out with these physical features, namely RSSI [15, 6], AOA [14, 17], and TOA [13, 4]. Each feature has distinct trade-offs in terms of cost, accuracy and complexity. It is well known that RSS is a less accurate positioning metric, though it allows the use of off-the-shelf devices, for example WiFi, Bluetooth, and UWB radios [12].

Thus, RSSI becomes particularly advantageous because it can be employed without changing the current infrastructure. To find the target position, RSS-based techniques typically use fingerprinting through offline learning [6, 18] or trilateration [15, 9]. The fingerprinting procedure not only requires measuring the RSS from several points and maintaining large measurement data sets, but also updating the fingerprint database whenever the communication environment changes. Indoor environments are usually dynamic, so a method such as trilateration, which does not require offline learning, is preferable.

The authors in [10, 1] first emphasize the importance of developing IPSs that do not rely on large measurement databases, and then introduce a mechanism that uses Bayesian probabilistic models to estimate the target position without any prior knowledge of the environment. Those works estimate the position of a target through Bayesian networks represented by Directed Acyclic Graphs (DAGs). The Bayesian network inference is performed with the Markov Chain Monte Carlo (MCMC) technique. In this work, the same method is used to find the target position, but once the position is estimated, this information is reused in future estimations. The idea is to give the system better accuracy and a memory of the past without storing all RSSI measurements made by the access points. The work in [3] describes methods by which a Bayesian network structure can be updated when new measurements are received. In this work, we use the posterior distribution of the last estimation as the prior distribution each time new measurements are received. The goal of this work is to improve the estimation of the position of an indoor target when applying Bayesian networks, by exploiting previous estimations of the target's position.

The remainder of this paper is organized as follows. Section II-A introduces probabilistic graphical models, while Section II-B details the theory behind using Bayesian networks to model such localization problems. Section III presents the deployment scenario under investigation and the implemented mechanism. The simulation and results are presented in Section IV. Finally, we draw conclusions and final remarks in Section V.

II Bayesian Inference

A Bayesian network is a statistical model that represents the interdependence between random variables using a directed acyclic graph, and allows for drawing inferences about related events conditional on our prior knowledge. Based on this model, we can predict how an event behaves and check whether our assumptions reflect the real world. Moreover, when new information about an event is acquired, we can update those prior assumptions and possibly reduce the uncertainty about the event [11]. Bayesian inference is a useful tool for IPS because we make inferences about the target's position based on our prior knowledge of the system and check whether our prior belief reflects reality. Next, we describe how Bayesian inference can be used to carry out indoor positioning in industry vertical deployment scenarios.

II-A Probabilistic Graphical Models (PGMs)

PGMs describe the underlying interdependence between the random variables in a statistical model, and thus concisely represent the corresponding joint distributions relating those variables [7]. In this work, we use a DAG to represent the joint distribution used for the position estimation. Note that cyclic paths leading a node back to itself are not allowed in DAGs. In a Bayesian network, each node represents a random variable (RV) that is assumed to be conditionally independent of any other node which is not a direct descendant, given its own parents [2]. This means that an RV is conditionally dependent only on its own parents, so the joint distribution factorizes as

P(X_1, \ldots, X_N) = \prod_{i=1}^{N} P(X_i \mid \mathrm{Pa}(X_i)), (1)

where \{X_1, \ldots, X_N\} is the set of RVs of the joint distribution and \mathrm{Pa}(X_i) is the set of parents of the RV X_i. As a result, the conditional distribution of any RV X_i in the graph is given by

P(X_i \mid X_{\setminus i}) \propto P(X_i \mid \mathrm{Pa}(X_i)) \prod_{X_c \in \mathrm{Ch}(X_i)} P(X_c \mid \mathrm{Pa}(X_c)), (2)

where X_c is a child of X_i and \mathrm{Ch}(X_i) is the set of all children of X_i.
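
As a hypothetical illustration of (1), consider a three-node chain X → D → R, loosely mirroring the position-distance-RSS dependence used later (the chain itself is only an example, not the full model of Section III-B):

P(X, D, R) = P(X)\, P(D \mid X)\, P(R \mid D),

so inference about X after observing R only involves the local conditionals along the chain.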

Herein, we employ the MCMC method to carry out Bayesian inference and estimate the position of the target node, as Bayesian analyses are usually performed through MCMC [11]. The MCMC method is a generic computational approach used to sample arbitrary distributions [8]: the sampler starts with some initial values based on the prior information known about the variables, and then cycles through the graph using an algorithm that simulates each variable according to its respective conditional probability distribution [10]. In this work, MCMC is used to find the conditional distribution of each RV in the graph. Succinctly, the MCMC algorithm generates a Markov chain whose limiting distribution is equal to the desired distribution [8]. We use the No-U-Turn Sampler (NUTS) as the MCMC algorithm: NUTS extends Hamiltonian Monte Carlo (HMC) by avoiding random-walk behavior and sensitivity to correlated parameters, and it is at least as efficient as HMC [11, 5]. The MCMC sampler uses Bayes' theorem to find the conditional distribution of each RV, as described next.
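
To make the MCMC principle concrete, the following minimal Python sketch implements a random-walk Metropolis sampler (a simpler MCMC variant than NUTS, shown here only for illustration); the target density, step size, and sample count are illustrative assumptions:

import numpy as np

def metropolis(log_target, x0, n_samples=5000, step=0.5, seed=0):
    # Random-walk Metropolis: the chain's limiting distribution is the target.
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Example: sample a standard normal from its unnormalized log-density.
trace = metropolis(lambda t: -0.5 * t ** 2, x0=0.0)
print(trace.mean(), trace.std())  # close to 0 and 1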

II-B Estimating Posterior Distributions

Bayes' theorem uses conditional probability to create statistical models conditioned on the observations [11]. In fact, Bayes' theorem permits updating our prior belief about the model based on the evidence provided by new information (RSS measurements). Its outcome is the posterior, which is a probability distribution. From [11], the posterior distribution is given by

P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}, (3)

where D is the observed data and \theta is the assumption (set of parameters) of the system. Moreover, P(\theta) represents our prior belief about the parameter values of the model before observing any data D, and it is called the prior distribution, while P(D \mid \theta) yields the likelihood of the observed data D given our prior belief. P(D) is the evidence and is used as a normalization factor.

In this work, we use Bayes' theorem to find a probability distribution that portrays the received data in a way that allows us to estimate the position of a target through a Bayesian network. To find the posterior distribution, we first need to establish suitable assumptions by choosing the prior distribution in (3), which describes our prior knowledge about the RVs of interest. As aforementioned, the prior distribution initializes the MCMC sampler algorithm. The number of samples and how fast the posterior distributions converge depend on both the input data and the selected prior distribution [11].
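
As a simple worked instance of (3) (an illustration only, not the localization model), assume a scalar parameter with prior \theta \sim \mathcal{N}(\mu_0, \sigma_0^2) and a single observation y \sim \mathcal{N}(\theta, \sigma^2). The posterior is then

p(\theta \mid y) = \mathcal{N}\!\left( \frac{\sigma^2 \mu_0 + \sigma_0^2 y}{\sigma^2 + \sigma_0^2},\; \frac{\sigma_0^2 \sigma^2}{\sigma_0^2 + \sigma^2} \right),

i.e., the posterior mean is a precision-weighted average of the prior mean and the observation, and the posterior variance is smaller than the prior variance, which is exactly the kind of uncertainty reduction the iterative mechanism exploits.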

This work builds upon the results in [10, 1] by using knowledge of the past in an iterative procedure so as to find better estimates with lower uncertainty. To do that, we carry out the Bayesian network inference repeatedly, using the posterior distribution of a previous iteration as the prior distribution for subsequent updates. It is worth mentioning that new measurement data is fed into the model at each new iteration. Thus, at every new iteration, the proposed mechanism updates the prior distribution describing each DAG node with the corresponding posterior distribution estimate from the previous iteration. It can be seen as a system with a feedback loop, where the posterior information is used to define the next prior distribution.

III Deployment Scenario and Localization Mechanism

In this section, we describe both the test scenario and localization mechanism employed to estimate the target position.

III-A Evaluation Scenario and Channel Propagation Model

The evaluation scenario under investigation is presented in Fig. 1. It represents a square warehouse with a side of meters. The simulation is done with four access points, one in each corner of the warehouse; hence the locations of the access points are known. Line of Sight (LOS) is assumed between the target and the access points. The RSSI measurements made by the different access points are considered uncorrelated amongst themselves.

Fig. 1: Illustration of the proposed scenario. Filled squares are the access points and the circle represents the target position.

By combining the known locations of the access points with the assumption that the radio links are degraded by a log-distance shadowed path loss model, it is possible to estimate the position of a target with at least three access points. Note that the RSS-based localization mechanism does not require the anchors to be synchronized. The radio link RSS follows a decay function given by

\mathrm{RSS}_i = P_0 - 10\,\eta \log_{10}\!\left(\frac{d_i}{d_0}\right) + X_{\sigma_i}, (4)

where \mathrm{RSS}_i is the signal strength received at the i-th access point, P_0 is the received power at the reference distance d_0, in this case d_0 = 1 m, \eta is the path loss coefficient, d_i is the Euclidean distance between the target and the i-th access point, and X_{\sigma_i} is the shadowing, following a zero-mean normal distribution with variable standard deviation \sigma_i [1].
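
To illustrate (4), the following Python sketch generates synthetic RSSI samples for one access point; the numerical values (reference power -40 dBm, path loss exponent 2.2, shadowing standard deviation 2 dB, target at (12, 7) m) are illustrative assumptions, not the parameters used in our simulations:

import numpy as np

def rssi_samples(target, anchor, p0=-40.0, eta=2.2, sigma=2.0, n=50, seed=0):
    # Log-distance path loss with shadowing, cf. (4), with d0 = 1 m:
    # RSS_i = P0 - 10 * eta * log10(d_i / d0) + X_sigma_i
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(np.asarray(target, float) - np.asarray(anchor, float))
    shadowing = rng.normal(0.0, sigma, size=n)
    return p0 - 10.0 * eta * np.log10(d) + shadowing

# Example: 50 RSSI readings at the access point in the corner (0, 0).
print(rssi_samples(target=(12.0, 7.0), anchor=(0.0, 0.0)).mean())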

The access points send the RSSI information of the target to a server at the edge of the network. The server can estimate the position of a target after receiving a minimum number of measurements from the access points. The estimate can then be stored and used in the next estimation as prior knowledge of the target's position, and the used measurements deleted. To understand how the position estimation was formulated, we present the DAG representation next.

III-B Localization Mechanism

The mechanism has multiple random variables whose interdependence is represented by the graphical model in Fig. 2. The symbols inside rectangles correspond to constant values; they are the coordinates of the access points. The assumptions on the random variables are based on our prior knowledge and are represented by

x \sim \mathcal{U}(0, L), \quad y \sim \mathcal{U}(0, L), \quad d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}, \quad \mathrm{RSS}_i \sim \mathcal{N}\!\left(P_0 - 10\,\eta \log_{10} d_i,\ \sigma_i\right), (5)

where x and y are the variables that represent the target node position (assumed uniform over the warehouse side L), d_i is the distance between the target and the i-th access point with known coordinates (x_i, y_i), P_0 is the transmission power at a reference position (assumed to be 1 m from the transmitter), \eta is the path loss exponent, and \sigma_i is the standard deviation associated with the i-th access point measurements [1].

The access points send the measurements to a server at the edge of the network, and this server runs the proposed algorithm to estimate the target's position. The first estimation (iteration) uses the first batch of measurement data acquired by the anchor nodes and applies MCMC sampling to the model described in (5). As the position of the target is completely unknown and the target can be at any coordinate, the prior knowledge about the position is a flat distribution. The outcome of the estimation is the set of posteriors of the RVs. The posterior distribution is the system's updated belief about the target's position. In this work, we use the posterior distribution when making new estimations, as explained next.
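
For concreteness, a minimal PyMC3 [16] sketch of this first iteration is given below; the warehouse side L, the synthetic ground-truth position, and the hyperparameters of the priors on P_0, \eta, and \sigma_i are illustrative assumptions rather than the values used in the paper:

import numpy as np
import pymc3 as pm

L = 20.0                                            # assumed warehouse side (m)
anchors = np.array([[0.0, 0.0], [L, 0.0], [0.0, L], [L, L]])
true_pos = np.array([12.0, 7.0])                    # used only to simulate data
rng = np.random.default_rng(0)
rss = {i: -40.0 - 10.0 * 2.2 * np.log10(np.linalg.norm(true_pos - a))
          + rng.normal(0.0, 2.0, size=50)
       for i, a in enumerate(anchors)}

with pm.Model():
    # Flat priors on the coordinates: first iteration, no prior knowledge.
    x = pm.Uniform("x", lower=0.0, upper=L)
    y = pm.Uniform("y", lower=0.0, upper=L)
    p0 = pm.Normal("p0", mu=-40.0, sigma=10.0)      # reference power prior (assumed)
    eta = pm.Normal("eta", mu=2.0, sigma=1.0)       # path loss exponent prior (assumed)
    for i, (ax, ay) in enumerate(anchors):
        d = pm.math.sqrt((x - ax) ** 2 + (y - ay) ** 2)
        sigma_i = pm.HalfNormal(f"sigma_{i}", sigma=5.0)
        pm.Normal(f"rss_{i}",
                  mu=p0 - 10.0 * eta * pm.math.log(d) / np.log(10.0),
                  sigma=sigma_i,
                  observed=rss[i])
    trace = pm.sample(2000, tune=1000, target_accept=0.9)  # NUTS by default

The posterior means and standard deviations of x and y extracted from the trace are what the iterative procedure of Section III-C feeds back as the next prior.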

Fig. 2: Bayesian probabilistic model of the RSS-based localization mechanism.
Fig. 3: Kernel density estimation for the RSS-based localization mechanism using (a) , (b) and (c) iterations.

III-C Iterative Bayesian Networks

As mentioned before, Bayesian inference allows us to make estimations based on our current knowledge about an event. When we estimate the position of a target, we gain new knowledge about the target's position, and we can use it to enhance the next estimation. We now describe the sequential update procedure used to improve the Bayesian network underlying the proposed localization mechanism. Succinctly, the posterior distribution from an arbitrary iteration is reused as the prior distribution of the following iteration. Note that the posterior distribution incorporates the evidence of previous observations (measurement statistics), and therefore reduces the need to maintain large measurement data sets.
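
Formally, if D_1, \ldots, D_k denote the measurement batches received up to iteration k and they are conditionally independent given the model parameters \theta (consistent with the uncorrelated-measurement assumption of Section III-A), the sequential update can be written as

p(\theta \mid D_1, \ldots, D_k) \propto p(D_k \mid \theta)\, p(\theta \mid D_1, \ldots, D_{k-1}),

so the posterior obtained at iteration k-1 plays exactly the role of the prior in (3) at iteration k.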

On the other hand, reusing the posterior distribution as the prior of a subsequent iteration results in a biased learning process, which may also add error to this iterative procedure [3]. Indeed, the updating procedure works as a feedback loop wherein the preceding output is always used to improve our belief for the next estimation. As previously discussed, the MCMC method provides numerical approximations to the posterior distribution, so it is not a purely analytical approach. However, the sampling algorithm is still gradient-based, thus we resort to analytical priors (e.g., the coordinates have a bivariate uniform distribution) to initialize the underlying Bayesian network. We consider the typical approach in which the measurement error follows a zero-mean normal distribution with standard deviation \sigma_i.

Algorithm 1 describes the mechanism, and it shows that from the second iteration onwards every RV uses its respective posterior mean and standard deviation from the previous iteration as the prior knowledge of its distribution. As the prior distribution is biased by the previous posterior, the posterior distribution obtained by the MCMC technique can converge to a wrong position, and at the same time the mechanism may stop adapting to new data; the same happens when using maximum a posteriori estimation [3]. To mitigate this, the standard deviation is arbitrarily multiplied by two to allow the algorithm to explore the sampling space.

Data: RSSI measurements
Result: Posterior distribution of coordinates
initialization, k = 0;
while k < Max number of estimations do
        Check measurements buffer;
        if Data length ≥ Minimum length then
                Check if it is the first estimation;
                if k = 0 then
                        Use prior distributions of equation (5);
                        PosteriorDistributionsVector[k] = Bayes estimation using the Data and the prior distributions;
                else
                        Prior distributions considered to be normal;
                        The mean and standard deviation used are taken from PosteriorDistributionsVector[k-1];
                        PosteriorDistributionsVector[k] = Bayes estimation using the Data and the prior distributions;
                end if
                k++;
        else
                Do nothing;
        end if
end while
Algorithm 1 Iterative Bayesian Network
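
A simplified Python sketch of Algorithm 1 is given below; the helper estimate(data, priors), which stands in for the MCMC step of Section III-B and returns a dictionary mapping each RV name to its posterior mean and standard deviation, is a hypothetical placeholder, as are the other names:

def iterative_localization(batches, estimate, flat_priors,
                           min_length=40, max_estimations=20):
    # batches: iterable of lists of RSSI measurements arriving from the access points.
    posteriors = []
    buffer = []
    for batch in batches:
        if len(posteriors) >= max_estimations:
            break
        buffer.extend(batch)
        if len(buffer) < min_length:
            continue                                  # not enough data yet: do nothing
        if not posteriors:
            priors = flat_priors                      # first estimation: priors of (5)
        else:
            # Previous posterior becomes the new prior (normal distributions);
            # the standard deviation is doubled so the sampler can keep exploring
            # the space and adapting to new data.
            priors = {name: ("normal", mean, 2.0 * std)
                      for name, (mean, std) in posteriors[-1].items()}
        posteriors.append(estimate(buffer, priors))   # MCMC step of Section III-B
        buffer = []                                   # used measurements are discarded
    return posteriors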

IV Performance Analysis

An exhaustive simulation campaign was carried out to assess the performance of the positioning mechanism. The prior updating procedure is repeated over 20 iterations. Each iteration uses RSSI measurement samples from each access point. The estimation of the posterior distributions that represent the target's position (x, y) was done using the NUTS algorithm [16]. The simulation follows the previously described scenario of a warehouse with four access points. Each access point measures the RSSI received from the target independently, and the data is sent to a server at the edge of the network. When the server receives a minimum number of measurements from the access points, the position is estimated using the proposed mechanism.

Fig. 3 shows the outcome of the mechanism when using (a), (b), and (c) iterations, where a single iteration actually means that no update of the Bayesian network was carried out, so the prior distribution of the target node position is flat (no prior knowledge), as further described in Section III-B. Fig. 4 illustrates the progression of the posterior distribution of the coordinate under the sequential update procedure. As can be seen, not only does the mean value get closer to the actual position, but the inherent uncertainty also becomes lower with more iterations. Fig. 5 presents the RMSE of the mean value of the target node coordinate. It shows the error and also where the convergence of the mechanism occurs. After five iterations there is no significant enhancement of the estimation, so the final posterior distribution converges after five iterations. For both the x and y coordinates, the RMSE of the estimations is approximately cm and cm, respectively. The use of prior estimations provides an RMSE around cm lower than not using it.

Fig. 4: Posterior distribution progression of the coordinate.
Fig. 5: RMSE of the mean of .
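
For reference, the RMSE of the posterior mean of a coordinate, as plotted in Fig. 5, can be computed from repeated simulation runs roughly as follows (a sketch; the example values are hypothetical):

import numpy as np

def rmse(posterior_means, true_value):
    # Root mean square error of the posterior mean of one coordinate,
    # taken over independent simulation runs at a given iteration index.
    estimates = np.asarray(posterior_means, dtype=float)
    return float(np.sqrt(np.mean((estimates - true_value) ** 2)))

# Example: posterior means of x from five hypothetical runs, true x = 12 m.
print(rmse([11.7, 12.4, 11.9, 12.2, 12.1], 12.0))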

V Conclusions and Final Remarks

In this contribution, we introduce an iterative procedure based on probabilistic graphical models (Bayesian networks) in order to estimate the target node position in the deployment scenario of interest. The proposed mechanism iterates the underlying Bayesian network by updating the priors whenever new measurement data becomes available. When compared to the typical approach, which does not update the prior distributions, this procedure improves the estimation. Our results show that after only five iterations the system converges, and further iterations yield no additional improvement.

Acknowledgments

The research leading to these results has received funding from the Academy of Finland through the 6Genesis Flagship project (Grant No. ).

References

  • [1] C. H. M. de Lima, J. Saloranta, and M. Latva-aho (2019-08) Collaborative positioning mechanism using Bayesian probabilistic models for industry verticals. In Proceedings of the 16th International Symposium on Wireless Communication Systems (ISWCS). Cited by: §I, §II-B, §III-A, §III-B.
  • [2] N. Friedman, D. Geiger, and M. Goldszmidt (1997-11) Bayesian network classifiers. Machine Learning 29 (2), pp. 131–163. External Links: ISSN 1573-0565, Document, Link Cited by: §II-A.
  • [3] N. Friedman and M. Goldszmidt (1997) Sequential update of Bayesian network structure. In Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence, UAI'97, San Francisco, CA, USA, pp. 165–174. External Links: ISBN 1-55860-485-5, Link Cited by: §I, §III-C, §III-C.
  • [4] F. Gustafsson and F. Gunnarsson (2003-04) Positioning using time-difference of arrival measurements. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP ’03)., Vol. 6, pp. VI–553. External Links: Document, ISSN Cited by: §I.
  • [5] M. D. Hoffman and A. Gelman (2014) The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research 15 (1), pp. 1593–1623. Cited by: §II-A.
  • [6] B. Jang and H. Kim (2019-Firstquarter) Indoor positioning technologies without offline fingerprinting map: a survey. IEEE Communications Surveys Tutorials 21 (1), pp. 508–525. External Links: Document, ISSN 1553-877X Cited by: §I, §I.
  • [7] D. Koller and N. Friedman (2009) Probabilistic graphical models: principles and techniques. MIT press. Cited by: §II-A.
  • [8] D. P. Kroese, T. Taimre, and Z. I. Botev (2011) Handbook of Monte Carlo Methods. Wiley, New Jersey. External Links: ISBN 9780470177938 Cited by: §II-A.
  • [9] W. Liu, Y. Xiong, X. Zong, and W. Siwei (2018-Sep.) Trilateration positioning optimization algorithm based on minimum generalization error. In 2018 IEEE 4th IDAACS-SWS, Vol. , pp. 154–157. External Links: Document, ISSN Cited by: §I.
  • [10] D. Madigan, E. Einahrawy, R. P. Martin, W. -. Ju, P. Krishnan, and A. S. Krishnakumar (2005-03) Bayesian indoor positioning systems. In Proceedings IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies., Vol. 2, pp. 1217–1227 vol. 2. External Links: Document, ISSN 0743-166X Cited by: §I, §I, §II-A, §II-B.
  • [11] O. Martin (2016) Bayesian analysis with python. Packt Publishing Ltd. Cited by: §II-A, §II-B, §II-B, §II.
  • [12] S. Mazuelas, A. Bahillo, R. M. Lorenzo, P. Fernandez, F. A. Lago, E. Garcia, J. Blas, and E. J. Abril (2009-10) Robust indoor positioning provided by real-time RSSI values in unmodified WLAN networks. IEEE JSTSP 3 (5), pp. 821–831. External Links: Document, ISSN 1932-4553 Cited by: §I.
  • [13] A. Naeem, N. U. Hassan, M. A. Pasha, C. Yuen, and A. Sikora (2018-Sep.) Performance analysis of tdoa-based indoor positioning systems using visible led lights. In 2018 IEEE 4th IDAACS-SWS, Vol. , pp. 103–107. External Links: Document, ISSN Cited by: §I.
  • [14] D. Rodríguez-Navarro, J. L. Lázaro-Galilea, Á. De-La-Llana-Calvo, I. Bravo-Muñoz, A. Gardel-Vicente, G. Tsirigotis, and J. Iglesias-Miguel (2017) Indoor positioning system based on a PSD detector, precise positioning of agents in motion using AoA techniques. Sensors 17 (9). External Links: Link, ISSN 1424-8220, Document Cited by: §I.
  • [15] M. E. Rusli, M. Ali, N. Jamil, and M. M. Din (2016-07) An improved indoor positioning algorithm based on RSSI-trilateration technique for internet of things (IOT). In 2016 International Conference on Computer and Communication Engineering (ICCCE), Vol. , pp. 72–77. External Links: Document, ISSN Cited by: §I, §I.
  • [16] J. Salvatier, T. V. Wiecki, and C. Fonnesbeck (2016-04) Probabilistic programming in python using PyMC3. PeerJ Computer Science 2, pp. e55. External Links: ISSN 2376-5992, Link, Document Cited by: §IV.
  • [17] S. Wielandt and L. D. Strycker (2017) Indoor multipath assisted angle of arrival localization. Sensors 17 (11). External Links: Link, ISSN 1424-8220, Document Cited by: §I.
  • [18] S. Yiu, M. Dashti, H. Claussen, and F. Perez-Cruz (2017) Wireless RSSI fingerprinting localization. Signal Processing 131, pp. 235 – 244. External Links: ISSN 0165-1684, Document, Link Cited by: §I.