Ensuring Privacy in Location-Based Services: A Model-based Approach

02/24/2020 ∙ by Alireza Partovi, et al. ∙ University of Notre Dame

In recent years, the widespread adoption of mobile devices equipped with GPS and communication chips has led to the growing use of location-based services (LBS), in which a user receives a service based on his current location. The disclosure of the user's location, however, can raise serious concerns about user privacy in general, and location privacy in particular, which has led to the development of various location privacy-preserving mechanisms (LPPMs) aiming to enhance location privacy while using LBS applications. In this paper, we propose to model the user mobility pattern and the utility of the LBS as a Markov decision process (MDP), and, inspired by the notion of probabilistic current-state opacity, we introduce a new location privacy metric, namely ϵ-privacy, that quantifies the adversary's belief about the user's current location. We exploit this dynamic model to design an LPPM that, while ensuring the utility of the service is fully utilized, guarantees that a user-specified privacy level is achieved for an infinite time horizon, independent of the adversary's prior knowledge about the user. The overall privacy-preserving framework, including the construction of the user mobility model as an MDP and the design of the proposed LPPM, is demonstrated and validated with real-world experimental data.


1 Introduction

As a result of recent technological advances in sensing and tracking, and the widespread availability of mobile devices with significant computational and communication capabilities, location-based applications have become increasingly popular. Examples of LBS applications on smartphones are mobile navigation, ride-sharing, location-aware social networks, location-based contextual advertising, and dining recommendations.

Even though LBS provide great benefits to their users, the exposure of the user's location raises major personal privacy concerns. LBS servers or third-party systems that receive and store user location data could use them to infer the user's precise location and track his points of interest, giving rise to a variety of malicious activities [chatzikokolakis2017methods]. Examples of these threats are tracking threats, in which an adversary identifies the user's mobility pattern and predicts his future locations [jan2000using]; identification threats, in which an adversary uses the user's locations to infer his identity from an anonymized database [de2013unique]; and profiling threats, in which an adversary uses the user's locations of interest to profile him with respect to sensitive attributes such as political views and health conditions [ashbrook2003using].

A large body of research has studied these privacy issues, and various privacy protection methods have been developed to allow users to utilize LBS while limiting the leakage of their confidential information. These methods are known as location privacy protection mechanisms (LPPMs) and can be roughly divided into two main classes: identity anonymization techniques and location perturbation techniques [chatzikokolakis2017methods]. Anonymization techniques protect user privacy by dissociating the user's real identity from his location-based information. This is usually done by a third-party anonymizer that replaces the identities of users with temporary identifiers, namely pseudonyms [beresford2004mix]. However, it turns out that merely removing or replacing the user identity does not provide strong privacy, since the spatio-temporal characteristics of the data can still help an adversary track and re-identify anonymous users [hoh2006enhancing]. Thus, in addition to concealing users' identities, it is also important to obfuscate their positions.

Location obfuscation mechanisms protect user privacy by deliberately degrading the precision of the users' location information in a way that the service can still be carried out to some acceptable extent without revealing the users' true locations [riaz2018location]. Generally, this is achieved by spatial obfuscation techniques such as adding noise [andres2012geo, elsalamouny2016differential], using dummy locations [niu2013pseudo, lu2008pad], or spatial cloaking, which essentially enlarges the user's queried region [gruteser2003anonymous].

These methods offer location privacy by increasing the adversary's uncertainty about the user's current position. However, a strong adversary often has prior knowledge about the user's movement pattern, can strategically update her belief based on the user's service queries, and can eventually reduce her uncertainty about the user's true locations [shokri2011quantifying, chatzikokolakis2017methods]. Therefore, in addition to concealing and obfuscating the user's current position, the LPPM should take into account the adversary's inference capability over current and future observations. Furthermore, the LPPMs proposed in these methods typically assume that the adversary's prior knowledge does not violate the user's desired privacy level, which may not be a valid assumption if the adversary has accurate background knowledge about the user's behavior. For instance, a database derived from social media applications shows that the user's check-in locations, combined with other publicly available information such as the popularity of these locations, can effectively be utilized by an adversary to build strong background knowledge about the user's mobility pattern [ahuja2019utility].

To address these concerns, in this paper, we aim to provide model-based location privacy for a user who makes continuous queries to the LBS server. Without loss of generality, we assume that the adversary is the LBS server, which has knowledge of the user's mobility pattern but is not capable of observing the user's real-time true locations. We design an LPPM that offers all-time privacy protection without having access to the adversary's prior knowledge, and furthermore, we show that even if the prior knowledge does not meet the user-specified privacy level, the LPPM can still deceive the adversary so that it eventually respects the user's privacy standard. Motivated by the event-driven nature of the mobile user's mobility model in LBS, we propose to use a model-based location privacy-preserving framework. In particular, we construct a Markov decision process that represents the user's mobility pattern and the LBS utility model. In order to characterize the user's location privacy, we adapt the notion of probabilistic current-state opacity (CSO) that has been studied for MDPs and discrete event systems [Wu2020] and introduce a related new notion, called ϵ-privacy, that captures the adversary's uncertainty about the user's current location. In this setup, the user's locations are the MDP states, and therefore location privacy is protected if and only if the constructed MDP meets the ϵ-privacy criteria.

The rest of this paper is organized as follows. In the subsequent section, we discuss related work. Section 3 describes the process of constructing an MDP representing the user's mobility pattern and the LBS utility model. Section 4 introduces the adversary threat model and the ϵ-privacy metric, and furthermore studies the limitations of other popular location privacy notions, including entropy, expected inference error, and differential privacy, under localization attacks. Section 5 presents the location privacy-preserving problem based on the proposed privacy metric. Section 6 provides the LPPM design process and addresses its computational complexity. Section 7 demonstrates the applicability of the proposed LPPM on an experimental dataset. The paper is concluded in Section 8.

2 Related Works

Location privacy has been an active field of research over the past decade, and various LPPMs have been proposed to protect user privacy. An early approach to preserving privacy is to replace the identity of a user with a pseudonym [pfitzmann2001anonymity]. Other methods propose to frequently change the pseudonyms when the users are within areas called mix-zones [beresford2004mix]. However, these approaches may fail in the context of LBS, since an adversary can de-anonymize and re-identify the anonymous users by correlating their reported location information with background information about the users [hoh2006enhancing]. As a consequence, in addition to anonymizing users' identities, the users' locations should also be perturbed before being supplied to the LBS server.

A common technique to perturb users' locations is to reduce the precision of the location information in the spatial and temporal domains. This can often be achieved by cloaking techniques that essentially reduce the granularity of the users' information. The k-anonymity privacy protection model was introduced based on the cloaking technique [sweeney2002k, gruteser2003anonymous]. This method makes a user's identity indistinguishable within a group of other users located in the same spatial cloaking area. The k-anonymity condition, however, only provides user query anonymity, that is, it protects the association between users and queries, while it does not prevent the disclosure of the link between a user and his spatio-temporal data, namely location privacy. Additionally, the k-anonymity notion was shown to be vulnerable in the presence of an adversary with certain prior knowledge about the users' visited locations [shokri2010unraveling].

Sometimes being indistinguishable from the other members of a group is not sufficient to guarantee user privacy, for instance, when the entire group has the same privacy concern about being in a sensitive location that may leak their confidential information. To cope with this problem, some papers propose to provide location privacy by making the user's sensitive location indistinguishable from other landmarks. [gruteser2004protecting] introduces an area cloaking mechanism that ensures a user's sensitive location is concealed by a region that covers a specified number of other sensitive areas. [bamba2008supporting, xue2009location] propose the principle of location diversity. This mechanism provides location privacy by ensuring the user query can be linked to a specified number of semantically different location objects, such that each of them has a bounded probability of being the true one. These spatial cloaking methods simply obfuscate the user's sensitive location into an uncertainty region, and therefore are bound to fail in the presence of a strong adversary with inference capability. Such an adversary, using prior knowledge about the user's locations, can utilize the user's queries to identify the user's true location [chatzikokolakis2017methods]. In contrast, in this paper, we aim to provide location privacy independent of the adversary's prior knowledge.

Differential-privacy-based approaches protect user privacy independently of the adversary's prior knowledge by adding controlled noise to the query outcome. [dewri2012local] proposed to combine differential privacy with k-anonymity. Following this work, [andres2012geo, andres2013geo] generalized differential privacy to an arbitrary metric and developed a planar Laplace mechanism to achieve geo-indistinguishability. In these approaches, the LPPM ensures location privacy by restricting the adversary's knowledge gain about the true location. However, in the presence of continuous queries, when a user releases perturbed locations, he may not know how close the adversary's estimate will get to his secret locations, even though differential privacy ensures that the adversary's relative gain of knowledge is bounded.

LPPMs based on distortion privacy address this issue [shokri2011quantifying]. These methods characterize user privacy based on the error of inferring the user's secret location from the reported location information. This, however, requires knowledge of the adversary's prior information and is not robust to adversaries with arbitrary prior knowledge [shokri2015privacy].

Recently, a model-based LPPM was introduced in [WU201433]. This work formulates location privacy in a formal-methods setting, where the user mobility pattern is modeled by a deterministic discrete event system with states representing the user's locations. In this scheme, location privacy is characterized by the notion of current-state opacity, and an opacity enforcement technique is used to guarantee the user's location privacy. The proposed LPPM, however, relies on the unrealistic security assumption that both the user and the adversary behave deterministically [mathew2012predicting].

To address these issues, we propose a model-based LPPM that characterizes location privacy against a Bayesian adversary who has access to the user mobility model. Through this setup, the LPPM can model the adversary's inference dynamics and track the adversary's knowledge about the user's secret locations. Furthermore, we derive the necessary and sufficient conditions on the adversary's inference dynamics that guarantee the user-defined privacy level can be achieved. By incorporating this information into the LPPM design, we develop a privacy-preserving mechanism that randomizes the obfuscated reported locations such that the adversary's estimate of the user's secret locations never violates the user-defined privacy requirement.

3 User Mobility Model

We consider a mobile user who is required to share his location to receive some information from a service provider. Due to the user's privacy concerns, his true locations must be kept private, and noisy locations are released to the service provider, which can be visible to an adversary. The adversary, therefore, is assumed to know all the locations historically released by the user, and hence can associate each mobile user with a mathematical model representing his mobility pattern. Examples are [wang2019next] and [montazeri2016achieving], which considered a Markov chain, and [xiao2017loclok], which uses a hidden Markov chain to represent the user mobility pattern.

In our LPPM framework, the user's location correlations and the quality of service are modeled by an MDP, which is assumed to be public and hence accessible to adversaries. In the following, we formally define the MDP and show how a user mobility pattern can be represented by an MDP. In Section 7, we further illustrate this procedure based on a real-world dataset.

Definition 1.

A Markov decision process (MDP) is a tuple M = (S, A, P, μ0, C), where S is a set of states and A is a set of actions. P(s' | s, a) is the probability of transitioning from state s to state s' under action a. The initial state distribution is μ0. Given s ∈ S and a ∈ A, the utility function is C(s, a). We denote the set of available actions at state s by A(s).

We assume the MDP has a finite state space, a finite action space, and a bounded reward function. Throughout the paper, |S| denotes the cardinality of a set S, and S \ Q denotes set difference. We write Δ(S) for the set of probability distributions over a set S, and we say a probability vector μ ∈ Δ(S) is uniform if μ(s) = 1/|S| for all s ∈ S.
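To make the objects in Definition 1 concrete, the sketch below shows one possible in-memory representation of the MDP tuple in Python. The class layout, field names, and the dictionary-based transition kernel are illustrative assumptions, not the paper's implementation.

```python
class MDP:
    """Minimal container for the tuple M = (S, A, P, mu0, C) from Definition 1."""

    def __init__(self, states, actions, P, mu0, C):
        self.states = states    # S: list of POIs
        self.actions = actions  # A: list of obfuscated location reports
        self.P = P              # P[s][a] -> {s': prob}, transition probabilities
        self.mu0 = mu0          # {s: prob}, initial state distribution
        self.C = C              # {(s, a): value}, bounded quality-loss function

    def available_actions(self, s):
        """A(s): actions that have at least one outgoing transition from s."""
        return [a for a in self.actions if self.P.get(s, {}).get(a)]
```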

3.1 User Points of Interest

LBS often store users' mobility traces, which contain temporal and spatial data of the places users visit. These location traces can be used in statistical analyses to obtain users' typical mobility patterns and, in particular, to extract the users' points of interest (POIs), which an adversary can use to infer a variety of confidential information about the user [ashbrook2003using]. In this paper, we consider a Markov decision process mobility model whose states are the user's POIs. We denote the user's POIs by a finite set S, where each state represents the predetermined spatial data of a POI.

Remark 1.

To simplify the presentation, we define the state set only based on the user's spatial information; however, it would be easy to incorporate other information about the user's mobility pattern, such as timestamps. For instance, we can include timestamps t ∈ {1, ..., T} in the state set as S × {1, ..., T}, where T represents a finite time horizon.

3.2 Location Release Mechanism

Generally, location obfuscation is achieved by spatial obfuscation techniques such as using dummy locations [niu2013pseudo, lu2008pad], spatial cloaking [gruteser2003anonymous, chow2011spatial], or adding noise [andres2012geo, elsalamouny2016differential]. In our privacy protection framework, the location obfuscation mechanism for each of the user's POIs can be seen as an action of reporting an obfuscated position of that location. We therefore define the MDP action set A as the set of the user's obfuscated spatial reports that are sent to the LBS server and hence are observable to the adversary.

Remark 2.

One of our objectives here is to demonstrate that, even if each of the user's POIs is individually protected by a location obfuscation mechanism, an adversary can still use the user mobility model to improve his estimate of the user's true location and perform a localization attack.

3.3 Utility of Service

Inherently, there is a utility loss associated with any obfuscation mechanism. Here, we only require that the utility of service associated with the selected location obfuscation mechanism can be modeled as a real-valued and bounded MDP utility function C. In particular, given a user's POI s and a location obfuscation mechanism a, C(s, a) characterizes the service quality loss of reporting a instead of the exact location of s. For the sake of completeness, throughout the paper, we evaluate our privacy protection model using the cloaking-region method [niu2013pseudo].

3.4 User Mobility Model Illustration

In this section, we illustrate the proposed MDP model based on a student's mobility pattern on the campus of the University of Notre Dame. We consider a student with a mobile device moving between the university libraries, and an LBS server that provides a service to the student. Figure 1 shows the map of the University of Notre Dame libraries. We consider the libraries' locations as the student's POIs and use the student's movements to define the transitions between the POIs. Without loss of generality, we assume the student uses the spatial cloaked region method [gruteser2003anonymous] to obfuscate his true locations. The regions in the figure are the cloaked regions precomputed by the anonymizer [gruteser2003anonymous]. The student broadcasts the cloaked region associated with his source location to the LBS server and transitions to the next location.

Figure 1 also shows the constructed MDP model for the user mobility pattern. The states are the libraries' locations, and the transition labels show the cloaking-region information. The MDP transition function represents the probability of selecting the available cloaking regions at each POI. For instance, if the student at state s uniformly selects a cloaking region a and moves to state s', we define P(s' | s, a) accordingly. We assume the user starts from the Hesburgh library, as labeled in Figure 1, which sets the initial state distribution μ0 to place all probability mass on that state. The quality loss function C depends on the application and the area of the cloaked region. In this example, the student is querying for information in a specified area, and the obfuscation mechanism blurs the user's location by increasing the area of retrieval, so the quality loss grows with the area of the cloaked region. Therefore, a quality loss metric for this setup can be defined by C(s, a) = Area(a) [ku2009privacy], where Area(·) is a function that computes the area of the specified region.

Fig. 1: Left: Map of the University of Notre Dame libraries. The blue marked regions are the university libraries, the student's walking paths are marked with black lines, and the spatial cloaking regions for each library are shown with dashed lines. Right: the MDP user mobility model.

4 Privacy Notion

To evaluate an LPPM framework, it is important to define the location privacy metric and the adversary threat model. In the following, we assume the user's POIs contain confidential information that should be protected from an adversary, and we define the adversary threat model and the location privacy metric accordingly.

4.1 Adversary Inference Model

We assume the LBS server is potentially an adversary who has access to the user's mobility traces and other public information to construct the user's mobility model [shokri2011quantifying]. The perturbed locations are also reported to the LBS server and thus are visible to the adversary. Therefore, we consider an adversary who knows the user's mobility model M and is capable of observing the reported perturbed locations a ∈ A; however, the user's true POIs, i.e., the states s ∈ S, are not observable to the adversary.

The adversary's objective is to perform a localization attack, that is, to infer the user's presence at his POIs [shokri2011quantifying]. In a localization attack, the adversary obtains the user's obfuscated locations, which are generated probabilistically by the LPPM. Since for any observed obfuscated location a there are potentially many user POIs that may have produced a, the outcome of the adversary's localization attack is a probability distribution over the user's true POIs [shokri2011quantifying]. Formally, given an observation a, the adversary's outcome for any state s takes the form of a posterior distribution Pr(s | a). We call this posterior distribution the adversary belief and denote it by β, which is a function β : S → [0, 1] such that Σ_{s∈S} β(s) = 1. Furthermore, the adversary may have some background knowledge about the user's mobility habits before starting his observation. This side knowledge can also be encoded as a probability distribution over the state set [ashbrook2003using], defined by β0 : S → [0, 1] with Σ_{s∈S} β0(s) = 1.

The adversary initially, at t = 0, has a prior belief β0, and at each time instant t, when a perturbed location a_t is observed, the adversary updates its belief with Bayes' rule by computing the posterior distribution, given by:

β_{t+1}(s') = ( Σ_{s∈S} β_t(s) Pr(a_t | s) P(s' | s, a_t) ) / Pr(a_t),   (1)

where Pr(a_t) is the probability of observing a_t at time t.
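The sketch below implements one plausible reading of the update in (1) as a hidden-state Bayes filter: the prior belief is propagated through the mobility model and reweighted by the probability that each POI would emit the observed report. The tensor layout and variable names are assumptions made for illustration.

```python
import numpy as np

def belief_update(beta, a_obs, P, obs_prob):
    """
    One step of the adversary's Bayesian belief update (a reading of Eq. (1)).
    beta     : (n,) prior belief over the n POIs at time t
    a_obs    : index of the observed obfuscated location a_t
    P        : (n, m, n) transition tensor, P[s, a, s'] = P(s' | s, a)
    obs_prob : (n, m) probability of reporting a at POI s, Pr(a | s)
    Returns the posterior belief over the POIs at time t+1.
    """
    # weight of every (s -> s') path that emits the observed report a_t
    joint = beta[:, None] * obs_prob[:, a_obs, None] * P[:, a_obs, :]  # (n, n)
    posterior = joint.sum(axis=0)       # marginalize over the previous POI s
    norm = posterior.sum()              # Pr(a_t), the observation likelihood
    return posterior / norm if norm > 0 else np.full_like(beta, 1.0 / beta.size)
```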

4.2 Overview of Related Privacy Notations and Their Limitations

Given the adversary threat model, LPPMs are designed and evaluated based on an assumed location-privacy metric. The selection of an effective privacy metric, however, highly depends on the specification of the user's location privacy requirements. Here, we first provide an overview of related location privacy notions to motivate our proposed privacy metric. For a comprehensive review of privacy metrics, see [wagner2018technical].

Throughout this section, we use the user mobility model in Figure 1 as a running example. We use the adversary inference model in (1) and assume its prior belief β0 is a uniform distribution. As mentioned earlier, we consider an adversary with a localization attack model, and therefore we focus on related privacy metrics that measure the user's presence-absence disclosure to an adversary. Note that we do not evaluate the utility of service associated with these privacy metrics, since our objective here is only to study the limitations of these metrics in capturing the outcome of the adversary's localization attacks.

Let us assume one of the states is the user's secret area, and it is critical to ensure the adversary cannot determine the user's presence in this area. We denote the set of the user's secret POIs by Q ⊆ S. We furthermore consider the user's LPPM as an obfuscation mechanism that determines the random mapping between the user's actual POIs and the cloaked regions. Note that we assume the adversary can accurately measure the probability distribution of any obfuscated location a, and hence we set Pr(a | s) to its true value in the adversary inference model (1).

Fig. 2: The entropy of the adversary belief under the obfuscation mechanism in (3). The left vertical axis represents the entropy of the adversary posterior, and the right one the adversary belief over the user's secret area.
Fig. 3: The adversary's expected inference error under the mechanism in (5). The left vertical axis represents the minimum expected adversary inference error, and the right one the adversary belief over the user's secret area.
Fig. 4: The adversary belief under the ϵ-location privacy condition (7). The dashed line is the privacy upper bound. The left vertical axis represents the relative knowledge gain in (7), and the right one the adversary belief over the user's secret area.

4.2.1 Entropy

A common approach to defining user location privacy is to use entropy to quantify the outcome of the adversary's localization attack [ni2019anonymous]. Entropy quantifies the uncertainty associated with predicting the value of a random variable. In location privacy, it can be interpreted as how well the adversary can determine the user's position among other possible locations. More specifically, the entropy of the adversary belief β_t is defined by:

H(β_t) = − Σ_{s∈S} β_t(s) log β_t(s).   (2)

The adversary's uncertainty is maximal when β_t is a uniform distribution, which implies that the higher the entropy, the lower the certainty of the attack's outcome. The absolute value of entropy, however, does not necessarily indicate how accurate the adversary's attack outcome is. For instance, an adversary may identify the user's current location with high probability, yet the small probabilities spread over the other possible locations can still result in a high entropy value [toth2004measuring]. To further illustrate this limitation, we compute the obfuscation mechanism that maximizes the adversary entropy H(β_t). Formally:

(3)

where the maximization is over the feasible obfuscation mechanisms. Figure 2 shows the adversary entropy over the user's POIs and the adversary belief over the user's secret location. As the figure illustrates, although the absolute value of the entropy does not change drastically, the adversary becomes significantly more confident about the user's presence at the secret area. Note that, for the MDP in Figure 1, the maximum adversary entropy is attained at the uniform belief; the slight reduction of the entropy is due to the constraints imposed by the user's mobility model on the obfuscation mechanism.
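The following sketch computes the entropy in (2) for a belief vector and reproduces, on synthetic numbers, the limitation discussed above: a belief that is already quite confident about one POI can still have substantial entropy. The example values are illustrative, not taken from the paper's experiment.

```python
import numpy as np

def belief_entropy(beta, eps=1e-12):
    """Entropy H(beta) of the adversary belief, Eq. (2), in bits."""
    beta = np.asarray(beta, dtype=float)
    return float(-(beta * np.log2(beta + eps)).sum())

# A belief that puts 0.6 on one POI and spreads the remaining 0.4 over 20 others
# still has about 2.7 bits of entropy (the maximum for 21 POIs is ~4.4 bits),
# even though the adversary is already 60% sure of the user's location.
peaked = np.array([0.6] + [0.4 / 20] * 20)
print(round(belief_entropy(peaked), 2))
```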

4.2.2 Inference Error

Privacy metrics based on expected inference error are widely used to characterize location privacy [shokri2011quantifying]. These approaches quantify how close the attack's outcome is to the user's true location. Given the attack's outcome β_t, this privacy metric is defined by:

ExpErr(β_t) = min_{ŝ ∈ S} Σ_{s∈S} β_t(s) d(ŝ, s),   (4)

where d(ŝ, s) can be the Hamming distance or the Euclidean distance between ŝ and s, or any other metric that captures the privacy loss. Here, ŝ is the estimated user location and s is the user's true location. In this privacy metric, a high estimation error corresponds to a high level of location privacy, and the adversary's objective is to minimize the expected error of any attack outcome. The LPPM objective for this setup is to maximize the adversary's inference error given the user's constraint on the service quality loss [shokri2012protecting]. This privacy metric defines user privacy as a global performance measure averaged over all locations; hence, it does not explicitly characterize the user's privacy at a particular location, which can be critical under localization attacks. Similarly, we design an LPPM to illustrate this limitation. Let us consider d(ŝ, s) as the Euclidean distance between the centroids of areas ŝ and s. We compute the following obfuscation mechanism that maximizes (4):

(5)

Figure 3 demonstrates how the expected inference error evolves together with the adversary belief over the user's secret location. As the simulation results illustrate, the LPPM maintains the adversary's expected inference error close to the initial error, but the adversary belief over the secret location increases over time, indicating that the adversary becomes more certain about the user's presence at that area.
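The sketch below evaluates the minimum expected inference error in (4) for a Euclidean distance over POI centroids, i.e., the error of the best single estimate the adversary can make against the current belief. Input shapes and names are assumptions for illustration.

```python
import numpy as np

def min_expected_inference_error(beta, centroids):
    """
    Minimum expected inference error, Eq. (4): the adversary picks the estimate
    that minimizes the belief-weighted Euclidean distance to the true POI.
    beta      : (n,) adversary belief over the POIs
    centroids : (n, 2) planar coordinates of the POI centroids
    """
    # pairwise distances d(s_hat, s) between candidate estimates and true POIs
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    expected_error = d @ np.asarray(beta, dtype=float)  # one value per estimate
    return float(expected_error.min())
```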

4.2.3 Differential Privacy

Differential privacy is a rigorous mathematical framework that provides a provable privacy guarantee for protecting individual data in a database. This is achieved by adding controlled noise to the query outcome such that the presence (or absence) of an individual in the database has a negligible impact on the perturbed reported answer. Differential privacy techniques have been used in the context of LBS by considering the user's location as the sensitive information and using location obfuscation mechanisms to produce the query noise. These obfuscation mechanisms are designed such that changing from one location to another nearby location causes the probability distribution of the reported locations to change only to a limited extent [elsalamouny2016differential].

Depending on the exact notion of indistinguishability between the user's locations, several differential privacy models have been introduced. In geo-indistinguishability models [andres2012geo], the level of indistinguishability required between two locations scales with the distance between them: nearby locations must remain highly indistinguishable, while distant ones may be distinguished more easily. Formally, the obfuscation mechanism satisfies ϵ-geo-indistinguishability if Pr(a | s) ≤ e^{ϵ d(s, s')} Pr(a | s') for all locations s, s' and every reported location a. This property means that the (log of the) ratio between the probabilities of reporting the obfuscated location a at any pair of the user's locations is always bounded by ϵ times the distance between them.

Another popular model based on differential privacy is ϵ-location privacy [elsalamouny2016differential]. In this model, indistinguishability is defined for arbitrary locations within a predefined distance D. Formally, ϵ-location privacy holds for the mechanism if Pr(a | s) ≤ e^{ϵ} Pr(a | s') for all s, s' such that d(s, s') ≤ D. This characterization implies that the (log of the) ratio between the probabilities of reporting the perturbed location a at any two nearby locations is at most ϵ. These two privacy notions are correlated [elsalamouny2016differential] and have been extended to characterize other types of location indistinguishability [chatzikokolakis2017methods]. However, our objective here is to evaluate differential-privacy-based metrics under the described localization attack. Therefore, for the sake of simplicity and clarity, we focus our analysis on the ϵ-location privacy model, and we select D large enough that the indistinguishability condition holds for any pair of states. Differential privacy in general, and ϵ-location privacy in particular, are described solely by the obfuscation mechanism, and hence they can guarantee user privacy independently of the adversary's prior knowledge. However, [elsalamouny2016differential] has shown that it might not be possible to achieve ϵ-location privacy for every adversary prior while receiving a reasonable quality of service. Therefore, it is important to characterize privacy in terms of the adversary's knowledge gain. More precisely, let β_t denote the adversary posterior and β_{t−1} the prior belief. The obfuscation mechanism satisfies ϵ-location privacy at time t if and only if the adversary posterior and prior belief satisfy [elsalamouny2016differential]:

e^{−ϵ} ≤ β_t(s) / β_{t−1}(s) ≤ e^{ϵ}  for all s ∈ S.   (6)

As (6) implies, ϵ-location privacy provides location privacy by restricting the relative gain of the adversary's knowledge about the user's true location. However, this model may not be able to quantify how close the adversary belief gets to the user's true location over time. We illustrate this limitation through the student mobility model in Figure 1. For any s ∈ S and time t, let us rewrite (6) as:

−ϵ ≤ log( β_t(s) / β_{t−1}(s) ) ≤ ϵ,   (7)

where β_t can be obtained from the adversary inference model (1). We fix ϵ and find an obfuscation mechanism that makes the adversary belief β_t satisfy (7) at every time t. Figure 4 demonstrates how the belief evolves for the secret location. As depicted in the figure, the relative adversary knowledge gain over the secret POI, captured by the log ratio in (7), meets the privacy criterion at every step. However, the absolute value of the adversary belief over the secret location keeps increasing over time. Hence, although the relative adversary knowledge gain is upper bounded by ϵ, and the mechanism is safe in the sense of ϵ-location privacy, the adversary can still combine the reported obfuscated locations with the user mobility model to further improve his knowledge about the user's secret location.
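As a small illustration of the bounded-knowledge-gain reading of (6)-(7), the check below verifies that the log-ratio between posterior and prior belief stays within ϵ at every POI; as the example in the text shows, passing this check does not prevent the absolute belief over the secret POI from growing. This is a hedged sketch of the condition, not the reference implementation from [elsalamouny2016differential].

```python
import numpy as np

def relative_gain_within_eps(prior, posterior, eps, tiny=1e-12):
    """True if |log(posterior(s) / prior(s))| <= eps for every POI s (Eq. (7))."""
    prior = np.asarray(prior, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    log_ratio = np.abs(np.log(posterior + tiny) - np.log(prior + tiny))
    return bool(np.all(log_ratio <= eps))
```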

Roughly speaking, the underlying limitation of these metrics in capturing the adversary's knowledge about a particular location stems from quantifying privacy in terms of location indistinguishability. Motivated by the probabilistic current-state opacity (CSO) metric [Wu2020], we propose to define the user's location privacy as the absolute value of the adversary belief over the user's secret area.

4.3 Location Privacy in Belief Space

To quantify the level of protection offered by an LPPM against the adversary, we propose to use a probabilistic current-state opacity (CSO) metric. In the CSO framework, the goal is to protect the system states that contain sensitive information, namely the secret states, against an external observer or an adversary [Wu2020]. The adversary maintains a belief over the system secrets through Bayesian inference, and the system is considered CSO-safe if the adversary's confidence that a secret has been visited is bounded. In the LPPM framework, analogous to secret states, the user may only need to protect a subset of POIs that carries semantic information which an adversary could use for profiling or other attacks; examples include health clinics, places with religious significance, etc. [ashbrook2003using]. Therefore, we propose to use the CSO definition to quantify the privacy of the user's POIs against an adversary intending a localization attack. Formally, suppose there is a subset of states Q ⊆ S in the MDP M, representing the sensitive POIs of the user, that the LPPM would like to conceal from the adversary. Note that in the degenerate cases Q = ∅ or Q = S, the problem is trivial.

Our CSO-inspired location privacy metric requires that the adversary's belief of the user being in the secret states Q is upper bounded by a constant ϵ. We call this privacy notion ϵ-privacy and formalize it in the following definition.

Definition 2.

Given a user mobility pattern modeled by an MDP M, a set of secret POIs Q ⊆ S, and a constant ϵ ∈ (0, 1), M is ϵ-private under a privacy-preserving policy π if

Σ_{s∈Q} β_t(s) ≤ ϵ  for all t ≥ 0.   (8)

This notion of privacy characterizes the adversary's inference about the user's current true location at all times. In this setup, the user's location privacy is preserved if the adversary's confidence stays below the desired threshold. Note that, in general, we cannot prevent the adversary from having prior information about the user's mobility pattern, and therefore we assume the system is ϵ-private at t = 0. However, our goal is to control the additional information that the adversary obtains by observing the location information released by the user.
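Definition 2 can be checked directly on a simulated belief trajectory: the total belief mass on the secret set Q must stay at or below ϵ at every step. The helper below is a sketch under that reading; the trajectory would come from iterating the belief update in (1).

```python
def is_eps_private(belief_trajectory, secret_pois, eps):
    """
    Check the epsilon-privacy condition (8) on a finite prefix of the belief
    trajectory: sum_{s in Q} beta_t(s) <= eps for every observed time step t.
    belief_trajectory : iterable of dicts {poi: probability}
    secret_pois       : set Q of secret POIs
    """
    return all(sum(beta.get(s, 0.0) for s in secret_pois) <= eps
               for beta in belief_trajectory)
```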

5 Location-Privacy Protection Problem

In our framework, the user's location protection against the adversary's localization attack is based on randomizing the obfuscation mechanisms, as prescribed by a privacy-preserving policy.

5.1 Privacy-preserving Policy

A stationary stochastic privacy-preserving policy is a function π : S → Δ(A) that assigns a probability distribution over the actions at each state s. We denote the policy decision matrix by Π, where the element in the s-th row and a-th column is defined by Π(s, a) = π(a | s). The MDP M is said to follow a policy π if, at any state s, it draws an action from π(· | s). Therefore, for any s ∈ S and a ∈ A(s), π(a | s) is the probability of reporting a at state s. Note that π induces a Markov chain (MC) [puterman2014markov] with transition matrix P_π, whose elements for any s, s' ∈ S are:

P_π(s, s') = Σ_{a∈A(s)} π(a | s) P(s' | s, a).   (9)

Let K_a be a row-stochastic matrix with elements K_a(s, s') = P(s' | s, a) for any s, s' ∈ S. The induced MC transition matrix in (9) can then be written as [el2015finite]:

P_π = Σ_{a∈A} ((Π e_a) 1ᵀ) ⊙ K_a,   (10)

where 1 is a column vector of all ones, e_a is a vector with the a-th element equal to one and the rest all zeros, and ⊙ is the element-wise (Hadamard) product. For each state s, let μ_t(s) denote the probability of the system being at state s at time t, and let μ_t be the row vector collecting μ_t(s) for all s ∈ S. The state probability distribution of the MC evolves according to the following dynamic equation:

μ_{t+1} = μ_t P_π.   (11)
Remark 3.

Considering a stationary privacy-preserving policy helps the LPPM avoid recomputing the policy for every change in the adversary's knowledge, which dramatically reduces the computational overhead on the user's mobile device.

When the policy is stationary, the induced MC transition matrix P_π is time-independent. For a finite-state MC with transition matrix P_π, a stationary distribution μ is a row probability vector satisfying

μ = μ P_π,   (12)

which implies that if μ_t = μ at some time t, then μ_{t'} = μ for all t' ≥ t. The MC is called ergodic if there exists a unique, invariant, and strictly positive stationary distribution μ such that, independent of the initial distribution μ_0, the state probability distribution μ_t converges to μ, i.e., lim_{t→∞} μ_t = μ [puterman2014markov, §8.3].
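The two helpers below make (9)-(12) concrete: the first builds the transition matrix of the Markov chain induced by a stationary policy, and the second approximates its stationary distribution by power iteration. Array shapes and names are illustrative assumptions.

```python
import numpy as np

def induced_chain(P, pi):
    """
    Transition matrix of the induced MC, Eq. (9):
    P_pi[s, s'] = sum_a pi[s, a] * P[s, a, s'].
    P : (n, m, n) MDP transition tensor, pi : (n, m) policy decision matrix.
    """
    return np.einsum('sa,sat->st', pi, P)

def stationary_distribution(P_pi, max_iter=10_000, tol=1e-12):
    """Power iteration for a row vector mu satisfying mu = mu @ P_pi, Eq. (12)."""
    n = P_pi.shape[0]
    mu = np.full(n, 1.0 / n)            # start from the uniform distribution
    for _ in range(max_iter):
        nxt = mu @ P_pi
        if np.abs(nxt - mu).max() < tol:
            return nxt
        mu = nxt
    return mu
```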

Definition 3.

An MDP M is called unichain if, for each policy π, the Markov chain induced by π is ergodic [puterman2014markov].

Here, we assume the MDP representing the user mobility pattern is a unichain MDP. This means that for any policy π, the induced MC has a single recurrent class plus a possibly empty set of transient states (see [puterman2014markov, §8.3] for an elementary exposition of the classification of MDPs). A recurrent state in the user mobility model translates to a user POI that is accessible from all the POIs that are, in turn, accessible from it. Therefore, restricting attention to a single recurrent class of POIs implies the user can travel to any POI from any other POI, which we believe is not a restrictive assumption on the user mobility model.

5.2 Performance Metrics

The performance metric measures the overall efficiency of the framework for a given LPPM and quality loss model. When the privacy-preserving policy makes decisions frequently over a long run, it is preferable to compare LPPMs on the basis of their average expected quality loss. We therefore consider the average quality loss criterion as the performance metric [puterman2014markov]. Formally, for any stationary policy π and initial distribution μ0, the expected average quality loss is defined as follows:

J(π) = lim_{T→∞} (1/T) E_π[ Σ_{t=0}^{T−1} C(s_t, a_t) ].   (13)

This expression may be undefined when the limit does not exist; however, it is known that for a unichain MDP the existence of this limit is guaranteed [puterman2014markov, §8.2]. Here, the LPPM objective is to minimize the overall quality loss while the user's privacy is preserved according to the ϵ-privacy condition (8).

5.3 Problem Formulation

One objective of the LPPM is to find an optimal privacy-preserving policy that minimizes J(π). If there were no constraints on the states and actions, this would be a well-studied unconstrained MDP planning problem that can be solved by value-iteration algorithms [puterman2014markov]. However, here we have location privacy constraints that must be taken into account in the policy synthesis. The problem of privacy-preserving optimal policy synthesis is given below.

Problem 1.

Given a unichain MDP M with average quality loss criterion J(π), synthesize an optimal privacy-preserving policy π* that minimizes J(π) and ensures M is ϵ-private with respect to the secret set Q.

6 Optimal Privacy-preserving Policy Synthesis

In this section, we study the design of a privacy-preserving optimal policy for Problem 1. Our underlying idea is to formulate the policy synthesis problem as a linear programming (LP) problem. Given an MDP M controlled by the policy π, the average quality loss metric (13) can be written as J(π) = Σ_{s∈S} Σ_{a∈A} μ(s) π(a | s) C(s, a) [puterman2014markov, §8.4], where μ is the unique stationary distribution of P_π. Let us denote the joint state-action distribution by x(s, a) = μ(s) π(a | s), and, in matrix form, by X with X(s, a) = x(s, a). The expected average quality loss metric (13) can then be rewritten as:

J(π) = Σ_{s∈S} Σ_{a∈A} x(s, a) C(s, a),   (14)

which is a linear function of x [puterman2014markov, §8.4]. The stationary distribution requirement (11) can similarly be expressed in terms of x:

Σ_{a∈A} x(s', a) = Σ_{s∈S} Σ_{a∈A} P(s' | s, a) x(s, a)  for all s' ∈ S,  with Σ_{s∈S} Σ_{a∈A} x(s, a) = 1 and x(s, a) ≥ 0.   (15)

The adversary belief can also be modeled as a function of the state-action distribution x. We consider an adversary who has access to the history of the user's released locations and therefore estimates the distribution of the observed actions. The adversary then updates its belief for any state based on a Bayesian rule [Bo8392381], defined by the following equation:

(16)

However, since we consider a stationary policy π, the user's released locations are solely a function of the user's POIs, and therefore the adversary's estimate of the observation probabilities in (1) is time-independent. Hence, the adversary's update dynamics take the following form:

(17)

Let β_t be the vector of the adversary belief over the state set S at time instant t. The adversary belief can be seen as the state probability distribution of a MC whose dynamics are expressed in the following lemma.

Lemma 1.

The adversary belief evolves according to the following MC dynamic:

β_{t+1} = β_t T,   (18)

where the adversary transition matrix is

(19)
Proof.

For all s, s' ∈ S, define the elements of T as in (19); then (18) follows from the definition of the belief update given in (17). Furthermore, all the elements of T are strictly positive and each row of T sums to one, and therefore T is a row-stochastic matrix; hence (18) describes a Markov chain. ∎

In order to enforce the ϵ-privacy requirement on M with a stationary policy π, we first find the necessary and sufficient conditions that make M ϵ-private. Let us denote by 0 a zero matrix of appropriate dimension, and write matrix inequalities element-wise.

Theorem 1.

For every prior belief β_0 satisfying Σ_{s∈Q} β_0(s) ≤ ϵ, M is ϵ-private under policy π for all t ≥ 0 if and only if there exist nonnegative auxiliary variables such that:

(20)
Proof.

We first find the necessary and sufficient conditions on the adversary belief dynamics ensuring that if Σ_{s∈Q} β_t(s) ≤ ϵ, then Σ_{s∈Q} β_{t+1}(s) ≤ ϵ for all t. This requirement can be written as:

(21)

This condition implies that any prior belief that respects the privacy condition yields a posterior belief that also satisfies the privacy requirement. In standard form, it can be written as:

(22)

where the matrices and vectors collect the coefficients of (21). In order to evaluate the feasibility of the primal optimization problem, let us first introduce a slack variable. Since the constraint set is nonempty, we can always find values of the variables for which the constraints admit a solution. Hence, the primal optimization problem is feasible, and the dual of this optimization problem takes the form:

(23)

In a similar way, let us introduce a slack variable, which yields:

(24)

It is clear that the equality constraints have a solution, which implies that strong duality holds, i.e., the primal and dual optimal values coincide. Then, from the equality condition (24), the dual of the optimization problem takes the form:

(25)
(26)

This is equivalent to a set of LP problems, given by:

(27)

where e_i is a vector whose i-th element is one and whose remaining elements are all zeros. Here, by equivalent we mean that a feasible solution that maximizes (25) must also maximize all the linear programming problems in (27). Note that (27) is obtained by multiplying e_i into (25) and substituting with (26). Since strong duality holds and the duality gap is zero, the necessary and sufficient condition can be expressed using the LP problem in (21). Hence the optimal solution of (27) must satisfy:

(28)

Therefore, if there exist variables satisfying (28), they must be the solution of the dual problem (27), and since strong duality holds, they solve (21), which guarantees Σ_{s∈Q} β_t(s) ≤ ϵ for all t ≥ 0, and hence M is ϵ-private. Note that a partially similar proof scheme was used by [accikmecse2015convex] for the safety control of Markov chains. Here, we have polytopic constraints, which is a different problem formulation from [accikmecse2015convex]. This proves the necessary and sufficient conditions. ∎

Inequality (20) characterizes a condition on the policy such that, if it is satisfied, then whenever the MDP M is initially ϵ-private at t = 0, it stays ϵ-private forever. We encode this condition as a constraint on the policy in a linear program that minimizes the average quality loss metric given in (14). The following LP solves Problem 1.

(29)

Note that we have redefined our objective function and the adversary inference model in terms of the state-action variables x(s, a). If x* is an optimal solution that minimizes (29), then for any s ∈ S and a ∈ A, it induces a stationary state distribution μ(s) = Σ_{a∈A} x*(s, a) and a stationary optimal policy given by π*(a | s) = x*(s, a) / μ(s).
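The sketch below solves the unconstrained part of the LP (29) with SciPy, using the standard occupancy-measure formulation: the variables are x(s, a), the objective is the average quality loss (14), and the constraints are the stationarity and normalization conditions (15). The privacy condition (20) would enter as additional linear inequalities on x, which are omitted here because their explicit matrix form depends on the derivation in the proof of Theorem 1; everything else (names, shapes, solver choice) is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linprog

def min_average_loss_policy(P, C):
    """
    Occupancy-measure LP for the average quality loss, without constraint (20).
    P : (n, m, n) transition tensor, C : (n, m) quality-loss matrix.
    Returns the stationary policy pi[s, a] recovered from x(s, a).
    """
    n, m, _ = P.shape
    cost = C.reshape(-1)                       # objective: sum_{s,a} x(s,a) C(s,a)

    # stationarity: sum_a x(s',a) = sum_{s,a} x(s,a) P(s'|s,a) for every s'
    A_eq = np.zeros((n + 1, n * m))
    for sp in range(n):
        for s in range(n):
            for a in range(m):
                A_eq[sp, s * m + a] -= P[s, a, sp]
        A_eq[sp, sp * m:(sp + 1) * m] += 1.0
    A_eq[n, :] = 1.0                           # normalization: sum_{s,a} x(s,a) = 1
    b_eq = np.concatenate([np.zeros(n), [1.0]])

    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    x = res.x.reshape(n, m)
    mu = x.sum(axis=1)                         # stationary state distribution
    pi = np.where(mu[:, None] > 1e-12, x / np.maximum(mu[:, None], 1e-12), 1.0 / m)
    return pi
```

The policy is recovered with the same decomposition used in the text, π(a | s) = x(s, a) / Σ_{a} x(s, a), with a uniform fallback at states that receive no stationary mass.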

Remark 4.

Our privacy protection framework is designed for an individual mobile user, and the privacy-preserving policy is designed to be performed at the user’s mobile device without the collaboration of a trusted third party or other users. This property of our framework is an important advantage compared to other types of solutions that rely on the intervention of a trusted centralized service, such as spatial anonymity [gruteser2003anonymous].

6.1 LPPM with unsafe adversary prior belief

So far, we have assumed that the adversary's prior belief over the user's secret POIs does not violate the location privacy requirement, i.e., Σ_{s∈Q} β_0(s) ≤ ϵ. This assumption may not be valid if the adversary has strong background knowledge about the user mobility model, or if the LPPM designer selects a tight upper bound ϵ on the adversary belief that makes the prior unsafe [christin2016privacy]. Here, we aim to design an LPPM that drives the adversary belief into a private region even if her prior belief does not meet the privacy criteria.

Our LPPM design process for this setup relies on the invariance property of ergodic Markov chains, which guarantees that the state probability distribution converges to a stationary distribution independent of the initial distribution. Hence, if the adversary inference model is an ergodic MC, the adversary belief over the user's secret POIs will converge to a unique stationary distribution. The LPPM objective then is to force the adversary belief to converge to a stationary distribution that respects the ϵ-privacy requirement.

Lemma 2.

If the MDP M is unichain, then for any policy π and any stationary distribution respecting (11), the adversary belief dynamics (18) form an ergodic Markov chain.

Proof.

Let us consider a stationary policy π and its induced state-action distribution. The adversary inference model (17) can be written as a Markov chain over the states whose transition matrix is obtained from the MDP transition function P together with a stationary stochastic policy that is state-independent. Hence, since P is the transition function of the unichain MDP M, according to Definition 3 this policy must induce an ergodic Markov chain with transition matrix T. ∎

Consequently, for the adversary inference model (17), there exists a unique stationary distribution β∞ satisfying:

β∞ = β∞ T,   (30)

and lim_{t→∞} β_t = β∞, independent of β_0. We define asymptotic location privacy based on the adversary's stationary belief as given below.

Definition 4.

Given a user mobility pattern modeled by a unichain MDP M and a set of secret POIs Q ⊆ S, M is asymptotically ϵ-private under policy π if

lim_{t→∞} Σ_{s∈Q} β_t(s) ≤ ϵ.   (31)

Due to the ergodic property of T, the asymptotic privacy requirement (31) can simply be written as Σ_{s∈Q} β∞(s) ≤ ϵ. This requirement characterizes a region in the adversary belief space in which the user's secret POIs are considered safe. Therefore, even if the adversary's prior belief is not in this region, i.e., Σ_{s∈Q} β_0(s) > ϵ, the LPPM can still attempt to achieve the asymptotic privacy requirement (31) by designing a policy that drives the adversary belief to eventually reach the asymptotically private region and stay there. Formally, a policy π can enforce the user mobility model to be asymptotically ϵ-private if there exists a stationary distribution satisfying (11) and the following set is not empty:

(32)

In this setup, in order to minimize the average quality loss, we design an optimization problem with objective function (14) and consider (32) as a constraint. The main challenge, however, is the bilinear form of (30) in the decision variables, which makes the problem non-convex. Formally:

(33)

Similarly, if x* is the optimal solution of (33), then for any s ∈ S and a ∈ A, it induces a stationary distribution μ(s) = Σ_{a∈A} x*(s, a) and the optimal stationary policy π*(a | s) = x*(s, a) / μ(s).

6.2 Computation Overhead

The proposed LPPM framework designs the privacy-preserving policies based on the linear program (29), where the dimension of the problem is determined by the number of optimization variables and the number of constraints. In (29), the constraints (20) and (19) can be merged into a single inequality. The numbers of constraints and optimization variables therefore scale with the sizes of the state and action sets. The most common tools in practice for solving linear programs are simplex algorithms, which are quite efficient: the number of iterations appears polynomial in the problem dimensions [spielman2004smoothed], although the worst-case complexity is proven to be exponential [klee1970good].

The asymptotic privacy-preserving synthesis problem (33) is a bilinear matrix inequality (BMI) optimization problem. General BMI-constrained optimization problems are proven to be NP-hard, but despite this theoretical barrier, various approaches have been developed in the literature to tackle them. BMI optimization problems can be solved by forming a sequence of semidefinite programming (SDP) relaxations [vandenberghe1996semidefinite], or with other general nonlinear optimization methods such as sequential quadratic programming (SQP) [boggs1995sequential].

7 Experimental Results

In this section, we study the effectiveness of the proposed LPPM in a realistic case study involving publicly available data. In particular, we select the Geolife dataset [zheng2010geolife] to conduct our experiment.

7.1 Dataset Description

This dataset consists of GPS trajectories collected by Microsoft Research Asia over three years from 182 users. During this project, a wide range of users' outdoor movements were recorded, including daily-life routines such as commuting to work as well as activities such as shopping, dining, and cycling. The users' movements in this dataset are represented by series of tuples containing latitude, longitude, and timestamps. The dataset contains several million location points in total; however, the variance in the number of locations per user and in the total duration of the trajectories is very high. In our experiment, we filtered the dataset to keep only users' trajectories with more than 500 locations and a duration of at least one year. After this process, the final dataset contains data from 18 users, and among them we picked one user who has a large set of mobility traces.

7.2 MDP Construction Process

The MDP construction process is depicted in Algorithm 1. We first extract the user's POIs from his mobility traces by adapting the Density-Joinable clustering algorithm (DJ-Cluster) [gambs2012next] to our experiment. Following this method, a user's POI is defined by the centroid of an area that the user frequently visits and where he spends a given amount of time. The maximum radius of this area, denoted by MaxRadi, the minimum duration of stay in this area, denoted by MinStay, and the minimum distance between distinct areas, denoted by MinDist, are the required parameters to characterize the user's POIs.

The DJ-Cluster algorithm has three phases. The first phase preprocesses the user's mobility traces to extract the stationary points: given the predefined constant MinSpeed, we delete all the traces in which the user's speed is greater than MinSpeed. The second phase constructs a set of clusters from the remaining mobility traces and then merges the user's locations that are within a MaxRadi radius of the clusters' centroids. The third phase merges the computed clusters whose centroids are within MinDist of each other. Once the clustering process is finalized, for each cluster we compute the time the user spends in it, in hours. Given the predefined positive constant MinStay, we remove any cluster in which this time is less than MinStay. The remaining clusters form the user's POIs, which are represented by the state set S of the MDP M. The described procedure is shown in the first part of Algorithm 1 (lines 1-4).
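The sketch below follows the three phases just described with a greedy, simplified clustering pass; it is an illustration of the procedure, not the DJ-Cluster reference implementation, and the parameter names mirror the text (MinSpeed, MaxRadi, MinDist, MinStay).

```python
import numpy as np

def extract_pois(traces, min_speed, max_radi, min_dist, min_stay_h):
    """
    Simplified sketch of the three-phase POI extraction described above.
    traces: list of (x, y, t) tuples in meters/seconds, time-ordered.
    Returns a list of POI centroids as (x, y) arrays.
    """
    pts = np.asarray(traces, dtype=float)

    # Phase 1: keep only stationary points (instantaneous speed < min_speed).
    step = np.linalg.norm(np.diff(pts[:, :2], axis=0), axis=1)
    dt = np.maximum(np.diff(pts[:, 2]), 1e-9)
    keep = np.r_[True, step / dt < min_speed]
    pts = pts[keep]

    # Phase 2: greedily grow clusters of points within max_radi of the centroid.
    clusters = []                       # each cluster: {'pts': [points]}
    for p in pts:
        placed = False
        for c in clusters:
            centroid = np.mean([q[:2] for q in c['pts']], axis=0)
            if np.linalg.norm(p[:2] - centroid) <= max_radi:
                c['pts'].append(p)
                placed = True
                break
        if not placed:
            clusters.append({'pts': [p]})

    # Phase 3: merge clusters whose centroids are within min_dist of each other.
    merged = []
    for c in clusters:
        centroid = np.mean([q[:2] for q in c['pts']], axis=0)
        for m in merged:
            if np.linalg.norm(centroid - m['centroid']) <= min_dist:
                m['pts'].extend(c['pts'])
                m['centroid'] = np.mean([q[:2] for q in m['pts']], axis=0)
                break
        else:
            merged.append({'pts': list(c['pts']), 'centroid': centroid})

    # Keep clusters whose visit time span (a crude proxy for dwell time)
    # exceeds min_stay_h hours; their centroids become the POIs.
    pois = []
    for m in merged:
        ts = sorted(q[2] for q in m['pts'])
        span_h = (ts[-1] - ts[0]) / 3600.0 if len(ts) > 1 else 0.0
        if span_h >= min_stay_h:
            pois.append(m['centroid'])
    return pois
```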

Figure 5 illustrates the user's mobility traces. We set the MinSpeed threshold to extract the user's stationary points and use the orthodromic distance to compute the distance between stationary points when forming clusters. The extracted POIs are shown in Figure 5 with red marks representing the centroid of each cluster. Without loss of generality, we assume the initial distribution places all probability mass on a single starting POI.

The next step is to define the MDP action set A. As discussed in Section 3.2, we consider the location releasing mechanisms as the set of actions. Here, we use spatial cloaking techniques to conceal the user's POIs. In particular, we utilize the area cloaking method proposed in [gruteser2004protecting]. This technique provides location privacy by blending the user's secret location into a region that covers a specified number of other sensitive areas. Thus, the user's secret location at each POI becomes indistinguishable among other POIs. Here, we consider all the extracted POIs as potentially sensitive areas. For the sake of simplicity, we consider circular cloaking areas. Following the area cloaking method [gruteser2004protecting], for any POI s, we construct a cloaked region a with a radius chosen such that it contains s and at least one other POI. The area cloaking method iterates over each POI to find the smallest radius that meets the area cloaking privacy requirement. Figure 5 illustrates the constructed cloaked areas for two POIs, represented by blue circles. The POIs extracted from the user's mobility traces are considered the user's true locations and are not accessible to the adversary. The cloaked areas, however, represent the user's service queries and hence are observable to the adversary. Given any pair of POIs s, s' and a cloaked region a, the transition probability P(s' | s, a) can be interpreted as the probability that the user reports area a to the LBS and transitions from s to s'. Following [gambs2012next], we first compute the empirical transition frequency from s to s', i.e., the number of times the user traveled from s to s' divided by the total number of times the user departed from s. Then, for any a ∈ A and s, s' ∈ S, we set P(s' | s, a) to this frequency if the cloaking area a conceals s, and to zero otherwise.

The quality loss function for this model is associated with the area of the cloaked regions. More precisely, we assume the user queries for information only when he is inside a POI area, and the area cloaking mechanism obfuscates the user's location in the query by increasing the area of retrieval. Hence, the user's quality of service is expected to degrade proportionally with the area of the reported cloaked region [ku2009privacy]. Therefore, for any POI s and cloaked region a, the utility function can be defined by C(s, a) = Area(a). Note that if a is not available at s, the LPPM policy must avoid reporting a when the user is inside s, and hence we set C(s, a) to a large quality-loss value. The described procedure is depicted in the second part of Algorithm 1 (lines 5-9); a code sketch of these final steps is given after Algorithm 1 below.

Input: user mobility traces; parameters MinSpeed, MaxRadi, MinDist, MinStay.
Result: User mobility MDP M = (S, A, P, μ0, C).
1. Delete all traces in which the user's speed exceeds MinSpeed, and store the remaining stationary points in the set StPoint.
2. Merge all points in StPoint that are within MaxRadi of each other and store them as a cluster set.
3. Merge any pair of clusters whose centroids are within MinDist of each other.
4. Compute the user's dwell time for each cluster, remove all clusters whose dwell time is below MinStay, and store the remaining clusters as the state set S.
5. Set the initial distribution μ0.
6. For every state s ∈ S, construct a cloaked region a with a radius such that the area cloaking criterion is met [gruteser2004protecting], and store all such regions in the action set A.
7. For any pair of states s, s' ∈ S, compute the empirical transition probability from s to s'.
8. For any cloaked region a ∈ A and any pair of states s, s' ∈ S, set the MDP transition probability P(s' | s, a) to this empirical probability if all the points of cluster s are inside the cloaked region a, and to zero otherwise.
9. For any state s ∈ S and cloaked region a ∈ A, compute the quality loss function as C(s, a) = Area(a) if a conceals s, and otherwise set C(s, a) to a large quality-loss value.
Algorithm 1 MDP construction algorithm.
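The final steps of Algorithm 1 (the transition and quality-loss computations in steps 7-9) can be sketched as follows: transition probabilities are estimated from the visit sequence as empirical frequencies and copied to every cloaked region that conceals the source POI, and the quality loss is the area of the reported circle, with a large penalty for regions that do not conceal the current POI. The data layout and names are illustrative assumptions, and cloaked regions are assumed to be indexed 0..m-1.

```python
import numpy as np

def build_transitions(visits, n_states, cloaks):
    """
    Steps 7-8 of Algorithm 1 (sketch): empirical transition tensor P[s, a, s'].
    visits : time-ordered list of visited POI indices
    cloaks : dict a -> set of POI indices concealed by cloaked region a
    """
    counts = np.zeros((n_states, n_states))
    for s, sp in zip(visits[:-1], visits[1:]):
        counts[s, sp] += 1.0
    freq = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)

    P = np.zeros((n_states, len(cloaks), n_states))
    for a, concealed in cloaks.items():
        for s in concealed:            # region a is only available where it conceals s
            P[s, a, :] = freq[s, :]
    return P

def build_quality_loss(radii, cloaks, n_states, penalty=1e6):
    """
    Step 9 of Algorithm 1 (sketch): C(s, a) = area of region a if a conceals s,
    and a large penalty otherwise so the policy avoids unavailable reports.
    radii : dict a -> radius of cloaked region a
    """
    C = np.full((n_states, len(cloaks)), penalty)
    for a, concealed in cloaks.items():
        for s in concealed:
            C[s, a] = np.pi * radii[a] ** 2
    return C
```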
Fig. 5: Example of real-world mobility traces of a mobile user. The solid black line represents the traces of user movements, the red marks are the extracted POIs, and the blue circles are examples of the cloaked areas.
Fig. 6: The dashed line represents the privacy bound ϵ. The blue dash-dot, black solid, and purple plus-sign lines respectively represent the adversary belief over the secret POIs under the optimal policy that provides the best utility without considering privacy, the policy that guarantees ϵ-privacy, and the policy that guarantees the asymptotic ϵ-privacy requirement.
Fig. 7: The blue dash-dot, black solid, and purple plus-sign lines respectively represent the quality loss over time under the optimal policy that provides the minimum quality loss without considering the privacy requirement, the policy that guarantees ϵ-privacy, and the policy that guarantees the asymptotic ϵ-privacy requirement.

7.3 Simulation with Real-world Dataset

The states and actions of the constructed MDP follow the extraction procedure above, and all the states respect the area cloaking privacy requirement [gruteser2004protecting]. We found that one of the user's extracted POIs is near a bank and another is near a bus station. We consider them the user's secret POIs and define the secret set Q accordingly. We assume the adversary knows the user's mobility model M and maintains a prior belief over the user's POIs. The adversary observes the reported cloaked areas and uses the inference model (1) to infer the user's presence at the secret POIs. Quantified by the ϵ-privacy metric (8), the LPPM protects user privacy by randomizing the selection of cloaked regions to suppress the adversary belief over Q.

Our objective here is to evaluate the privacy level and the average quality loss when the user incorporates the policy in his location releasing mechanism. Let ϵ denote the desired privacy level, and let β_t(Q) = Σ_{s∈Q} β_t(s) denote the adversary belief over the secret set. In order to demonstrate that the adversary can improve his knowledge about the user's secret locations, we choose a prior belief that assigns probability mass to the secret POIs below the privacy threshold, i.e., β_0(Q) ≤ ϵ.

We first design a policy that minimizes the user's average quality loss without considering the privacy requirement; in particular, we use our proposed LP (29) without the privacy constraint (8). As can be seen in Figure 6, although M is initially ϵ-private, the adversary belief over the secret set Q increases over time and eventually violates the desired privacy condition. Intuitively, the adversary becomes more confident about the user's presence in the area of the secret POIs, even though all the user's POIs are individually concealed by cloaking regions. The average quality loss associated with this policy is the lowest among the policies we consider. We then synthesize the privacy-preserving policy using the LP (29) with the desired privacy level ϵ. As demonstrated in Figure 6, the LPPM suppresses the adversary belief to meet the desired privacy level. This level of user privacy, however, comes with a price: the average quality loss increases when the user adopts the privacy-preserving policy. To further illustrate this trade-off, let us define the user's quality loss at time t as the loss incurred by the location report at that time. Figure 7 shows how this quality loss evolves over time for the different policies. As depicted in Figures 6 and 7, although the proposed LPPM suppresses the adversary belief over the secret locations, the user continuously receives a lower quality of service when the privacy-preserving policy is incorporated in the LPPM.

Now let us consider a scenario in which the adversary's prior belief violates the privacy condition, implying that the adversary's background knowledge about the user's presence at the secret POIs is unsafe. In this case, the LPPM can still deceive the adversary by manipulating her belief over the secret locations so that it eventually satisfies the defined privacy criteria. Let us assume the adversary's prior belief concentrates on the user's secret POIs, with the remaining mass spread over the other POIs, indicating a violation of the defined privacy level, i.e., β_0(Q) > ϵ. The LPPM objective here is to design a policy that forces the adversary belief to meet the asymptotic privacy requirement (31). To achieve this, we synthesize the policy using the proposed BMI problem (33). The trajectory of the adversary belief and the associated quality loss are shown in Figures 6 and 7, respectively. As demonstrated, although the adversary's prior belief violates the privacy condition, the LPPM deceives the adversary by gradually suppressing her belief over the user's secret POIs, and asymptotically satisfies the desired privacy requirement.

The proposed privacy-preserving policy synthesis was simulated in MATLAB on a PC with an Intel(R) Core(TM) i7-8650 CPU at 1.9 GHz and 16 GB RAM running Windows 10 Professional. Finding the optimal ϵ-privacy LPPM, expressed as the LP (29), was faster than synthesizing the optimal asymptotic ϵ-privacy LPPM, given as the BMI problem (33).

8 Conclusion

In this paper, we have designed and demonstrated a model-based privacy-preserving framework that guarantees a user-defined privacy requirement over an infinite time horizon while minimizing the quality loss of the service received by the user. To this end, an MDP is constructed to capture the user mobility pattern and the LBS utility model. Given the MDP, with states representing the user's locations, we adapt the probabilistic current-state opacity notion to introduce a new location privacy notion, ϵ-privacy, which characterizes the user's privacy against a Bayesian adversary with a localization attack model. Through this setup, we illustrated that even if each user location is individually concealed from the adversary, she can still utilize the user mobility model to further reduce her uncertainty over the user's secret locations. Given this privacy concern, we developed an LPPM that randomizes the obfuscation mechanisms to protect user privacy against an adversary with such an inference capability. The overall privacy-preserving framework is demonstrated and validated on an experimental dataset.

Acknowledgments

This work was supported in part by the National Science Foundation under Grants IIS-1724070 and CNS-1830335, and in part by the Army Research Laboratory under Grant W911NF-17-1-0072.

References