Mapping the Privacy-Utility Tradeoff in Mobile Phone Data for Development

08/01/2018
by Alejandro Noriega-Campero, et al. (MIT)

Today's age of data holds high potential to enhance the way we pursue and monitor progress in the fields of development and humanitarian action. We study the relation between data utility and privacy risk in large-scale behavioral data, focusing on mobile phone metadata as a paradigmatic domain. To measure utility, we survey experts about the value of mobile phone metadata at various spatial and temporal granularity levels. To measure privacy, we propose a formal and intuitive measure of reidentification risk, the information ratio, and compute it at each granularity level. Our results confirm the existence of a stark tradeoff between data utility and reidentifiability, where the most valuable datasets are also the most prone to reidentification. When data is specified at ZIP-code and hourly levels, outside knowledge of only 7% of an individual's data suffices for reidentification and retrieval of the remaining 93%; when data is specified at municipality and daily levels, reidentification requires on average outside knowledge of 51% of an individual's data to retrieve the remaining 49%. We discuss how data generalization directly erodes data's value, and highlight the need for using data coarsening not as a stand-alone mechanism, but in combination with data-sharing models that provide adjustable degrees of accountability and security.



1 Introduction

Large-scale datasets of human behavior are likely to revolutionize the way we develop cities, fight disease and crime, and respond to natural disasters. However, these datasets consist of sensitive information, such as citizens’ geo-location, purchasing behavior, and socialization patterns. Moreover, numerous studies have shown adversarial methods that can successfully associate sensitive information in anonymized datasets to individual identities—i.e., reidentification [1, 2, 3, 4, 5, 6, 7, 8, 9]. Hence, understanding and managing the privacy risk of these datasets remains a precondition for their broad use and potential impact.

In this work, we consider mobile phone metadata as a paradigmatic example of what is colloquially referred to as ‘big data’. Due to its high granularity, high dimensionality, passive data-generation process, and high potential value, mobile phone metadata exhibits the most characteristic features of the novel data types at the core of ‘big data’. Other such types include GPS tracks, web browsing, financial behavior, genetic data, and satellite imagery, which share a high potential societal value and raise concerns for individuals’ privacy.

1.1 Mobile Phone Data for Development

Metadata is data about data. In the context of mobile phone usage, this means a record that a call was made—including a time stamp and geographic location, with precision determined by the location of cell towers—but no information on the content of the call itself. Mobile phone metadata is most commonly referred to as CDRs (Call Detail Records). Table 1 shows dummy examples of call detail records for a few phone calls.

caller ID                 receiver ID               tower ID   time
299C20B41B32B5GH76C343    AEA595D43E2C9EE20EC12R    768        16-12-03 16:50
299C20B41B32B5GH76C343    C721FD9F5A8902BD1EE9C4    981        16-12-24 19:56
B8673E7C673FC9EZ958FB6    3ACC4FDDD29B45ZX1A2012    255        16-12-24 20:34
Table 1: Example CDR records.

Relevant characteristics of CDRs are: 1) the caller and receiver identities are pseudonymized, i.e., their phone numbers are replaced by anonymous pseudonyms (e.g., through hashing); and 2) the geographic location of the tower used for each communication provides an approximation of the user’s location.
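
To make the pseudonymization in characteristic 1 concrete, the following is a minimal sketch of how a carrier might map phone numbers to stable pseudonyms with a keyed hash (HMAC-SHA256). The key handling and pseudonym format are illustrative assumptions, not the scheme used for the dataset studied here.

```python
import hashlib
import hmac

# Secret key held by the data controller. With an unkeyed hash, an attacker
# could enumerate all phone numbers and invert the pseudonyms by brute force.
SECRET_KEY = b"replace-with-a-randomly-generated-key"

def pseudonymize(phone_number: str) -> str:
    """Map a phone number to a stable, hard-to-reverse pseudonym."""
    digest = hmac.new(SECRET_KEY, phone_number.encode(), hashlib.sha256)
    return digest.hexdigest()[:22].upper()  # truncated for readability

# A CDR row in the spirit of Table 1 (numbers are fabricated).
record = {
    "caller_id": pseudonymize("+221770000001"),
    "receiver_id": pseudonymize("+221770000002"),
    "tower_id": 768,
    "time": "16-12-03 16:50",
}
```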

CDRs are a particularly pervasive and relevant data source for development and humanitarian response purposes. They are generated by standard telecommunication infrastructure, and collected by mobile phone companies on an ongoing basis. Moreover, handsets and airtime are becoming cheaper, leading to increased penetration and representativeness; mobile phone penetration by 2013 approached 89% in developing countries and 96% globally [10].

There are several ways in which the location information in mobile phone metadata is analyzed and used. For example, it is possible to build dynamic maps of population density and population mobility in real time, over areas as large as countries, and at high geographic and individual detail [11]. This information in turn has valuable applications in a wide range of development and humanitarian action domains, such as: disaster response upon earthquakes and floods [12, 13], epidemic analysis of malaria and influenza outbreaks [14, 15], socio-economic and poverty mapping in both the developed and developing worlds [16, 17, 18], transportation systems development [19], and improving national statistics [20].

1.2 Privacy Risk in Mobile Phone Metadata

In the recent past, the privacy provided by pseudonymization, coupled with institutional non-disclosure agreements (NDAs), has served as the basis for sharing large CDR datasets. However, research has recently shown adversarial methods that successfully associate sensitive information in the datasets to individuals’ identities, even under pseudonymization of all personal identifiers [1, 2, 3, 4, 5, 6, 7, 8, 9].

A seminal study on reidentification of CDRs analyzed mobility data from 1.5 million mobile phone subscribers in a small western country, where the location of an individual was specified hourly and with a spatial resolution given by the geographic distribution of the carrier’s antennae [1]. It demonstrated that outside knowledge of just four random spatio-temporal points was enough to uniquely identify 95% of individuals in the database. Furthermore, the study showed that data can be coarsened in order to reduce the likelihood of reidentification. This coarsening, more properly named spatial and temporal generalization, is a key technique applied to data to preserve privacy, allowing companies, NGOs, and public organizations to balance privacy risks against data’s potential for positive societal impact.

2 Methods

2.1 Assessing Privacy

Concepts and Vocabulary

Datasets contain attributes such as name, telephone, address, income, health status, location, items purchased, and websites visited. These attributes can be classified as direct identifiers, quasi-identifiers, or sensitive attributes. For example, in an anonymous health database, where names and social security numbers (direct identifiers) are pseudonymized, a prying third party with access to the database could attempt to learn the medical condition (a sensitive attribute) of Jane by using auxiliary information about her ZIP code and age (quasi-identifiers) to single her out. We denote the set of auxiliary information about the quasi-identifiers of person $p$ by $I_p$, and refer to the subset of individuals whose records match $I_p$ as the equivalence class of $p$ given $I_p$, denoted $E(I_p)$. Jane is reidentified if $|E(I_p)| = 1$.

Traditional measures to protect privacy have focused on guaranteeing that an attacker, even with full knowledge of an individual’s quasi-identifiers, is unable to reidentify her uniquely [21], or to extract information about her [22, 23]. For example, a privacy approach in widespread use over the last decade is k-anonymity [21], where the granularity of quasi-identifiers is gradually reduced, thus increasing the size of equivalence classes, until the requirement $|E(I_p)| \geq k$ is met for every individual. These approaches, however, cannot cope with most behavioral datasets due to their high dimensionality [24], raising the need for appropriate alternatives.
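
A minimal sketch of the k-anonymity idea just described, assuming a toy health table with hypothetical column names: we check whether every equivalence class over the quasi-identifiers has at least k records and, if not, coarsen the quasi-identifiers one step.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every equivalence class (i.e., every combination of
    quasi-identifier values) contains at least k records."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(size >= k for size in classes.values())

def generalize(record):
    """One coarsening step: keep a 3-digit ZIP prefix, bucket age by decade.
    Real schemes generalize attribute by attribute along a hierarchy."""
    return {**record, "zip": record["zip"][:3] + "**",
            "age": record["age"] // 10 * 10}

records = [
    {"zip": "02139", "age": 34, "condition": "flu"},
    {"zip": "02138", "age": 37, "condition": "asthma"},
    {"zip": "02141", "age": 31, "condition": "diabetes"},
]
if not is_k_anonymous(records, ["zip", "age"], k=2):
    records = [generalize(r) for r in records]
assert is_k_anonymous(records, ["zip", "age"], k=2)
```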

People’s online activity leaves a comprehensive data trace behind, which, coupled with the advent of pervasive sensing technologies, amounts to an unprecedented instrumentation of our societal systems. Notably, data at the core of today’s “big data” is high-dimensional. Examples are human mobility data, banking and credit card data, consumer behavior, web browsing, online social networks, and genetic data.

High-dimensional datasets contain only a sparse sample of the space of possible records, which, similar to fingerprints, often entails that individual records are unique. To illustrate how sparsity can be exploited for reidentification, consider a very large database of song lyrics. The space of all possible song lyrics—permutations of a bounded number of words—is extremely large. Thus, given a sequence of only 3 or 4 words, we are likely to identify a song uniquely among thousands. In practice, research has shown high reidentifiability in varied high-dimensional datasets, from mobile phone records and credit card transactions to online movie reviews [1, 4, 7].

Measures of Reidentifiability

Measures for assessing reidentification risk in high-dimensional data must recognize sensitive attributes themselves as quasi-identifiers, and vice versa, moving to a paradigm of partial adversary knowledge as the basis of reidentification. One such measure is unicity [1]. The unicity $U_p$ of a database $D$ is calculated as the percentage of users in $D$ who are reidentified by using $p$ randomly selected data points from each user’s records; i.e., the percentage of users whose equivalence class satisfies $|E(I_p)| = 1$. For instance, it was shown that outside knowledge of four calls was enough to reidentify 95% of 1.5 million individuals in a CDR dataset ($U_4 = 95\%$) [1].
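
As a concrete reading of the definition, here is a minimal sketch that estimates unicity by sampling, assuming each user’s trace is stored as a set of (location, time) tuples; the representation and variable names are ours, not the paper’s.

```python
import random

def unicity(user_points: dict, p: int) -> float:
    """Estimate U_p: the fraction of users uniquely identified by p
    spatio-temporal points drawn at random from their own records."""
    users = list(user_points)
    reidentified = 0
    for user in users:
        trace = user_points[user]
        if len(trace) < p:
            continue  # too few points to draw a sample of size p
        sample = set(random.sample(sorted(trace), p))
        # Equivalence class: all users whose full trace contains the sample.
        matches = [u for u in users if sample <= user_points[u]]
        reidentified += (len(matches) == 1)
    return reidentified / len(users)
```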

Here we build on previous work and propose the following two metrics of privacy in high-dimensional datasets. We aim for metrics that are meaningful and intuitive, as well as rooted in the formal framework of information theory.

Information cost. Similar in spirit to unicity, we define the information cost of reidentification in $D$ as the average quantity of outside information that suffices to reidentify users in $D$. Let $n_u$ denote the number of data points drawn from user $u$’s records needed to reidentify her; then the information cost of $D$ is defined as $IC(D) = \frac{1}{N} \sum_{u} n_u$, where $N$ is the number of users.

Information ratio. In addition, we define the information ratio of $D$ as the average fraction of a user’s data that is required to reidentify her. Let $m_u$, with $m_u \geq n_u$, denote the amount of $u$’s data in $D$; then the information ratio of $D$ is given by $IR(D) = \frac{1}{N} \sum_{u} \frac{n_u}{m_u}$. Relevantly, the information ratio summarizes not only the amount of information needed for reidentification, but also the amount of information gained by an adversary once a user is reidentified, where $1 - IR(D)$ is the average information gain. This feature of the information ratio is highly relevant, as it enables stakeholders to reflect on preferences accounting for both key elements of privacy risk: information requirement and information gain.
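
Both metrics can be computed by revealing a user’s points one at a time until her equivalence class shrinks to a singleton. The sketch below follows the definitions of $n_u$ and $m_u$ above, under the same assumed trace representation as the unicity sketch.

```python
import random

def points_to_reidentify(user, user_points):
    """n_u: how many of the user's own points an adversary must learn
    before the user is the only one matching all of them."""
    pool = sorted(user_points[user])
    random.shuffle(pool)
    known = set()
    for n, point in enumerate(pool, start=1):
        known.add(point)
        if [u for u, pts in user_points.items() if known <= pts] == [user]:
            return n
    return len(pool)  # never unique, even knowing the full trace

def information_cost_and_ratio(user_points):
    """Return (IC, IR): the averages of n_u and of n_u / m_u over users."""
    costs, ratios = [], []
    for user, trace in user_points.items():
        n_u = points_to_reidentify(user, user_points)
        costs.append(n_u)
        ratios.append(n_u / len(trace))  # m_u = len(trace)
    return sum(costs) / len(costs), sum(ratios) / len(ratios)
```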

These measures connect with information theory through the core concept of average information content—i.e., the entropy of multivariate distributions [25]. In particular, the higher the entropy of a dataset, the higher the information content of any bit of adversary knowledge, and hence the fewer bits of information required for reidentification (lower information cost and ratio). Moreover, the measures convey a meaningful and intuitive interpretation, which may help a broader audience reflect upon and assess both the likelihood and potential harm that reidentification entails. Below we apply these measures to the case of CDRs at several spatio-temporal granularity levels.
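
To illustrate the connection, a short sketch computing the Shannon entropy of the empirical distribution of (zone, hour) cells: coarsening merges cells, which concentrates probability mass and can only lower (or preserve) entropy. The example calls are fabricated.

```python
import math
from collections import Counter

def entropy(observations) -> float:
    """Shannon entropy, in bits, of the empirical distribution."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

fine = [("02139", 16), ("02139", 19), ("02141", 20), ("02138", 9)]
coarse = [(zone[:3], hour // 6) for zone, hour in fine]  # merged cells
assert entropy(coarse) <= entropy(fine)  # generalization lowers entropy
```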

Figure 1: Reidentification results for datasets at varying levels of spatial and temporal granularity: (a) information cost and (b) information ratio. All 95% confidence intervals are non-overlapping, except for two pairs of granularity levels, as shown in Additional File 1.

2.2 Assessing Utility

In order to assess the usefulness of mobile phone data at the various spatial and temporal generalization levels, we collected data from a quantitative survey targeted at experts with experience in research and analysis of mobile phone data for development and humanitarian action. In particular, the survey’s population was the set of experts who took part in D4D-Senegal 2014, the open innovation data challenge based on anonymous records of Orange’s mobile phone users in Senegal [26]. Thirty-two D4D experts—members of academia and research institutions around the globe—opted in to respond to the survey. Notably, the pool represented twenty-five research institutions across fourteen countries and five continents, with a spread of domain foci in health, transportation and urban planning, national statistics, and others (see Table 2).

Number of experts              32
Number of institutions         25
Continents                     North America, South America, Asia, Africa, and Europe
Countries                      Belgium, Cameroon, Canada, Chile, China, France, Germany, India,
                               Italy, Japan, Spain, Sweden, United Kingdom, USA
Domain focus of respondents    Health 20.5%, Transportation and Urban Planning 34%,
                               National Statistics 20.5%, Others 25%
Table 2: Summary of the expert pool.

The survey asked experts to consider a scenario in which they were provided with CDRs from a large metropolitan region in the developing world, including all call communications of a large representative sample of the population in the region. Experts rated, on a scale from 1 to 10, the usefulness of such data in their research domains if provided at each of the spatio-temporal granularity levels shown in Figure 1 (survey screenshots in Additional File 2).

3 Results

3.1 Reidentification Results

We analyzed a mobile phone dataset comprising the phone calls of 1.4M people across a large metropolitan region in the developing world over a month in 2013. From it we derived generalized datasets $D_{s,t}$ for each combination of spatial granularity level $s \in \{Z, D, M\}$ and temporal granularity level $t \in \{1, 6, 12, 24\}$ hours, where Z = ZIP code, D = district, and M = municipality.

The spatial granularity levels used were ZIP-code, district, and municipality polygons, which partition the space into 2130, 156, and 56 polygons with average areas of 3, 36, and 101 km², respectively. The temporal granularity levels used were time slices with durations of 1 hour, 6 hours, 12 hours, and 24 hours. For example, under dataset $D_{Z,6}$—generalized at ZIP-code and 6-hour granularity—a call issued at 4 pm from ZIP code 02139 by one user is indistinguishable from a call issued at 7 pm by another user in the same ZIP code. We computed the information cost and information ratio of reidentification associated with each generalized dataset $D_{s,t}$. Figure 1 shows the results.
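
The generalization step itself is simple. A minimal sketch, assuming a lookup table from tower ID to the polygon containing it at each spatial level, and time slices anchored at midnight (the paper does not specify the anchoring; the values below are fabricated):

```python
from datetime import datetime

# Assumed tower-to-polygon lookup per spatial level
# (Z = ZIP code, D = district, M = municipality).
TOWER_TO_POLYGON = {
    "Z": {768: "02139", 981: "02141", 255: "02139"},
    "D": {768: "D-12", 981: "D-12", 255: "D-07"},
    "M": {768: "M-03", 981: "M-03", 255: "M-03"},
}

def generalize_call(tower_id, timestamp, spatial_level="Z", hours=6):
    """Return the (polygon, time-slice start) cell of one call in D_{s,t}."""
    ts = datetime.strptime(timestamp, "%y-%m-%d %H:%M")
    slice_start = ts.replace(hour=ts.hour - ts.hour % hours, minute=0)
    return TOWER_TO_POLYGON[spatial_level][tower_id], slice_start

# Under ZIP-code / 6-hour generalization with midnight-anchored slices,
# calls at 1 pm and 4 pm from towers in the same ZIP code fall in one cell.
assert generalize_call(768, "16-12-03 13:05") == generalize_call(255, "16-12-03 16:50")
```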

We observe that for the most granular dataset, $D_{Z,1}$, it takes on average 2.6 data points of outside knowledge to reidentify an individual, which represents 7% of that individual’s data ($IC = 2.6$ and $IR = 0.07$). This means that a prying third party with outside knowledge of 7% of an individual’s data could reidentify her and obtain the remaining 93%. In contrast, we observe that for the least granular dataset, $D_{M,24}$, reidentification requires an average of 32 data points, or 51% of an individual’s data ($IC = 32$ and $IR = 0.51$). Hence, if $D_{M,24}$ is published, a prying third party requires on average outside knowledge of about 51% of an individual’s data to reidentify her and obtain the remaining 49%.

3.2 Utility Results

Figure 2 shows the experts’ assessment of data utility for each granularity level. We observe that data usefulness decays as the data is generalized spatially and temporally, with values ranging from 9.3 to 4.0 for the most and least granular datasets ($D_{Z,1}$ and $D_{M,24}$, respectively).

Figure 2: Utility results. Usefulness of mobile phone metadata per generalization profile. Grey bars denote bootstrapped 95% confidence intervals. Spatial granularity levels are Z = ZIP code, D = district, and M = municipality.

3.3 Privacy-Utility Tradeoff

Figure 3 shows results on the privacy-utility tradeoff. Each point represents a generalized dataset assessed on usefulness and reidentification risk, where the optimal position corresponds to the top right corner—high usefulness and hard reidentification.

We observe a sharp tradeoff between usefulness and privacy. The most granular dataset, $D_{Z,1}$, is the most valuable, with a usefulness score of 9.3; however, it is also the dataset most prone to reidentification, where on average a third party with outside knowledge of only 7% of an individual’s data can reidentify the individual and gain the remaining 93% of personal information. Conversely, the least granular dataset, $D_{M,24}$, is the least valuable, with a usefulness score of 4.0; yet it is also the dataset least prone to reidentification, where on average a third party requires outside knowledge of 51% of an individual’s data to gain the remaining 49% of personal information.

Figure 3: Privacy-utility trade-off in mobile phone data. Utility vs. reidentification risk in mobile phone data for development, across spatial and temporal granularities $(s, t)$. The more useful the dataset, the less auxiliary information is needed to reidentify its individuals. Conversely, while data generalization increasingly hinders reidentification, it strongly diminishes datasets’ value.

Figure 3 also shows that the tradeoff is not strict: some generalization levels are Pareto-suboptimal, or dominated. For example, two of the granularity levels yield similar usefulness, yet an adversary requires about 65% more outside information to reidentify an individual in one than in the other.

The tradeoff in Figure 3 implies that, while generalization increasingly hinders reidentification, it strongly undermines data utility. This highlights the complementary roles of coarsening and data-sharing models in enabling use while controlling risks. For example, the datasets most prone to reidentification—those with the lowest reidentification information ratios—could be shared only under strict models, such as precomputed indicators or open algorithm platforms [27, 28]. The results also imply that even highly coarse datasets can be vulnerable to reidentification, and hence should not be made fully public. However, datasets posing more moderate reidentification risks could be shared more broadly, through models similar to those used in the D4D challenges [26], where data is accessed by a limited number of semi-trusted parties under non-disclosure agreements (NDAs). Similarly, datasets posing moderate-to-high reidentification risks could be shared under additional control mechanisms, such as remote access with adjustable disclosure controls via Q&A architectures, and/or accountability and deterring incentive schemes [29]. See [30, 31] for details and discussion of modern data-sharing models and protocols.

4 Conclusions

The present work shows, for the first time, the stark trade-off between the societal value of mobile phone data for development and humanitarian action and the reidentification risk to which the individuals in it are exposed. Because data generalization directly erodes data’s value, it cannot be regarded as a silver-bullet solution for preserving privacy in high-dimensional datasets [32]. Yet, coupled with data-sharing models that provide adjustable degrees of accountability and security, it may help find the right balance between privacy and utility.

This work assessed data utility as the value provided to experts in the analysis of mobile phone data for development and humanitarian action. This approach is particularly germane when considering purpose-specific data sharing, such as in the case of poverty mapping, transportation planning, or assisting response efforts upon natural disasters. We anticipate future work focusing on the trade-offs of mobile phone data usage in alternative domains, such as marketing and credit scoring.

The formal measures of reidentification risk proposed here provide meaningful and intuitive summaries of the information requirements, and information gains, associated with reidentification. Ultimately, we hope this work helps promote the participation of broader audiences in reflecting on data privacy tensions, as societal preferences are indispensable inputs for deciding where systems should sit along the privacy-utility spectrum.

References

  • [1] Y.-A. de Montjoye, C. A. Hidalgo, M. Verleysen, and V. D. Blondel, “Unique in the crowd: The privacy bounds of human mobility,” Scientific reports, vol. 3, 2013.
  • [2] M. Gramaglia and M. Fiore, “On the anonymizability of mobile traffic datasets,” arXiv preprint arXiv:1501.00100, 2014.
  • [3] Y. Song, D. Dahlmeier, and S. Bressan, “Not so unique in the crowd: a simple and effective algorithm for anonymizing location data.,” in PIR@ SIGIR, pp. 19–24, Citeseer, 2014.
  • [4] Y.-A. de Montjoye, L. Radaelli, V. K. Singh, et al., “Unique in the shopping mall: On the reidentifiability of credit card metadata,” Science, vol. 347, no. 6221, pp. 536–539, 2015.
  • [5] A. Cecaj, M. Mamei, and F. Zambonelli, “Re-identification and information fusion between anonymized cdr and social network data,” Journal of Ambient Intelligence and Humanized Computing, vol. 7, no. 1, pp. 83–96, 2016.
  • [6] A. Boutet, S. B. Mokhtar, and V. Primault, Uniqueness Assessment of Human Mobility on Multi-Sensor Datasets. PhD thesis, LIRIS UMR CNRS 5205, 2016.
  • [7] A. Narayanan and V. Shmatikov, “Robust de-anonymization of large sparse datasets,” in Security and Privacy, 2008. SP 2008. IEEE Symposium on, pp. 111–125, IEEE, 2008.
  • [8] K. El Emam, E. Jonker, L. Arbuckle, and B. Malin, “A systematic review of re-identification attacks on health data,” PloS one, vol. 6, no. 12, p. e28071, 2011.
  • [9] G. Wondracek, T. Holz, E. Kirda, and C. Kruegel, “A practical attack to de-anonymize social network users,” in Security and Privacy (SP), 2010 IEEE Symposium on, pp. 223–238, IEEE, 2010.
  • [10] ITU, “ICT Facts and Figures,” tech. rep., International Telecommunication Union, 2013.
  • [11] P. Deville, C. Linard, S. Martin, M. Gilbert, F. R. Stevens, A. E. Gaughan, V. D. Blondel, and A. J. Tatem, “Dynamic population mapping using mobile phone data,” Proceedings of the National Academy of Sciences, vol. 111, no. 45, pp. 15888–15893, 2014.
  • [12] L. Bengtsson, X. Lu, A. Thorson, R. Garfield, and J. Von Schreeb, “Improved response to disasters and outbreaks by tracking population movements with mobile phone network data: a post-earthquake geospatial study in haiti,” PLoS Med, vol. 8, no. 8, p. e1001083, 2011.
  • [13] UNGP, “Using mobile phone activity for disaster management during floods,” tech. rep., UN Global Pulse, 2013.
  • [14] A. Wesolowski, N. Eagle, A. J. Tatem, D. L. Smith, A. M. Noor, R. W. Snow, and C. O. Buckee, “Quantifying the impact of human mobility on malaria,” Science, vol. 338, no. 6104, pp. 267–270, 2012.
  • [15] E. Frias-Martinez, G. Williamson, and V. Frias-Martinez, “An agent-based model of epidemic spread using human mobility and social network information,” in Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third Inernational Conference on Social Computing (SocialCom), 2011 IEEE Third International Conference on, pp. 57–64, IEEE, 2011.
  • [16] N. Eagle, M. Macy, and R. Claxton, “Network diversity and economic development,” Science, vol. 328, no. 5981, pp. 1029–1031, 2010.
  • [17] J. Blumenstock, G. Cadamuro, and R. On, “Predicting poverty and wealth from mobile phone metadata,” Science, vol. 350, no. 6264, pp. 1073–1076, 2015.
  • [18] J. E. Steele, P. R. Sundsøy, C. Pezzulo, V. A. Alegana, T. J. Bird, J. Blumenstock, J. Bjelland, K. Engø-Monsen, Y.-A. de Montjoye, A. M. Iqbal, et al., “Mapping poverty using mobile phone and satellite data,” Journal of The Royal Society Interface, vol. 14, no. 127, p. 20160690, 2017.
  • [19] M. Berlingerio, F. Calabrese, G. Di Lorenzo, R. Nair, F. Pinelli, and M. L. Sbodio, “Allaboard: a system for exploring urban mobility and optimizing public transport using cellphone data,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 663–666, Springer, 2013.
  • [20] E. Jahani, P. Sundsøy, J. Bjelland, L. Bengtsson, Y.-A. de Montjoye, et al., “Improving official statistics in emerging markets using machine learning and mobile phone data,” EPJ Data Science, vol. 6, no. 1, p. 3, 2017.
  • [21] L. Sweeney, “k-anonymity: a model for protecting privacy,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 5, pp. 557–570, 2002.
  • [22] A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam, “l-diversity: Privacy beyond k-anonymity,” in 22nd International Conference on Data Engineering (ICDE’06), p. 24, IEEE, 2006.
  • [23] N. Li, T. Li, and S. Venkatasubramanian, “t-closeness: Privacy beyond k-anonymity and l-diversity,” in Data Engineering, 2007. ICDE 2007. IEEE 23rd International Conference on, pp. 106–115, IEEE, 2007.
  • [24] A. Noriega-Campero et al., Balancing utility and privacy of high-dimensional datasets: mobile phone metadata. PhD thesis, Massachusetts Institute of Technology, 2015.
  • [25] D. J. MacKay, Information theory, inference and learning algorithms. Cambridge, United Kingdom: Cambridge university press, 2003.
  • [26] Y.-A. de Montjoye, Z. Smoreda, R. Trinquart, C. Ziemlicki, and V. D. Blondel, “D4d-senegal: the second mobile phone data for development challenge,” arXiv preprint arXiv:1407.4885, 2014.
  • [27] T. Hardjono, D. Shrier, and A. Pentland, “OPAL/Enigma,” in Trust::Data: A New Framework for Identity and Data Sharing, ch. 3, pp. 79–99, Visionary Future LLC, 2016.
  • [28] “OPAL: open algorithms for better decisions,” 2018.
  • [29] Z. Wan, Y. Vorobeychik, W. Xia, E. W. Clayton, M. Kantarcioglu, R. Ganta, R. Heatherly, and B. A. Malin, “A game theoretic framework for analyzing re-identification risk,” PloS one, vol. 10, no. 3, p. e0120592, 2015.
  • [30] G. D’Acquisto, J. Domingo-Ferrer, P. Kikiras, V. Torra, Y.-A. de Montjoye, and A. Bourka, “Privacy by design in big data: an overview of privacy enhancing technologies in the era of big data analytics,” arXiv preprint arXiv:1512.06000, 2015.
  • [31] Y.-A. de Montjoye et al., “Privacy-conscientious use of mobile phone data,” tech. rep., 2017.
  • [32] A. Narayanan and E. W. Felten, “No silver bullet: De-identification still doesn’t work,” White Paper, pp. 1–8, 2014.