Google COVID-19 Community Mobility Reports: Anonymization Process Description (version 1.0)

04/08/2020 ∙ Ahmet Aktay, et al. ∙ Google

This document describes the aggregation and anonymization process applied to the initial version of Google COVID-19 Community Mobility Reports (published at http://google.com/covid19/mobility on April 2, 2020), a publicly available resource intended to help public health authorities understand what has changed in response to work-from-home, shelter-in-place, and other recommended policies aimed at flattening the curve of the COVID-19 pandemic. Our anonymization process is designed to ensure that no personal data, including an individual's location, movement, or contacts, can be derived from the resulting metrics. The high-level description of the procedure is as follows: we first generate a set of anonymized metrics from the data of Google users who opted in to Location History. Then, we compute percentage changes of these metrics from a baseline based on the historical part of the anonymized metrics. We then discard a subset which does not meet our bar for statistical reliability, and release the rest publicly in a format that compares the result to the private baseline.

1 Definitions

Location History users

The metrics in these reports are based on the data of Google users who have opted in to Location History [2] (“LH users”), a feature that is off by default.

Differential Privacy [3]

Let ε be a positive real number and A be a randomized algorithm that computes a metric. In the context of this report, A is considered ε-differentially private if, for all input datasets $D_1$ and $D_2$ that differ in one user's contributions, and for all subsets $S$ of the image of $A$:

$$\Pr[A(D_1) \in S] \le e^{\varepsilon} \cdot \Pr[A(D_2) \in S]$$
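For intuition, the Laplace mechanism used throughout this report satisfies this definition. For a function f with sensitivity Δ (the maximum change in f caused by one user's contributions) and noise of scale b = Δ/ε, the output density $p_D$ of $A(D) = f(D) + \mathrm{Lap}(b)$ satisfies, for any output x (a standard derivation, stated here as a worked step, not quoted from the report):

```latex
\frac{p_{D_1}(x)}{p_{D_2}(x)}
  = \exp\!\left(\frac{|x - f(D_2)| - |x - f(D_1)|}{b}\right)
  \le \exp\!\left(\frac{|f(D_1) - f(D_2)|}{b}\right)
  \le \exp\!\left(\frac{\Delta}{\Delta/\varepsilon}\right)
  = e^{\varepsilon}
```

Integrating this pointwise bound over any subset S yields the inequality above.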

Granularity levels

The metrics are aggregated per day and per geographic area. There are three levels of geographic areas; in this paper, we call these granularity levels.

  • Granularity level 0 corresponds to metrics aggregated by country / region.

  • Granularity level 1 corresponds to metrics aggregated by top-level geopolitical subdivisions (e.g. US states).

  • Granularity level 2 corresponds to metrics aggregated at a higher resolution (e.g. US counties).

Granularity levels 1 and 2 are defined differently in different countries, to account for knowledge of local public-health needs. Note that, in general, the geographic area represented gets smaller as the granularity number increases. No metrics are published for geographic regions smaller than 3 km².

2 Generating anonymized metrics

We are releasing aggregated, anonymized data that is designed to ensure that no personal data, including an individual’s location, movement, or contacts, can be derived from the resulting metrics. To that end, we anonymize the statistics with differential privacy. We query the underlying data using our open-source differential privacy library [4], which adds Laplace noise [5] to protect each metric with differential privacy.
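As an illustration, here is a minimal sketch of the Laplace mechanism applied to a single count; it uses NumPy rather than the open-source library cited above, and the function name is ours:

```python
import numpy as np

def laplace_count(true_count: int, sensitivity: float, epsilon: float,
                  rng: np.random.Generator) -> float:
    """Return an epsilon-differentially private version of a count.

    Adds Laplace noise of scale sensitivity / epsilon, where `sensitivity`
    is the maximum change one user can cause in the true count.
    """
    scale = sensitivity / epsilon
    return true_count + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng()
# A user contributes at most 1 to each per-category count, so sensitivity = 1;
# epsilon = 0.11 matches the country-level (granularity 0) row of Table 1.
noisy_visits = laplace_count(12345, sensitivity=1.0, epsilon=0.11, rng=rng)
```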

2.1 Daily visits in public places

We count the number of unique LH users who visited a public place of a given category on a given day, at each granularity level. There are seven different categories derived from the data: retail, recreation, and eateries (reported as part of “Retail & recreation”); groceries and pharmacies (reported as part of “Grocery & pharmacy”); transit stations (reported as “Transit stations”); and parks (reported as “Parks”). We add Laplace noise to each count according to the following table.

Granularity level   Scale of Laplace noise   Corresponding ε parameter
0                   9.09                     0.11
1                   9.09                     0.11
2                   4.55                     0.22
Table 1: Noise parameters used for the daily visits in public places metrics. (Each user contributes at most 1 to each count, so the noise scale is 1/ε.)

For each location (at all geographic levels), each LH user can contribute at most once to each category. We also bound the contribution of each LH user to 4 ⟨category, location⟩ pairs per day and per geographic level, using a process similar to the one described in [6]: if an LH user contributes to more than 4 pairs in a given day at a given geographic level, we randomly select 4 of them and discard the others.

For example, suppose that on the same day, an LH user goes to public places in all 7 categories in two distinct neighboring countries. This makes a total of 14 ⟨category, location⟩ pairs at country level. We would randomly discard 10 of these pairs when computing country-level statistics.
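A minimal sketch of this bounding step, assuming each user's daily contributions are available as a list of ⟨category, location⟩ pairs (the data layout and function names are ours):

```python
import random

MAX_PAIRS = 4  # per user, per day, per granularity level

def bound_contributions(pairs: list[tuple[str, str]],
                        rng: random.Random) -> list[tuple[str, str]]:
    """Keep at most MAX_PAIRS distinct (category, location) pairs per user.

    If a user contributed more, keep a uniformly random subset of 4 and
    discard the rest, bounding the user's influence on the daily counts.
    """
    distinct = list(set(pairs))
    if len(distinct) <= MAX_PAIRS:
        return distinct
    return rng.sample(distinct, MAX_PAIRS)

rng = random.Random(0)
# The example above: 7 categories visited in each of 2 countries gives
# 14 country-level pairs, of which 10 are randomly discarded.
pairs = [(cat, country)
         for cat in ("retail", "recreation", "eateries", "groceries",
                     "pharmacies", "transit", "parks")
         for country in ("country_A", "country_B")]
assert len(bound_contributions(pairs, rng)) == MAX_PAIRS
```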

This process does not significantly affect data accuracy: in the US, at county level, the large majority of LH users contribute 3 or fewer ⟨category, location⟩ pairs per day on average. Each daily place visit is thus protected by differential privacy with ε = 0.44 (0.11 + 0.11 + 0.22, summed across the three granularity levels), and the total daily contribution of each user with ε at most 1.76 (4 pairs × 0.44).

2.2 Residential

For the purposes of this analysis, we use signals like relative frequency, time, and duration of visits to calculate metrics related to places of residence. We calculate the average amount of time LH users spend at their places of residence, in hours. This computation is performed for each day and geographic area, using the differentially private mean mechanism from our open-source library [7]. This mechanism works as follows (a simplified sketch in code follows Table 2):

  • We compute the total amount of time spent at places of residence in a given day and geographic area, in hours, by summing the individual per-user values offset by -12, so that all individual values fall into the range [-12, 12]. We then add Laplace noise to this sum; the scale of the noise is indicated in the table below. We denote the real sum S, and the noisy sum NS.

  • We compute the count of unique users who spent any time at their places of residence in a given day and geographic area. We then add Laplace noise to this count; the scale of the noise is indicated in the table below. We denote the real count C, and the noisy count NC.

  • Finally, we compute the ratio NS/NC for each day and each geographic area, add 12 as an offset, and clamp the result to the range [0, 24] hours/day.

For example, at county level, NS is obtained by first sampling a random number from a Laplace distribution of scale 109.1, and then adding that number to S. In the table below, we also indicate the standard deviation of the noise added to each value (for Laplace noise of scale b, the standard deviation is b√2).

Granularity level   Scale of Laplace noise:   Scale of Laplace noise:   Corresponding
                    sum (total hours/day)     count (number of users)   ε parameter
0                   218.2 (std ≈ 308.6)       18.18 (std ≈ 25.7)        0.11
1                   218.2 (std ≈ 308.6)       18.18 (std ≈ 25.7)        0.11
2                   109.1 (std ≈ 154.3)       9.09 (std ≈ 12.9)         0.22
Table 2: Noise parameters used for the residential metrics. (The ε budget is split evenly between the sum and the count; the sum has sensitivity 12 hours and the count sensitivity 1, so the scales are 12/(ε/2) and 1/(ε/2) respectively.)

Each user can contribute to at most one region per granularity level, which protects these metrics by differential privacy with a total budget of ε = 0.44 (0.11 + 0.11 + 0.22) across all granularities. A description of the differentially private mean mechanism and a proof of its privacy guarantees are given in [8] (Algorithm 2.4).
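A minimal sketch of this noisy-sum/noisy-count mean, using NumPy and the county-level (granularity 2) parameters from Table 2; this is our simplification for illustration, not the library's implementation (see [8] for the actual algorithm and proof):

```python
import numpy as np

def dp_residential_mean(hours_per_user: np.ndarray, epsilon: float,
                        rng: np.random.Generator) -> float:
    """Differentially private mean time spent at residence (hours/day).

    The budget is split evenly between a noisy sum and a noisy count.
    Individual values in [0, 24] are offset by -12 so they fall in
    [-12, 12]; the sum then has sensitivity 12 and the count sensitivity 1.
    """
    eps_half = epsilon / 2.0
    offset_values = np.clip(hours_per_user, 0.0, 24.0) - 12.0
    noisy_sum = offset_values.sum() + rng.laplace(scale=12.0 / eps_half)   # NS
    noisy_count = len(offset_values) + rng.laplace(scale=1.0 / eps_half)  # NC
    mean = noisy_sum / noisy_count + 12.0    # undo the offset
    return float(np.clip(mean, 0.0, 24.0))   # clamp to [0, 24] hours/day

rng = np.random.default_rng()
hours = rng.uniform(8.0, 24.0, size=5000)  # synthetic per-user values
# epsilon = 0.22 (granularity level 2) -> sum scale 109.1, count scale 9.09.
print(dp_residential_mean(hours, epsilon=0.22, rng=rng))
```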

2.3 Workplaces

For the purposes of this analysis, we use signals like relative frequency, time and duration of visits to calculate metrics related to places of residence and places of work of LH users. We calculate how many LH users spent more than 1 hour at their places of work. This computation is performed for each day and geographic area. Then, we add Laplace noise to each count according to the following table.

Granularity level   Scale of Laplace noise   Corresponding ε parameter
0                   9.09                     0.11
1                   9.09                     0.11
2                   4.55                     0.22
Table 3: Noise parameters used for the workplaces metrics. (As in Table 1, each user contributes at most 1 to each count, so the noise scale is 1/ε.)

The count is aggregated by place of residence of LH users. Since each user can contribute to at most one geographic area per granularity level, these metrics are protected by differential privacy with ε = 0.44 (0.11 + 0.11 + 0.22).

3 Generating the report from the anonymized metrics

The metrics described above are generated for each day, starting on 2020-01-01. They are then used to generate the percentage changes, relative to a day-of-week baseline, that are published in the reports. All operations described below use only the output of the differentially private mechanisms described in the previous section, so they do not consume any additional privacy budget.

Additional privacy protections

We discard all metrics for which the geographic region is smaller than 3 km², or for which the differentially private count of contributing users (after noise addition) is smaller than 100. Geographic regions smaller than 3 km² may be merged so that the union of their areas is above the 3 km² threshold. This merging does not occur across country boundaries, except for Vatican City and Italy.
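As a sketch, the discard rule can be expressed as a simple filter; the record fields below are hypothetical names for the attributes described above:

```python
from dataclasses import dataclass

MIN_NOISY_USERS = 100  # threshold on the noisy count of contributing users
MIN_AREA_KM2 = 3.0     # minimum geographic region size, in square kilometers

@dataclass
class RegionMetric:
    area_km2: float     # area of the geographic region (possibly merged)
    noisy_users: float  # differentially private count of contributing users
    value: float        # the anonymized metric itself

def is_publishable(metric: RegionMetric) -> bool:
    """Apply the additional protections: drop metrics from regions smaller
    than 3 km^2 and metrics whose noisy contributor count is below 100."""
    return (metric.area_km2 >= MIN_AREA_KM2
            and metric.noisy_users >= MIN_NOISY_USERS)
```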

3.1 Computing percentage changes from a baseline

For each individual metric generated using the mechanisms described above, we compute the ratio between the metric for a given day D and the same metric computed for the baseline period. The reference baseline is defined in the following way.

  • We consider the 5-week range from 2020-01-03 through 2020-02-06.

  • Within this 5-week range, we consider the 5 days with the same day of week as D. For example, if D is 2020-03-20, D is a Friday, so we consider the 5 Fridays in this 5-week range (Jan 3 to Jan 31, inclusive).

  • We compute the median of the differentially private metrics for these 5 baseline days.

  • This median metric is the baseline metric for D.

We then compute and publish the ratio between the metric for D and the baseline metric, as a percentage.
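As an illustration, a minimal sketch of this baseline-and-ratio computation, assuming the anonymized daily metric is available as a date-indexed mapping (the function names and data layout are ours, not the report's):

```python
import statistics
from datetime import date, timedelta

BASELINE_START = date(2020, 1, 3)  # first day of the 5-week baseline range
BASELINE_WEEKS = 5

def baseline_for(day: date, metric: dict[date, float]) -> float:
    """Median of the metric over the 5 baseline days sharing day-of-week."""
    first = BASELINE_START
    # Advance to the first baseline day with the same weekday as `day`.
    first += timedelta(days=(day.weekday() - first.weekday()) % 7)
    days = [first + timedelta(weeks=w) for w in range(BASELINE_WEEKS)]
    return statistics.median(metric[d] for d in days)

def percent_change(day: date, metric: dict[date, float]) -> float:
    """Percentage change of the metric for `day` relative to its baseline."""
    base = baseline_for(day, metric)
    return 100.0 * (metric[day] - base) / base

# e.g. for D = 2020-03-20 (a Friday), the baseline days are the Fridays
# 2020-01-03, -10, -17, -24, and -31.
```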

3.2 Removing unreliable metrics for Residential, Workplace, Transit, and Parks

In some regions, the noise added to obtain differential privacy can reduce the confidence that we are capturing a meaningful change, typically when there is not a lot of data for the metric. When, because of this uncertainty, the percentage change for one of these metrics has a 5% chance (or higher) of being wrong by more than 10 absolute percentage points, we do not publish it; instead, we include an asterisk denoting that there is not enough data available to present privacy-safe information. More precisely:

  • Before releasing a ratio metric/baseline, we compute 97.5% confidence intervals for the metric and its baseline. Let us denote these respective confidence intervals [metric_min, metric_max] and [baseline_min, baseline_max].

  • We compute the ratios metric_min/baseline_max and metric_max/baseline_min.

  • If one of these ratios differs from the differentially private ratio by more than 10 absolute percentage points, we do not publish the corresponding percentage changes.

If the check above passes (neither ratio differs from the published ratio by more than 10 absolute percentage points), then the probability of being wrong by more than 10 absolute percentage points in each direction is lower than 2.5%. By the union bound, this means there is at most a 5% risk of being wrong by more than 10 absolute percentage points. Note that the confidence intervals are based on an already differentially private value and on public data (the scale and shape of the noise), so no privacy budget is consumed by this operation.
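A sketch of this check under simplified assumptions: we treat both the metric's and the baseline's noise as single Laplace draws of known scale (in reality the baseline is a median of five noisy values, so its interval is somewhat tighter), and use 97.5% intervals per value as described above. Helper names are ours:

```python
import math

def laplace_ci(noisy_value: float, scale: float,
               confidence: float = 0.975) -> tuple[float, float]:
    """Two-sided confidence interval for the true value under Laplace noise.

    P(|Lap(scale)| > t) = exp(-t / scale), so the half-width at the given
    confidence level is scale * ln(1 / (1 - confidence)).
    """
    half_width = scale * math.log(1.0 / (1.0 - confidence))
    return noisy_value - half_width, noisy_value + half_width

def is_reliable(metric: float, baseline: float, scale: float,
                max_error_pp: float = 10.0) -> bool:
    """Publish only if the ratio cannot plausibly be off by more than
    max_error_pp absolute percentage points (5% overall risk, by union bound)."""
    metric_min, metric_max = laplace_ci(metric, scale)
    baseline_min, baseline_max = laplace_ci(baseline, scale)
    ratio = metric / baseline
    worst_deviation = max(abs(metric_min / baseline_max - ratio),
                          abs(metric_max / baseline_min - ratio))
    return 100.0 * worst_deviation <= max_error_pp
```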

References