I Introduction
The proliferation of wearable devices, such as Fitbit and Apple Watch, enables the continuous collection of personal health data including heart rate, walking steps, and sleep condition. Personal health data give users a good indication of their fitness and can be further shared with healthcare providers for various purposes. For example, users could share data with an insurance company for a lower premium, or with a fitness advisor for a better health plan. In these cases, users prefer to share a minimal amount of information with healthcare providers; as noted in [1], the disclosure of unnecessary health data may result in severe privacy violations. We consider a scenario where a healthcare provider requires a user to provide the health data collected during the next two weeks. The user needs to consider two factors: i) utility, the disclosed data must be useful; ii) privacy, the disclosure must consume no more than a privacy budget.
Health data collected from wearable devices has the following unique properties. First, it contains significant health patterns, which may imply health conditions. These patterns need to be preserved by the privacy protection algorithm. Second, health data is generated continuously, and its usefulness varies from day to day. Generally, when the data is not useful, it does not need to be disclosed; when it is useful, it needs to be disclosed under a privacy constraint. Given a privacy budget for two weeks, for example, the budget should be adaptively allocated on a daily basis. As such, the utility of the disclosed data can be maximized while the privacy goal is achieved.
Differential privacy [2], proposed by Dwork, is a popular paradigm for providing privacy in data release. A common way to achieve differential privacy is to perturb data with noise [3, 4]. Most existing literature has focused on the one-time release of static data [5, 6, 7, 8, 9]. However, in the health releasing scenario, data has to be collected and released continuously due to the power limit of wearable devices. Several studies [10, 11, 12, 13] have focused on real-time data releasing with a differential privacy guarantee. In [14], Wang et al. proposed a scheme achieving w-event privacy. However, their scheme has a limitation: its decision on data usefulness depends only on the data dynamics and ignores the health condition of the user. Thus it does not fit our case.
In this paper, we propose ReDPoctor for real-time e-doctor health data releasing with differential privacy to solve our problem. The contributions of this paper can be summarized as follows.
We propose a practical releasing scheme, ReDPoctor, which guarantees w-day privacy, a new privacy-level definition for continuous data streams. Its key modules include adaptive sampling, adaptive budget allocation, DP-Partitioning, perturbation, feature extraction, and filtering.
The design of ReDPoctor achieves a better accuracy and privacy level. It uses a partitioning algorithm to protect health patterns and improve accuracy, while its adaptive sampling and budget allocation algorithms take health condition and data dynamics into account to improve the privacy level.
We prove that our scheme satisfies w-day privacy and conduct experiments on real data collected from wearable devices. Compared to other methods, we obtain better results on utility and privacy guarantees.
II Preliminaries
II-A Differential Privacy
A mechanism which satisfies Differential Privacy should guarantee that the query result remains approximately the same if a single record is added or deleted.
Definition 1 (Differential Privacy [2]): A randomized mechanism $\mathcal{M}$ gives $\epsilon$-differential privacy if for all data sets $D_1$ and $D_2$ differing on at most one record, and all $S \subseteq Range(\mathcal{M})$,

$\Pr[\mathcal{M}(D_1) \in S] \le e^{\epsilon} \cdot \Pr[\mathcal{M}(D_2) \in S]$. (1)

$\epsilon$ is the privacy budget. A smaller $\epsilon$ means more noise and a stronger privacy level.
The Laplace mechanism is the most common way to guarantee differential privacy (Theorem 1): for a query with sensitivity $\Delta$, adding Laplace noise with scale $\Delta/\epsilon$ to the query result achieves $\epsilon$-differential privacy [5].
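As a concrete illustration (a minimal sketch in Python, not the implementation used in this paper), the Laplace mechanism can be realized with inverse-CDF sampling:

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5  # u is uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Release a numeric query answer with epsilon-differential privacy."""
    return true_answer + laplace_noise(sensitivity / epsilon)
```

A smaller epsilon yields a larger noise scale and hence a stronger privacy level, matching Definition 1.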
II-B w-day Privacy
w-day differential privacy is a concept adapted from w-event privacy [10]; it is a new way to define the privacy level over infinite stream information. It guarantees that for any successive events happening in a window of $w$ days, the privacy leakage level is no more than $\epsilon$.
We model the data stream as an infinite stream tuple $S = (D_1, D_2, \ldots)$, where $D_i$ is the $i$-th element of $S$. The stream prefix of $S$ at time $t$ is denoted $S_t = (D_1, \ldots, D_t)$.
Definition 2 (w-neighboring): Let $w$ be a positive integer. Two stream prefixes $S_t$, $S'_t$ are w-neighboring if

i) for each pair $D_i$, $D'_i$ with $D_i \ne D'_i$, it holds that $D_i$, $D'_i$ are neighboring (e.g., have at most one row different);

ii) for each $i$, $j$ with $i < j$, $D_i \ne D'_i$ and $D_j \ne D'_j$, it holds that $j - i + 1 \le w$.
Definition 3 (w-day Privacy): Let $\mathcal{M}$ be a mechanism that takes as input a stream prefix of arbitrary size, and let $\mathcal{O}$ be the set of all possible outputs of $\mathcal{M}$. We say that $\mathcal{M}$ satisfies w-day $\epsilon$-differential privacy if for all sets $O \subseteq \mathcal{O}$, all w-neighboring stream prefixes $S_t$, $S'_t$, and all $t$, it holds that

$\Pr[\mathcal{M}(S_t) \in O] \le e^{\epsilon} \cdot \Pr[\mathcal{M}(S'_t) \in O]$. (3)
Theorem 2 [10]: Let $\mathcal{M}$ be a mechanism that takes as input a stream prefix $S_t$, where $S_t = (D_1, \ldots, D_t)$, and outputs a transcript $O = (o_1, \ldots, o_t)$. Suppose that we can decompose $\mathcal{M}$ into $t$ mechanisms $\mathcal{M}_1, \ldots, \mathcal{M}_t$ such that $\mathcal{M}_i(D_i) = o_i$, and let each $\mathcal{M}_i$ be $\epsilon_i$-differentially private for some $\epsilon_i \ge 0$. Then $\mathcal{M}$ satisfies w-day $\epsilon$-differential privacy if

$\forall t,\; \sum_{i=t-w+1}^{t} \epsilon_i \le \epsilon$. (4)
This means we can view $\epsilon$ as the total privacy budget in a sliding window of $w$ days, and any budget that falls out of the window can be recycled and reused.
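The sliding-window condition of Theorem 2 is easy to check mechanically. The helper below is a hypothetical illustration (not part of the scheme) that validates a per-day budget sequence against a w-day window:

```python
def satisfies_w_day_budget(daily_budgets, w, epsilon):
    """Return True iff every window of w consecutive days spends
    at most epsilon of privacy budget in total (Theorem 2)."""
    for end in range(1, len(daily_budgets) + 1):
        # Window of the w days ending at day `end` (shorter at the start).
        if sum(daily_budgets[max(0, end - w):end]) > epsilon + 1e-12:
            return False
    return True
```

In the first test case below, the 0.2 spent on day 1 has fallen out of the window by day 5, so the same amount can be spent again without violating the bound.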
III ReDPoctor: Real-time health data releasing with w-day privacy
Consider the scenario where a user has a wearable device to monitor his health data, and there exists an e-doctor service: the wearable tracking device releases heart rate data to the server in a hospital from time to time. When the user goes to the hospital, the doctor can pull out the data and do the analysis. The dilemma is: how can we design the health histogram releasing mechanism to release only the data useful for diagnosis while maintaining privacy? One common way is to perturb the data with noise, but applying uniform noise to the original data decreases the precision of the histogram. Besides, many patterns in the original histogram could be buried under too much noise. The solution is to design a mechanism that preserves the desired patterns while protecting privacy.
In this section, we present a real-time health data releasing scheme with w-day differential privacy. Figure 1 shows an overview of the proposed scheme, which contains six modules: Partitioning, Perturbation, Feature Extraction, Adaptive Sampling, Adaptive Budget Allocation, and Filtering.
First, the adaptive sampling mechanism adjusts the sampling rate based on data dynamics and health condition; histograms are perturbed on sampling days, and each non-sampled day is approximated with the perturbed histogram of the last sampling day. Then the budget allocation mechanism dynamically allocates the privacy budget on sampling days. The first two steps ensure that the non-sampled points can be approximated without any budget allocation. Thus, given a fixed total budget, more of the precious privacy budget can be allocated to the histograms that need to be released, which reduces the error caused by Laplace noise and improves overall accuracy. Next, the DP-Partitioning mechanism preserves the desired patterns for health diagnosis, and the Laplace mechanism is used to perturb the partitioned histogram. At last, the filtering mechanism helps to improve the accuracy of the released data.
The main components of the proposed scheme are detailed below.
III-A Adaptive Sampling
If a user publishes histograms every day, large noise is introduced and the utility of the released histograms suffers. Here lies the seemingly non-negotiable trade-off between the accuracy and the privacy of histogram releasing. Sampling is a good way to deal with this dilemma: we sample the important histograms on selected days and leave the non-sampled ones to be approximated. Since the non-sampled histograms do not cost any privacy budget, the selected ones can be allocated more budget, improving their accuracy.
Several earlier works have proposed methods to adjust the sampling rate, but they do not fit our scenario of health data. DSAT [12] does not apply to health data because it uses a fixed sampling rate, which is unrealistic in real-time health monitoring. Another approach by Wang et al. [14] does not fit health monitoring because it ignores the health condition of the user as a dynamic factor that could affect the sampling rate.
In this paper, we propose a new adaptive sampling mechanism, which takes the current health condition, the histogram dynamics, and the remaining budget into consideration. Suppose the current sampling day is $k$ and the last sampling day is $k'$, with released histograms $\hat{H}_k$ and $\hat{H}_{k'}$, respectively. We use the Pearson correlation coefficient (PCC) to define the feedback error:

$E_k = 1 - \mathrm{PCC}(\hat{H}_k, \hat{H}_{k'})$. (5)
Here we choose to use the released histograms instead of the raw histograms to protect privacy. This may introduce a small error, which is minor compared to the privacy it provides.
The PID error is defined as:

$\Delta = K_p (E_k - \delta) + K_i \frac{\sum_{j=k-\pi+1}^{k} (E_j - \delta)}{\pi} + K_d \frac{E_k - E_{k-1}}{k - k'}$, (6)

where $K_p$, $K_i$, $K_d$ are the proportional gain, the integral gain, and the derivative gain.
Proportional term: The first term is proportional to the current error, where $E_k$ is the feedback error and the parameter $\delta$ is the set point. We set $\delta$ to 5% in the experiments as the maximum tolerance of the feedback error.
Integral term: The second term stands for the accumulation of past errors, where $K_i$ is the integral gain and $\pi$ is the number of past samples taken into account.
Derivative term: The third term determines the slope of the error over time and predicts the future error.
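The three terms above can be sketched as follows (a sketch under the stated notation, with the set point, integral window, and inter-sample interval passed explicitly; not the paper's exact code):

```python
def pid_error(errors, set_point, kp, ki, kd, pi_window, interval):
    """PID error over a history of feedback errors (most recent last).
    kp, ki, kd: proportional, integral, and derivative gains;
    pi_window: number of past samples in the integral term;
    interval: days since the previous sampling point."""
    e = [x - set_point for x in errors]
    proportional = kp * e[-1]
    integral = ki * sum(e[-pi_window:]) / min(pi_window, len(e))
    derivative = kd * (e[-1] - e[-2]) / interval if len(e) > 1 else 0.0
    return proportional + integral + derivative
```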
Intuitively, the sampling interval should be small if the user's health condition changes rapidly. However, if the remaining budget is small, sampling on the next day will introduce a high perturbation error. A more reasonable choice is to use a relatively large sampling interval so that previously allocated budget can be recycled and the histogram can be approximated with the previous publication.
Besides histogram dynamics and remaining privacy budget, another factor we need to consider is the health condition of the user. Imagine two users with the same histogram dynamics and remaining privacy budget, but one sick and the other in good health. Applying the same sampling strategy is inappropriate, because the sick user apparently needs more attention and needs to release histograms more frequently than the healthy one. One rule for health data releasing is that we should never sacrifice the user's health for privacy. We use $\phi$ to denote the user's health condition, which is obtained from the feature extraction module.
Combining all three factors, the next sampling interval is defined as:

$I_{k+1} = \max\left(1,\; I_k + \theta \left(1 - \phi \left(\frac{\Delta}{\lambda}\right)^2\right)\right)$, (7)

where $I_{k+1}$ and $I_k$ are the next and the last sampling interval, respectively, $\lambda = 1/\epsilon_r$ is the scale of the Laplace noise with $\epsilon_r$ the remaining budget, and $\theta$ is a scale factor to adjust the sampling interval. Consequently, the sampling interval increases when the data is stable and the user is healthy (i.e., $\phi(\Delta/\lambda)^2 < 1$) and decreases otherwise.
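One plausible instantiation of this update rule (an illustrative sketch; the exact functional form depends on parameter choices omitted here) grows the interval when the PID error is small relative to the noise scale and shrinks it otherwise, with a worse health condition pushing toward more frequent sampling:

```python
def next_interval(last_interval, pid_err, noise_scale, health, theta):
    """Adaptive sampling interval update (illustrative).
    health: health-condition score, larger means worse condition;
    theta: scale factor for the adjustment."""
    adjust = theta * (1.0 - health * (pid_err / noise_scale) ** 2)
    return max(1, round(last_interval + adjust))
```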
III-B Adaptive Budget Allocation
The definition of w-day privacy requires that the total budget within the sliding window of $w$ days not exceed a certain value $\epsilon$.
For the sampling day $k$, we first calculate the remaining budget $\epsilon_r$ in the window of the last $w$ days; note that if a day is not a sampling day, its budget is zero. Then, inspired by RescueDP, we allocate the remaining budget based on the sampling interval. When the sampling interval is small, it can be inferred that the histogram changes rapidly or the user is in a sick condition, and hence that there will be a large number of sampling points in the window. We therefore allocate a small portion of the remaining privacy budget to the coming sampling point so that more privacy is left for future use. The natural logarithm quantifies such a relationship well. Define the portion as:

$p = \min(\beta \cdot \ln(I_k + 1),\; p_{max})$, (8)

where $\beta$ is a scale factor to adjust the budget portion and $p_{max}$ limits the maximum value of a portion. The allocated budget portion thus increases as the sampling interval increases, but the growth slows down when the interval is large enough. Finally, we calculate the budget by applying the portion to the remaining budget as $\epsilon_k = \min(p \cdot \epsilon_r,\; \epsilon_{max})$, where $\epsilon_{max}$ limits the maximum budget, because an excessive privacy budget achieves little improvement in the utility of the histogram.
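The allocation step can be sketched as follows (assuming the names `beta`, `p_max`, and `eps_max` for the scale factor and the two caps):

```python
import math


def allocate_budget(interval, remaining, beta, p_max, eps_max):
    """Allocate the budget for the coming sampling point: a logarithmic
    portion of the remaining window budget, capped twice so that no
    single day consumes an excessive share."""
    portion = min(beta * math.log(interval + 1), p_max)
    return min(portion * remaining, eps_max)
```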
III-C Partitioning
Health data histograms differ from ordinary histograms. Without suitable partitioning, a health data histogram can easily lose important features or patterns, which are crucial for diagnosis, during aggregation and randomization. The main goal is to design an algorithm that preserves the desired heart rate patterns in the released histogram. We use a partitioning algorithm to protect certain patterns; in our case, we mainly focus on two: small but rapid changes, and slow but large changes.
Before partitioning, the database records are aggregated into data bins on a 10-minute basis. The bins are then partitioned into a set of buckets based on the values, the structure, and the thresholds of the original bin database. Since the bucket structure may reveal information, one could infer private information in the database from small changes in it. To prevent such privacy leakage, we use part of the privacy budget allocated to the sampling point to protect the thresholds of the partition. We use a constant $\gamma$ as the scale to denote the portion of the privacy budget used for partitioning.
The partitioning algorithm with differential privacy is shown in Algorithm 2. Before the algorithm starts, several variables are declared: the values of the current bin of the histogram database and of the current bucket; the indexes of the current bin and the current bucket, together with the size of the current bucket; the value of the last bin; and the maximum and minimum values of the current bucket. Three thresholds, which are learned from public information and set at user setup, are used:

$T_1$: the maximum difference between the maximum and minimum value in one bucket; it corresponds to the slow but large change.

$T_2$: the maximum instant change of heart rate between adjacent bins. Normally, this threshold is smaller than $T_1$: the change between two adjacent bins may actually be smaller than $T_1$, but since it happens in a very small period of time, it must be preserved. It corresponds to the rapid change.

$T_3$: the maximum size of each bucket, to prevent oversized buckets.
Due to the privacy requirement of the partitioning algorithm, we add Laplace noise to the threshold parameters $T_1$ and $T_2$ and obtain $\hat{T}_1$ and $\hat{T}_2$.
The partitioning process is easy to understand. In the beginning, the first bin is put into the first bucket and the algorithm moves to the next bin. It then checks all threshold requirements: if they are all met, the current bin is put into the same bucket; otherwise, a new bucket is created. The first threshold checked is $\hat{T}_2$, due to its smaller value. If this threshold is breached, two single-bin buckets are created, each containing one of the adjacent suddenly changing bins, so that their values are not averaged later. Based on the size of the current bucket, three cases are considered, and the second and third thresholds are tested: either a new bucket is created, or the current bucket is enlarged.
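A simplified, non-private sketch of this partitioning logic is below (bookkeeping details of Algorithm 2 are omitted; in the actual scheme, `t1` and `t2` would be the noisy thresholds):

```python
def partition(bins, t1, t2, t3):
    """Greedily group histogram bins into buckets.
    t1: max spread (max - min) within a bucket (slow but large change);
    t2: max instant change between adjacent bins (rapid change);
    t3: max bucket size."""
    buckets = [[bins[0]]]
    for value in bins[1:]:
        current = buckets[-1]
        prev = current[-1]
        merged = current + [value]
        if abs(value - prev) > t2:
            # Rapid change: isolate the two adjacent bins in single-bin
            # buckets so their values are not averaged away later.
            if len(current) > 1:
                current.pop()
                buckets.append([prev])
            buckets.append([value])
        elif max(merged) - min(merged) > t1 or len(current) >= t3:
            # Slow but large change, or bucket full: start a new bucket.
            buckets.append([value])
        else:
            current.append(value)
    return buckets
```

For example, a sudden jump from 72 to 120 beats per minute splits the adjacent bins into single-bin buckets, preserving the rapid-change pattern.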
III-D Perturbation
The buckets resulting from the previous step are then randomized by adding noise following the Laplace distribution at each sampling point. After partitioning, we first average the bins in the same bucket, and then add Laplace noise to the average value of the bins of every bucket. Suppose the minimum possible change in the query result between neighboring databases is $\Delta v$ and the remaining budget portion for randomization is $(1-\gamma)\epsilon_k$. The Laplace noise for sampling day $k$ gives
$\tilde{c}_j = \bar{c}_j + \mathrm{Lap}\left(\frac{\Delta v}{(1-\gamma)\epsilon_k}\right)$, (9)

where $\bar{c}_j$ is the average value of bucket $B_j$.
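The averaging and perturbation steps can be sketched as below (Laplace sampling via inverse CDF; in the scheme, the scale argument would be $\Delta v / ((1-\gamma)\epsilon_k)$):

```python
import math
import random


def perturb_buckets(buckets, sensitivity, epsilon):
    """Average the bins inside each bucket, then add Laplace noise
    with scale sensitivity / epsilon to each bucket average."""
    released = []
    for bucket in buckets:
        avg = sum(bucket) / len(bucket)
        u = random.random() - 0.5
        noise = -(sensitivity / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        released.append(avg + noise)
    return released
```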
III-E Filtering
To eliminate the error introduced by using released data in the adaptive sampling and budget allocation mechanisms, we use a particle filter to improve the accuracy of the released histogram by estimating the perturbed histogram. We chose the particle filter instead of the Kalman filter because it is shown in [11] that, although the particle filter costs much more time and has greater complexity, it achieves higher accuracy; and when it comes to protecting health data, accuracy weighs more than algorithm complexity. In the final released histogram, posterior estimates of the particle filter are released at sampling points and prior estimates at non-sampling points. Due to the space limit, we omit the details of filtering; please refer to [11].

III-F Feature Extraction
We then need to rate the health condition by extracting features from the released histograms. Here we adopt the simplest model, just for explanation, and focus on four features of four typical rhythms of potential heart disease: $f_1$, the number of times the user's heart rate has a rapid increase or decrease in a short period, which could signal a heart attack; $f_2$, the number of times the user's heart rate has a great increase or decrease over a long time, which could signal palpitation; $f_3$, the time during which the user's heart rate stays above a maximum threshold, which could signal angina; and $f_4$, the time during which the user's heart rate stays below a minimum threshold, which could signal sinus bradycardia.
We then define the health condition at day $k$ as:

$\phi_k = \sum_{i=1}^{4} \frac{f_i}{s_i}$, (10)

where the $s_i$ are the standard tolerant values from medical references. The calculated health condition is then used in the adaptive sampling mechanism. Since the feature extraction is based on the released histograms, it does not cost any privacy budget either.
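Under the assumption that the score normalizes each feature by its tolerant value (a sketch of this simplest model):

```python
def health_condition(features, tolerances):
    """Aggregate abnormal-rhythm features f_i, each normalized by its
    standard tolerant value s_i; a larger score means a worse condition."""
    return sum(f / s for f, s in zip(features, tolerances))
```

A perfectly healthy user (all features zero) scores 0, so the adaptive sampler can safely stretch the sampling interval.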
III-G Privacy Analysis
Theorem 3: The partitioning process satisfies $\gamma\epsilon_k$-differential privacy at sampling day $k$.
Proof: Let $D$, $D'$ be neighboring databases and $P$ be the output partition. To prove that the partitioning process is differentially private, we need to prove $\Pr[A(D) = P] \le e^{\gamma\epsilon_k} \Pr[A(D') = P]$. Suppose the maximum difference in the value of a bin between two neighboring databases is bounded by $\Delta v$. For each bucket, the bounds given by the noisy thresholds $\hat{T}_1$ and $\hat{T}_2$ both have to be met, and according to the sequential composition property of differential privacy, the budget $\gamma\epsilon_k$ is split between the two thresholds. The inequality can thus be decomposed into two inequalities, one per threshold, which we solve separately in order to find the required Laplace distributions. Suppose the changed record between the neighboring databases falls into bucket $B_j$.

For the first inequality, the changed record may affect the maximum or minimum value of $B_j$, or an ordinary bin count of $B_j$. If the changed value only affects ordinary bins, the probability of the output partition is clearly unchanged. If the changed value affects the maximum or the minimum, we need to find a suitable Laplace scale for the threshold noise so that a change of up to $\Delta v$ is tolerated. Considering the cases in which the noisy threshold is or is not breached before and after the change, and bounding the ratio of the corresponding probabilities in each case, shows that a Laplace scale proportional to $\Delta v / (\gamma\epsilon_k)$ is sufficient; thus the first inequality holds.

Due to the space limit, we omit the details of the second inequality, which is similar to the first. Combining the two, the partitioning process satisfies $\gamma\epsilon_k$-differential privacy.
Theorem 4: ReDPoctor satisfies w-day $\epsilon$-differential privacy.
Proof: According to Axiom 2.1.1 in [15], post-processing perturbed data maintains privacy as long as it does not use the sensitive information. Among all the components, only the partitioning and perturbation processes access the raw data, while the others operate on the perturbed data. Thus, if we can prove that these two mechanisms together satisfy w-day differential privacy, then ReDPoctor satisfies w-day differential privacy.
According to Theorem 3, at sampling day $k$ the partitioning process satisfies $\gamma\epsilon_k$-differential privacy. According to Theorem 1, the perturbation process satisfies $(1-\gamma)\epsilon_k$-differential privacy by applying Laplace noise. So for any sampling day $k$, ReDPoctor provides $\epsilon_k$-differential privacy. Since the adaptive budget allocation mechanism guarantees for any sliding window of $w$ days that $\sum \epsilon_k \le \epsilon$, ReDPoctor satisfies w-day $\epsilon$-privacy.
IV Experimental Evaluation
In this section, we evaluate the performance of ReDPoctor on real health data: heart rates captured from wearable devices attached to a hospital patient during three months.
In the experiments, we set the proportional, integral, and derivative gains of the PID controller, the scale factors of the adaptive budget allocation, and the thresholds $T_1$, $T_2$, $T_3$ of the partitioning to fixed values. Because heart rate usually varies between 50 and 200, and we set our w-day window to 14 days, we define the sensitivity accordingly. Unless otherwise stated, the same settings are used for all databases.
We use the Mean Absolute Error (MAE) and the Mean Relative Error (MRE) as utility metrics to evaluate the performance of our scheme. In computing MRE, a sanity bound of 0.05% is used to mitigate the effect of extremely small bins, which could result from the user taking off the watch.
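The two metrics can be computed as follows (a minimal sketch; `bound` plays the role of the sanity bound described above):

```python
def mae(released, actual):
    """Mean Absolute Error between released and actual histogram bins."""
    return sum(abs(r - a) for r, a in zip(released, actual)) / len(actual)


def mre(released, actual, bound):
    """Mean Relative Error; the sanity bound keeps near-empty bins
    (e.g., from taking off the watch) from dominating the error."""
    return sum(abs(r - a) / max(a, bound) for r, a in zip(released, actual)) / len(actual)
```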
Utility vs. Privacy: Figure 2 investigates how MAE and MRE change with various $\epsilon$ values and compares ReDPoctor with BA and BD [10]. We can see that as $\epsilon$ increases, both the MAE and the MRE of the dataset decrease. This is natural because a larger $\epsilon$ means smaller noise. We can also see that both MAE and MRE are smaller than those of BD and BA over the whole time period.
The better utility performance of ReDPoctor is due to three reasons. First, ReDPoctor adaptively adjusts the sampling and allocates the privacy budget more appropriately: within the fixed total budget, it samples the days with useful data and allocates more budget to them. Second, ReDPoctor has more available budget for perturbation than the other methods in any w-day window; in BD and BA, part of the budget is used for calculating similarity. Third, the partitioning mechanism recognizes the patterns and improves the accuracy of the released data.
Utility vs. $w$: In Figure 3, we compare ReDPoctor with BA and BD while varying $w$. We can see that the MAE and MRE of BD and BA increase greatly when $w$ increases: to keep the total budget below $\epsilon$, BA may skip days that contain useful data, resulting in larger errors. In contrast, ReDPoctor is more stable because it takes the window size and the remaining budget into consideration and adaptively changes the budget of the next sampling point.
Effect of Partitioning: We also run ReDPoctor on the same dataset with and without partitioning to evaluate the effect of our partitioning mechanism. We can see from the results in Table 1 that partitioning reduces MAE and MRE significantly. Therefore, we conclude that partitioning not only preserves the patterns but also improves the utility of the released data.
Table 1:

       With Partition   Without Partition
MAE    156              355
MRE    0.23             0.36
V Conclusions
In this paper, we proposed ReDPoctor, a real-time health data releasing scheme with w-day differential privacy that achieves both utility and privacy guarantees. We designed a framework for ReDPoctor consisting of adaptive sampling, adaptive budget allocation, partitioning, perturbation, filtering, and feature extraction mechanisms. The privacy analysis proves that ReDPoctor satisfies w-day differential privacy. Experiments on real health data show that ReDPoctor outperforms other methods and achieves both the utility and the privacy required.
Acknowledgement
This work was supported by NSFC under grant 61402405 and Zhejiang Natural Science Foundation under grant No. LR16F020001.
References
 [1] J. Lane and C. Schur, “Balancing access to health data and privacy: A review of the issues and approaches for the future,” Health Services Research, vol. 45, no. 5p2, pp. 1456–67, 2010.
 [2] C. Dwork, “Differential Privacy,” in Proc. of ICALP, 2006, pp. 1–12.
 [3] ——, “Differential privacy: A survey of results,” in Proc. of TACM, 2008, pp. 1–19.
 [4] C. Dwork and K. Nissim, “Privacy-preserving datamining on vertically partitioned databases,” Proc. of CRYPTO, vol. 3152, pp. 528–544, 2004.
 [5] C. Dwork, F. Mcsherry, K. Nissim, and A. Smith, “Calibrating noise to sensitivity in private data analysis,” in Theory of Cryptography Conference, 2006, pp. 265–284.
 [6] X. Duan, C. Zhao, S. He, P. Cheng, and J. Zhang, “Distributed algorithms to compute walrasian equilibrium in mobile crowdsensing,” IEEE Transactions on Industrial Electronics, vol. 64, no. 5, pp. 4048–4057, 2017.
 [7] X. Xiao, G. Bender, M. Hay, and J. Gehrke, “iReduct: differential privacy with reduced relative errors,” in Proc. of ACM SIGMOD, 2011, pp. 229–240.
 [8] J. Xu, Z. Zhang, X. Xiao, Y. Yang, G. Yu, and M. Winslett, “Differentially private histogram publication,” The VLDB Journal, vol. 22, no. 6, pp. 32–43, 2013.
 [9] G. Kellaris and S. Papadopoulos, “Practical differential privacy via grouping and smoothing,” Proceedings of the VLDB Endowment, vol. 6, no. 5, pp. 301–312, 2013.
 [10] G. Kellaris, S. Papadopoulos, X. Xiao, and D. Papadias, “Differentially private event sequences over infinite streams,” Proc. of the VLDB Endowment, vol. 7, no. 12, pp. 1155–1166, 2014.
 [11] L. Fan and L. Xiong, “An adaptive approach to real-time aggregate monitoring with differential privacy,” IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 9, pp. 2094–2106, 2014.
 [12] H. Li, X. Jiang, L. Xiong, and J. Liu, “Differentially private histogram publication for dynamic datasets: An adaptive sampling approach,” in Proc. of CIKM, 2015, pp. 1001–1010.
 [13] S. He, D.H. Shin, J. Zhang, J. Chen, and Y. Sun, “Fullview area coverage in camera sensor networks: dimension reduction and nearoptimal solutions,” IEEE Transactions on Vehicular Technology, vol. 65, no. 9, pp. 7448–7461, 2016.
 [14] Q. Wang, Y. Zhang, X. Lu, and Z. Wang, “RescueDP: Real-time spatio-temporal crowdsourced data publishing with differential privacy,” in Proc. of IEEE INFOCOM, 2016, pp. 1–9.
 [15] D. Kifer and B. R. Lin, “Towards an axiomatization of statistical privacy and utility,” in Proc. of PODS, 2010, pp. 147–158.