1 Introduction
Organizations and companies are becoming increasingly interested in collecting user data and telemetry to make data-driven decisions. While collecting and analyzing user data is beneficial to improve services and products, users’ privacy poses a major concern. Recently, the concept of Local Differential Privacy (LDP) has emerged as the accepted standard for privacy-preserving data collection [1, 2, 3]. In LDP, each user locally perturbs their sensitive data on their device before sharing the perturbed version with the data collector. The perturbation is performed systematically such that the data collector cannot infer with strong confidence the true value of any user given their perturbed value, yet it can still make accurate inferences pertaining to the general population. Due to its desirable properties, LDP has been adopted by major companies to perform certain tasks, including Google to analyze browser homepages and default search engines in Chrome [2, 4], Apple to determine emoji frequencies and spelling prediction in iOS [5, 6], and Microsoft to collect application telemetry in Windows 10 [7].
While LDP is popularly used for the aforementioned purposes, one domain that is yet to embrace it is cybersecurity. It can be argued that this nuanced domain has the potential to greatly benefit from an LDP-like protection mechanism. This is because many security products rely on information collected from their clients, with the required telemetry ranging from file occurrence information in file reputation systems [8, 9, 10] to heterogeneous security event information such as system calls and memory dumps in the context of Endpoint Detection and Response systems (see [11] for a survey on core behavioral detection techniques used by such systems). Nevertheless, clients are often reluctant to share such data, fearing that it may reveal the applications they are running, the files they store, or the overall cyber hygiene of their devices. Providing an LDP-like protection would offer clients formal privacy guarantees and help convince them that sharing their data will not cause privacy leakage.
Recent interest in LDP led to the development of accurate and optimized protocols, such as RAPPOR and OLH, which are also used in the design of more sophisticated LDP algorithms [2, 4, 12, 13, 14, 15]. However, existing LDP protocols suffer from problems that hinder their deployment in the cybersecurity domain. First, although they are accurate for large populations (e.g., hundreds of thousands of clients), their accuracy suffers when client populations are smaller. Population size is not necessarily a problem for the likes of Google Chrome and Apple iOS with millions of active users, but it does cause problems in a domain such as cybersecurity where small population sizes are common. For example, if a security analyst is analyzing the behavior of a particular malware that targets a certain system or vulnerability, only those clients who are infected by the malware will have meaningful observations to report, but the number of infections could be limited to less than a couple of thousand users globally due to the targeted nature of the malware. In such cases, we need a new scheme that allows the security analyst to make accurate inferences while simultaneously giving adequate privacy to end users. Second, existing protocols consider a limited set of primitive data types. To the best of our knowledge, currently no protocol supports perturbation of item sequences (with either ordinal or non-ordinal item domains) to offer privacy with respect to sequence length and content simultaneously. Sequences are more difficult to handle compared to singleton items or itemsets, since they are not only high-dimensional but also contain an ordering that must be preserved for tasks such as pattern mining. Yet, sequential data is ubiquitous in cybersecurity: security logs, network traffic data, file downloads, and so forth are all examples of sequential data.
In this paper, we propose the notion of Condensed LDP (CLDP) and a suite of protocols satisfying CLDP to tackle these issues. In practice, CLDP is similar to LDP with the addition of a condensation
aspect, i.e., during the process of perturbation, similar outputs are systematically favored over distant outputs using condensed probability. We design protocols satisfying CLDP for various types of data, including singletons and sequences of ordinal and non-ordinal items. We show that CLDP can be satisfied by a variant of the Exponential Mechanism
[16], and employ this mechanism as a building block in our OrdinalCLDP, ItemCLDP, and SequenceCLDP protocols. Our methods are generic and can be easily applied for privacy-preserving data collection in a variety of domains, including but not limited to cybersecurity. Under the prevalent adversarial threat model of LDP, we show that our CLDP protocols can give equivalent (or better) protection than LDP protocols if CLDP’s privacy budget is set accordingly. Under this setting, our protocols provide higher utility than LDP protocols by yielding accurate insights for population sizes that are an order of magnitude smaller than those currently assumed by state-of-the-art LDP approaches. We also perform extensive experiments to evaluate the effectiveness of our protocols on real-world case studies and public datasets. Our experiments show that the proposed CLDP protocols outperform existing LDP protocols in key tasks such as frequency estimation, heavy hitter identification, and pattern mining. Using data from Symantec, a major cybersecurity vendor, we show that CLDP can be used in practice for use cases involving ransomware outbreak detection, OS vulnerability analysis, and inspecting suspicious activities on infected machines. In contrast, existing LDP protocols either cannot be applied to these problems or their application yields unacceptable accuracy loss. To the best of our knowledge, our work is the first to apply the concept of local privacy to the nuanced domain of cybersecurity.
2 Background and Problem Setting
We consider the local privacy setting, where there are many clients (users) and an untrusted data collector (server). Each client possesses a secret value. The client’s secret value can be an ordinal item (e.g., a numeric value or integer), a categorical item, a non-ordinal item, or a sequence of items. The server wants to collect data from clients to derive useful insights; however, since the clients do not trust the server, they perturb their secrets locally on their devices before sharing the perturbed versions with the server. Randomized perturbation ensures that the server, having observed the perturbed data, cannot infer the true value of any one client with high confidence. This resilience to reverse-engineering gives clients privacy and plausible deniability. At the same time, the scheme allows the server to derive useful insights by analyzing aggregate population-level statistics and information pertaining to the general population, thus improving data-driven decisions and product quality.
2.1 Local Differential Privacy
The state-of-the-art scheme currently used and deployed in this local privacy architecture by major companies such as Google, Apple, and Microsoft is LDP [2, 4, 7, 6, 5]. In LDP, each user perturbs their true value v using an algorithm Φ and sends the perturbed value to the server. The formal privacy guarantee satisfied by Φ is given by the following definition.
Definition 1 (ε-LDP).
A randomized algorithm Φ satisfies ε-local differential privacy (ε-LDP), where ε > 0, if and only if for any inputs v1, v2 in universe U, we have:

∀y ∈ Range(Φ):  Pr[Φ(v1) = y] ≤ e^ε · Pr[Φ(v2) = y]

where Range(Φ) denotes the set of all possible outputs of algorithm Φ.
Here, ε is the privacy parameter controlling the level of indistinguishability; lower ε yields higher privacy. Several works were devoted to building accurate protocols that satisfy LDP. These works were analyzed and compared in [3], and it was found that the two currently optimal protocols are based on: (i) OLH, the hashing extension of the GRR primitive, and (ii) RAPPOR, the Bloom filter-based bit-vector encoding and bit flipping strategy. Next, we briefly present these protocols.
Generalized Randomized Response (GRR). This protocol is a generalization of the Randomized Response survey technique introduced in [17]. Given the user’s true value v, the perturbation function Φ_GRR outputs y with probability:

Pr[Φ_GRR(v) = y] = p = e^ε / (e^ε + |U| − 1)  if y = v,  and  q = 1 / (e^ε + |U| − 1)  if y ≠ v

where |U| denotes the size of the universe. This satisfies ε-LDP since p/q = e^ε. In words, Φ_GRR takes v as input and assigns a higher probability to returning the same value v as output. With the remaining probability, Φ_GRR samples a fake item from the universe uniformly at random, and outputs this fake item.
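As an illustration, GRR can be sketched in a few lines of Python (the function and variable names are ours, not part of any deployed implementation):

```python
import math
import random

def grr_perturb(v, universe, epsilon, rng=random):
    """GRR: report the true value with probability
    p = e^eps / (e^eps + |U| - 1); otherwise report a uniformly random
    *other* item, so each fake item has probability 1/(e^eps + |U| - 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + len(universe) - 1)
    if rng.random() < p:
        return v
    return rng.choice([u for u in universe if u != v])
```

For instance, with ε = 2 and |U| = 3, the true value is reported with probability e²/(e² + 2) ≈ 0.79.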
Optimized Local Hashing (OLH). When the universe size |U| is large, it dominates the denominators of p and q, thus the accuracy of GRR deteriorates quickly. The OLH protocol proposed in [3] handles the large universe problem by first using a hash function to map v into a smaller domain of hash values and then applying GRR on the hashed value. Formally, the client reports:

⟨H, Φ_GRR(H(v))⟩

where H is a randomly chosen hash function from a universal family of hash functions. Each hash function in the family maps U to a domain {1, …, g}, where g denotes the size of the hashed domain, typically g ≪ |U|. We use g = e^ε + 1, the optimal value of g found in [3].
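A minimal sketch of the OLH client side follows; here Python's built-in `hash` over a random seed stands in for a proper universal hash family, which a real deployment would use instead:

```python
import math
import random

def olh_perturb(v, epsilon, rng=random):
    """OLH sketch: hash v into a small domain of size g = e^eps + 1
    (rounded), then run GRR over the hashed domain.  The client reports
    the pair (hash seed, perturbed hash); the server re-evaluates the
    same hash for every candidate item when aggregating."""
    g = int(round(math.exp(epsilon))) + 1
    seed = rng.randrange(2**32)          # picks a hash function from the family
    hashed = hash((seed, v)) % g         # stand-in for a universal hash
    p = math.exp(epsilon) / (math.exp(epsilon) + g - 1)
    if rng.random() < p:
        return seed, hashed
    return seed, rng.choice([x for x in range(g) if x != hashed])
```

Note that with the optimal g = e^ε + 1, the true hash value is reported with probability exactly 1/2, while each individual other value in the small hashed domain is far less likely.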
Randomized Aggregatable Privacy-Preserving Ordinal Response (RAPPOR). RAPPOR was developed by Google and is used in Chrome [2]. In RAPPOR, the user’s true value v is encoded in a bit-vector B. The straightforward method is to use one-hot encoding such that B is a length-|U| binary vector where the v’th position is 1 and the remaining positions are 0. When |U| is large, both communication cost and inaccuracy cause problems, hence RAPPOR uses Bloom filter encoding. Specifically, B is treated as a Bloom filter and a set of hash functions H is used to map v into a set of integer positions that must be set to 1. That is, B[H_i(v)] = 1 for each H_i ∈ H, and the remaining positions are 0. After the encoding, RAPPOR uses perturbation function Φ_R on B to obtain the perturbed bit-vector B′ as follows:

Pr[B′[i] = 1] = e^{ε/d} / (e^{ε/d} + 1)  if B[i] = 1,  and  1 / (e^{ε/d} + 1)  if B[i] = 0

where d is analogous to the notion of sensitivity in differential privacy [18], i.e., at most how many positions can change between neighboring bit-vectors. In one-hot encoding, d = 2; in Bloom filter encoding, d = 2|H|. The perturbation process considers each position in B independently, and the existing bit is either kept or flipped when creating B′.
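The bit-flipping step can be illustrated for the simpler one-hot encoding as follows (the per-bit budget split of ε/d is our simplification; deployed RAPPOR additionally uses Bloom filters and memoization):

```python
import math
import random

def onehot_bitflip(v, universe, epsilon, rng=random):
    """One-hot bit-flipping sketch in the spirit of RAPPOR: encode v
    as a one-hot vector, then flip each bit independently.  Two one-hot
    vectors differ in at most d = 2 positions, so each bit is perturbed
    with budget epsilon / d."""
    d = 2                                     # sensitivity of one-hot encoding
    flip = 1.0 / (math.exp(epsilon / d) + 1)  # probability of flipping a bit
    bits = [1 if u == v else 0 for u in universe]
    return [b ^ 1 if rng.random() < flip else b for b in bits]
```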
2.2 Utility Model and Analysis
The most common use of LDP is to enable the data collector to learn aggregate population statistics from large collections of perturbed data. Much research has been invested in tasks such as frequency estimation (identifying the proportion of users who have a certain item) and heavy hitter discovery (identifying the popular items that are held by the largest number of users) [3, 19, 20, 13]. Utility is measured by how closely the privately collected statistics resemble the actual statistics that would have been obtained if privacy were not applied.
Frequency estimation and heavy hitter discovery are important tasks in the cybersecurity domain as well. For example, by monitoring the observed frequencies of different malware using aggregates of privatized malware reports, a cybersecurity vendor such as Symantec can identify large-scale malware outbreaks and create response teams to address them. Furthermore, analyzing the heavy hitter operating systems that are most commonly infected by the malware will enable Symantec to understand OS vulnerabilities as well as the fraction of clients in its user base that are impacted. Relevant findings may also be used by Symantec when developing its next-generation anti-malware defenses.
A novel challenge posed by the cybersecurity domain, however, is accurately supporting small user populations. Existing LDP literature typically assumes the availability of “large enough” user populations on the order of hundreds of thousands or millions of users [2, 3, 13, 5]. Yet, population sizes in the cybersecurity domain are often much smaller. For example, consider a security analyst analyzing the behavior of a specific malware by studying infected user machines. It is often the case that malware targets a specific computing platform or software product, limiting the total number of infections to less than a couple of thousand users globally. We designed the frequency estimation experiment in Figure 1
to illustrate how the utility of existing LDP protocols suffers under such small populations. We sampled each user’s secret value from a Gaussian distribution with mean μ = 50 and standard deviation σ = 12, and rounded it to the nearest integer. The goal of the data collector is to estimate the true frequency of each integer. We run this experiment for varying numbers of users between 1,000 and 100,000 and graph the error in the frequency estimates made by the data collector. We observe from Figure 1 that although more recent and optimized protocols improve upon previous ones by decreasing estimation error (e.g., OLH outperforms RAPPOR, which outperforms GRR), in cases with small user populations (e.g., 1,000, 2,500, or 5,000 users) the improvements offered by more recent LDP protocols over previous ones are only 10-20%, whereas our proposed CLDP approach provides a remarkable 60-70% improvement. Furthermore, with 2,500 users, estimation error is larger than 80% even for the state-of-the-art OLH algorithm, which is optimized for frequency estimation [3]. In contrast, our proposed CLDP solution handles small user populations gracefully, with estimation errors lower than half of OLH’s.
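The setup of this experiment is easy to reproduce in miniature. The self-contained sketch below uses GRR with the standard unbiased frequency estimator on Gaussian-distributed secrets (μ = 50, σ = 12, clipped to 0..100) and illustrates how total estimation error shrinks as the population grows; the CLDP side of Figure 1 is omitted:

```python
import math
import random

def simulate_error(n_users, epsilon, rng):
    """Draw Gaussian secrets, perturb each with GRR, estimate
    frequencies with the standard unbiased estimator, and return the
    total absolute estimation error over the whole universe."""
    universe = list(range(101))
    k = len(universe)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)

    def grr(v):
        if rng.random() < p:
            return v
        return rng.choice([u for u in universe if u != v])

    secrets = [min(100, max(0, round(rng.gauss(50, 12)))) for _ in range(n_users)]
    true_counts = {v: 0 for v in universe}
    obs_counts = {v: 0 for v in universe}
    for s in secrets:
        true_counts[s] += 1
        obs_counts[grr(s)] += 1
    error = 0.0
    for v in universe:
        est = (obs_counts[v] / n_users - q) / (p - q)  # unbiased estimate of f(v)
        error += abs(est - true_counts[v] / n_users)
    return error
```

Running `simulate_error` with 1,000 versus 50,000 users at the same ε shows the error dropping sharply with population size, matching the trend discussed above.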
As shown by this analysis, the low utility of existing LDP protocols under small user population sizes constitutes an important obstacle to the deployment of LDP in the cybersecurity domain. This motivates us to seek alternative approaches and protocols for privacy-preserving data collection under this challenging scenario.
2.3 Threat Model
The main goal of local privacy schemes is to offer confidentiality and plausible deniability with respect to the user’s secret. Hence, the threat stems from an untrusted third party (including the data collector) inferring the true value of the user with high confidence from the perturbed value it observes. Following this prevalent threat model [2, 21], our goal is to stop an adversary A, even one that can fully observe the perturbed output y, from inferring the user’s true secret v. Observe that, given y, the optimal attack strategy for A is:
(1)  v* = argmax_{v ∈ U} Pr[v | y]
(2)     = argmax_{v ∈ U} Pr[Φ(v) = y] · Pr[v]

where Pr[v] denotes the prior probability of v, and Φ denotes a locally private perturbation function (such as LDP). Then, worst-case privacy can be measured using the maximum posterior confidence (MPC) the adversary can achieve over all possible inputs and outputs:

(3)  MPC = max_{y} max_{v ∈ U} ( Pr[Φ(v) = y] · Pr[v] ) / ( Σ_{v′ ∈ U} Pr[Φ(v′) = y] · Pr[v′] )
This establishes a mathematical framework under which we can quantify the worst-case adversarial disclosure of LDP protocols and CLDP protocols, for both informed adversaries (with prior knowledge Pr[v]) and uninformed adversaries (e.g., by canceling out the prior term, or equivalently, setting Pr[v] = 1/|U| for all v).
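For any mechanism whose perturbation channel is known, the MPC of Equation 3 can be computed directly. The following sketch uses our own notation (`channel[v][y]` standing for Pr[Φ(v) = y]):

```python
def max_posterior_confidence(channel, prior):
    """channel[v][y] = Pr[perturbed output is y | true value is v];
    prior[v] = adversary's prior Pr[v].  Returns the maximum posterior
    confidence max_y max_v Pr[v | y] over all inputs and outputs."""
    mpc = 0.0
    for y in range(len(channel[0])):
        denom = sum(channel[v][y] * prior[v] for v in range(len(channel)))
        if denom == 0:
            continue  # output y can never occur
        for v in range(len(channel)):
            mpc = max(mpc, channel[v][y] * prior[v] / denom)
    return mpc
```

For binary randomized response with retention probability 0.7 and a uniform prior, the MPC is exactly 0.7; with a degenerate prior, the informed adversary is fully confident (MPC = 1), as expected.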
2.4 Problem Statement
The problem we study in this paper can be stated as follows. Our goal is to design a locally private data collection scheme such that: (i) it gives at least as strong privacy protection as existing LDP protocols in the above threat model, (ii) while doing so, it provides higher accuracy and data utility than existing protocols, especially for small user populations, and finally (iii) it offers extensibility and generalizability to support complex data types such as different types of singleton items, itemsets, and sequences that can be observed in the cybersecurity domain.
3 Proposed Solution
In this section, we introduce our proposed solution to the problem stated above. We start with the notion of Condensed Local Differential Privacy (CLDP).
3.1 Condensed Local Differential Privacy
Let U denote the finite universe of possible values (items) and let d : U × U → [0, ∞) be a distance function that takes as input two items and measures their distance. We require d to satisfy the conditions for being a metric, i.e., non-negativity, symmetry, the triangle inequality, and the identity of indiscernibles. Then, CLDP can be formalized as follows.
Definition 2 (α-CLDP).
A randomized algorithm Φ satisfies α-condensed local differential privacy (α-CLDP), where α > 0, if and only if for any inputs v1, v2 ∈ U:

∀y ∈ Range(Φ):  Pr[Φ(v1) = y] ≤ e^{α · d(v1, v2)} · Pr[Φ(v2) = y]

where Range(Φ) denotes the set of all possible outputs of algorithm Φ.
LDP and CLDP follow the same general structure and principle, but differ in how their privacy parameters (ε, α) and indistinguishability properties work. Similar to LDP, CLDP satisfies the property that an adversary observing the output y will not be able to distinguish whether the original value was v1 or v2. In CLDP, however, indistinguishability is controlled by the items’ distance d(v1, v2) in addition to α. As such, CLDP constitutes a metric-based extension of LDP: if d(v1, v2) = 1 for all v1 ≠ v2, then α-CLDP is equivalent to ε-LDP with ε = α; otherwise, CLDP is a generalization of LDP with an arbitrary metric d.
Note that metric-based extensions of differential privacy have been studied in the past under certain settings, such as aggregate query answering in centralized statistical databases [22], geo-indistinguishability in location-based systems [23, 24], and protecting sensitive relationships between entities in graphs through edge differential privacy [25]. In contrast, we propose the metric-based CLDP extension in the data collection setting. Our data collection setting poses novel research challenges due to: (i) the distributed (local) privacy scenario, unlike the centralized DP assumed in aggregate query answering and graph mining, in which user data is collected in the clear first and privacy is applied after the data has been stored in a centralized database, (ii) data types that are different from tabular datasets, locations, and graphs, and (iii) establishing relationships and comparisons between CLDP and LDP under the assumed threat and utility models.
Since existing LDP protocols do not satisfy CLDP, we need new mechanisms and protocols supporting CLDP. We show below that a variant of the Exponential Mechanism (EM) [16] satisfies CLDP. EM is used in the remainder of the paper as a building block for more advanced CLDP protocols.
Exponential Mechanism (EM). Let v be the user’s true value, and let the Exponential Mechanism, denoted by Φ_EM, take v as input and output a perturbed value in U, i.e., Φ_EM(v) = y ∈ U. Then, the Φ_EM that produces output y with the following probability satisfies α-CLDP:

Pr[Φ_EM(v) = y] = e^{−α · d(v, y) / 2} / Σ_{y′ ∈ U} e^{−α · d(v, y′) / 2}
Theorem 1.
The Exponential Mechanism Φ_EM satisfies α-CLDP.
Proof.
Provided in Appendix A. ∎
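The EM building block is straightforward to implement by inverse-transform sampling over the weights e^{−α·d(v,y)/2} (this sketch assumes the probability form given above):

```python
import math
import random

def em_perturb(v, universe, dist, alpha, rng=random):
    """EM variant for CLDP: sample output y with probability
    proportional to exp(-alpha * dist(v, y) / 2), so nearby outputs
    are favored ('condensed') over distant ones."""
    weights = [math.exp(-alpha * dist(v, y) / 2) for y in universe]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for y, w in zip(universe, weights):
        acc += w
        if r < acc:
            return y
    return universe[-1]   # guard against floating-point round-off
```

With small α, the output distribution approaches uniform over U; with large α, it concentrates on items near v.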
3.2 Privacy Guarantees of LDP and CLDP
We study the resilience of LDP and CLDP against the prevalent threat model from Section 2.3. We are interested in keeping the MPC of CLDP equal to or lower than the MPC of LDP, so that we can ensure CLDP gives equal or better protection than LDP under this threat model. Another benefit of this analysis is to establish a link between the privacy budgets ε and α.
Question: Let U, d, and the prior Pr[v] be given. If there is an LDP protocol currently in place with privacy budget ε and we are interested in switching to CLDP, how should the value of α be selected to achieve equal or better protection than LDP according to the MPC threat model?
Answer: Based on the quantification of the adversary’s maximum (worst-case) posterior confidence in Equation 3, the requirement that the MPC of CLDP be less than or equal to that of LDP can be written as:

(4)  max_{y} max_{v ∈ U} ( Pr[Φ_EM(v) = y] · Pr[v] ) / ( Σ_{v′ ∈ U} Pr[Φ_EM(v′) = y] · Pr[v′] ) ≤ max_{y} max_{v ∈ U} ( Pr[Φ_LDP(v) = y] · Pr[v] ) / ( Σ_{v′ ∈ U} Pr[Φ_LDP(v′) = y] · Pr[v′] )

where Φ_LDP denotes LDP perturbation and Φ_EM denotes CLDP perturbation. Using ε, Pr[v], and U, we can compute the right-hand side, and then search for the largest α such that the left-hand side remains smaller than the right-hand side, by iteratively incrementing α and recomputing the left-hand side.
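This search can be sketched end to end: compute the MPC of ε-LDP (instantiated here with GRR), then increment α until the MPC of the EM-based CLDP channel would exceed it. The function names and the step size are our own:

```python
import math

def mpc_of(channel, prior):
    """Maximum posterior confidence of a perturbation channel,
    where channel[v][y] = Pr[output y | true value v]."""
    best = 0.0
    for y in range(len(channel[0])):
        denom = sum(channel[v][y] * prior[v] for v in range(len(channel)))
        num = max(channel[v][y] * prior[v] for v in range(len(channel)))
        best = max(best, num / denom)
    return best

def grr_channel(k, eps):
    """Channel matrix of eps-LDP via Generalized Randomized Response."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = 1.0 / (math.exp(eps) + k - 1)
    return [[p if y == v else q for y in range(k)] for v in range(k)]

def em_channel(k, alpha):
    """Channel matrix of the EM over integers 0..k-1 with |.| distance."""
    rows = []
    for v in range(k):
        w = [math.exp(-alpha * abs(v - y) / 2) for y in range(k)]
        s = sum(w)
        rows.append([x / s for x in w])
    return rows

def largest_alpha(k, eps, prior, step=0.01, max_alpha=50.0):
    """Largest alpha whose EM-CLDP MPC stays at or below the eps-LDP MPC."""
    target = mpc_of(grr_channel(k, eps), prior)
    alpha = step
    while alpha < max_alpha and mpc_of(em_channel(k, alpha), prior) <= target:
        alpha += step
    return alpha - step
```

Consistent with the discussion below, the α returned is typically much smaller than the ε it replaces.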
Practical Analysis: To demonstrate the practicality of the relationship we establish above and to derive insights, we solve Equation 4 under three example settings. In all settings, we assume U is a set of consecutive integers and d measures the absolute value distance between two integers. We use Uniform, Gaussian, and Exponential distributions for the prior Pr[v], with the corresponding distribution parameters given in Figure 2. Our rationale is that each distribution represents a different type of skewness: Uniform has no skewness, Gaussian is symmetric around the mean, and Exponential has positive skew. Users’ secrets are samples from these distributions rounded to the nearest integer. Distribution parameters are chosen so that users’ secrets fall within the universe U with non-negligible tail probabilities. In Figure 2, we provide the results of our study for a wide range of ε values used in the literature: 0.25 ≤ ε ≤ 4. Note that this figure is obtained purely by solving Equation 4, and does not require a real simulation or execution of the protocol involving end clients, i.e., the solution can be computed at setup time, before any data collection occurs.

Figure 2 allows us to derive several interesting insights. The first important conclusion is that ε and α are positively correlated: as the privacy requirement of LDP is relaxed, we can also relax that of CLDP. Second, it is often the case that α is much smaller than ε. To make LDP and CLDP give equivalent protection, and to compensate for the decreasing indistinguishability of distant items under CLDP, we have to use a much smaller α in CLDP compared to the ε of LDP. Third, we observe that the relationship between ε and α depends on the prior Pr[v]; e.g., if the data follows a Uniform distribution, CLDP must use a stricter α than for the other two distributions. The reason is that there are two determining factors in calculating adversarial confidence: the prior Pr[v] and the perturbation probabilities. For skewed distributions, the prior becomes the dominating factor, and since the same prior is shared by LDP and CLDP, the behavior of the perturbation functions has relatively less impact on adversarial confidence. In contrast, for the Uniform distribution, the perturbation probability becomes the dominating factor, and since its value is high for the tail ends of the domain in the case of CLDP, we must use a lower (stricter) α in CLDP to match the adversarial confidence of LDP.

In summary, for the adversarial threat model under consideration, given the parameters ε, Pr[v], and U under LDP, if we choose α according to the guideline of Equation 4, the maximum adversarial confidence under our CLDP protocols will be equal to or lower than that under LDP protocols. The analysis exemplified above can be applied at protocol setup time (before data collection) to convert ε to α.
4 CLDP Mechanisms and Protocols
In this section, we present protocols that can be used in practice to collect data while achieving CLDP. We present three protocols: OrdinalCLDP, ItemCLDP, and SequenceCLDP, to address different types of client data.
4.1 OrdinalCLDP for Ordinal Items
Our first protocol is OrdinalCLDP, which addresses data types that stem from finite metric spaces, i.e., U is discrete and finite, and there exists a built-in distance metric d. This setting covers a variety of useful data types: (i) discrete numeric or integer domains, where d can be the absolute value distance between two items; (ii) ordinal item domains with a total order, e.g., letters and strings ordered by dictionary order A < B < C < …; and (iii) categorical domains with a tree-structured domain taxonomy, where the distance between two items can be measured using the depth of their most recent common ancestor in the taxonomy tree [26, 27]. In these scenarios, item order and d are naturally defined and enforced.
In OrdinalCLDP, each client locally applies the Exponential Mechanism (EM) implementation shown in Algorithm 1, and uploads the perturbed output to the server. Notice that U, d, α, and the user’s true value v are all inputs to the algorithm. The algorithm’s output is sent to the collector, thereby concluding the protocol in a single round without blocking.
4.2 ItemCLDP for NonOrdinal Items
Our second protocol is ItemCLDP, in which each user still holds a singleton true item, but the items come from an arbitrary U with no predefined or total order. For example, if U consists of OS names, an order such as MacOS < Ubuntu < Windows is neither available nor initially justifiable. This non-ordinal item setting has been assumed in recent LDP research for finding popular emojis, emerging slang terms, popular and anomalous browser homepages, and merchant transactions [6, 5, 2, 3, 20]. We propose ItemCLDP in a generic way to maximize its scope and cover such existing cases. Parallel to previous works, our goal is to uphold relative item frequencies in order to learn popularity histograms and discover heavy hitters. To this end, we propose that a desirable perturbation strategy should replace a popular item with another popular item, and an uncommon item with another uncommon item. This achieves our goal of upholding relative item frequencies, as the expected behavior (conceptually) is that popular items and uncommon items will be shuffled among themselves, and relative frequencies will be preserved.
The proposed ItemCLDP protocol is given in Figure 3. In ItemCLDP the server communicates with each client twice, hence the protocol consists of two rounds. The first round contains Steps 1-3 and the second round contains Steps 3-5. The server executes the first round with each client in parallel (without blocking). At the end of the first round, the server performs the aggregation and denoising step (Step 3). Then, the server executes the second round of communication. Next, we explain each step in detail.
Step 1. When the protocol starts, the server knows the universe U and each client has a true value v ∈ U. The value of the privacy budget α and the budget allocation parameter β, 0 ≤ β ≤ 1, can be publicly known. (The role of β will be explained later.) A random total order is constructed among all items in U, which induces the initial distance function d. The server advertises this order and d to all clients.
Step 2. Each client runs Algorithm 1 with budget β · α to obtain a perturbed value y locally on their device. Then, the clients send their y values to the server.
Step 3. Due to the utility-unaware choice of the total order in Step 1, the absolute item frequencies discovered at this step contain significant error. A second round is desirable to reduce this error. We found that although the server does not accurately learn absolute item frequencies by the end of Step 2, it can learn the frequency ranking of items after applying a denoising strategy. A key aspect is how denoising is performed. Let v be an item, let C_true(v) be the true count of v in the population, which we are trying to find, and let C_obs(v) denote the observed count of v following the first round of client-server communication (i.e., by the beginning of Step 3). The following holds in expectation:

E[ C_obs(v) ] = Σ_{v′ ∈ U} C_true(v′) · Pr[Φ_EM(v′) = v]

Our goal is to solve for C_true(v), but we cannot do so directly since C_true(v′) for v′ ≠ v is also unknown. Hence, in our denoising strategy we make the heuristic decision of plugging C_obs in place of C_true, thereby obtaining the estimate Ĉ(v) as follows:

Ĉ(v) = ( C_obs(v) − Σ_{v′ ≠ v} C_obs(v′) · Pr[Φ_EM(v′) = v] ) / Pr[Φ_EM(v) = v]

We apply this to all v ∈ U and rank the items according to their Ĉ(v). The distance function d is updated to reflect this new ranking instead of the original order from Step 1.
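The denoising heuristic of Step 3 can be sketched as follows, writing `observed` for the first-round counts and assuming the EM probability form of Section 3.1 (the names are ours):

```python
import math

def denoise_and_rank(observed, universe, dist, alpha):
    """Plug the observed counts in place of the unknown true counts to
    obtain rough estimates, used only to *rank* items by frequency."""
    # EM transition probabilities under the round-1 distance function
    prob = {}
    for v in universe:
        w = {y: math.exp(-alpha * dist(v, y) / 2) for y in universe}
        s = sum(w.values())
        prob[v] = {y: w[y] / s for y in universe}
    est = {}
    for v in universe:
        cross = sum(observed[v2] * prob[v2][v] for v2 in universe if v2 != v)
        est[v] = (observed[v] - cross) / prob[v][v]
    # items sorted from most to least frequent according to the estimates
    return sorted(universe, key=lambda v: est[v], reverse=True), est
```

The estimates themselves can be noisy (even negative), but their relative order is what the second round's distance function needs.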
Step 4. After the clients receive the updated d, each client runs Algorithm 1 again to obtain a new perturbed value y′ and sends y′ to the server. This invocation of the algorithm is with budget (1 − β) · α.
Step 5. Upon receiving the y′ values from all clients, the server aggregates all results and obtains the final frequency estimates.
Role of parameter β. Since ItemCLDP is a two-round protocol, each client sends perturbed information twice. Hence, we need to quantify the total privacy disclosure by the end of the two rounds. We introduce the parameter β to control the amount of disclosure. β takes values between 0 and 1, and determines how the CLDP privacy budget α is allocated to the two rounds of ItemCLDP: the first round satisfies CLDP with budget β · α and the second round satisfies CLDP with budget (1 − β) · α, so it is easy to show that ItemCLDP as a whole satisfies α-CLDP for any two possible inputs v1, v2 of a user. We choose the value of β by finding which β yields the minimum frequency estimation error by the end of the second round of ItemCLDP. According to our experiments with different values of β, allocating the larger share of the budget to the first round often gives the best results, which indicates that finding an accurate preliminary ranking in the first round of ItemCLDP is indeed important for obtaining a good final result.
4.3 SequenceCLDP for Item Sequences
In OrdinalCLDP and ItemCLDP, each user reports a single item. We now study the case where each user reports a collection of items. We give our SequenceCLDP protocol assuming this collection forms a sequence, and later show the applicability of SequenceCLDP to set-valued data. Sequential data arises naturally in many domains, including cybersecurity (log files), genomics (DNA sequences), web browsing histories, and mobility traces; thus, a protocol for privacy-preserving collection of item sequences holds great practical value. We denote by S a user’s true sequence, and by S[i] the i’th element in S. Each element is an item from universe U. We assume the distance metric d between individual items is known a priori, e.g., for ordinal U we can use the built-in d as in OrdinalCLDP; otherwise, we can infer d using a process similar to the first round of ItemCLDP. We measure the distance between two equal-length sequences as the sum of item-wise distances:

d(S1, S2) = Σ_{i=1}^{|S1|} d(S1[i], S2[i])
In SequenceCLDP, each client runs the sequence randomization procedure given in Algorithm 2 to locally perturb their S. The procedure has two probability parameters, 0 ≤ p_halt, p_gen ≤ 1, and a length parameter max_len denoting the maximum sequence length allowed. Given true sequence S, the algorithm returns a perturbed sequence S′. Our goal in SequenceCLDP is to hide two complementary types of information: the length of S and the contents of S. For example, let S consist of a sequence of security events observed on a machine. Hiding the length of S is useful because it prevents the adversary from learning that many security events were observed on this machine, and hence that the machine is probably infected. Hiding the contents of S is useful because it prevents the adversary from learning which security events were observed, hence the adversary cannot infer which types of problems exist on the machine, which attacks were successful, and so forth. Denoting SequenceCLDP by Φ_SC, we formalize these privacy properties as follows.
Definition 3.
Let P_len(Φ_SC, S, L) denote the probability that Φ_SC produces a perturbed sequence of length L given input sequence S. We say that Φ_SC satisfies length-indistinguishability if for any pair of true sequences S1, S2:

P_len(Φ_SC, S1, L) ≤ e^{α · | |S1| − |S2| |} · P_len(Φ_SC, S2, L)
Definition 4.
Let Pr[Φ_SC(S) = S′] denote the probability that Φ_SC produces perturbed sequence S′ given input sequence S. We say that Φ_SC satisfies content-indistinguishability if, for any pair of true sequences S1, S2 of the same length, it holds that:

Pr[Φ_SC(S1) = S′] ≤ e^{α · d(S1, S2)} · Pr[Φ_SC(S2) = S′]
Definition 3 states that an adversary observing the length of the output sequence should not be able to infer the length of the user’s true sequence with high confidence. Definition 4 states that an adversary observing the contents of the output sequence should not be able to infer the contents of the user’s true sequence with high confidence. If Φ_SC simultaneously satisfies both definitions, we can conclude that it successfully hides both the length and the contents of a user’s true sequence. Note that both guarantees are adaptations of the CLDP notion, hence the degree of privacy protection is controlled by the CLDP privacy parameter α.
Theorem 2.
Algorithm 2 satisfies length-indistinguishability and content-indistinguishability simultaneously if p_halt and p_gen are selected either symmetrically as:
or asymmetrically within the ranges:
Proof.
Provided in Appendix A. ∎
It can be observed from Algorithm 2 that a high p_halt causes the algorithm to terminate early for a long sequence, causing the perturbed sequence S′ to be much shorter than S. A high p_gen adds random items to S′, causing S′ to be much longer than S; in addition, since the added items are sampled uniformly at random, S′ will contain bogus elements. From a pure utility perspective, simultaneously decreasing p_halt and p_gen yields higher sequence utility. However, Theorem 2 places bounds on the values of p_halt and p_gen; we cannot decrease them arbitrarily, otherwise we will not satisfy the indistinguishability properties. Among the given choices, the asymmetric parameter choice is preferable when we expect users’ true sequences to be long, since it assigns a lower halting probability than the symmetric case, thereby decreasing the probability that the algorithm terminates early. This comes at the cost of an increased p_gen, which implies that the asymmetric case is more likely to add synthetic elements to S′. Since this is detrimental to utility especially when users’ true sequences are short, we recommend the symmetric parameter choice for short sequences.
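Since Algorithm 2 is not reproduced in this excerpt, the following is only an illustrative randomizer consistent with the behavior described above (halt early with probability p_halt, inject a uniformly random item with probability p_gen, otherwise copy the next true item through an EM-style perturbation); it is a sketch, not the paper's exact algorithm:

```python
import math
import random

def perturb_sequence(seq, universe, dist, alpha, p_halt, p_gen,
                     max_len, rng=random):
    """Illustrative sequence randomizer: each step either halts,
    injects a bogus item, or emits an EM-perturbed copy of the next
    true element, hiding both length and content."""
    out = []
    i = 0
    while len(out) < max_len:
        r = rng.random()
        if r < p_halt:
            break                              # early halt hides true length
        if r < p_halt + p_gen:
            out.append(rng.choice(universe))   # bogus item hides length/content
            continue
        if i >= len(seq):
            break                              # true sequence exhausted
        # copy the next true item, perturbed through the EM building block
        w = [math.exp(-alpha * dist(seq[i], y) / 2) for y in universe]
        t = rng.random() * sum(w)
        acc = 0.0
        for y, wy in zip(universe, w):
            acc += wy
            if t <= acc:
                out.append(y)
                break
        i += 1
    return out
```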
Application to set-valued data. Although SequenceCLDP is designed for sequences, it can be applied to set-valued data without information loss as follows. First, each user enforces a random ordering among the items in their itemset, to convert the itemset to a sequence. Second, the user runs SequenceCLDP on this converted sequence to obtain a perturbed sequence. Third, the user removes the ordering from the perturbed sequence to obtain a perturbed itemset. Finally, the perturbed itemset is sent to the server. On the other hand, we cannot use existing set-valued LDP protocols [13, 15] on sequences without losing their sequentiality (ordering) aspect. Hence, we believe SequenceCLDP has wider applicability than existing set-valued protocols.
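The three-step set-valued adaptation can be expressed compactly (a sketch; `sequence_cldp` is a placeholder for any sequence perturbation function such as SequenceCLDP):

```python
import random

def setvalued_cldp(itemset, sequence_cldp):
    """Apply a sequence perturbation protocol to set-valued data.

    1. Impose a random order on the itemset (set -> sequence).
    2. Perturb the resulting sequence (e.g., with SequenceCLDP).
    3. Drop the ordering (sequence -> set) and report the result.
    """
    seq = list(itemset)
    random.shuffle(seq)                  # step 1: random ordering
    perturbed_seq = sequence_cldp(seq)   # step 2: sequence perturbation
    return set(perturbed_seq)            # step 3: remove ordering

# Usage: plug in any sequence perturbation; the identity function is used
# here only to show the set -> sequence -> set round trip.
report = setvalued_cldp({"a", "b", "c"}, lambda s: s)
```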
5 Experimental Evaluation
We compare our proposed CLDP protocols against existing LDP protocols on real-world cybersecurity datasets provided by Symantec as well as on public datasets. In the singleton item comparison, we use RAPPOR (proposed and deployed by Google [2]) and OLH (a recent protocol with improved utility over prior works [3]). In the set-valued setting, we use SVIM as the current state-of-the-art LDP protocol [15]. In each experimental dataset and setting, given the LDP privacy parameter ε, we freshly execute Equation 4 and the process in Section 3.2 to obtain the appropriate CLDP parameter α, ensuring a fair comparison under each individual setting.
Experiment Summary and Highlights. In Section 5.1, we consider cybersecurity use cases that reflect the limitations of existing LDP protocols, e.g., user populations are small and sequential datasets cannot be handled. In these experiments, our results show that LDP protocols do not yield sufficient utility while satisfying a desirable level of privacy (e.g., for ε = 1). In contrast, our CLDP protocols offer satisfactory utility in most cases; hence their use in the corresponding security products is practical and preferable. In Section 5.2, we experiment on public datasets, which differ from the above in that they do not reflect the limitations of LDP protocols, e.g., user populations are sufficiently large (over half a million). Even so, we show that our CLDP protocols perform at least comparably to, and in many cases better than, the existing LDP protocols. A particularly interesting result is that while LDP protocols are capable of finding the frequencies and ranking of the heaviest hitter items with good accuracy (e.g., the top 5-10% of the universe), CLDP protocols' accuracy is similar for these few heavy hitters, but they significantly outperform LDP protocols for the remaining items (medium-frequency and infrequent items).
5.1 Case Studies with Cybersecurity Datasets
Cybersecurity datasets provided by Symantec allowed us to test the accuracy of our protocols on pertinent real-world use cases and assess their practical applicability. We note that certain details, such as the total number of infected machines, are omitted on purpose for confidentiality reasons.
Case Study #1: Ransomware Outbreak Detection
Setup. We consider the case where Symantec collects malware reports from machines running its anti-malware protection software. Each machine sends a locally private malware report to Symantec daily, containing the count of malware-related events observed on that machine during that day. Privacy is injected into malware reports by modifying the actual counts with LDP/CLDP.
For our experiments, we obtained the daily infection counts of two ransomware variants within those time periods in which we already know there were global outbreaks. Specifically, we considered the infections reported for Cerber between March 22 and April 21 in 2017 with the outbreak happening on April 6, and those reported for Locky between February 11 and March 13 in 2018 with the outbreak happening on February 26. We evaluated how accurately the total number of daily infections for these two ransomware variants can be estimated using our OrdinalCLDP approach versus LDP approaches RAPPOR and OLH. The goal of our experiment is to retroactively test whether LDP/CLDP could identify if and when a ransomware outbreak happened.
Results. We illustrate the results in Figures 4 and 5 for Cerber and Locky, respectively. If we use RAPPOR or OLH to perform detection, we obtain many false positives (days on which RAPPOR/OLH claim there was an outbreak, but in fact there was not) and false negatives (days on which RAPPOR/OLH claim there was no outbreak, but in fact there was). Some important examples are marked on the graphs, e.g., in Figure 4, RAPPOR raises false positives on March 23-24 as well as April 12-13. In addition, OLH misses the onset of the outbreak happening on April 5 by reporting 0 observed infections whereas in reality there are 16,388 infections.
False positives are costly to Symantec since they cause the company to devote resources and response teams to combat a malware outbreak that does not exist. False negatives are also costly since Symantec will not react to the malware outbreak in a timely manner, losing customer trust. Observing this many false positives and false negatives with LDP methods raises serious concerns. In contrast, using our OrdinalCLDP protocol, Symantec can obtain daily infection counts with high accuracy. Note that in Figures 4 and 5, there are small discrepancies between the actual infections versus CLDP’s predicted infections, which demonstrates that CLDP is not errorfree. However, contrary to LDP protocols, our CLDP protocols can be used to detect ransomware outbreaks in a privacypreserving manner without major false positives or negatives.
Case Study #2: Ransomware Vulnerability Analysis
Setup. Next, we ask the question: Can we find which operating systems were most infected by ransomware? This would assist Symantec in discovering vulnerable or targeted OSs. When performing this analysis, we focus specifically on the day of outbreak (April 6, 2017 for Cerber and February 26, 2018 for Locky) and the machines reporting infections on this day. We assume Symantec obtains a locally private malware report from these machines including the vendor, specs, and OS version. Upon collecting reports from all machines, Symantec infers how frequently each OS was infected in the population, and ranks OSs in terms of infection frequency.
We conduct two experiments. In the first experiment, we find the actual (non-private) infection frequency of each OS, and compare the actual frequencies with the frequencies that would be obtained if RAPPOR, OLH, or ItemCLDP were applied, using L1 distance as the measurement of error. We vary ε between 0.5 and 4. In the second experiment, we fix ε = 1, rank the OSs in terms of infection frequency (highest to lowest), and study the top-10 most infected OSs. Due to ethical considerations, we anonymize OS names by renaming them according to their actual rank, e.g., the top-ranked OS is named os1, the 2nd ranked OS is named os2, and so forth. If a lower-ranked OS is a different version of a higher-ranked OS, we add the version information to the name, e.g., os1 v2.
Results. We report the results of these experiments on Cerber and Locky in Figures 6 and 7, respectively. In the tables on the right, OSs that are correctly discovered by RAPPOR, OLH, and ItemCLDP with correct ranks are depicted in bold. OSs that are correctly discovered but have incorrect rank are depicted in regular font. OSs that the privacy methods claim to be among the top-10 but in reality are not are depicted with strikethrough. We first observe from the tables that the heaviest hitters are correctly discovered in the correct order by all privacy solutions, e.g., the top-3 in Cerber. However, as we move lower in the ranking, LDP/CLDP methods start making errors. Particularly for Cerber, RAPPOR and OLH correctly identify only 4 and 5 out of the 10 most frequent OSs, respectively, whereas ItemCLDP can identify 9 out of 10. Note that ItemCLDP misses only the lowest ranked OS (10th), which is arguably the least significant among all ten.
L1 errors in the graphs on the left show that when ε is small (0.5 or 1), LDP is competitive against CLDP in this case study. When ε is higher, CLDP clearly dominates in terms of accuracy. Comparing the tabular rankings with the L1 scores, we see that CLDP can preserve relative rankings even when its L1 errors are similar to those of LDP. For example, the L1 errors of RAPPOR, OLH, and ItemCLDP are similar when ε = 1. However, studying the top-10 tables shows that ItemCLDP is better at identifying frequent OSs than RAPPOR and OLH.
Case Study #3: Inspecting Suspicious Activity
Setup. In this case study, we consider the sequences of security-related event flags raised by Symantec's behavioral detection engine on each client machine. There are 143 different flags signalling various forms of suspicious activity, ranging from process injection to load point modification. When a flag is raised, it is logged on the client machine with a timestamp; as such, the collection of flagged events constitutes a sequence over a time period. We investigate the accuracy of collecting these event sequences using SequenceCLDP. We focus on the same 31-day periods we considered in Case Study #1, and collect locally private event sequences from the same set of machines infected by the Cerber/Locky ransomware. Longitudinal analysis of these event sequences enables Symantec to inspect suspicious activities possibly related to the ransomware infection, e.g., a chain of anomalous events leading to the infection. This helps in inferring the precursors or consequences of the infection, and Symantec can update its detection engine based on the findings. In total, we have 23,558 and 5,717 sequences for Cerber and Locky, respectively, with lengths between 2 and 30.
We use n-gram analysis by mining the top-k popular bigram and trigram patterns from the sequences [28, 9]. We mine the actual patterns that would be obtained if no privacy were applied, and the patterns obtained after SequenceCLDP is applied. Let T_k denote the set of actual top-k patterns and T̂_k denote the set of top-k patterns mined from perturbed data. We measure their similarity using the Jaccard index: J(T_k, T̂_k) = |T_k ∩ T̂_k| / |T_k ∪ T̂_k|. Jaccard similarity is between 0 and 1, with values close to 1 indicating higher similarity. We do not compare against RAPPOR, OLH, or SVIM in this case study, since they are not compatible with sequence perturbation.

Results. The results are shown in Figure 8. We make two important observations. First, as we relax the privacy requirement by increasing α, the n-grams mined from perturbed sequences become more accurate, as implied by the increase in Jaccard similarity. Second, mining fewer top-k patterns is generally easier than mining many patterns. For example, top-20 in Locky has a higher Jaccard similarity score than top-30 and top-50. A similar observation applies to Cerber. This shows that SequenceCLDP preserves the heaviest hitters best, and has a higher probability of making errors as n-grams become less and less frequent, which agrees with our intuition from Case Study #2. Note that we mine more patterns in the case of Cerber (up to top-200 as opposed to top-50 for Locky) since the Cerber dataset has more input sequences, thus we can find more n-grams with significant support and confidence.
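The top-k n-gram mining and Jaccard comparison used in this case study can be reproduced with small helpers (a sketch; function names are our own):

```python
from collections import Counter

def top_k_ngrams(sequences, n, k):
    """Mine the k most frequent n-gram patterns from a list of sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    return {gram for gram, _ in counts.most_common(k)}

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two pattern sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Example: similarity of top-2 bigram sets mined from "actual" vs
# "perturbed" sequences (toy data, not from the paper's datasets).
actual = [[1, 2, 3, 2, 3], [2, 3, 4]]
perturbed = [[1, 2, 3], [2, 4, 4]]
sim = jaccard(top_k_ngrams(actual, 2, 2), top_k_ngrams(perturbed, 2, 2))
```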
5.2 Experiments with Public Datasets
Datasets. We also experimented on two public datasets: POS and Retail. Both are set-valued datasets. We use them to run singleton non-ordinal item experiments as well as set-valued experiments. For the former, we randomly sample an item from each itemset to create a singleton item dataset. For the latter, we run the set-valued adaptation of our SequenceCLDP protocol and compare it against SVIM, the state-of-the-art set-valued LDP protocol [15].
POS contains several years of market basket sale data from a large electronics retailer [29]. It consists of a total of 515,596 transactions with 1,657 unique items sold.
Retail contains transactions occurring between January 2010 and September 2011 for a UK-based online retail site [30]. After cleaning empty entries, this dataset consists of a total of 540,455 transactions with 2,603 unique items.
Evaluation Metrics. We use the following metrics to evaluate the accuracy of the privacy protocols. Similar to our notation from Section 4.2, let x denote an item, f(x) denote its true frequency, and f̂(x) denote its frequency estimated by the privacy protocol. Let T_k = {t_1, …, t_k} be the ground truth top-k items, where t_j is the j'th most frequent item.
Average Relative Error (AvRE) measures the mean relative error in the top-k items' estimated frequencies versus their true frequencies. Formally:

AvRE = (1/k) · Σ_{j=1}^{k} |f̂(t_j) − f(t_j)| / f(t_j)
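A direct computation of this metric (a sketch; the function name and dictionary-based frequency representation are our own choices):

```python
def average_relative_error(true_freq, est_freq, top_k_items):
    """AvRE: mean relative error over the ground-truth top-k items.

    true_freq / est_freq map items to their true / estimated frequencies;
    top_k_items lists the ground-truth top-k items t_1, ..., t_k.
    """
    return sum(
        abs(est_freq.get(t, 0) - true_freq[t]) / true_freq[t]
        for t in top_k_items
    ) / len(top_k_items)

# Example: relative errors 0.1 and 0.2 average to 0.15.
true_f = {"a": 100, "b": 50, "c": 10}
est_f = {"a": 90, "b": 60, "c": 10}
err = average_relative_error(true_f, est_f, ["a", "b"])
```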
The Kendall-tau coefficient (KT) measures how well the rankings of the heavy hitter top-k items are preserved. A pair of items t_i, t_j is said to be concordant if their sorted popularity ranks agree, i.e., either of the following holds: f(t_i) > f(t_j) and f̂(t_i) > f̂(t_j), or f(t_i) < f(t_j) and f̂(t_i) < f̂(t_j). They are said to be discordant if neither holds. Then, the Kendall-tau coefficient of correlation can be defined as:

KT = (#concordant pairs − #discordant pairs) / (k(k−1)/2)
Results of Singleton Experiments. We run experiments in the singleton setting and compare ItemCLDP with the LDP protocols RAPPOR and OLH. Each experiment is repeated 20 times and results are averaged. In Figure 9, we measure AvRE and Kendall-tau across all items by setting k equal to the universe size. Results show that as privacy is relaxed (i.e., as ε and α increase), AvRE decreases and Kendall-tau increases. In most cases, ItemCLDP provides better accuracy than the LDP protocols. Most noticeably, the Kendall-tau scores of ItemCLDP are much higher than those of the LDP protocols. When ε ≥ 3.5, Kendall-tau scores indicate almost perfect correlation between actual item rankings and the rankings found by ItemCLDP, confirming its accuracy benefit.
Next, we fix the privacy parameter ε to 2.5 and vary k (for top-k) to analyze how the protocols behave with respect to varying popularities of items. The results of this experiment are reported in Figure 10. For small k such as k = 64, there is usually one LDP protocol at least comparable to or better than ItemCLDP. Note that k = 64 is a constrained setting, covering less than 6% of the items in the universe. LDP protocols are optimized to discover such heavy hitters and therefore deliver good results when k is small. However, for larger k, we observe that ItemCLDP can significantly outperform LDP. In particular, for k ≥ 512, under the LDP protocols there is almost no correlation in frequency rankings (implied by Kendall-tau results near or below 0), whereas with ItemCLDP, a strong correlation is maintained across all k. In short, if the goal is to discover only the top few heavy hitters, LDP protocols offer sufficient accuracy. However, if the goal is to find statistics regarding medium-frequency or infrequent items as well, LDP protocols have inadequate accuracy and we recommend using ItemCLDP.
Results of Set-Valued Experiments. We compare the set-valued adaptation of SequenceCLDP against the SVIM protocol satisfying LDP [15]. Similar to the singleton experiment, we start by setting k equal to the universe size and varying ε to study the impact of the privacy budget on accuracy across all items. From the results in Figure 11, we observe that SequenceCLDP offers significant accuracy improvement in terms of both AvRE and Kendall-tau score. For example, the accuracy improvement in terms of AvRE ranges between 30-90%, depending on the dataset and the value of the privacy parameter. In Figure 12, we fix ε to 2.5 and vary k. As we increase k, the accuracy of the protocols generally decreases, since making estimations regarding infrequent items is often more difficult than estimating only the heavy hitters. For SequenceCLDP, this accuracy decrease is linear or sublinear; but for SVIM, when k ≥ 512, errors start increasing almost exponentially, reiterating that LDP protocols can be poorly suited to estimating statistics regarding infrequent items. Studying the Kendall-tau scores, for very small k the scores of SVIM and SequenceCLDP are similar, whereas when k ≥ 128, SequenceCLDP's Kendall-tau scores are significantly better.
6 Related Work
Differential privacy was initially proposed in the centralized setting in which a trusted central data collector possesses a database containing clients’ true values, and noise is applied on the database or queries executed on the database instead of each client’s individual value [31, 18]. In contrast, in LDP, each client locally perturbs their data on their device before sending the perturbed version to the data collector [1]. The local setting has seen practical realworld deployment, including Google’s RAPPOR as a Chrome extension [2, 4], Apple’s use of LDP for spelling prediction and emoji frequency detection [6, 5], and Microsoft’s collection of application telemetry [7].
Local differential privacy has also sparked interest from the academic community. There have been several theoretical treatments for finding upper and lower bounds on the accuracy and utility of LDP [1, 32, 19, 33, 34]. From a more practical perspective, Wang et al. [3] showed the optimality of OLH for singleton item frequency estimation. Qin et al. [13] and Wang et al. [15] studied frequent item and itemset mining from set-valued client data. Cormode et al. [35] and Zhang et al. [36] studied the problem of obtaining marginal tables from high-dimensional data. Recently, LDP was considered in the contexts of geolocations [21], decentralized social graphs [14], and discovering emerging terms from text [12].

However, there have also been criticisms and concerns regarding the utility of LDP, which motivated recent works proposing relaxations or alternatives to LDP. BLENDER [37] proposed a hybrid privacy model in which only a subset of users enjoy LDP, whereas the remaining users act as opt-in beta testers who receive the guarantees of centralized DP. In contrast, our work stays purely in the local privacy model, requiring neither a trusted data collector (necessary in centralized or hybrid DP) nor opt-in clients. Personalized LDP, a weaker form of LDP, was proposed for spatial data aggregation in [21]; the Restricted LDP scheme proposed in [38] treats certain client data as more sensitive than others and suggests restricted perturbation schemes to specifically address the more sensitive data. In contrast, our CLDP approach treats all users' data as sensitive (in parallel with LDP assumptions), remains agnostic and extensible with respect to data types, and gives protection as strong as LDP under LDP's threat model.
7 Conclusion
In this paper, we proposed Condensed Local Differential Privacy (CLDP) for utility-aware and privacy-preserving data collection, and developed three protocols: OrdinalCLDP, ItemCLDP, and SequenceCLDP. Our protocols have the desirable property of remaining accurate for populations that are orders of magnitude smaller than those required by existing LDP protocols to give adequate accuracy. Furthermore, our protocols handle a variety of data types prevalent in the cybersecurity domain, including item sequences. Our Symantec case studies and experiments on public datasets show that the proposed CLDP protocols offer significant accuracy improvement over existing LDP protocols.
References
 [1] J. C. Duchi, M. I. Jordan, and M. J. Wainwright, “Local privacy and statistical minimax rates,” in 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2013, pp. 429–438.
 [2] Ú. Erlingsson, V. Pihur, and A. Korolova, “Rappor: Randomized aggregatable privacy-preserving ordinal response,” in Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2014, pp. 1054–1067.
 [3] T. Wang, J. Blocki, N. Li, and S. Jha, “Locally differentially private protocols for frequency estimation,” in Proc. of the 26th USENIX Security Symposium, 2017, pp. 729–745.
 [4] G. Fanti, V. Pihur, and Ú. Erlingsson, “Building a rappor with the unknown: Privacy-preserving learning of associations and data dictionaries,” Proceedings on Privacy Enhancing Technologies, vol. 2016, no. 3, pp. 41–61, 2016.
 [5] A. G. Thakurta, A. H. Vyrros, U. S. Vaishampayan, G. Kapoor, J. Freudiger, V. R. Sridhar, and D. Davidson, “Learning new words,” Mar. 14 2017, U.S. Patent 9,594,741.
 [6] A. G. Thakurta, A. H. Vyrros, U. S. Vaishampayan, G. Kapoor, J. Freudinger, V. V. Prakash, A. Legendre, and S. Duplinsky, “Emoji frequency detection and deep link frequency,” Dec. 14 2017, U.S. Patent App. 15/640,266.
 [7] B. Ding, J. Kulkarni, and S. Yekhanin, “Collecting telemetry data privately,” in Advances in Neural Information Processing Systems, 2017, pp. 3571–3580.
 [8] D. H. Chau, C. Nachenberg, J. Wilhelm, A. Wright, and C. Faloutsos, “Polonium: Tera-scale graph mining and inference for malware detection,” in Proceedings of the 2011 SIAM International Conference on Data Mining. SIAM, 2011, pp. 131–142.
 [9] N. Karampatziakis, J. W. Stokes, A. Thomas, and M. Marinescu, “Using file relationships in malware classification,” in International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment. Springer, 2012, pp. 1–20.
 [10] A. Tamersoy, K. Roundy, and D. H. Chau, “Guilt by association: large scale malware detection by mining file-relation graphs,” in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2014, pp. 1524–1533.
 [11] M. Egele, T. Scholte, E. Kirda, and C. Kruegel, “A survey on automated dynamic malware-analysis techniques and tools,” ACM Computing Surveys (CSUR), vol. 44, no. 2, p. 6, 2012.
 [12] N. Wang, X. Xiao, Y. Yang, T. D. Hoang, H. Shin, J. Shin, and G. Yu, “Privtrie: Effective frequent term discovery under local differential privacy,” in IEEE International Conference on Data Engineering (ICDE), 2018.
 [13] Z. Qin, Y. Yang, T. Yu, I. Khalil, X. Xiao, and K. Ren, “Heavy hitter estimation over set-valued data with local differential privacy,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016, pp. 192–203.
 [14] Z. Qin, T. Yu, Y. Yang, I. Khalil, X. Xiao, and K. Ren, “Generating synthetic decentralized social graphs with local differential privacy,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017, pp. 425–438.
 [15] T. Wang, N. Li, and S. Jha, “Locally differentially private frequent itemset mining,” in IEEE Symposium on Security and Privacy (SP). IEEE, 2018.
 [16] F. McSherry and K. Talwar, “Mechanism design via differential privacy,” in 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS). IEEE, 2007, pp. 94–103.
 [17] S. L. Warner, “Randomized response: A survey technique for eliminating evasive answer bias,” Journal of the American Statistical Association, vol. 60, no. 309, pp. 63–69, 1965.
 [18] A. Inan, M. E. Gursoy, and Y. Saygin, “Sensitivity analysis for noninteractive differential privacy: bounds and efficient algorithms,” IEEE Transactions on Dependable and Secure Computing, 2017.

 [19] R. Bassily and A. Smith, “Local, private, efficient protocols for succinct histograms,” in Proceedings of the 47th Annual ACM Symposium on Theory of Computing. ACM, 2015, pp. 127–135.
 [20] R. Bassily, U. Stemmer, A. G. Thakurta et al., “Practical locally private heavy hitters,” in Advances in Neural Information Processing Systems, 2017, pp. 2288–2296.
 [21] R. Chen, H. Li, A. Qin, S. P. Kasiviswanathan, and H. Jin, “Private spatial data aggregation in the local setting,” in 2016 IEEE 32nd International Conference on Data Engineering (ICDE). IEEE, 2016, pp. 289–300.
 [22] K. Chatzikokolakis, M. E. Andres, N. E. Bordenabe, and C. Palamidessi, “Broadening the scope of differential privacy using metrics,” in International Symposium on Privacy Enhancing Technologies (PETS). Springer, 2013, pp. 82–102.
 [23] M. Andres, N. Bordenabe, K. Chatzikokolakis, and C. Palamidessi, “Geo-indistinguishability: Differential privacy for location-based systems,” in Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2013, pp. 901–914.
 [24] N. Bordenabe, K. Chatzikokolakis, and C. Palamidessi, “Optimal geo-indistinguishable mechanisms for location privacy,” in Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2014, pp. 251–262.
 [25] M. Hay, C. Li, G. Miklau, and D. Jensen, “Accurate estimation of the degree distribution of private networks,” in 9th IEEE International Conference on Data Mining (ICDM). IEEE, 2009, pp. 169–178.
 [26] J. Soria-Comas, J. Domingo-Ferrer, D. Sánchez, and S. Martínez, “Enhancing data utility in differential privacy via microaggregation-based k-anonymity,” The VLDB Journal – The International Journal on Very Large Data Bases, vol. 23, no. 5, pp. 771–794, 2014.
 [27] D. Sánchez, M. Batet, D. Isern, and A. Valls, “Ontology-based semantic similarity: A new feature-based approach,” Expert Systems with Applications, vol. 39, no. 9, pp. 7718–7728, 2012.

 [28] S. B. Mehdi, A. K. Tanwani, and M. Farooq, “IMAD: In-execution malware analysis and detection,” in Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation. ACM, 2009, pp. 1553–1560.
 [29] “POS dataset,” https://github.com/cpearce/HARM/blob/master/datasets/BMSPOS.csv, 2015, [Online; accessed 05-Oct-2018].
 [30] D. Chen, S. L. Sain, and K. Guo, “Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining,” Journal of Database Marketing & Customer Strategy Management, vol. 19, no. 3, pp. 197–208, 2012.
 [31] C. Dwork, “Differential privacy,” in International Colloquium on Automata, Languages, and Programming. Springer, 2006, pp. 1–12.
 [32] P. Kairouz, S. Oh, and P. Viswanath, “Extremal mechanisms for local differential privacy,” in Advances in Neural Information Processing Systems, 2014, pp. 2879–2887.
 [33] A. Smith, A. Thakurta, and J. Upadhyay, “Is interaction necessary for distributed private learning?” in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 58–77.
 [34] M. Bun, J. Nelson, and U. Stemmer, “Heavy hitters and the structure of local privacy,” in Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems. ACM, 2018, pp. 435–447.
 [35] G. Cormode, T. Kulkarni, and D. Srivastava, “Marginal release under local differential privacy,” in Proceedings of the 2018 International Conference on Management of Data. ACM, 2018, pp. 131–146.
 [36] Z. Zhang, T. Wang, N. Li, S. He, and J. Chen, “Calm: Consistent adaptive local marginal for marginal release under local differential privacy,” in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2018, pp. 212–229.
 [37] B. Avent, A. Korolova, D. Zeber, T. Hovden, and B. Livshits, “BLENDER: Enabling local search with a hybrid differential privacy model,” in 26th USENIX Security Symposium (USENIX Security 17). Vancouver, BC: USENIX Association, 2017, pp. 747–764.
 [38] T. Murakami and Y. Kawamoto, “Restricted local differential privacy for distribution estimation with high data utility,” arXiv preprint arXiv:1807.11317, 2018.
Appendix A Proof of Theorem 1
Recall that for item universe U, the Exponential Mechanism (EM), denoted by Φ, takes as input a true value v and produces a fake value y ∈ U with probability:

Pr[Φ(v) = y] = e^{−α·d(v,y)/2} / Σ_{z∈U} e^{−α·d(v,z)/2}
We prove here that Φ satisfies α-CLDP by showing that for all v_1, v_2, y ∈ U:

Pr[Φ(v_1) = y] / Pr[Φ(v_2) = y] ≤ e^{α·d(v_1,v_2)}
Proof.
We start by applying the definition of EM and breaking the odds ratio into two terms:

Pr[Φ(v_1) = y] / Pr[Φ(v_2) = y] = [e^{−α·d(v_1,y)/2} · Σ_{z∈U} e^{−α·d(v_2,z)/2}] / [e^{−α·d(v_2,y)/2} · Σ_{z∈U} e^{−α·d(v_1,z)/2}]   (5)

= (e^{−α·d(v_1,y)/2} / e^{−α·d(v_2,y)/2}) · (Σ_{z∈U} e^{−α·d(v_2,z)/2} / Σ_{z∈U} e^{−α·d(v_1,z)/2}) = (*) · (**)   (6)
For *, we observe that:

(*) = e^{α·(d(v_2,y) − d(v_1,y))/2}   (7)

Since d is a metric, it satisfies the triangle inequality. Therefore, it holds that: d(v_2,y) − d(v_1,y) ≤ d(v_1,v_2). Combining this with the above, we conclude for *:

(*) ≤ e^{α·d(v_1,v_2)/2}   (8)
Next, we study the second term **:

(**) = Σ_{z∈U} e^{−α·d(v_2,z)/2} / Σ_{z∈U} e^{−α·d(v_1,z)/2}   (9)

Again by the triangle inequality: d(v_2,z) ≥ d(v_1,z) − d(v_1,v_2) for every z ∈ U. Applying this to the numerator we get:

Σ_{z∈U} e^{−α·d(v_2,z)/2} ≤ Σ_{z∈U} e^{−α·(d(v_1,z) − d(v_1,v_2))/2}   (10)

= e^{α·d(v_1,v_2)/2} · Σ_{z∈U} e^{−α·d(v_1,z)/2}   (11)

Hence:

(**) ≤ e^{α·d(v_1,v_2)/2}   (12)
We established that (*) ≤ e^{α·d(v_1,v_2)/2} and (**) ≤ e^{α·d(v_1,v_2)/2}. Plugging them into Equation 6 concludes our proof:

Pr[Φ(v_1) = y] / Pr[Φ(v_2) = y] ≤ e^{α·d(v_1,v_2)}
∎
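The bound established by this proof can be sanity-checked numerically for a toy setting (a sketch under our own choices: the universe {0, …, 9} and the absolute-difference metric):

```python
import math

def em_probs(v, universe, alpha, d):
    """Exponential Mechanism under CLDP: Pr[Phi(v)=y] ∝ exp(-alpha*d(v,y)/2)."""
    weights = {y: math.exp(-alpha * d(v, y) / 2) for y in universe}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

# Toy check of the alpha-CLDP guarantee with d(x, y) = |x - y|.
alpha = 1.0
universe = list(range(10))
d = lambda x, y: abs(x - y)
p1 = em_probs(2, universe, alpha, d)
p2 = em_probs(7, universe, alpha, d)
worst_ratio = max(p1[y] / p2[y] for y in universe)
bound = math.exp(alpha * d(2, 7))    # e^{alpha * d(v1, v2)}
assert worst_ratio <= bound + 1e-9   # the odds-ratio bound of Theorem 1 holds
```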
Appendix B SequenceCLDP Privacy Proofs
This section contains the privacy proofs for the SequenceCLDP perturbation mechanism given in Algorithm 2 of the main text. We have claimed in the main text that SequenceCLDP satisfies two properties: length indistinguishability and content indistinguishability. We organize this section into two subsections for proving each property.
B.1 Length Indistinguishability
We first prove that SequenceCLDP, denoted from here on by Φ, satisfies length-indistinguishability when certain value ranges are enforced for its parameters p_halt and p_gen. We need to show that for any pair of true sequences X, X':
Proof.
Observe from Algorithm 2 in the main text that given a true sequence X of length |X|, Φ produces an output sequence of length l with probability:
We consider several disjoint cases depending on the lengths of the true sequences X and X'. The parameters must be chosen so that all cases are simultaneously satisfied.
Case 0: |X| = |X'|. This case is trivial since Φ treats both lengths identically, resulting in:

The remaining cases fall under |X| ≠ |X'|, and are analyzed case by case below.
Case 1: and
Since the two length probabilities coincide in this case, the requirement holds trivially; hence this case is satisfied without placing constraints on the values of p_halt and p_gen.
Case 2: and
We divide this into two subcases:
2a: When this subcase holds, the requirement can be written as:
Thus, we have the following constraints for the parameters p_halt and p_gen:
(13) 
2b: When this subcase holds, the same requirement can be written as:
resulting in the parameter constraints:
(14) 
Case 3: and
Using the length assumption of this case, we can rewrite the RHS as:
Given that the constraint we established in Equation 14 holds, the following is a sufficient condition to satisfy the above:
Notice that, by assumption, |X| ≠ |X'|, and both |X| and |X'| are integers representing sequence lengths. Then, their difference is at least 1, making the following parameter constraint sufficient:
(15) 
Case 4: and
Using the length assumption of this case, we can rewrite the RHS as:
Given that the constraint we established in Equation 13 holds, the following is a sufficient condition to satisfy the above:
Since |X| ≠ |X'| and both |X| and |X'| are integers, their difference is at least 1, making the following parameter constraint sufficient:
(16) 
Combining all cases: Finally, we combine the parameter constraints identified at the end of each case (Equations 13, 14, 15, and 16) to obtain a system of equations:
Given the privacy parameter α, values of p_halt and p_gen satisfying all four equations simultaneously satisfy length-indistinguishability. If we set p_halt = p_gen and solve this system of equations, we observe that the following is a solution (which we call the symmetric solution in the main text):
Another solution to the same system of equations, which we call the asymmetric solution, is desirable when input sequences are longer and therefore a smaller halting probability is preferable: