Private Sequential Learning

John N. Tsitsiklis, et al.
Stanford University

We formulate a private learning model to study an intrinsic tradeoff between privacy and query complexity in sequential learning. Our model involves a learner who aims to determine a scalar value, v^*, by sequentially querying an external database and receiving binary responses. In the meantime, an adversary observes the learner's queries, though not the responses, and tries to infer from them the value of v^*. The objective of the learner is to obtain an accurate estimate of v^* using only a small number of queries, while simultaneously protecting her privacy by making v^* provably difficult to learn for the adversary. Our main results provide tight upper and lower bounds on the learner's query complexity as a function of desired levels of privacy and estimation accuracy. We also construct explicit query strategies whose complexity is optimal up to an additive constant.





1 Introduction

Organizations and individuals often rely on relevant data to solve decision problems. Sometimes, such data are beyond the immediate reach of a decision maker and must be acquired by interacting with an external entity or environment. However, these interactions may be monitored by a third-party adversary and subject the decision maker to potential privacy breaches, a possibility that has become increasingly prominent as information technologies and tools for data analytics advance.

The present paper studies a decision maker, henceforth referred to as the learner, who acquires data from an external entity in an interactive fashion by submitting sequential queries. The interactivity benefits the learner by enabling her to tailor future queries based on past responses and thus reduce the number of queries needed, while, at the same time, exposing the learner to substantial privacy risk: the more her queries depend on past responses, the easier it might be for an adversary to use the observed queries to infer those past responses. Our main objective is to articulate and understand an intrinsic privacy versus query complexity tradeoff in the context of such a Private Sequential Learning model.

We begin with an informal description of the model. A learner would like to determine the value of a scalar, $v^*$, referred to as the true value, which lies in a bounded subset of $\mathbb{R}$. To search for $v^*$, she must interact with an external database through sequentially submitted queries: at step $t$, the learner submits a query, $q_t$, and receives a binary response, $r_t$, where $r_t = 1$ if $v^* \ge q_t$, and $r_t = 0$, otherwise. The interaction is sequential in the sense that the learner may choose a query depending on the responses to all previous queries. Meanwhile, there is an adversary who eavesdrops on the learner's actions: she observes all of the learner's queries, $q_1, q_2, \ldots$, but not the responses, and tries to use these queries to estimate the true value, $v^*$. The learner's goal is to submit queries in such a way that she can learn $v^*$ within a prescribed error tolerance, while $v^*$ cannot be accurately estimated by the adversary with high confidence. This goal is easily attained if the learner may submit an unlimited number of queries: in that case, the queries need not depend on the past responses and hence reveal no information to the adversary. Our quest is, however, to understand the least number of queries that the learner needs to submit in order for her to successfully retain privacy. Is the query complexity significantly different from the case where privacy constraints are absent? How does it vary as a function of the levels of accuracy and privacy? Is there a simple and yet efficient query strategy that the learner can adopt? Our main results address these questions.

1.1 Motivating Examples

We discuss two examples that provide some context for our model.

Example 1 - learning an optimal price. A firm is to release a new product and would like to identify a revenue-maximizing price, $p^*$, prior to the product launch. The firm believes that the revenue, $R(p)$, is strictly concave and differentiable as a function of the price, $p$, but has otherwise little additional information. A sequential learning process is employed to identify $p^*$ over a series of epochs: in epoch $t$, the firm assesses how the market responds to a test price, $p_t$, and receives binary feedback as to whether $p^* < p_t$ or $p^* \ge p_t$. This may be achieved, for instance, by contracting a consulting firm to conduct market surveys on the price sensitivity around $p_t$. The firm would like to estimate $p^*$ with reasonable accuracy over a small number of epochs, but is wary that a competitor might be able to observe the surveys and deduce from them the value of $p^*$ ahead of the product launch. In the context of Private Sequential Learning, the firm is the learner, the competitor is the adversary, the revenue-maximizing price $p^*$ is the true value, and the test prices are the queries. The binary feedback on the revenue's price sensitivity around $p_t$ indicates whether the revenue-maximizing price lies below the current test price.

Example 2 - online optimization with private weights. In the previous example, the adversary is a third party that does not observe the responses to the queries. We now provide a different example, in which the adversary is the database to which queries are submitted, and thus has partial knowledge of the responses.

Consider a learner who wishes to identify the maximizer, $x^*$, of a function $F = \sum_{i=1}^{n} w_i f_i$ over some bounded interval, where $f_1, \ldots, f_n$ is a collection of strictly concave differentiable constituent functions, and $w_1, \ldots, w_n$ are positive (private) weights representing the importance that the learner associates with each constituent function. The learner knows the weights but does not have information about the constituent functions; such knowledge is to be acquired by querying an external database. During epoch $t$, the learner submits a test value, $x_t$, and receives from the database the derivatives of all constituent functions at $x_t$: $f_1'(x_t), \ldots, f_n'(x_t)$. Using the weights, the learner can then compute the derivative $F'(x_t) = \sum_{i=1}^{n} w_i f_i'(x_t)$, whose sign serves as a binary indicator of the position of the maximizer relative to the current test value. The database, which possesses complete information about the constituent functions but does not know the weights, would like to infer from the learner's querying pattern the maximizing value $x^*$, or possibly the weights themselves. The query strategies that we develop for Private Sequential Learning can also be applied to this setting. The connection between the two is made precise in Xu (2017).
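To make the binary feedback concrete, here is a small sketch (not from the paper) with two hypothetical quadratic constituent functions; the function names, constituents, and weights are all illustrative assumptions:

```python
def weighted_response(x, derivatives, weights):
    # The learner privately combines the reported derivatives f_i'(x)
    # with her weights; the sign of the weighted sum says whether the
    # maximizer lies to the right of the test value x (1) or not (0).
    d = sum(w * g for w, g in zip(weights, derivatives))
    return 1 if d > 0 else 0

# Hypothetical strictly concave constituents, known only to the database:
# f1(x) = -(x - 0.2)^2 and f2(x) = -(x - 0.8)^2, reported via derivatives.
fs_prime = [lambda x: -2 * (x - 0.2), lambda x: -2 * (x - 0.8)]
weights = [1.0, 3.0]  # private to the learner

# One epoch: the learner tests x, the database reports all derivatives,
# and the learner reduces them to a single binary response.
x = 0.5
r = weighted_response(x, [fp(x) for fp in fs_prime], weights)
# The maximizer of 1*f1 + 3*f2 is (0.2 + 3*0.8)/4 = 0.65 > 0.5, so r = 1.
```

The database sees only the test values $x_t$, never the weighted sum, which is what keeps the weights (and hence $x^*$) private.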

1.2 Preview of the Main Result

We now preview our main result. Let us begin by introducing some additional notation. Recall that both the learner and the adversary aim to obtain estimates that are close to a true value $v^*$. We denote by $\epsilon/2$ and $\delta/2$ the absolute estimation errors that the learner and the adversary, respectively, are willing to tolerate. We will employ a privacy parameter $L$ to quantify the learner's level of privacy at the end of the learning process: the learner's privacy level is $L$ if the adversary can successfully approximate the true value within an error of $\delta/2$ with probability at most $1/L$. A private query strategy for the learner must be able to produce an estimate of the true value within an error of at most $\epsilon/2$, while simultaneously guaranteeing that the desired privacy level holds against the adversary.

Our main objective is to quantify the query complexity of private sequential learning, $N^*(\epsilon, \delta, L)$, defined as the minimum number of queries needed for a private learner strategy, under a given set of parameters, $\epsilon$, $\delta$, and $L$. Specifically, we will focus on the regime where $\epsilon < \delta \le 1/(2L)$. The reason for this choice will become clear after a formal introduction of the model, and we will revisit it at the beginning of Section 4. (Intuitively, this regime is interesting because of two factors. On the one hand, a smaller $\delta$, relative to the learner's accuracy $\epsilon$, is effectively impossible for the adversary, as the adversary has less information available than the learner. On the other hand, a larger $\delta$, relative to $1/L$, makes the adversary's problem trivial, because a random guess in $[0,1)$ will then have roughly at least $1/L$ probability of being within $\delta/2$ of the true value.) In this regime, we have the following upper and lower bounds on the query complexity.

  1. We establish an upper bound of $2L + \log\frac{1}{L\epsilon}$ by explicitly constructing a private learner strategy, which applies for any $\delta$ in the range $(\epsilon, \frac{1}{2L}]$. (All logarithms are taken with respect to base 2. To reduce clutter, non-integer numbers are to be understood as rounded upwards: for example, a bound of $2L + \log\frac{1}{L\epsilon}$ should be understood as $2L + \lceil \log\frac{1}{L\epsilon} \rceil$, where $\lceil \cdot \rceil$ represents the ceiling function.)

  2. We establish a lower bound of $2L + \log\frac{\delta}{\epsilon}$, up to an additive constant, by characterizing the amount of information available to the adversary.

We note that our bounds are tight in the sense that when the adversary's accuracy requirement is as loose as possible, i.e., $\delta = 1/(2L)$, the upper bound matches the lower bound up to an additive constant.

1.3 Related Work

In the absence of a privacy constraint, the problem of identifying a value within a compact interval through (possibly noisy) binary feedback is a classical problem arising in domains such as coding theory (Horstein (1963)) and root finding (Waeber et al. (2013)). It is well known that the bisection algorithm achieves the optimal query complexity of $\log\frac{1}{\epsilon}$ (cf. Waeber et al. (2013)), where $\epsilon$ is the error tolerance. In contrast, to the best of our knowledge, the question of how to preserve a learner's privacy when her actions are fully observed by an adversary, and what the resulting query complexity would be, has received relatively little attention in the literature.

Related to our work, in spirit, is the body of literature on differential privacy (Dwork et al. (2006), Dwork and Roth (2014)), a concept that has been applied in statistics (Wasserman and Zhou (2010), Smith (2011), Duchi et al. (2016)) and learning theory (Raskhodnikova et al. (2008), Chaudhuri and Hsu (2011), Blum et al. (2013), Feldman and Xiao (2014)). Differential privacy mandates that the output distribution of an algorithm be insensitive under certain perturbations of the input data. For instance, Jain et al. (2012) study regret minimization in an online optimization problem while ensuring differential privacy, in the sense that the distribution of the sequence of solutions remains nearly identical when any one of the functions being optimized is perturbed. In contrast, our definition of privacy measures the adversary’s ability to perform a specific inference task.

In a different model, Tsitsiklis and Xu (2018) study the issue of privacy in a sequential decision problem, where an agent attempts to reach a particular node in a graph, traversing it in a way that obfuscates her intended destination against an adversary who observes her past trajectory. The authors show that the probability of a correct prediction by the adversary is inversely proportional to the time it takes for the agent to reach her destination. Similar to the setting of Tsitsiklis and Xu (2018), the learner in our model also plays against a powerful adversary who observes all past actions. However, a major new element is that the learner in our model strives to learn a piece of information of which she herself has no prior knowledge, in contrast to the agent in Tsitsiklis and Xu (2018) who tries to conceal private information already in her possession. In a way, the central conflict of trying to learn something while preventing others from learning the same information sets our work apart from the extant literature.

Similarly, our model is close in spirit to private information retrieval problems in the field of cryptography (Kushilevitz and Ostrovsky (1997), Chor et al. (1998), Gasarch (2004)). In these problems, a learner wishes to retrieve an item from some location, $i$, in a database, in such a manner that the database obtains no information on the value of $i$; the latter requirement can be either information-theoretic or based on computational hardness assumptions. Compared to this line of literature, our privacy requirement is substantially weaker: the adversary may still obtain some information on the true value. This relaxation of the privacy requirement allows the learner to deploy richer and more sample-efficient query strategies.

1.4 Organization

The remainder of the paper is organized as follows. We formally introduce the Private Sequential Learning model in Section 2. In Section 3, we motivate and discuss private learner strategies. Our main results are stated in Section 4. Before delving into the proofs, we examine in Section 5 three examples of learner strategies that provide further insight into the structure of the problem. Sections 6 and 7 are devoted to the proof of the upper and lower bounds in our main theorem, respectively. We conclude in Section 8, where we also describe some interesting variations of our model.

2 The Private Sequential Learning Model

We formally introduce our Private Sequential Learning model. The model involves a learner who aims to determine a particular true value, $v^*$. The true value is a scalar in some bounded subset of $\mathbb{R}$. Without loss of generality, we assume that $v^*$ belongs to the interval $[0,1)$, and that the learner knows that this is the case. (We consider a half-open interval here, which allows for a cleaner presentation, but the essence is not changed if the interval is closed.) The true value is stored in an external database. In order to learn the true value, the learner interacts with the database by submitting queries as follows. At each step $t$, the learner submits a query, $q_t \in [0,1)$, and receives from the database a response, $r_t$, indicating whether $v^*$ is greater than or equal to the query value, i.e.,

$$r_t = \mathbb{1}(v^* \ge q_t),$$

where $\mathbb{1}(\cdot)$ stands for the indicator function. Furthermore, each query is allowed to depend on the responses to previous queries, through a learner strategy, to be defined shortly.

Denote by $N$ the total number of learner queries, and by $\epsilon$ the learner's desired accuracy. After receiving the responses to $N$ queries, the learner aims to produce an estimate, $\hat{v}$, for $v^*$, that satisfies

$$|\hat{v} - v^*| \le \epsilon/2.$$

In the meantime, there is an adversary who is also interested in learning the true value, $v^*$. The adversary has no access to the database, and hence seeks to estimate $v^*$ by free-riding on observations of the learner's queries. Let $\delta$ be an accuracy parameter for the adversary. We assume that the adversary can observe the values of the queries but not the responses, and knows the learner's query strategy. Based on this information, and after observing all of the queries submitted by the learner, the adversary aims to generate an estimate, $\hat{v}_a$, for $v^*$, that satisfies

$$|\hat{v}_a - v^*| \le \delta/2.$$

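These primitives are simple enough to state directly in code; in the following minimal sketch, the function names are ours rather than the paper's:

```python
def response(v_star, q):
    # Database response: r = 1 if v* >= q, and r = 0 otherwise.
    return 1 if v_star >= q else 0

def learner_ok(v_hat, v_star, eps):
    # The learner succeeds if her estimate is within eps/2 of the true value.
    return abs(v_hat - v_star) <= eps / 2

def adversary_ok(v_hat_a, v_star, delta):
    # The adversary succeeds if her estimate is within delta/2 of the true value.
    return abs(v_hat_a - v_star) <= delta / 2

# Example: with v* = 0.3, the query 0.25 elicits r = 1 and the query 0.5
# elicits r = 0, which already confines v* to [0.25, 0.5).
```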
2.1 Learner Strategy

The queries that the learner submits to the database are generated by a (possibly randomized) learner strategy, in a sequential manner: the query at step $t$ depends on the queries and their responses up until step $t-1$, as well as on a discrete random variable $Y$. In particular, the random variable $Y$ allows the learner to randomize if needed, and we will refer to $Y$ as the random seed. Without loss of generality, we assume that $Y$ is uniformly distributed over $\{1, 2, \ldots, K\}$, where $K$ is a large integer. Formally, fixing $N$, a learner strategy of length $N$ is comprised of two parts:

  1. A finite sequence of query functions, $g_1, \ldots, g_N$, where each $g_t$ is a mapping that takes as input the values of the first $t-1$ queries submitted, the corresponding responses, as well as the realized value of $Y$, and outputs the $t$th query, $q_t$.

  2. An estimation function, $f$, which takes as input the $N$ queries submitted, the corresponding responses, and the realized value of $Y$, and outputs the final estimate, $\hat{v}$, for the true value $v^*$.

More precisely, we have

  1. If $t = 1$, then $q_1 = g_1(Y)$, and $r_1 = \mathbb{1}(v^* \ge q_1)$;

  2. If $2 \le t \le N$, then $q_t = g_t(q_1, \ldots, q_{t-1}, r_1, \ldots, r_{t-1}, Y)$, and $r_t = \mathbb{1}(v^* \ge q_t)$;

  3. $\hat{v} = f(q_1, \ldots, q_N, r_1, \ldots, r_N, Y)$.

Observe that the above definition can be simplified: knowing the value of the random seed and the responses to the queries is sufficient for reconstructing the values of the queries. As an example, we have $q_2 = g_2(g_1(Y), r_1, Y)$, which can be written as $\tilde{g}_2(r_1, Y)$ for some new function $\tilde{g}_2$. Through induction, it then suffices to let the inputs to each query function be just the past responses and $Y$. This leads to an alternative, simpler definition of learner strategies:

  1. If $t = 1$, then $q_1 = g_1(Y)$, and $r_1 = \mathbb{1}(v^* \ge q_1)$;

  2. If $2 \le t \le N$, then $q_t = g_t(r_1, \ldots, r_{t-1}, Y)$, and $r_t = \mathbb{1}(v^* \ge q_t)$;

  3. $\hat{v} = f(r_1, \ldots, r_N, Y)$.

In the sequel, we adopt the latter, simpler definition. In addition, we will consider only learner strategies that submit distinct queries, as repeated queries do not provide additional information to the learner. We will denote by $\mathcal{S}_N$ the set of all learner strategies of length $N$, defined as above.

Fix a learner strategy in $\mathcal{S}_N$. To clarify the dependence on the random seed, for any $v \in [0,1)$ and $y \in \{1, \ldots, K\}$, we will use $\mathbf{q}(v, y)$ to denote the realization of the sequence of queries, $(q_1, \ldots, q_N)$, when the true value is $v$ and the learner's random seed is $y$. Similarly, we will denote by $\hat{v}(v, y)$ the learner's estimate of the true value when $v^* = v$ and $Y = y$.

2.2 Information Available to the Adversary

We summarize in this subsection the information available to the adversary. First, the adversary is aware that the true value $v^*$ belongs to $[0,1)$. Second, we assume that the adversary can observe the values of the queries but not the corresponding responses, and that the learner strategy is known to the adversary. In particular, the adversary observes the value of each query $q_t$, for $t = 1, \ldots, N$, and knows the mappings $g_1, \ldots, g_N$. This means that if the adversary had access to the responses $r_1, \ldots, r_{t-1}$ and the realized value of $Y$, she would know exactly what the query $q_t$ is at step $t$. While it may seem that an adversary who sees both the learner's strategy and her actions is too powerful to defend against, we will see in the sequel that the learner will still be able to implement effective and efficient obfuscation by exploiting the randomness of $Y$.

3 Private Learner Strategies

In this section, we introduce and formally define private learner strategies, the central concept of this paper. While we will briefly discuss the underlying intuition, a more detailed interpretation is relegated to Appendix 9. As was mentioned in the Introduction, a private learner strategy must always make sure that its estimate is close to the true value $v^*$, while keeping the adversary's probability of correctly detecting $v^*$ sufficiently small. Our goal in this section is to formalize those ideas. To this end, we first introduce in Section 3.1 ways of quantifying the amount of information acquired by the adversary, as a function of the learner's queries. This then leads to a precise privacy constraint, presented in Section 3.2.

3.1 Information Set

Recall from Section 2.2 that the adversary knows the values of the queries and the learner strategy. We will now convert this knowledge into a succinct representation: the information set of the adversary. Fix a learner strategy in $\mathcal{S}_N$. Denote by $Q(v)$ the set of query sequences that have a positive probability of appearing under the strategy, when the true value is equal to $v$:

$$Q(v) = \left\{\bar{q} \in [0,1)^N : \mathbb{P}\big(\mathbf{q}(v, Y) = \bar{q}\big) > 0\right\}.$$

Here, $\mathbf{q}(v, Y)$ is a vector-valued random variable representing the sequence of learner queries, whereas $\bar{q}$ stands for a typical realization; the probability is measured with respect to the randomness in the learner's random seed, $Y$.

Definition 3.1

Fix $\bar{q} \in [0,1)^N$. The information set for the adversary, $\mathcal{I}(\bar{q})$, is defined by:

$$\mathcal{I}(\bar{q}) = \left\{v \in [0,1) : \bar{q} \in Q(v)\right\}.$$
From the viewpoint of the adversary, the information set represents all possible true values that are consistent with the queries observed. As such, it captures the amount of information that the learner reveals to the adversary.
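As a brute-force illustration (our code, with a toy deterministic strategy rather than one from the paper), the information set can be computed by enumerating candidate true values and keeping those that would have generated the observed transcript:

```python
def bisection_queries(v, n=3):
    # Toy deterministic 3-step bisection of [0, 1): the transcript is
    # fully determined by the true value v, since there is no random seed.
    lo, hi, qs = 0.0, 1.0, []
    for _ in range(n):
        mid = (lo + hi) / 2
        qs.append(mid)
        if v >= mid:
            lo = mid
        else:
            hi = mid
    return tuple(qs)

def information_set(observed, strategy, candidates):
    # All candidate true values consistent with the observed queries.
    return [v for v in candidates if strategy(v) == observed]

grid = [i / 64 for i in range(64)]
obs = bisection_queries(0.3)          # transcript when v* = 0.3
info = information_set(obs, bisection_queries, grid)
# Every v in [0.25, 0.5) yields the same three queries (only the first two
# responses influence later queries), so the adversary localizes v* to a
# cell of length 1/4: deterministic bisection is not private.
```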

3.2 Private Strategies

A private learner strategy should achieve two aims: accuracy and privacy. Accuracy can be captured in a relatively straightforward manner, by measuring the absolute distance between the learner's estimate and the true value. An effective measure of the learner's privacy, on the other hand, is more subtle, as it depends on what the adversary is able to infer. To this end, we develop in this subsection a privacy metric by quantifying the "effective size" of the information set described in Definition 3.1. Intuitively, since the information set contains all possible realizations of the true value, $v^*$, the larger the information set, the more difficult it is for the adversary to pin down the true value.

The choice of such a metric requires care. As a first attempt, the diameter of the information set, $\mathrm{diam}(\mathcal{I}(\bar{q})) = \sup_{u, v \in \mathcal{I}(\bar{q})} |u - v|$, may appear to be a natural candidate. Since the adversary has an accuracy parameter of $\delta$, we could require that the diameter of $\mathcal{I}(\bar{q})$ be greater than $\delta$. The diameter, however, is not a good metric, as it paints an overly optimistic picture for the learner. Consider the example where the information set is the union of two intervals of length $\delta$ each, placed far apart from each other. By setting her estimate to be the center of one of the two intervals, chosen at random with equal probabilities, the adversary will have probability $1/2$ of correctly predicting the true value, even though the diameter of the information set could be large. The Lebesgue measure of the information set appears to be another plausible candidate. However, it also fails to accurately describe the learner's privacy. Consider again the example where the information set consists of many distantly placed but very small intervals. It is not difficult to see that the adversary would not be able to correctly estimate the true value with high certainty, even if the Lebesgue measure of the set is arbitrarily small.
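The two-interval example can be checked numerically; in the sketch below (ours, with arbitrary numbers), the information set has diameter 0.81 and Lebesgue measure 0.02, yet a random-center guess succeeds about half the time:

```python
import random

rng = random.Random(0)
eps_len = 0.01
# Information set: two short intervals placed far apart.
components = [(0.10, 0.10 + eps_len), (0.90, 0.90 + eps_len)]

v_star = 0.105          # true value, inside the first component
delta = eps_len         # adversary tolerance delta/2 = 0.005

trials = 10_000
hits = 0
for _ in range(trials):
    lo, hi = rng.choice(components)   # pick one component at random
    guess = (lo + hi) / 2             # guess its center
    hits += abs(guess - v_star) <= delta / 2
rate = hits / trials
# rate is close to 1/2: despite the large diameter and tiny measure,
# the set offers the adversary only two plausible guesses.
```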

The shortcomings of the above metrics motivate a more refined notion of “effective size,” and in particular, one that would be appropriate for disconnected information sets. To this end, we will use set coverability to measure the size of the information set, defined as follows.

Definition 3.2

Fix $\delta > 0$ and a set $A \subset [0,1)$. We say that a collection of closed intervals $\mathcal{I}_1, \mathcal{I}_2, \ldots, \mathcal{I}_k$ is a $\delta$-cover for $A$ if $A \subset \bigcup_{j=1}^{k} \mathcal{I}_j$, and $|\mathcal{I}_j| \le \delta$ for all $j \in \{1, \ldots, k\}$.

We say that a set $A$ is $(\delta, k)$-coverable if it admits a $\delta$-cover consisting of $k$ intervals. In addition, we define the $\delta$-cover number of a set $A$, $C_\delta(A)$, as

$$C_\delta(A) = \min\left\{k \in \mathbb{N} : A \text{ is } (\delta, k)\text{-coverable}\right\}.$$
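When the information set is a finite union of intervals, as it will be for all the strategies considered below, the $\delta$-cover number can be computed by a standard greedy sweep. The following sketch is ours (the paper does not prescribe an algorithm), with components given as half-open `(lo, hi)` pairs:

```python
def cover_number(components, delta):
    """Minimum number of closed intervals of length <= delta needed to
    cover a finite union of half-open intervals [lo, hi), computed by a
    greedy left-to-right sweep (each cover interval is pushed as far
    right as possible, which is optimal in one dimension)."""
    count = 0
    reach = float("-inf")  # right endpoint of the last cover interval
    for lo, hi in sorted(components):
        pos = max(lo, reach)
        while pos < hi:    # part of [lo, hi) is still uncovered
            count += 1
            reach = pos + delta
            pos = reach
    return count

# Three short components far apart need one cover interval each:
n1 = cover_number([(0.0, 0.01), (0.5, 0.51), (0.99, 1.0)], 0.1)  # -> 3
# The whole unit interval needs 1/delta cover intervals:
n2 = cover_number([(0.0, 1.0)], 0.25)                            # -> 4
```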
We are now ready to define $(\epsilon, \delta, L)$-private learner strategies.

Definition 3.3 (Private Learner Strategy)

Fix $\epsilon > 0$, $\delta > 0$, and a positive integer $L$, with $\epsilon < \delta$. A learner strategy is $(\epsilon, \delta, L)$-private if it satisfies the following:

  1. Accuracy constraint: the learner estimate accurately recovers the true value, with probability one:

    $$\mathbb{P}\big(|\hat{v}(v, Y) - v| \le \epsilon/2\big) = 1, \quad \text{for all } v \in [0,1),$$

    where the probability is measured with respect to the randomness in $Y$.

  2. Privacy constraint: for every $v \in [0,1)$ and every possible sequence of queries $\bar{q} \in Q(v)$, the $\delta$-cover number of the information set for the adversary, $\mathcal{I}(\bar{q})$, is at least $L$, i.e.,

    $$C_\delta\big(\mathcal{I}(\bar{q})\big) \ge L, \quad \text{for all } \bar{q} \in Q(v), \ v \in [0,1).$$
The accuracy constraint requires that a private learner strategy always produce an estimate within the error tolerance $\epsilon/2$, for any possible true value in $[0,1)$. The privacy constraint controls the size of the information set induced by the sequence of queries generated, and the parameter $L$ can be interpreted as the learner's privacy level: since the intervals used to cover the information set are of length at most $\delta$, each interval can be thought of as representing a plausible guess for the adversary. Therefore, the probability of the adversary successfully estimating the location of $v^*$ is essentially inversely proportional to the number of intervals needed to cover the information set, and is hence at most $1/L$. We make the link between $L$ and the adversary's probability of correct estimation precise in Appendix 9.

4 Main Result

The learner's overall objective is to employ the minimum number of queries while satisfying the accuracy and privacy requirements. We state our main theorem in this section, which establishes lower and upper bounds on the query complexity of a private learner strategy, as a function of the adversary's accuracy $\delta$, the learner's accuracy $\epsilon$, and the learner's privacy level, $L$. Recall that $\mathcal{S}_N$ is the set of learner strategies of length $N$. Define $N^*(\epsilon, \delta, L)$ to be the minimum number of queries needed across all $(\epsilon, \delta, L)$-private learner strategies:

$$N^*(\epsilon, \delta, L) = \min\left\{N \in \mathbb{N} : \text{some strategy in } \mathcal{S}_N \text{ is } (\epsilon, \delta, L)\text{-private}\right\}.$$
Our result will focus on the regime of parameters where

$$\epsilon < \delta \le \frac{1}{2L}.$$

Having $\delta > \epsilon$ corresponds to a scenario where the learner would like to identify the true value with high accuracy, while the adversary aims only for a coarse estimate. Note that the regime where $\delta \le \epsilon$ is arguably much less interesting, because it is not natural to expect the adversary, who is not engaged in the querying process, to have a more stringent accuracy requirement than the learner. The requirement that $\delta$ be small relative to $1/L$ stems from the following argument. If $\delta \ge \frac{1}{L-1}$, then the entire interval $[0,1)$ is trivially coverable by at most $L-1$ closed intervals of length $\delta$, so that $C_\delta([0,1)) < L$. Thus, the privacy constraint is automatically violated, and no private learner strategy exists. To obtain a nontrivial problem, we therefore only need to consider the case where $\delta < \frac{1}{L-1}$, which is only slightly broader than the regime that we consider. The following theorem is the main result of this paper.

Theorem 4.1 (Query Complexity of Private Sequential Learning)

Fix $\epsilon$, $\delta$, and a positive integer $L$, such that $\epsilon < \delta \le \frac{1}{2L}$. Then, up to additive constants that do not depend on $\epsilon$, $\delta$, or $L$,

$$\log\frac{\delta}{\epsilon} + 2L \;\lesssim\; N^*(\epsilon, \delta, L) \;\lesssim\; \log\frac{1}{L\epsilon} + 2L.$$
The proof of the upper bound in Theorem 4.1 is constructive, providing a specific learner strategy that satisfies the bound. If we set $\delta = 1/(2L)$, which corresponds to the worst case where the adversary's accuracy requirement is essentially as loose as possible, then Theorem 4.1 leads to the following corollary. It yields upper and lower bounds that are tight up to an additive constant. In other words, the private learner strategy that we construct achieves essentially the optimal query complexity in this scenario.

Corollary 4.2

Fix $\epsilon$ and a positive integer $L$ such that $\epsilon < \frac{1}{2L}$. The following holds.

  1. If , we have

  2. If , we have


A main take-away from the above results concerns the price of privacy: it is not difficult to see that, in the absence of a privacy constraint, the most efficient strategy, based on bisection search, can locate the true value with $\log\frac{1}{\epsilon}$ queries. Our results thus demonstrate that the price of privacy is at most an additive term on the order of $2L$ queries.

5 Examples of Learner Strategies

Before delving into the proofs of our main result, we first provide some intuition and motivation by examining three representative learner strategies situated at different locations along the complexity-privacy tradeoff curve.

Strategy 1: Bisection. A most natural candidate is the classical bisection strategy, which is known to achieve the optimal query complexity in the absence of privacy constraints. Under this strategy, the learner first submits a query at the mid-point of $[0,1)$, i.e., $q_1 = 1/2$. Then, based on the response, the learner identifies the half interval that contains the true value, and subsequently submits its mid-point as the next query, $q_2$. The process continues recursively until the learner finds an interval of length at most $\epsilon$ that contains the true value $v^*$. Figure 1 provides an illustration of this strategy.

Figure 1: An example of the bisection search, where the red star represents the true value $v^*$. The dashed line with arrows represents the learner's error tolerance, and the solid line with arrows represents the information set for the adversary, $\mathcal{I}(\bar{q})$.

Under the Bisection strategy, the learner knows that the interval containing the true value is halved with each successive query. It follows that the number of queries needed under the bisection strategy is $\log\frac{1}{\epsilon}$. Unfortunately, the favorable query complexity afforded by the bisection strategy comes at the cost of the learner's privacy. In particular, at the end of the process, the adversary knows that the true value must be close to the last query the learner submitted, and hence the $\delta$-cover number of the information set is always 1 whenever $\delta \ge \epsilon$. As such, the Bisection strategy lies at one extreme end of the complexity-privacy tradeoff, with a minimal query complexity but no privacy.
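As a runnable illustration (our code), the strategy and its query count look as follows:

```python
def bisection_learn(v_star, eps):
    """Classical bisection on [0, 1): maintain an interval known to
    contain v_star, halving it until its length is at most eps."""
    lo, hi = 0.0, 1.0
    n_queries = 0
    while hi - lo > eps:
        q = (lo + hi) / 2          # query the mid-point
        n_queries += 1
        if v_star >= q:            # response r = 1
            lo = q
        else:                      # response r = 0
            hi = q
    return (lo + hi) / 2, n_queries

est, n = bisection_learn(0.3, 1 / 1024)
# n = log2(1024) = 10 queries, and |est - 0.3| <= (1/1024)/2.
```

The last few queries cluster within $O(\epsilon)$ of $v^*$, which is exactly why the $\delta$-cover number of the adversary's information set collapses to 1.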

Strategy 2: $\epsilon$-Dense. Sitting at the opposite end of the spectrum is the $\epsilon$-Dense strategy, where the learner submits a sequence of $1/\epsilon$ pre-determined queries, $q_t = t\epsilon$, with $t = 1, \ldots, 1/\epsilon$ (Figure 2). The strategy is accurate because the distance between two adjacent queries is equal to the error tolerance, $\epsilon$. Moreover, because the sequence of queries is pre-determined and does not depend on the location of the true value, the adversary obtains no information from the learner's query pattern, and the information set remains the entire interval $[0,1)$ throughout. Thus, whenever $C_\delta([0,1)) \ge L$, the strategy is $(\epsilon, \delta, L)$-private. Compared to the Bisection strategy, the perfect privacy of the $\epsilon$-Dense strategy is achieved at the expense of an exponential increase in query complexity, from $\log\frac{1}{\epsilon}$ to $\frac{1}{\epsilon}$. The $\epsilon$-Dense strategy is therefore overly conservative and, as our proposed strategy will demonstrate, leads to unnecessarily high query complexity for moderate values of $L$.
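A corresponding sketch (again our code; the estimate is the center of the cell singled out by the responses):

```python
def dense_learn(v_star, eps):
    """Submit the fixed grid eps, 2*eps, ..., 1 (1/eps queries in all);
    the number of r = 1 responses pins v_star to a length-eps cell."""
    n = round(1 / eps)
    queries = [t * eps for t in range(1, n + 1)]
    responses = [1 if v_star >= q else 0 for q in queries]
    k = sum(responses)             # v_star lies in [k*eps, (k+1)*eps)
    return k * eps + eps / 2, len(queries)

est, n = dense_learn(0.3, 1 / 8)
# 8 queries regardless of v_star; est = 0.3125 is within eps/2 = 1/16 of 0.3.
```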

Figure 2: An example of the $\epsilon$-Dense strategy. The dashed line with arrows represents the learner's error tolerance, and the solid line with arrows represents the information set for the adversary, $\mathcal{I}(\bar{q})$.

Strategy 3: Replicated Bisection. The contrast between Strategies 1 and 2 highlights the tension between the learner’s conflicting objectives: on the one hand, to maximally exploit the information learned from earlier queries and shorten the search, and on the other hand, to reduce adaptivity so that the queries are not too revealing. An efficient private learner strategy should therefore strike a balance between these two objectives. To start, it is natural to consider a learner strategy that combines Strategies 1 and 2 in an appropriate manner, which leads us to the Replicated Bisection strategy. This strategy has two phases:

  1. Phase 1 - Deterministic Queries. The learner submits $L-1$ queries, chosen deterministically:

    $$q_t = \frac{t}{L}, \quad t = 1, \ldots, L-1,$$

    which partition the unit interval into $L$ disjoint sub-intervals of length $1/L$ each: $[0, \frac{1}{L}), [\frac{1}{L}, \frac{2}{L}), \ldots, [\frac{L-1}{L}, 1)$. At this point, the learner has learned which one of the $L$ sub-intervals contains the true value, while the adversary has gained no additional information about the true value. We will refer to the sub-interval that contains the true value as the true sub-interval, and all other sub-intervals as false sub-intervals. This phase uses $L-1$ queries.

  2. Phase 2 - Replicated Bisection. In the second phase, the learner conducts a bisection search within the true sub-interval until the true value has been located, while in the meantime submitting translated replicas of these queries in each false sub-interval, in parallel. The exact order in which these queries are submitted can be arranged in such a manner as to be independent of the identity of the true sub-interval. This phase uses $L \log\frac{1}{L\epsilon}$ queries, where $\log\frac{1}{L\epsilon}$ is the number of queries needed to conduct a bisection search within a sub-interval of length $1/L$.

When the process is completed, the learner will have identified the true value via the bisection search within the true sub-interval, while the adversary will have seen $L$ identical copies of the same bisection search, leading to an information set that consists of $L$ disjoint length-$\epsilon$ intervals, separated from each other by a distance of $1/L$. It is not difficult to show that the Replicated Bisection strategy is $(\epsilon, \delta, L)$-private, with $L - 1 + L\log\frac{1}{L\epsilon}$ queries. In particular, the Replicated Bisection strategy achieves privacy at the cost of an increase in query complexity by a multiplicative factor of $L$, compared to that of the Bisection strategy ($\log\frac{1}{\epsilon}$).

Figure 3: An example of the Replicated Bisection strategy. The dashed lines with arrows represent the learner's error tolerance, and the solid lines with arrows represent the information set for the adversary, $\mathcal{I}(\bar{q})$.
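The two phases can be sketched as follows (our code; for simplicity the replicas of each bisection query are appended together rather than interleaved in the carefully arranged order described above):

```python
def replicated_bisection(v_star, eps, L):
    """Phase 1: L-1 fixed queries find the true sub-interval.
    Phase 2: bisect it, echoing each query, translated, into all L
    sub-intervals so the transcript looks the same from every one."""
    phase1 = [i / L for i in range(1, L)]
    k = sum(1 for q in phase1 if v_star >= q)   # index of true sub-interval
    lo, hi = k / L, (k + 1) / L
    transcript = list(phase1)
    while hi - lo > eps:
        q = (lo + hi) / 2
        offset = q - k / L                      # position within the cell
        transcript += [j / L + offset for j in range(L)]  # L replicas
        if v_star >= q:
            lo = q
        else:
            hi = q
    return (lo + hi) / 2, len(transcript)

est, n = replicated_bisection(0.3, 1 / 64, 4)
# n = (L - 1) + L * log2(1/(L*eps)) = 3 + 4*4 = 19 queries,
# and |est - 0.3| <= 1/128.
```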

The Replicated Bisection strategy thus appears to be a natural and successful combination of the Bisection and $\epsilon$-Dense strategies: it ensures privacy while requiring substantially fewer than the $1/\epsilon$ queries of the $\epsilon$-Dense strategy. Is it an optimal strategy, achieving the minimal query complexity for a given privacy level, $L$? Perhaps surprisingly, the answer is negative. Our proof of the upper bound of Theorem 4.1 in the following section will show that the query complexity of the Replicated Bisection strategy can be much improved, so that the query complexity overhead, compared to the Bisection strategy, is only an additive term on the order of $L$.

6 Proof of the Upper Bound: Opportunistic Bisection Strategy

We prove in this section the upper bound on the query complexity in Theorem 4.1. This is achieved by constructing a specific learner strategy, which we will refer to as the Opportunistic Bisection (OB) strategy. We start with some terminology, to facilitate the exposition.

Definition 6.1

Fix and an interval . Let be an infinite sequence of i.i.d. Bernoulli random variables, with . Let be a sequence of queries, where is equal to the mid-point of , and let be their corresponding responses.

  1. We say that is a truthful bisection search of , if it satisfies the following criteria, defined inductively. Let . For ,

    1. is set to

    2. is set to

  2. We say that is a fictitious bisection search of , if it satisfies the following criteria, defined inductively. Let . For ,

    1. is set to

    2. is set to


In words, whether a bisection search is truthful or fictitious depends on how the interval is updated. In a truthful search, is set to the half-interval within that, according to the response , contains the true value. In a fictitious search, this choice is made uniformly at random, according to .
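The two kinds of searches can be captured in a short sketch, assuming queries of the form "is the true value at least the midpoint?"; packaging both modes into one function is our own choice, not the paper's.

```python
import random

def bisection_search(interval, n_steps, v_true=None, seed=None):
    """Truthful when v_true is given: each step keeps the half-interval that
    actually contains the true value.  Fictitious otherwise: the choice is a
    fair coin flip drawn from the seed, so the query sequence is distributed
    like a truthful search against an unknown value."""
    rng = random.Random(seed)
    lo, hi = interval
    queries = []
    for _ in range(n_steps):
        mid = (lo + hi) / 2
        queries.append(mid)
        go_right = (v_true >= mid) if v_true is not None else (rng.random() < 0.5)
        lo, hi = (mid, hi) if go_right else (lo, mid)
    return queries, (lo, hi)
```

The key point is that an observer who sees only the queries cannot tell which mode produced them: both emit a midpoint sequence consistent with some value in the interval.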

We are now ready to define the Opportunistic Bisection strategy, which consists of two phases.

Phase 1 - Opportunistic Guesses. The first queries submitted by the strategy are deterministic and do not depend on responses from earlier queries, with




Notice that the two queries and determine an interval of length . At the end of this phase, there will be such intervals, evenly spaced across the unit interval. Each such interval thus represents a “guess” on the true value, ; if lies in for some , then the learner learns the location of within the desired level of accuracy. We will refer to the interval as the th guess.

Phase 2 - Local Bisection Search. The guesses submitted in Phase 1 are few and spaced apart, and it is possible that none of the guesses contains . The goal of Phase 2 is hence to ensure that the learner identifies at the end, but the queries are to be executed in a fashion that conceals from the adversary whether was identified during Phase 1 or Phase 2.

Define as the interval between the th and th guesses:


We will refer to as the th sub-interval. Importantly, by the end of Phase , if none of the guesses contains the true value, then the learner knows which sub-interval contains the true value, which we will denote by . The queries in Phase 2 will be chosen according to the following rule:

  1. If none of the guesses in Phase contains , then, let be a truthful bisection search of with .

  2. If one of the guesses in Phase contains , then, let be a sub-interval chosen uniformly at random among all sub-intervals, and let be a fictitious bisection search of with , using the randomization provided by (i.e., using to generate the sequence of i.i.d. Bernoulli random variables, ).
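Putting the two phases together, a hedged sketch of the OB strategy might look as follows. The concrete constants (guesses of length `2 * eps`, spaced `1 / L` apart, with `2 * eps < 1 / L`) are illustrative stand-ins for the paper's exact expressions, and the interface is our own.

```python
import math
import random

def opportunistic_bisection(v_true, L, eps, seed=None):
    """Hedged sketch of the OB strategy.  Phase 1: 2L deterministic queries
    forming L evenly spaced 'guess' intervals (length 2*eps here, an
    illustrative choice).  Phase 2: a truthful bisection search in the true
    sub-interval, or, if a guess already contains the value, a fictitious
    search in a uniformly random sub-interval."""
    rng = random.Random(seed)
    spacing = 1.0 / L
    guesses = [(k * spacing, k * spacing + 2 * eps) for k in range(L)]
    queries = [q for g in guesses for q in g]        # the 2L Phase-1 queries

    hit = next((g for g in guesses if g[0] <= v_true < g[1]), None)
    if hit is not None:
        estimate = (hit[0] + hit[1]) / 2             # midpoint of the guess
        k = rng.randrange(L)                         # fictitious search target
        target = None
    else:
        k = min(int(v_true / spacing), L - 1)        # true sub-interval index
        target = v_true
    lo, hi = guesses[k][1], (k + 1) * spacing        # sub-interval I_k

    n_steps = max(0, math.ceil(math.log2((hi - lo) / eps)))
    for _ in range(n_steps):
        mid = (lo + hi) / 2
        queries.append(mid)
        go_right = (target >= mid) if target is not None else (rng.random() < 0.5)
        lo, hi = (mid, hi) if go_right else (lo, mid)
    if hit is None:
        estimate = (lo + hi) / 2
    return estimate, queries
```

Either way, the transcript consists of the 2L fixed guess queries followed by one bisection trace inside a single sub-interval, so its length does not reveal which case occurred.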

An example of this strategy is provided in Figure 4.

Remark. It is interesting to contrast Opportunistic Bisection with the Replicated Bisection strategy in Section 5. Both strategies use deterministic queries in the first phase, but instead of submitting queries, the OB strategy incurs a slight overhead and submits guesses. Crucially, the guesses make it possible to immediately discover the location of the true value in the first phase, even though such discoveries may be unlikely. In the second phase, while the Replicated Bisection strategy conducts a bisection search in each of the sub-intervals, the Opportunistic Bisection strategy does so in only one of the sub-intervals, hence drastically reducing the number of queries.

It follows directly from the definition that the number of queries submitted under the Opportunistic Bisection strategy is:

Figure 4: An example of the Opportunistic Bisection Strategy, with .

To complete the proof of the upper bound in Theorem 4.1, it thus suffices to show that the OB strategy satisfies both the accuracy and privacy constraints. This is accomplished in the following proposition, which is the main result of this section.

Proposition 6.2

Fix , , and a positive integer , such that . Then, the Opportunistic Bisection (OB) strategy is -private.


Proof. We first show that the OB strategy is accurate, and specifically, that it will allow the learner to produce an estimate of with an absolute error of at most . To this end, we consider two possible scenarios:

Case 1. Suppose that some guess in Phase , namely, the interval , contains the true value, . In this case, the learner can set to be the mid-point of the guess, i.e., . Since the length of each guess is exactly , we must have .

Case 2. Suppose that none of the guesses in Phase contains . This means that a truthful bisection search will be conducted in Phase 2, in the sub-interval that contains . Because the search is truthful, we know that one of the two intervals adjacent to must contain . Let this interval be denoted by . Furthermore, because the length of each sub-interval is less than and there are steps in the bisection search, we know that the length of is at most . Therefore, the learner can generate an accurate estimate by setting to be the mid-point of . Together with Case 1, this shows that the OB strategy leads to an accurate estimate of .

We now show that the OB strategy is private, and in particular, that the -cover number of the information set of the adversary, , is at least . Denote by the union of the guesses, i.e.,


It is elementary to show that for two sets and , with , if is at least , then so is . Therefore, it suffices to prove the following two claims.

Claim 6.3

The -cover number of , , is at least .

Claim 6.4

The information set, , contains .

We first show Claim 6.3. Consider any interval , with length , used in a cover for . Note that by construction, each guess has length , and two adjacent guesses are separated by a distance of . Since , this implies that the Lebesgue measure of is at most . Since the Lebesgue measure of is , we conclude that it will require at least such intervals of size to cover . Therefore, . This proves Claim 6.3.
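The counting argument behind Claim 6.3 can be checked numerically with a greedy cover computation (in one dimension, starting each covering interval at the leftmost uncovered point is optimal); the interval endpoints in the usage below are illustrative, not the paper's.

```python
def cover_number(intervals, delta):
    """Minimum number of length-delta intervals needed to cover a finite
    union of intervals.  Greedy leftmost placement is optimal in one
    dimension: start each covering interval at the leftmost uncovered point."""
    merged = []
    for lo, hi in sorted(intervals):                 # merge overlapping inputs
        if merged and lo <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    count, i, cover_end = 0, 0, float("-inf")
    while i < len(merged):
        lo, hi = merged[i]
        start = max(lo, cover_end)
        if start >= hi:                              # component already covered
            i += 1
            continue
        cover_end = start + delta                    # place one covering interval
        count += 1
        if cover_end >= hi:
            i += 1
    return count
```

For instance, four short guesses spaced farther apart than `delta` each need their own covering interval, matching the claim that well-separated guesses force a cover number of at least L.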

We next show Claim 6.4. To see that this is the case, note that any particular query sequence can arise in two different ways: (i) it may be that is an arbitrary element of one of the guesses (i.e., ), and is the result of a fictitious bisection search; or, (ii) it may be that lies outside the guesses, and is the result of a truthful bisection search. The adversary has no way of distinguishing between these two possibilities. Furthermore, there is no information available to the adversary that could distinguish between different elements of . As a consequence, all elements of are included in the information set, i.e., , which establishes Claim 6.4. ∎

7 Proof of the Lower Bound

We now derive the two lower bounds on query complexity in Theorem 4.1. Note that the query complexity of the Opportunistic Bisection strategy carries a overhead compared to the (non-private) bisection strategy. The value admits an intuitive justification: a private learner strategy must create plausible locations of the true value, and each such location is associated with at least 2 queries. One may wonder, however, whether these queries need to be distinct from the queries already used by the bisection search, or whether the query complexity could be further reduced by “blending” the queries for obfuscation with those for identifying the true value in a more effective manner. The key to the proof of the lower bound in this section is to show that such “blending” is not possible: in order to successfully obfuscate the true value, one needs queries that are distinct from those that participate in the bisection algorithm.

7.1 Information Sets

We first introduce some notation to facilitate our discussion. Recall that is the sequence of learner queries. For the remainder of this section, we augment this sequence with two more queries, and , so that . This is inconsequential because , and hence adding and does not provide additional information to either the learner or the adversary.

We start by examining the information provided to the learner, through the queries and the responses. Let us fix an arbitrary that has positive probability, and some . Consider the resulting sequence of queries, , and then let be the sequence of queries in , arranged in increasing order. (In particular, and .) For each query , the learner knows (through the response to the corresponding query) whether or . In particular, at the end of the learning process, the learner has access to an interval of the form , for some , such that is certain to belong to that interval. Furthermore, from the definition of learner strategies, all elements of that interval would have produced identical responses to the queries, and the learner has no information that distinguishes between such elements.

It is not hard to see that if , then the learner has no way of producing an -accurate estimate of . (For any choice of , there will always be some such that ; furthermore, such an is a possible value of , as it would have produced the exact same sequence of responses.) Since we are interested in learner strategies that satisfy the accuracy constraint in Definition 3.3, we conclude that the length of is at most .

Let us now consider the situation from the point of view of the adversary. The adversary can look at the query sequence , form the intervals of the form , and select those intervals whose length is at most ; we refer to these as special intervals. We have already argued that must lie inside a special interval. Therefore, the adversary has enough information to conclude that lies in the union of the special intervals. We denote that union by , and we have


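The adversary's construction of special intervals is easy to sketch. The `2 * eps` length threshold below is our illustrative stand-in for the exact bound derived above, which depends on the accuracy parameter.

```python
def special_intervals(queries, eps):
    """Adversary's view: sort the observed queries and keep the consecutive
    intervals short enough that an accurate learner could have stopped there.
    The true value must lie in the union of these 'special' intervals."""
    qs = sorted(set(queries))
    return [(a, b) for a, b in zip(qs, qs[1:]) if b - a <= 2 * eps]
```

On a transcript produced by a plain bisection search, only the intervals near the end of the search are special, which is exactly why an unobfuscated strategy leaks the true value's location.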
7.2 Completing the Proof of the Lower Bound

We are now ready to prove the lower bound. We begin with a lemma.

Lemma 7.1

Fix a learner strategy that satisfies the accuracy constraint. For every , there exists such that there are at least of the queries in that belong to .

Lemma 7.1 is essentially the classical result that the query complexity of the bisection strategy is optimal for the unit interval (cf. Waeber et al. (2013)), which proves the first term of the lower bound in Theorem 4.1. We omit the proof of Lemma 7.1, which is fairly standard, but provide an intuitive argument. Fix . Note that the interval consists of disjoint sub-intervals of length each. An accurate learner strategy, therefore, must be able to distinguish in which one of these sub-intervals the true value resides. Distinguishing among possibilities using binary feedback therefore implies that there will be some whose accurate identification requires queries in .
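For reference, the non-private baseline that Lemma 7.1 generalizes is a plain bisection search; the minimal sketch below stops once the surviving interval's midpoint is an accurate estimate.

```python
import math

def bisect_plain(v_true, eps):
    """Non-private baseline: bisect [0, 1] until the surviving interval has
    length at most 2*eps, so its midpoint is an eps-accurate estimate."""
    lo, hi, n = 0.0, 1.0, 0
    while hi - lo > 2 * eps:
        mid = (lo + hi) / 2
        n += 1
        lo, hi = (mid, hi) if v_true >= mid else (lo, mid)
    return (lo + hi) / 2, n
```

The query count matches the information-theoretic bound: with eps = 1/64, the search halves the interval exactly ceil(log2(1/(2*eps))) = 5 times.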

For the rest of this proof, fix and for some and satisfying Lemma 7.1, and use to denote . We now consider the queries in the interval . Among them, we restrict attention to those queries that are endpoints of special intervals. We call these special queries, sort them in ascending order, and denote them by , where is their number.

The outline of the rest of the argument is as follows. For a private learner strategy, the -cover number of is at least . From Eq. (20), it follows that the -cover number of is also at least . Since endpoints of special intervals are within of each other, every must be within of a “neighboring” query (namely, or ), with the possible exception of , which could be the right endpoint of a special interval whose left endpoint, denoted , is in . Using the assumption that , each interval used in the cover can include two, and often three, queries . In what follows, we will make this argument precise, and show that the number, , of special queries is at least .

Let us consider first the case where the above-mentioned exception does not arise; i.e., we assume that is not the right endpoint of a special interval. We decompose the set as the union of its (finitely many) connected components, which we will simply call components; see Figure 5 for an illustration. Each component is an interval whose endpoints are special queries (, for some ). Furthermore, within each such interval, special queries are separated by at most . Suppose that we have components. For the th component, let be the number of special queries it contains, and let be its length. We have . In particular, the th interval can be covered by at most

intervals of length . (We have used here our standing assumption that .) Summing over the different components, we conclude that can be covered by at most intervals of length .

For the exceptional case where is the right endpoint of a special interval , we just apply the same argument, now on the set , and for an augmented collection of special queries, . Having effectively increased the number of points of interest, from to , we obtain an upper bound of on the number of intervals of length that are needed to cover .

Figure 5: An illustration of the proof. Here , and hence is a special interval. There are three connected components: , , and . The dashed lines with arrows represent a cover.

By combining the results of the two cases, and using one more interval of length to cover the set , we conclude that the -cover number of , and therefore of as well, is at most . On the other hand, as long as we are dealing with an -private strategy, this -cover number is at least . Thus , or .

Recall that the argument is being carried out for the case of particular and with the properties specified earlier. For such and , we have at least queries in the set and at least queries in the set . Recall that we have introduced an additional artificial query at , i.e., , which is above and beyond the queries used by the strategy, and this query may be included in the special queries. For this reason, the lower bound on has to be decremented by 1, leading to the lower bound in Theorem 4.1. ∎

8 Conclusions and Future Work

This paper studies an intrinsic privacy-complexity tradeoff faced by a learner who, in a sequential learning problem, tries to conceal her findings from an observant adversary. We use the notion of the information set, the set of possible true values, to capture the information available to the adversary through the learner’s queries, and focus on the coverability of the information set as the main metric for measuring a learner strategy’s level of privacy. Our main result shows that to ensure privacy, i.e., so that the resulting information set requires at least intervals of size to be fully covered, the learner must employ at least queries. We further provide a constructive learner strategy that achieves privacy with queries. Together, the upper and lower bounds on the query complexity demonstrate that increasing the level of privacy, , leads to a linear additive increase in the learner’s query complexity.

There are several interesting extensions and variations of the model that were left unaddressed. One may consider the binary query model in higher dimensions, where . A query in this setting will be a hyperplane in , where the response indicates whether the true value is to the right or to the left of the queried hyperplane. Another interesting direction is to consider adversaries with varying levels of risk aversion. The present model considers a risk-averse adversary who considers all points in as equally “plausible” as long as they have a positive probability of being close to the true value. In contrast, a less risk-averse adversary may entirely ignore points that are less likely to be close to the true value, thus reducing the size of the information set. One such variation, the Bayesian Private Learning model, is explored in Appendix 10.

More broadly, there could be other interesting problem formulations for understanding the privacy implications in a number of sequential decision problems in learning theory, optimization, and decision theory. It is not difficult to see that standard algorithms, originally designed to optimize run-time or query complexity, often provide little or no protection for the learner’s privacy. Can we identify a universal procedure to design sample-efficient private decision strategies? Is there a more general tradeoff between privacy and complexity in sequential decision making? We are optimistic that there are many fruitful inquiries along these directions.


  • Blum et al. (2013) Blum A, Ligett K, Roth A (2013) A learning theory approach to noninteractive database privacy. Journal of the ACM (JACM) 60(2):12.
  • Chaudhuri and Hsu (2011) Chaudhuri K, Hsu D (2011) Sample complexity bounds for differentially private learning. Proceedings of the Conference on Learning Theory, 155–186.
  • Chor et al. (1998) Chor B, Kushilevitz E, Goldreich O, Sudan M (1998) Private information retrieval. Journal of the ACM 45(6):965–981.
  • Duchi et al. (2016) Duchi J, Wainwright M, Jordan M (2016) Minimax optimal procedures for locally private estimation. arXiv preprint arXiv:1604.02390.
  • Dwork et al. (2006) Dwork C, McSherry F, Nissim K, Smith A (2006) Calibrating noise to sensitivity in private data analysis. Theory of Cryptography Conference, 265–284 (Springer).
  • Dwork and Roth (2014) Dwork C, Roth A (2014) The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science 9(3–4):211–407.
  • Feldman and Xiao (2014) Feldman V, Xiao D (2014) Sample complexity bounds on differentially private learning via communication complexity. Conference on Learning Theory, 1000–1019.
  • Gasarch (2004) Gasarch W (2004) A survey on private information retrieval. Bulletin of the EATCS (Citeseer).
  • Horstein (1963) Horstein M (1963) Sequential transmission using noiseless feedback. IEEE Transactions on Information Theory 9(3):136–143.
  • Jain et al. (2012) Jain P, Kothari P, Thakurta A (2012) Differentially private online learning. Conference on Learning Theory, 24–1.
  • Kushilevitz and Ostrovsky (1997) Kushilevitz E, Ostrovsky R (1997) Replication is not needed: Single database, computationally-private information retrieval. Foundations of Computer Science (FOCS), volume 97, 364–373.
  • Raskhodnikova et al. (2008) Raskhodnikova S, Smith A, Lee HK, Nissim K, Kasiviswanathan SP (2008) What can we learn privately. Proceedings of the 54th Annual Symposium on Foundations of Computer Science, 531–540.
  • Smith (2011) Smith A (2011) Privacy-preserving statistical estimation with optimal convergence rates. Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, 813–822 (ACM).
  • Tsitsiklis and Xu (2018) Tsitsiklis J, Xu K (2018) Delay-predictability tradeoffs in reaching a secret goal. Operations Research 66(2):587–596.
  • Waeber et al. (2013) Waeber R, Frazier PI, Henderson SG (2013) Bisection search with noisy responses. SIAM Journal on Control and Optimization 51(3):2261–2279.
  • Wasserman and Zhou (2010) Wasserman L, Zhou S (2010) A statistical framework for differential privacy. Journal of the American Statistical Association 105(489):375–389.
  • Xu (2017) Xu Z (2017) Private sequential search and optimization. Master’s thesis, Massachusetts Institute of Technology.

9 Coverability and the Adversary’s Probability of Correct Estimation

In this section we argue that the quantity can be interpreted as a probability of correct detection for the adversary. Recall the definition of the set of possible sequences when (cf. Eq. (1)), and let . We consider here adversary estimates that are random variables, determined by the observed query sequence , together with an independent randomization seed.

Definition 9.1

Fix , , a learner strategy , and a sequence of queries . We say that an adversary estimate, , is -correct given , if


where the probability is taken with respect to any randomization in the adversary’s estimate, .

In words, an adversary estimate is -correct given if, once the learner has submitted the queries , the adversary’s estimate incurs an error of at most with probability larger than . In a sense, this means that conceals the true value “poorly.” The following proposition, the main result of this section, shows that the coverability of the information set is effectively equivalent to the existence of a -correct estimate.

Proposition 9.2

Fix , , a learner strategy , and a sequence of queries . The following hold.

  1. If the -cover number of the information set , , is at most , then there exists an adversary estimate that is -correct given .

  2. Conversely, if is at least , then, for any , there does not exist an adversary estimate that is -correct given .


Proof. To prove the first statement, fix such that is -coverable. Then, there exist intervals, , , , , each of length , that cover . Consider a randomized adversary estimate that is distributed uniformly at random among the mid-points of the intervals. Then, with probability , the estimate will lie in the same interval as the true value; since all intervals have length at most , such a will be at a distance of at most from the true value, i.e.,

This implies that is -correct given , which proves the first statement. For the second statement, we will make use of the following lemma.
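The adversary estimate used in this argument is simple enough to sketch directly: pick uniformly among the midpoints of a cover. The helper that tabulates its success probability is our own addition for illustration.

```python
import random

def midpoint_estimate(cover, seed=None):
    """Adversary estimate from the proof: uniform over the midpoints of a
    delta-cover of the information set."""
    lo, hi = random.Random(seed).choice(cover)
    return (lo + hi) / 2

def success_probability(cover, v_true, delta):
    """Probability that the uniform-midpoint estimate lands within delta/2
    of the true value: the fraction of cover intervals whose midpoint does."""
    hits = sum(abs((lo + hi) / 2 - v_true) <= delta / 2 for lo, hi in cover)
    return hits / len(cover)
```

With a cover of size L, at least one interval contains the true value, so the success probability is always at least 1/L, which is the content of the first statement.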

Lemma 9.3

Fix and . Let be a subset of such that the -cover number of , , is at least . Then, there exist points in the closure of such that


Proof. We will prove the lemma by constructing the ’s explicitly. Consider the following procedure:

  1. .

  2. For , define recursively as:


The procedure terminates at some step when or . Note that by construction, all ’s belong to the closure of . Furthermore, the intervals


form a cover of . Since by assumption, it follows that we must have . It is easy to verify that the points satisfy the conditions outlined in the lemma, which completes the proof. ∎
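The recursive procedure in this proof can be sketched for the special case, assumed here for concreteness, where the set is a finite union of closed intervals:

```python
def separated_points(intervals, delta):
    """Greedy construction from the proof, specialized to a finite union of
    closed intervals: x_1 is the leftmost point of the set, and each next
    point is the infimum of the set beyond x_k + delta.  The intervals
    [x_k, x_k + delta] then cover the set, so a set with delta-cover number
    at least L yields at least L points."""
    ivs = sorted(intervals)
    pts = [ivs[0][0]]
    while True:
        target = pts[-1] + delta
        nxt = next((max(lo, target) for lo, hi in ivs if hi > target), None)
        if nxt is None:
            return pts
        pts.append(nxt)
```

Consecutive points are at least delta apart; the small perturbation described next in the text makes the separation strict while keeping the points near the set.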

Fix any adversary estimate , some , and such that . Apply Lemma 9.3 with , and let be as defined in the lemma. Because the ’s belong to the closure of , by perturbing them, we can obtain a set of points , such that


Define intervals


Since the distance between any two distinct ’s is greater than , we know that the intervals