1 Introduction
Betting markets of various forms—including the stock exchange (Grossman, 1976), futures markets (Roll, 1984), sports betting markets (Gandar et al., 1999), and markets at the racetrack (Thaler and Ziemba, 1988)—have been shown to successfully collect and aggregate information. Over the last few decades, prediction markets designed specifically for the purpose of elicitation and aggregation have yielded useful predictions in domains as diverse as politics (Berg et al., 2001), disease surveillance (Polgreen et al., 2007), and entertainment (Pennock et al., 2002).
The desire to aggregate and act on the strategically valuable information dispersed among employees has led many companies to experiment with internal prediction markets. An internal corporate market could be used to predict the launch date of a new product or the product’s eventual success. Among the first companies to experiment with internal markets were Hewlett-Packard, which implemented real-money markets, and Google, which ran markets using its own internal currency that could be exchanged for raffle tickets or prizes (Plott and Chen, 2002; Cowgill and Zitzewitz, 2015). More recently, Microsoft, Intel, Ford, GE, Siemens, and others have engaged in similar experiments (Berg and Proebsting, 2009; Charette, 2007; Cowgill and Zitzewitz, 2015).
Proponents of internal corporate markets often argue that the market structure helps in part because, without it, “business practices… create incentives for individuals not to reveal their information” (Plott and Chen, 2002). However, even with a formal market structure in place, an employee might be hesitant to bet against the success of her team for fear of insulting her coworkers or angering management. If an employee has information that is unfavorable to the company, she might choose not to report it, leading to predictions that are overly optimistic for the company and ultimately contributing to an “optimism bias” in the market similar to the bias in Google’s corporate markets discovered by Cowgill and Zitzewitz (2015).
To address this issue, we consider the problem of designing private prediction markets. A private market would allow participants to engage in the market and contribute to the accuracy of the market’s predictions without fear of having their information or beliefs revealed. The goal is to provide participants with a form of “plausible deniability.” Although participants’ trades or wagers should together influence the market’s behavior and predictions, no single participant’s actions should have too much influence over what others can observe. We formalize this idea using the popular notion of differential privacy (Dwork et al., 2006; Dwork and Roth, 2014), which can be used to guarantee that any participant’s actions cannot be inferred from observations.
We begin by designing a private analog of the weighted score wagering mechanisms first introduced by Lambert et al. (2008). A wagering mechanism allows bettors to each specify a belief about the likelihood of a future event and a corresponding monetary wager. These wagers are then collected by a centralized operator and redistributed among bettors in such a way that more accurate bettors receive higher rewards. Lambert et al. (2008) showed that the class of weighted score wagering mechanisms, which are built on the machinery of proper scoring rules (Gneiting and Raftery, 2007), is the unique set of wagering mechanisms to satisfy a set of desired properties such as budget balance, truthfulness, and anonymity. We design a class of wagering mechanisms with randomized payments that maintain the nice properties of weighted score wagering mechanisms in expectation while additionally guaranteeing joint differential privacy in the bettors’ reported beliefs. We discuss the tradeoffs that exist between the privacy of the mechanism (captured by the privacy parameter ε) and the sensitivity of a bettor’s payment to her own report, and show how to set the parameters of our mechanisms to achieve a reasonable level of the plausible deniability desired in practice.
We next address the problem of running private dynamic prediction markets. We consider the setting in which traders buy and sell securities with values linked to future events. For example, a market might offer a security worth $1 if Microsoft Bing’s market share increases in 2016 and $0 otherwise. A risk-neutral trader who believes that the probability of Bing’s market share increasing is p would profit from buying this security at any price less than $p or (short) selling it at any price greater than $p. The market price of the security is thought to reflect traders’ collective beliefs about the likelihood of this event. We focus on cost-function prediction markets (Chen and Pennock, 2007; Abernethy et al., 2013) such as Hanson’s popular logarithmic market scoring rule (Hanson, 2003). In a cost-function market, all trades are placed through an automated market maker, a centralized algorithmic agent that is always willing to buy or sell securities at some current market price that depends on the history of trade via a potential function called the cost function. We ask whether it is possible for a market maker to price trades according to a noisy cost function in a way that maintains traders’ privacy without allowing traders to make unbounded profit off of the noise. Unfortunately, we show that under general assumptions, it is impossible for a market maker to achieve bounded loss and differential privacy without allowing the privacy guarantee to degrade very quickly as the number of trades grows. In particular, the privacy term ε must grow faster than linearly in the number of trades, making such markets impractical in settings in which privacy is valued. We suggest several avenues for future research aimed at circumventing this lower bound.

There is very little prior work on the design of private prediction markets, and to the best of our knowledge, we are the first to consider privacy for one-shot wagering mechanisms. Most closely related to our work is the recent paper of Waggoner et al. (2015), who consider a setting in which each of a set of self-interested agents holds a private data point consisting of an observation and corresponding label. A firm would like to purchase the agents’ data in order to learn a function to accurately predict the labels of new observations. Building on the mathematical foundations of cost-function market makers, Waggoner et al.
propose a mechanism that provides incentives for the agents to reveal their data to the firm in such a way that the firm is able to solve its prediction task while maintaining the agents’ privacy. The authors mention that similar ideas can be applied to produce privacy-preserving prediction markets, but their construction requires knowing the number of trades that will occur in advance in order to appropriately set parameters. The most straightforward way of applying their techniques to prediction markets results in a market maker falling in the class covered by our impossibility result, suggesting that such techniques cannot be used to derive a privacy-preserving market with bounded loss when the number of trades is unknown.
2 Tools from Differential Privacy
We formalize privacy using the now-standard notion of differential privacy, which was introduced by Dwork et al. (2006). The most basic version of differential privacy is used to measure the privacy of a randomized algorithm’s output when given as input a database D with entries from some input domain X. Differential privacy is often studied in settings in which the entries are provided by agents, each of whom would like to keep her entry private. Two databases D and D′ are said to be neighboring if they differ only in a single entry. Differential privacy requires that the distribution of the algorithm’s output given D is “close to” the distribution of its output given any neighboring database D′. For these definitions, it is enough to view an “algorithm” as a randomized function mapping inputs to outputs; we are not concerned with the precise way in which the outputs are computed. In the following definitions, we restrict to real-valued outputs for consistency with the algorithms used in this paper.
Definition 1 (Differential Privacy (Dwork et al., 2006)).
For any ε ≥ 0 and δ ∈ [0, 1], an algorithm M is (ε, δ)-differentially private if for every pair of neighboring databases D and D′ and every subset S of outputs, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ.
If δ = 0, we say that M is ε-differentially private.
As ε approaches 0, it becomes more difficult to distinguish neighboring databases, leading to a higher level of privacy. As ε grows large, the privacy guarantee grows increasingly weak. There is generally no consensus about what constitutes a “good” value of ε, and the strength of the guarantee needed may depend on the application. We discuss this point in more detail later in the context of our results.
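For intuition, the canonical example of an ε-differentially private algorithm is the Laplace mechanism of Dwork et al. (2006), which releases a real-valued statistic after adding noise calibrated to the statistic's sensitivity. A minimal sketch (the database and statistic below are illustrative, not from the paper):

```python
import math
import random

def laplace_mechanism(database, statistic, sensitivity, epsilon):
    """Release statistic(database) plus Laplace(sensitivity / epsilon) noise.

    Changing one database entry moves the statistic by at most `sensitivity`,
    so the released value is epsilon-differentially private.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse CDF (the stdlib has no sampler).
    u = random.random() - 0.5
    noise = -scale * math.copysign(math.log(1 - 2 * abs(u)), u)
    return statistic(database) + noise

# Example: privately release the fraction of entries equal to 1.
# Changing a single one of the n entries moves the fraction by at most 1/n.
db = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
noisy_fraction = laplace_mechanism(db, lambda d: sum(d) / len(d), 1 / len(db), epsilon=0.5)
```

Smaller ε forces larger noise: halving ε doubles the scale of the Laplace noise added to the released statistic.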
We sometimes abuse terminology and say that a random variable (such as a bettor’s profit) is differentially private. This should be taken to mean that the algorithm used to compute the value of the random variable is differentially private.
In the context of mechanism design, differential privacy is often too strong a notion. Suppose, for example, that the algorithm M outputs a vector of prices that each of n agents will pay based on their joint input. While we may want the price that agent i pays to be differentially private in the input of the other agents, it is natural to allow it to be more sensitive to changes in i’s own input. To capture this idea, Kearns et al. (2014) defined the notion of joint differential privacy. Call two neighboring databases D and D′ i-neighbors if they differ only in the i-th entry. Suppose that M now outputs one element for each agent, and let M_{-i}(D) denote the vector of outputs to all agents excluding agent i. Then joint differential privacy is defined as follows.

Definition 2 (Joint Differential Privacy (Kearns et al., 2014)).
For any ε ≥ 0 and δ ∈ [0, 1], an algorithm M is (ε, δ)-joint differentially private if for every agent i, for every pair of i-neighbors D and D′, and for every subset S of output vectors, Pr[M_{-i}(D) ∈ S] ≤ e^ε · Pr[M_{-i}(D′) ∈ S] + δ.
If δ = 0, we say that M is ε-joint differentially private.
Joint differential privacy is still a strong requirement. It protects the privacy of any agent from arbitrary coalitions; even if all other agents shared their private output, they would still not be able to learn too much about the input of agent i.
One useful tool for proving joint differential privacy is the billboard lemma (Hsu et al., 2014). The idea behind the billboard lemma is quite intuitive and simple. Imagine that we display some message publicly so that it is viewable by all agents, as if posted on a billboard, and suppose that the algorithm used to compute this message is differentially private. If each agent i’s output is computable from this public message along with i’s own private input, then M is joint differentially private. A more formal statement and proof are given in Hsu et al. (2014).
The definitions above assume the input database is fixed. Differential privacy has also been considered for streaming algorithms (Chan et al., 2011; Dwork et al., 2010). Let X denote the domain of stream entries. Following Chan et al. (2011), a stream σ is a string of countable length of elements in X, where σ(t) denotes the element at position or time t and σ(1), …, σ(t) is the length-t prefix of the stream σ. Two streams σ and σ′ are said to be neighbors if they differ at exactly one time t.
A streaming algorithm M is said to be unbounded if it accepts streams of indefinite length, that is, if M(σ) is defined for streams σ of any length. In contrast, a streaming algorithm is bounded if it accepts only streams of length at most some fixed bound. Dwork et al. (2010) consider only bounded streaming algorithms. Since we consider unbounded streaming algorithms, we use a more appropriate definition of differential privacy for streams, adapted from Chan et al. (2011). For unbounded streaming algorithms, it can be convenient to let the privacy guarantee degrade as the input stream grows in length. Chan et al. (2011) implicitly allow this in some of their results; see, for example, Corollary 4.5 in their paper. For clarity and precision, we explicitly capture this in our definition. Here and throughout the paper we use R≥0 to denote the nonnegative reals.
Definition 3 (Differential Privacy for Streams).
For any nondecreasing function ε mapping stream lengths to R≥0 and any δ ∈ [0, 1], a streaming algorithm M is (ε, δ)-differentially private if for every pair of neighboring streams σ and σ′, for every time t, and for every subset S of outputs, Pr[M(σ(1), …, σ(t)) ∈ S] ≤ e^{ε(t)} · Pr[M(σ′(1), …, σ′(t)) ∈ S] + δ.
If δ = 0, we say that M is ε-differentially private.
Note that we allow ε to grow with the length of the stream t, but require that δ stay constant. In principle, one could also allow δ to depend on the length of the stream. However, allowing δ to increase would likely be unacceptable in scenarios in which privacy is considered important. In fact, it is more typical to require smaller values of δ for larger databases, since for a database of size n, an algorithm could be considered private with δ on the order of 1/n even if it fully reveals a small number of randomly chosen database entries (Dwork and Roth, 2014). Since we use this definition only when showing an impossibility result, allowing δ to decrease in t would not strengthen our result.
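To see why an ε that grows with t arises naturally, consider a naive continual counter that re-releases a noisy prefix sum after every stream element. Each individual release is ε₀-differentially private, but one stream element affects every subsequent release, so basic composition only guarantees ε(t) = ε₀ · t after t releases. This sketch is illustrative only; the tree-based technique of Chan et al. (2011) achieves a much better dependence on t.

```python
import math
import random

def sample_laplace(scale):
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def naive_private_counter(stream, eps0):
    """Release a noisy prefix sum after each bit of a 0/1 stream.

    A single bit changes each prefix sum by at most 1, so each release is
    eps0-differentially private on its own. Since one bit influences all
    later prefix sums, basic composition gives only an (eps0 * t) guarantee
    for the first t releases: the privacy term grows linearly in t.
    """
    releases, total = [], 0
    for bit in stream:
        total += bit
        releases.append(total + sample_laplace(1.0 / eps0))
    return releases

noisy_counts = naive_private_counter([1, 0, 1, 1, 0, 1], eps0=0.1)
```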
3 Private Wagering Mechanisms
We begin with the problem of designing a one-shot wagering mechanism that incentivizes bettors to truthfully report their beliefs while maintaining their privacy. A wagering mechanism allows a set of bettors to each specify a belief about a future event and a monetary wager. Wagers are collected by a centralized operator and redistributed to bettors in such a way that bettors with more accurate predictions are more highly rewarded. Lambert et al. (2008) showed that the class of weighted score wagering mechanisms (WSWMs) is the unique class of wagering mechanisms to satisfy a set of desired axioms such as budget balance and truthfulness. In this section, we show how to design a randomized wagering mechanism that achieves joint differential privacy while maintaining the nice properties of WSWMs in expectation.
3.1 Standard wagering mechanisms
Wagering mechanisms, introduced by Lambert et al. (2008), are mechanisms designed to allow a centralized operator to elicit the beliefs of a set of bettors without taking on any risk. In this paper we focus on binary wagering mechanisms, in which each bettor i submits a report p_i ∈ [0, 1] specifying how likely she believes it is that a particular event will occur, along with a wager m_i ≥ 0 specifying the maximum amount of money that she is willing to lose. After all reports and wagers have been collected, all parties observe the realized outcome ω ∈ {0, 1} indicating whether or not the event occurred. Each bettor then receives a payment that is a function of the outcome and the reports and wagers of all bettors. This idea is formalized as follows.
Definition 4 (Wagering Mechanism (Lambert et al., 2008)).
A wagering mechanism for a set of n bettors is specified by a vector of (possibly randomized) profit functions Π = (Π_1, …, Π_n), where Π_i(p, m, ω) denotes the total profit to bettor i when the vectors of bettors’ reported probabilities and wagers are p and m and the realized outcome is ω. It is required that Π_i(p, m, ω) ≥ −m_i for all p, m, and ω, which ensures that no bettor loses more than her wager.
There are two minor differences between the definition presented here and that of Lambert et al. (2008). First, for convenience, we use Π_i to denote the total profit to bettor i (i.e., her payment from the mechanism minus her wager), unlike Lambert et al. (2008), who use Π_i to denote the payment only. While this difference is inconsequential, we mention it to avoid confusion. Second, all previous work on wagering mechanisms has restricted attention to deterministic profit functions. Since randomization is necessary to attain privacy, we open up our study to randomized profit functions.
Lambert et al. (2008) defined a set of desirable properties or axioms that deterministic wagering mechanisms should arguably satisfy. Here we adapt those properties to potentially randomized wagering mechanisms, making the smallest modifications possible to maintain the spirit of the axioms. Four of the properties (truthfulness, individual rationality, normality, and monotonicity) were originally defined in terms of expected profit, with the expectation taken over some true or believed distribution over the outcome ω. We allow the expectation to be over the randomness in the profit function as well. Sybilproofness was not initially defined in expectation; we now ask that this property hold in expectation with respect to the randomness in the profit function. We define anonymity in terms of the distribution over all bettors’ profits, and ask that budget balance hold for any realization of the randomness in Π.


Budget balance: The operator makes no profit or loss, i.e., for all p, all m, all ω, and for any realization of the randomness in Π, ∑_{i=1}^{n} Π_i(p, m, ω) = 0.

Anonymity: Profits do not depend on the identity of the bettors. That is, for any permutation σ of the bettors, for all p, m, and ω, the joint distribution over profit vectors (Π_1(p, m, ω), …, Π_n(p, m, ω)) is the same as the joint distribution over (Π_{σ(1)}(p_σ, m_σ, ω), …, Π_{σ(n)}(p_σ, m_σ, ω)), where p_σ and m_σ denote the permuted report and wager vectors.
Truthfulness: Bettors uniquely maximize their expected profit by reporting the truth. That is, for all i, for all p_{-i} and m, and for all reports p_i ≠ p_i*, where p_i* denotes bettor i’s true belief, E[Π_i((p_i*, p_{-i}), m, ω)] > E[Π_i((p_i, p_{-i}), m, ω)], with both expectations taken with respect to ω drawn according to p_i*.

Individual rationality: Bettors prefer participating to not participating. That is, for all i, for all m, and for all beliefs p_i*, there exists some report p_i such that for all p_{-i}, E[Π_i((p_i, p_{-i}), m, ω)] ≥ 0, with the expectation taken with respect to ω drawn according to p_i*.

Normality: If any bettor j changes her report, the change in the expected profit to any other bettor i ≠ j, with respect to a fixed belief about ω, has the opposite sign of the change in the expected profit to j. All expectations are taken with respect to the fixed belief and the randomness in the mechanism. (Lambert et al. (2015) and Chen et al. (2014) used an alternative definition of normality for wagering mechanisms that essentially requires that if, from some agent i’s perspective, the prediction of agent j improves, then i’s expected profit decreases. This form of normality also holds for our mechanism.)

Sybilproofness: Profits remain unchanged as any subset of bettors with the same reports manipulates user accounts by merging accounts, creating fake identities, or transferring wagers. That is, for any subset S of bettors with identical reports, and for any pair of wager vectors m and m′ with m_i = m′_i for all i ∉ S and with ∑_{i∈S} m_i = ∑_{i∈S} m′_i, for all ω, two conditions hold: the expected profit of each bettor outside of S is the same under m and m′, and the total expected profit of the bettors in S is the same under m and m′.

Monotonicity: The magnitude of a bettor’s expected profit (or loss) increases as her wager increases. That is, for all i, for all p and m, and for all m′_i > m_i, either 0 ≤ E[Π_i(p, (m_i, m_{-i}), ω)] ≤ E[Π_i(p, (m′_i, m_{-i}), ω)] or 0 ≥ E[Π_i(p, (m_i, m_{-i}), ω)] ≥ E[Π_i(p, (m′_i, m_{-i}), ω)].
Previously studied wagering mechanisms (Lambert et al., 2008; Chen et al., 2014; Lambert et al., 2015) achieve truthfulness by incorporating strictly proper scoring rules (Savage, 1971) into their profit functions. Scoring rules reward individuals based on the accuracy of their predictions about random variables. For a binary random variable, a scoring rule s maps a prediction or report p ∈ [0, 1] and an outcome ω ∈ {0, 1} to a score s(p, ω). A strictly proper scoring rule incentivizes a risk-neutral agent to report her true belief.
Definition 5 (Strictly proper scoring rule (Savage, 1971)).
A function s: [0, 1] × {0, 1} → R is a strictly proper scoring rule if for all p, q ∈ [0, 1] with p ≠ q, E[s(p, ω)] > E[s(q, ω)], where both expectations are taken with respect to ω drawn according to p.
One common example is the Brier scoring rule (Brier, 1950), defined as s(p, ω) = 1 − (p − ω)². Note that for the Brier scoring rule, s(p, ω) ∈ [0, 1] for all p and ω. Any strictly proper scoring rule with a bounded range can be rescaled to have range [0, 1].
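Strict properness of the (rescaled) Brier rule is easy to verify numerically: under any true belief, the expected score is uniquely maximized by reporting that belief. A small sketch:

```python
def brier_score(report, outcome):
    """Brier scoring rule rescaled to have range [0, 1]."""
    return 1.0 - (report - outcome) ** 2

def expected_score(belief, report):
    """Expected score of `report` when the outcome is 1 with probability `belief`."""
    return belief * brier_score(report, 1) + (1 - belief) * brier_score(report, 0)

# Strict properness: the truthful report beats every alternative on a grid.
belief = 0.7
truthful = expected_score(belief, belief)
assert all(truthful > expected_score(belief, i / 100)
           for i in range(101) if abs(i / 100 - belief) > 1e-9)
```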
The WSWMs incorporate proper scoring rules, assigning each bettor a profit based on how her score compares to the wager-weighted average score of all bettors, as in Algorithm 1. Lambert et al. (2008) showed that the set of WSWMs satisfies the seven axioms above and is the unique set of deterministic mechanisms that simultaneously satisfy budget balance, anonymity, truthfulness, normality, and sybilproofness.
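Concretely, a WSWM (Algorithm 1) pays each bettor her wager times the difference between her score and the wager-weighted average score of all bettors; budget balance and bounded loss follow immediately. A minimal sketch:

```python
def brier_score(report, outcome):
    """Brier scoring rule with range [0, 1]."""
    return 1.0 - (report - outcome) ** 2

def wswm_profits(reports, wagers, outcome, score=brier_score):
    """Weighted score wagering mechanism (Lambert et al., 2008).

    Bettor i's total profit is m_i * (s(p_i, outcome) - weighted average
    score). Profits sum to zero for every outcome (budget balance), and
    since scores lie in [0, 1], no bettor loses more than her wager.
    """
    total = sum(wagers)
    avg = sum(m * score(p, outcome) for p, m in zip(reports, wagers)) / total
    return [m * (score(p, outcome) - avg) for p, m in zip(reports, wagers)]

profits = wswm_profits(reports=[0.9, 0.4, 0.6], wagers=[10.0, 5.0, 2.0], outcome=1)
assert abs(sum(profits)) < 1e-9                                   # budget balance
assert all(pi >= -m for pi, m in zip(profits, [10.0, 5.0, 2.0]))  # bounded loss
```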
3.2 Adding privacy
We would like our wagering mechanism to protect the privacy of each bettor i, ensuring that the other bettors cannot learn too much about i’s report from their own realized profits, even if they collude. Note that paying each agent according to an independent scoring rule would easily achieve privacy, but would fail budget balance and sybilproofness. We formalize our desire to add privacy to the other good properties of weighted score wagering mechanisms using joint differential privacy.


(ε, δ)-joint differential privacy: The vector of profit functions Π satisfies (ε, δ)-joint differential privacy in the bettors’ reports. That is, for all i, for all pairs of reports p_i and p′_i, for all p_{-i}, m, and ω, and for all subsets S of profit vectors, Pr[Π_{-i}((p_i, p_{-i}), m, ω) ∈ S] ≤ e^ε · Pr[Π_{-i}((p′_i, p_{-i}), m, ω) ∈ S] + δ.
This definition requires only that the report p_i of each bettor i be kept private, not the wager m_i. Private wagers would impose more severe limitations on the mechanism, even if wagers are restricted to lie in a bounded range; see Section 3.3.2 for a discussion. Note that if bettor i’s report p_i is correlated with her wager m_i, as might be the case for a Bayesian agent (Lambert et al., 2015), then just knowing m_i could reveal information about p_i. In this case, differential privacy would guarantee that other bettors can infer no more about p_i after observing their profits than they could from observing m_i alone. If bettors have immutable beliefs as assumed by Lambert et al. (2008), then reports and wagers are not correlated and m_i reveals nothing about p_i.
Unfortunately, it is not possible to jointly obtain properties (a)–(h) with any reasonable mechanism. This is due to an inherent tension between budget balance and privacy, which is easy to see. Budget balance requires that bettor i’s profit be the negation of the sum of the profits of the other bettors, i.e., Π_i(p, m, ω) = −∑_{j≠i} Π_j(p, m, ω). Therefore, under budget balance, the other bettors could always collude to learn bettor i’s profit exactly. In order to obtain privacy, it would therefore be necessary for bettor i’s profit to be differentially private in her own report, resulting in profits that are almost entirely noise. This is formalized in the following theorem. We omit a formal proof since it follows immediately from the argument described here.
Theorem 1.
Let Π be the vector of profit functions for any wagering mechanism that satisfies both budget balance and (ε, δ)-joint differential privacy for some ε ≥ 0 and δ ∈ [0, 1]. Then for all i, Π_i is (ε, δ)-differentially private in bettor i’s report p_i.
Since it is unsatisfying to consider mechanisms in which a bettor’s profit is not sensitive to her own report, we require only that budget balance hold in expectation over the randomness of the profit function. An operator who runs many markets may be content with such a guarantee as it implies that he will not lose money on average.


Budget balance in expectation: The operator neither makes a profit nor a loss in expectation, i.e., for all p, all m, and all ω, E[∑_{i=1}^{n} Π_i(p, m, ω)] = 0, where the expectation is over the randomness in Π.
3.3 Private weighted score wagering mechanisms
Motivated by the argument above, we seek a wagering mechanism that simultaneously satisfies budget balance in expectation along with properties (b)–(h). Keeping Theorem 1 in mind, we would also like the wagering mechanism to be defined in such a way that each bettor i’s profit is sensitive to her own report p_i. Sensitivity is difficult to define precisely, but loosely speaking, we would like it to be the case that 1) the magnitude of bettor i’s expected profit varies sufficiently with the choice of p_i, and 2) there is not too much noise or variance in a bettor’s profit, i.e., Π_i is generally not too far from its expectation.

A natural first attempt would be to employ the standard Laplace mechanism (Dwork and Roth, 2014) on top of a WSWM, adding independent Laplace noise to each bettor’s profit. The resulting profit vector would satisfy joint differential privacy, but since Laplace random variables are unbounded, a bettor could lose more than her wager. Adding other forms of noise does not help; to obtain differential privacy, the noise must be unbounded (Dwork et al., 2006). Truncating a bettor’s profit to lie within a bounded range after noise is added could achieve privacy, but would result in a loss of truthfulness, as the bettor’s expected profit would no longer be a proper scoring rule.
Instead, we take a different approach. Like the WSWM, our private wagering mechanism, formally defined in Algorithm 2, rewards each bettor based on how good her score is compared with an aggregate measure of how good bettors’ scores are on the whole. However, this aggregate measure is now calculated in a noisy manner: instead of comparing a bettor’s score to a weighted average of all bettors’ scores, the bettor’s score is compared to a weighted average of random variables that are equal to bettors’ scores in expectation. As a result, each bettor’s profit is, in expectation, equal to the profit she would receive using a WSWM, scaled down by a parameter α to ensure that no bettor ever loses more than her wager, as stated in the following lemma. The proof, which simply shows that each random variable is equal in expectation to the corresponding bettor’s score, is in the appendix.
Lemma 1.
For any number of bettors n with reports p and wagers m, for any setting of the privacy parameter ε, and for any outcome ω, the expected value of bettor i’s profit under the private wagering mechanism with scoring rule s is equal to bettor i’s profit under a WSWM with the scaled scoring rule αs.
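To make this concrete, the following sketch gives one plausible instantiation of the private wagering mechanism; the specific randomized-response construction of the bits x_j is our own illustration and may differ from the paper's Algorithm 2, but any noise distribution with the same expectations yields the behavior described in Lemma 1.

```python
import math
import random

def brier_score(report, outcome):
    """Brier scoring rule with range [0, 1]."""
    return 1.0 - (report - outcome) ** 2

def private_wagering_profits(reports, wagers, outcome, epsilon, score=brier_score):
    """Sketch of a private wagering mechanism in the spirit of Algorithm 2.

    Each bettor j's score s_j is replaced by a randomized-response bit x_j
    with Pr[x_j = 1] = 1/2 + alpha * (s_j - 1/2), where
    alpha = (e^eps - 1) / (e^eps + 1). Since scores lie in [0, 1], the ratio
    of this probability under any two reports is at most e^eps, so x_j is
    epsilon-differentially private in p_j. Profits compare each bettor's
    scaled score to the wager-weighted average of the bits: in expectation
    they equal the WSWM profits scaled by alpha, they sum to zero in
    expectation, and no bettor ever loses more than her wager.
    """
    alpha = (math.exp(epsilon) - 1) / (math.exp(epsilon) + 1)
    bits = [1 if random.random() < 0.5 + alpha * (score(p, outcome) - 0.5) else 0
            for p in reports]
    total = sum(wagers)
    noisy_avg = sum(m * x for m, x in zip(wagers, bits)) / total
    return [m * (alpha * score(p, outcome) - (noisy_avg - 0.5 + alpha / 2))
            for p, m in zip(reports, wagers)]

random.seed(0)
profits = private_wagering_profits([0.9, 0.4, 0.6], [10.0, 5.0, 2.0], outcome=1, epsilon=1.0)
assert all(-m <= pi <= m for pi, m in zip(profits, [10.0, 5.0, 2.0]))
```

A quick calculation confirms Lemma 1 for this instantiation: E[noisy_avg] = 1/2 + α(S̄ − 1/2), where S̄ is the wager-weighted average score, so each bettor's expected profit is α·m_i·(s_i − S̄), exactly α times her WSWM profit.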
Using this lemma, we show that this mechanism does indeed satisfy joint differential privacy as well as the other desired properties.
Theorem 2.
The private wagering mechanism satisfies (a) budget balance in expectation, (b) anonymity, (c) truthfulness, (d) individual rationality, (e) normality, (f) sybilproofness, (g) monotonicity, and (h) joint differential privacy.
Proof.
Any WSWM satisfies budget balance in expectation (by satisfying budget balance), truthfulness, individual rationality, normality, sybilproofness, and monotonicity (Lambert et al., 2008). Since these properties are defined in terms of expected profit, Lemma 1 implies that the private wagering mechanism satisfies them too.
Anonymity is easily observed since profits are defined symmetrically for all bettors.
Finally, we show joint differential privacy. We first prove that each random variable x_i is ε-differentially private in bettor i’s report p_i, which implies that the noisy aggregate of scores is private in all bettors’ reports. We then apply the billboard lemma (see Section 2) to show that the profit vector satisfies joint differential privacy.
To show that x_i is ε-differentially private in p_i, for each of the two values that x_i can take on, we must ensure that the ratio of the probability it takes this value under any report p_i and the probability it takes this value under any alternative report p′_i is bounded by e^ε. Fix any pair of reports p_i and p′_i. Since s has range in [0, 1], the construction of x_i guarantees that this ratio is at most e^ε.
Thus x_i is ε-differentially private in p_i. By Theorem 4 of McSherry (2009), the vector (x_1, …, x_n) (and thus any function of this vector) is ε-differentially private in the vector p, since each x_i does not depend on the reports of anyone but bettor i. Since we view the wagers as constants, the wager-weighted average of the x_i is also ε-differentially private in the reports p. Call this quantity A.
To apply the billboard lemma, we can imagine the operator publicly announcing the quantity A to the bettors. Given access to A, each bettor i is able to calculate her own profit using only her own input (p_i, m_i) and the values A and ω. The billboard lemma implies that the vector of profits is joint differentially private. ∎
3.3.1 Sensitivity of the mechanism
Having established that our mechanism satisfies budget balance in expectation along with properties (b)–(h), we next address the sensitivity of the mechanism in terms of the two facets described above: the range of achievable expected profits and the amount of noise in the profit function. This discussion sheds light on how to set ε in practice.
The first facet is quantified by Lemma 1. As α grows, the magnitude of bettors’ expected profits grows, and the range of expected profits grows as well. When α approaches 1, the range of expected profits achievable through the private wagering mechanism approaches that of a standard WSWM with the same proper scoring rule.
Unfortunately, larger values of α imply larger values of the privacy parameter ε. This gives us a clear tradeoff between privacy and the magnitude of expected payments. Luckily, in practice, it is probably unnecessary for ε to be very small for most markets. A relatively large value of ε can still give bettors plausible deniability: even then, a bettor’s report can change the probability of another bettor receiving a particular profit by only a modest constant factor, a tradeoff that may be considered acceptable in practice.
The second facet is quantified in the following theorem, which states that as more money is wagered by more bettors, each bettor’s realized profit approaches its expectation. The bound depends on how the total wager is spread across bettors. If all wagers are equal, bettors’ profits approach their expectations as the number of bettors n grows. This is not the case at the other extreme, when there is a small number of bettors with wagers much larger than the rest. The proof, which uses Hoeffding’s inequality to bound the difference between the noisy aggregate and its expectation, is in the appendix.
Theorem 3.
For any ε, any β ∈ (0, 1), any number of bettors n, and any vectors of reports p and wagers m, with probability at least 1 − β, for all i, the profit Π_i output by the private wagering mechanism deviates from its expected value by an amount that shrinks as the total wager grows relative to the largest individual wager.
The following corollary shows that if all wagers are bounded in some range [m_min, m_max], profits approach their expectations as the number of bettors grows.
Corollary 1.
Fix any ε and any wager bounds m_min and m_max with 0 < m_min ≤ m_max. For any β ∈ (0, 1), any number of bettors n, and any vectors of reports p and wagers m with m_i ∈ [m_min, m_max] for all i, with probability at least 1 − β, for all i, the profit Π_i output by the private wagering mechanism converges to its expected value as n grows.
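The concentration driving Theorem 3 and Corollary 1 can be seen in a small simulation: the noisy aggregate is a wager-weighted average of independent bounded random variables, so when no single wager dominates, Hoeffding's inequality makes it (and hence each profit) concentrate around its mean. The wagers and means below are illustrative:

```python
import random

def noisy_aggregate(wagers, means):
    """Wager-weighted average of independent Bernoulli random variables."""
    total = sum(wagers)
    return sum(m * (1 if random.random() < mu else 0)
               for m, mu in zip(wagers, means)) / total

def max_deviation(n, trials=2000):
    """Largest observed deviation from the true weighted mean over many draws."""
    random.seed(0)
    wagers = [1.0] * n          # equal wagers: deviations shrink like 1/sqrt(n)
    means = [0.6] * n
    return max(abs(noisy_aggregate(wagers, means) - 0.6) for _ in range(trials))

# With equal wagers, concentration improves as the number of bettors grows.
assert max_deviation(1000) < max_deviation(10)
```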
3.3.2 Keeping wagers private
Property (h) requires that bettors’ reports be kept private but does not guarantee private wagers. The same tricks used in our private wagering mechanism could be applied to obtain a privacy guarantee for both reports and wagers if wagers are restricted to lie in a bounded range [m_min, m_max], but this would come at a great loss in sensitivity. Under the most straightforward extension, the scaling parameter α would have to be set dramatically smaller, greatly reducing the scale of achievable profits and thus making the mechanism impractical in most settings.
Loosely speaking, the extra loss in sensitivity stems from two sources. First, a bettor’s effect on the profit of any other bettor must be roughly the same whether she wagers the maximum amount m_max or the minimum m_min. The poor dependence on the number of bettors n is slightly more subtle. We created a private-belief mechanism by replacing each bettor j’s score in the WSWM with a random variable x_j that is differentially private in p_j. To obtain private wagers, we would instead need to replace each full wager-weighted score term with a random variable. Each such term depends on the wagers of all bettors, not just bettor j’s. Since each bettor’s profit would depend on n such random variables, achieving joint differential privacy would require that each random variable be differentially private in each bettor’s wager.
We believe that sacrifices in sensitivity are unavoidable and not merely an artifact of our techniques and analysis, but leave a formal lower bound to future work.
4 Limits of Privacy with Cost-Function Market Makers
In practice, prediction markets are often run using dynamic mechanisms that update in real time as new information surfaces. We now turn to the problem of adding privacy guarantees to continuous-trade markets. We focus our attention on cost-function prediction markets, in which all trades are placed through an automated market maker (Hanson, 2003; Chen and Pennock, 2007; Abernethy et al., 2013). The market maker can be viewed as a streaming algorithm that takes as input a stream of trades and outputs a corresponding stream of market states from which trade prices can be computed. Therefore, the privacy guarantees we seek are in the form of Definition 3. We ask whether it is possible for the automated market maker to price trades according to a cost function while maintaining differential privacy without opening up the opportunity for traders to earn unbounded profits, leading the market maker to experience unbounded loss. We show a mostly negative result: to achieve bounded loss, the privacy term ε must grow faster than linearly in the number of rounds of trade.
For simplicity, we state our results for markets over a single binary security, though we believe they extend to costfunction markets over arbitrary security spaces.
4.1 Standard cost-function market makers
We consider a setting in which there is a single binary security that traders may buy or sell. After the outcome ω has been revealed, a share of the security is worth $1 if ω = 1 and $0 otherwise. A cost-function prediction market for this security is fully specified by a convex function C called the cost function. Let s_t denote the number of shares that are bought or sold by a trader in the t-th transaction; positive values of s_t represent purchases while negative values represent (short) sales. The market state after the first t trades is summarized by the single value x_t = ∑_{t′=1}^{t} s_{t′}, and the t-th trader is charged C(x_t) − C(x_{t−1}). Thus the cost function can be viewed as a potential function, with C(x_t) − C(x_0) capturing the amount of money that the market maker has collected from the first t trades. The instantaneous price at round t, denoted p_t, is the price per share of purchasing an infinitesimally small quantity of shares: p_t = C′(x_{t−1}). This framework is summarized in Algorithm 3.
The most common cost-function market maker is Hanson's log market scoring rule (LMSR) (Hanson, 2003). The cost function for the single-security version of LMSR can be written as $C(x) = b \log\left(e^{x/b} + 1\right)$, where $b > 0$ is a parameter controlling the rate at which prices change as trades are made, and the initial market state $q_0$ controls the initial market price $C'(q_0)$. The instantaneous price at any state $x$ is $C'(x) = e^{x/b} / \left(e^{x/b} + 1\right)$.
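As a concrete illustration, here is a minimal sketch of the single-security LMSR in Python (the function names and the choice of $b$ are ours, not from the paper):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(e^(q/b) + 1)."""
    return b * math.log(math.exp(q / b) + 1.0)

def lmsr_price(q, b=100.0):
    """Instantaneous price C'(q) = e^(q/b) / (e^(q/b) + 1)."""
    e = math.exp(q / b)
    return e / (e + 1.0)

def trade_cost(q_prev, x, b=100.0):
    """Amount charged to a trader who buys x shares at market state q_prev."""
    return lmsr_cost(q_prev + x, b) - lmsr_cost(q_prev, b)
```

At the initial state $q_0 = 0$ the price is $1/2$, purchases push the price toward 1, and the per-share cost of a very small trade approximates the instantaneous price.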
Under mild conditions on $C$, all cost-function market makers satisfy several desirable properties, including natural notions of no-arbitrage and information incorporation (Abernethy et al., 2013). We refer to any cost function satisfying these mild conditions as a standard cost function. Although the market maker subsidizes trade, crucially its worst-case loss is bounded. This ensures that the market maker does not go bankrupt, even if traders are perfectly informed. Formally, there exists a finite bound $B$ such that for any $T$, any sequence of trades $x_1, \ldots, x_T$, and any outcome $\omega \in \{0, 1\}$,
$$\mathbb{1}(\omega = 1)\, q_T - \sum_{t=1}^{T} \left( C(q_t) - C(q_{t-1}) \right) \leq B,$$
where $\mathbb{1}(\cdot)$ is the indicator function that is 1 if its argument is true and 0 otherwise. The first term on the left-hand side is the amount that the market maker must pay to (or collect from) traders when $\omega$ is revealed. The second is the amount collected from traders. For the LMSR with initial price $1/2$ ($q_0 = 0$), the worst-case loss is $b \log 2$.
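The bound can be checked numerically. The following sketch (parameter and distribution choices are ours) simulates random trade sequences against the LMSR and verifies that the realized loss never exceeds $b \log 2$:

```python
import math
import random

def lmsr_cost(q, b):
    return b * math.log(math.exp(q / b) + 1.0)

def maker_loss(trades, outcome, b=100.0):
    """Payout to traders minus money collected, for outcome in {0, 1}.
    The collected per-trade costs telescope to C(q_T) - C(q_0)."""
    q_final = sum(trades)
    collected = lmsr_cost(q_final, b) - lmsr_cost(0.0, b)
    payout = q_final if outcome == 1 else 0.0
    return payout - collected

random.seed(0)
b = 100.0
worst = max(
    maker_loss([random.uniform(-5.0, 5.0) for _ in range(200)], outcome, b)
    for _ in range(500)
    for outcome in (0, 1)
)
assert worst <= b * math.log(2) + 1e-9  # worst-case loss bound for q_0 = 0
```

The telescoping of trade costs is what makes the loss depend only on the final state $q_T$, which is why the simulation can skip pricing each trade individually.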
4.2 The noisy cost-function market maker
Clearly the standard cost-function market maker does not ensure differential privacy. The amount that a trader pays is a function of the market state, the sum of all past trades. Thus anyone observing the stream of market prices could infer the exact sequence of past trades. To guarantee privacy while still approximating cost-function pricing, the market maker would need to modify the sequence of published prices (or equivalently, market states) to ensure that such information leakage does not occur.
In this section, we define and analyze a noisy cost-function market maker. The noisy market maker prices trades according to a cost function, but uses a noisy version of the market state in order to mask the effect of past trades. In particular, the market maker maintains a noisy market state $\tilde{q}_t = q_t + \eta_t$, where $q_t$ is the true sum of trades and $\eta_t$ is a (random) noise term. The cost of trade $x_t$ is $C(\tilde{q}_{t-1} + x_t) - C(\tilde{q}_{t-1})$, with the instantaneous price now $C'(\tilde{q}_t)$. Since the noise term must be large enough to mask the trade $x_t$, we limit trades to be some maximum size $k$. A trader who would like to buy or sell more than $k$ shares must do this over multiple rounds. The full modified framework is shown in Algorithm 4. For now we allow the noise distribution to depend arbitrarily on the history of trade. This framework is general; the natural adaptation of the privacy-preserving data market of Waggoner et al. (2015) to the single-security prediction market setting would result in a market maker of this form, as would a cost-function market that used existing private streaming techniques for bit counting (Chan et al., 2011; Dwork et al., 2010) to keep noisy, private counts of trades.
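A sketch of this market maker's trading loop, instantiated with the LMSR and with independent Laplace noise as one illustrative choice of noise distribution (the framework permits noise that depends on history; all names and parameter values here are our own):

```python
import math
import random

class NoisyLMSRMaker:
    """Noisy cost-function market maker (sketch of Algorithm 4): trades
    are priced at the noisy state, and each trade is capped at size k."""

    def __init__(self, b=100.0, k=5.0, noise_scale=1.0, seed=0):
        self.b = b
        self.k = k
        self.noise_scale = noise_scale
        self.rng = random.Random(seed)
        self.q = 0.0        # true sum of trades
        self.noisy_q = 0.0  # published noisy market state

    def cost(self, q):
        return self.b * math.log(math.exp(q / self.b) + 1.0)

    def _laplace(self):
        # difference of two i.i.d. exponentials is Laplace-distributed
        r = self.rng.expovariate(1.0 / self.noise_scale)
        return r - self.rng.expovariate(1.0 / self.noise_scale)

    def trade(self, x):
        x = max(-self.k, min(self.k, x))  # enforce the trade-size bound
        charge = self.cost(self.noisy_q + x) - self.cost(self.noisy_q)
        self.q += x
        self.noisy_q = self.q + self._laplace()  # fresh noise each round
        return charge

mm = NoisyLMSRMaker(seed=1)
charge = mm.trade(10.0)  # request exceeds k, so only 5 shares execute
assert mm.q == 5.0
assert charge > 0.0
```

Note that the trader is charged using the noisy state before the new noise draw, exactly as in the framework above: the noise serves only to perturb the state the next trader sees.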
In this framework, we can interpret the market maker as implementing a noise trader in a standard cost-function market. Under this interpretation, after a (real) trader purchases $x_t$ shares at state $\tilde{q}_{t-1}$, the market state momentarily moves to $\tilde{q}_{t-1} + x_t$. The market maker, acting as a noise trader, then effectively “purchases” $\tilde{q}_t - (\tilde{q}_{t-1} + x_t)$ shares at this state for a cost of $C(\tilde{q}_t) - C(\tilde{q}_{t-1} + x_t)$, bringing the market state to $\tilde{q}_t$. The market maker makes this trade regardless of the impact on its own loss. These noise trades obscure the trades made by real traders, opening up the possibility of privacy.
However, these noisy trades also open up the opportunity for traders to profit off of the noise. For the market to be practical, it is therefore important to ensure that the property of bounded worst-case loss is maintained. For the noisy cost-function market maker, for any sequence of trades $x_1, \ldots, x_T$, any outcome $\omega$, and any fixed noise values $\eta_1, \ldots, \eta_T$, the loss of the market maker is
$$\mathbb{1}(\omega = 1)\, q_T - \sum_{t=1}^{T} \left( C(\tilde{q}_{t-1} + x_t) - C(\tilde{q}_{t-1}) \right).$$
As before, the first term is the (possibly negative) amount that the market maker pays to traders when $\omega$ is revealed, and the second is the amount collected from traders (which no longer telescopes). Unfortunately, we cannot expect this loss to be bounded for arbitrary noise values; the market maker could always get extremely unlucky and draw noise values that traders can exploit. Instead, we consider a relaxed version of bounded loss which holds in expectation with respect to the noise values $\eta_1, \eta_2, \ldots$.
In addition to this relaxation, one more modification is necessary. Note that traders can (and should) base their actions on the current market price. Therefore, if our loss guarantee only holds in expectation with respect to the noise values $\eta_t$, then it is no longer sufficient to give a guarantee that is worst case over fixed sequences of trades. Instead, we allow the sequence of trades to depend on the realized noise, introducing a game between traders and the market maker. To formalize this, we imagine allowing an adversary to control the traders. We define the notion of a strategy for this adversary.
Definition 6 (Trader strategy).
A trader strategy $s$ is a set of (possibly randomized) functions $s = \{s_1, s_2, \ldots\}$, with each $s_t$ mapping a history of trades and noisy market states $(x_1, \ldots, x_{t-1}; \tilde{q}_0, \ldots, \tilde{q}_{t-1})$ to a new trade $x_t$ for the trader at round $t$.
Let $\mathcal{S}$ be the set of all strategies. With this definition in place, we can formally define what it means for a noisy cost-function market maker to have bounded loss.
Definition 7 (Bounded loss for a noisy costfunction market maker).
A noisy cost-function market maker with cost function $C$ and distribution $\mathcal{D}$ over noise values is said to have bounded loss if there exists a finite $B$ such that for all strategies $s \in \mathcal{S}$, all times $T$, and all $\omega \in \{0, 1\}$,
$$\mathbb{E}\left[ \mathbb{1}(\omega = 1)\, q_T - \sum_{t=1}^{T} \left( C(\tilde{q}_{t-1} + x_t) - C(\tilde{q}_{t-1}) \right) \right] \leq B,$$
where the expectation is taken over the market's noise values $\eta_1, \eta_2, \ldots$ distributed according to $\mathcal{D}$ and the (possibly randomized) actions of a trader employing strategy $s$. In this case, the loss of the market maker is said to be bounded by $B$. The noisy cost-function market maker has unbounded loss if no such $B$ exists.
If the noise values were deterministic, this definition of worst-case loss would correspond to the usual one, but because traders react intelligently to the specific realization of noise, we must define worst-case loss in game-theoretic terms.
4.3 Limitations on privacy
By effectively acting as a noise trader, a noisy cost-function market maker can partially obscure trades. Unfortunately, the amount of privacy achievable through this technique is limited. In this section, we show that in order to simultaneously maintain bounded loss and achieve $\epsilon_t$-differential privacy, the quantity $\sum_{t=1}^{T} \epsilon_t$ must grow faster than linearly as a function of $T$, the number of rounds of trade.
Before stating our result, we explain how to frame the market maker setup in the language of differential privacy. Recall from Section 2 that a differentially private unbounded streaming algorithm takes as input a stream of arbitrary length and outputs a stream of values that depend on the input in a differentially private way. In the market setting, the input stream corresponds to the sequence of trades $x_1, x_2, \ldots$. We think of the noisy cost-function market maker (Algorithm 4) as an algorithm that, on any stream prefix $x_1, \ldots, x_t$, outputs the noisy market states $\tilde{q}_1, \ldots, \tilde{q}_t$. (Announcing $\tilde{q}_t$ allows traders to infer the instantaneous price $C'(\tilde{q}_t)$; the two are equivalent in terms of information revealed as long as $C$ is strictly convex in the region around $\tilde{q}_t$.) The goal is to find a market maker such that this algorithm is differentially private.
One might ask whether it is necessary to allow the privacy guarantee to diminish as the number of trades grows. When considering the problem of calculating noisy sums of bit streams, for example, Chan et al. (2011) are able to maintain a fixed privacy guarantee as their stream grows in length by instead allowing the accuracy of their counts to diminish. This approach does not work for us: the analogous relaxation would let the market maker's loss grow with the number of trades, which is exactly what bounded loss rules out.
Our result relies on one mild assumption on the distribution over noise. In particular, we require that the noise $\eta_t$ be chosen independently of the current trade $x_t$. (The proof extends easily to the more general case in which the calculation of $\eta_t$ is differentially private in $x_t$; we make the slightly stronger assumption to simplify presentation.) We refer to this as the trade-independent noise assumption. The distribution of $\eta_t$ may still depend on the round $t$, the history of trade $x_1, \ldots, x_{t-1}$, and the realizations of past noise terms $\eta_1, \ldots, \eta_{t-1}$. This assumption is needed in the proof only to rule out unrealistic market makers that are specifically designed to monitor and infer the behavior of the specific adversarial trader that we consider, and the result likely holds even without it. Nor is it a terribly restrictive assumption, as most standard ways of generating noise can be written in this form. For example, Chan et al. (2011) and Dwork et al. (2010) show how to maintain a noisy count of the number of ones in a stream of bits. Both achieve this by computing the exact count and adding noise that is correlated across time but independent of the data. If similar ideas were used to choose the noise term in our setting, the trade-independent noise assumption would be satisfied. The noise employed in the mechanism of Waggoner et al. (2015) also satisfies this assumption. Our impossibility result then implies that their market would have unbounded loss if a limit on the number of rounds of trade were not imposed; to obtain privacy guarantees, Waggoner et al. must assume that the number of trades is known in advance and can therefore be used to set relevant market parameters.
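To illustrate what trade-independent noise can look like, the following sketch implements a simplified dyadic-interval ("binary tree") counting mechanism in the spirit of Chan et al. (2011): each prefix sum is released as a sum of noisy dyadic-interval counts, and the Laplace noise is drawn independently of the stream values. This is our own simplification for illustration, not the authors' exact construction.

```python
import math
import random

def noisy_prefix_sums(stream, eps=0.1, seed=0):
    """Release noisy prefix sums of `stream`. Each element is covered by
    O(log T) dyadic intervals; one Laplace noise term is drawn per
    interval, independently of the data itself."""
    rng = random.Random(seed)
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T + 1)))
    scale = (levels + 1) / eps  # each element affects levels + 1 intervals

    def laplace():
        return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

    # noisy count for each dyadic interval [j * 2^l, (j + 1) * 2^l)
    noisy = {}
    for l in range(levels + 1):
        size = 2 ** l
        for j in range(math.ceil(T / size)):
            noisy[(l, j)] = sum(stream[j * size:(j + 1) * size]) + laplace()

    out = []
    for t in range(1, T + 1):
        # greedily decompose [0, t) into disjoint dyadic intervals
        total, pos = 0.0, 0
        for l in range(levels, -1, -1):
            size = 2 ** l
            if pos + size <= t:
                total += noisy[(l, pos // size)]
                pos += size
        out.append(total)
    return out
```

With a very weak privacy requirement (large `eps`) the noise is negligible and the output approaches the exact prefix sums; each released prefix sum touches only $O(\log T)$ noise terms, which is the source of the mechanism's accuracy.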
We now state the main result.
Theorem 4.
Consider any noisy cost-function market maker using a standard convex cost function $C$ that is nonlinear in some region, a noise distribution satisfying the trade-independent noise assumption, and a bound $k$ on trade size. If the market maker satisfies bounded loss, then it cannot satisfy $\epsilon_t$-differential privacy for any function $\epsilon_t$ such that $\sum_{t=1}^{T} \epsilon_t \leq cT$ for all $T$, with any constant $c$.
This theorem rules out bounded loss with $\epsilon_t = c$ for any constant $c$. It is open whether it is possible to achieve $\epsilon_t = O(\log t)$ (and therefore $\sum_{t=1}^{T} \epsilon_t = O(T \log T)$) for some market maker, but such a guarantee would likely be insufficient in most practical settings.
Note that with unbounded trade size (i.e., $k = \infty$), our result would be trivial. A trader could change the market state (and hence the price) by an arbitrary amount in a single trade. To provide differential privacy, the noisy market state would then have to be independent of past trades. The noisy market price would not be reflective of trader beliefs, and the noise added could be easily exploited by traders to improve their profits. By imposing a bound on trade size, we only strengthen our negative result.
While the proof of Theorem 4 is quite technical, the intuition is simple. We consider the behavior of the noisy cost-function market maker when there is a single trader trading in the market repeatedly using a simple trading strategy. This trader chooses a target state $q^*$. Whenever the noisy market state $\tilde{q}_{t-1}$ is less than $q^*$ (and so $C'(\tilde{q}_{t-1}) < C'(q^*)$), the trader purchases shares, pushing the market state as close to $q^*$ as possible. When the noisy state is greater than $q^*$ (so $C'(\tilde{q}_{t-1}) > C'(q^*)$), the trader sells shares, again pushing the state as close as possible to $q^*$. Each trade makes a profit for the trader in expectation if it were the case that $\omega = 1$ with probability $C'(q^*)$. Since there is only a single trader, this means that each such trade would result in an expected loss with respect to this distribution for the market maker. Unbounded expected loss with respect to this distribution implies unbounded loss in the worst case, either when $\omega = 0$ or when $\omega = 1$. The crux of the proof involves showing that in order to achieve bounded loss against this trader, the amount of added noise cannot be too big as $t$ grows, resulting in a sacrifice of privacy.
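This intuition can be checked numerically. In the sketch below (Gaussian noise and all parameter values are our own choices), a single trader repeatedly pushes the noisy LMSR state back toward the target $q^* = 0$, and the market maker's expected loss, computed with respect to $\Pr[\omega = 1] = C'(q^*)$, grows with the number of rounds:

```python
import math
import random

def lmsr_cost(q, b):
    return b * math.log(math.exp(q / b) + 1.0)

def lmsr_price(q, b):
    e = math.exp(q / b)
    return e / (e + 1.0)

def expected_maker_loss(T, b=10.0, k=5.0, sigma=2.0, seed=0):
    """Expected loss (over omega, with Pr[omega = 1] = C'(q*)) of a noisy
    LMSR market maker facing one trader playing the target strategy."""
    rng = random.Random(seed)
    q_star = 0.0
    p_star = lmsr_price(q_star, b)  # trader's payoff probability
    q, noisy_q = 0.0, 0.0
    collected, expected_payout = 0.0, 0.0
    for _ in range(T):
        x = max(-k, min(k, q_star - noisy_q))  # push state toward q*
        collected += lmsr_cost(noisy_q + x, b) - lmsr_cost(noisy_q, b)
        expected_payout += p_star * x          # expected value of x shares
        q += x
        noisy_q = q + rng.gauss(0.0, sigma)    # fresh noise each round
    return expected_payout - collected

losses = [expected_maker_loss(T) for T in (100, 1000)]
assert losses[1] > losses[0] > 0.0  # the loss keeps growing with rounds
```

Each round's expected profit for the trader equals the reduction in Bregman divergence to $q^*$, which is nonnegative by convexity and strictly positive whenever the noise has displaced the state, so the simulated loss accumulates round after round.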
To formalize this intuition, we first give a more precise description of the strategy employed by the single trader we consider.
Definition 8 (Target strategy).
The target strategy with target $q^*$ chosen from a region in which $C$ is nonlinear is defined as follows. For all rounds $t$,
$$s_t = \max\left\{ -k,\ \min\left\{ k,\ q^* - \tilde{q}_{t-1} \right\} \right\},$$
that is, the trader moves the noisy market state as close to $q^*$ as the bound on trade size allows.
As described above, if $\omega = 1$ with probability $C'(q^*)$, a trader following this target strategy makes a nonnegative expected profit on every round of trade. Furthermore, this trader makes an expected profit of at least some constant on each round in which the noisy market state is more than a constant distance from $q^*$. The market maker must subsidize this profit, taking an expected loss with respect to this distribution on each round. These ideas are formalized in Lemma 2, which lower bounds the expected loss of the market maker in terms of the probability of the noisy market state falling far from $q^*$. In this statement, $D_C$ denotes the Bregman divergence of $C$. (The Bregman divergence of a convex function $f$ of a single variable is defined as $D_f(p, q) = f(p) - f(q) - f'(q)(p - q)$. The Bregman divergence is always nonnegative. If $f$ is strictly convex, it is strictly positive when its arguments are not equal.) The proof is in the appendix.
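The parenthetical definition is easy to state in code (a sketch for scalar convex functions; all names are ours):

```python
def bregman(f, f_prime, p, q):
    """Bregman divergence D_f(p, q) = f(p) - f(q) - f'(q) * (p - q)."""
    return f(p) - f(q) - f_prime(q) * (p - q)

# For the strictly convex f(x) = x^2, D_f(p, q) = (p - q)^2:
square = lambda x: x * x
d_square = lambda x: 2.0 * x
assert bregman(square, d_square, 3.0, 1.0) == 4.0  # (3 - 1)^2
assert bregman(square, d_square, 2.0, 2.0) == 0.0  # zero when p == q
```

Geometrically, $D_f(p, q)$ is the gap at $p$ between $f$ and its tangent line at $q$, which is why convexity makes it nonnegative.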
Lemma 2.
Consider a noisy cost-function market maker satisfying the conditions in Theorem 4 with a single trader following the target strategy with target $q^*$. Suppose $\omega = 1$ with probability $C'(q^*)$. Then for any $\delta \in (0, k]$ such that $C$ is nonlinear on $(q^* - \delta, q^* + \delta)$, the expected loss of the market maker after $T$ rounds of trade is at least
$$D \sum_{t=1}^{T} \Pr\left[ \left| \tilde{q}_{t-1} - q^* \right| \geq \delta \right],$$
where the expectation and probability are taken over the randomness in the noise values $\eta_1, \eta_2, \ldots$, the resulting actions of the trader, and the random outcome $\omega$, and where $D = \min\left\{ D_C(q^* - \delta, q^*),\ D_C(q^* + \delta, q^*) \right\}$.
We now complete the proof.
Proof of Theorem 4.
We will show that bounded loss implies that $\epsilon_t$-differential privacy cannot be achieved with $\sum_{t=1}^{T} \epsilon_t \leq cT$ for any constant $c$.
Throughout the proof, we reason about the probabilities of various events conditioned on there being a single trader playing a particular strategy. All strategies we consider are deterministic, so all probabilities are taken only with respect to the randomness in the market maker's added noise ($\eta_1, \eta_2, \ldots$).
As described above, we focus on the case in which a single trader plays the target strategy with target $q^*$. Define $U_r$ to be the open region of radius $r$ around $q^*$, that is, $U_r = (q^* - r, q^* + r)$. Let $A = U_{k/8}$ and let $B = U_{k/2} \setminus U_{k/4}$. Notice that $A$ and $B$ do not intersect, but from any market state in $A$ a trader could move the market state to $B$ with a purchase or sale of no more than $k$ shares.
For any round $t$, let $s^{(t)}$ be the strategy in which $s^{(t)}_\tau = s_\tau$ for all rounds $\tau \neq t$, but $s^{(t)}_t$ moves the market state into $B$ if $\tilde{q}_{t-1} \in A$ (otherwise, $s^{(t)}_t$ can be defined arbitrarily). In other words, a trader playing strategy $s^{(t)}$ behaves identically to a trader playing the target strategy $s$ on all rounds except round $t$. On round $t$, the trader instead attempts to move the market state to $B$.
For any $t$, the behavior of a trader playing strategy $s$ and a trader playing strategy $s^{(t)}$ are indistinguishable through round $t - 1$, and therefore the behavior of the market maker is indistinguishable as well. At round $t$, if it is the case that $\tilde{q}_{t-1} \in A$ (and therefore $|\tilde{q}_{t-1} - q^*| < k/8$, so both trades below are of size at most $k$), then a trader playing strategy $s$ would purchase $q^* - \tilde{q}_{t-1}$ shares, while a trader playing strategy $s^{(t)}$ would instead purchase enough shares to move the state into $B$. Differential privacy tells us that conditioned on such a state being reached, the probability that $\tilde{q}_t$ lies in any range (and in particular, in $A$) should not be too different depending on which of the two actions the trader takes. More formally, if the market maker satisfies $\epsilon_t$-differential privacy, then for all rounds $t$,
$$\Pr\left[ \tilde{q}_t \in A \mid s \right] \leq e^{\epsilon_t} \Pr\left[ \tilde{q}_t \in A \mid s^{(t)} \right] \leq e^{\epsilon_t} \left( 1 - \Pr\left[ \tilde{q}_t \in B \mid s^{(t)} \right] \right) \leq e^{\epsilon_t} \left( 1 - \Pr\left[ \tilde{q}_t \in A \mid s \right] \right),$$
where all probabilities are additionally conditioned on $\tilde{q}_{t-1} \in A$.
The first inequality follows from the definition of differential privacy. The second follows from the fact that $A$ and $B$ are disjoint. The last is a consequence of the trade-independent noise assumption. By simple algebraic manipulation, for all $t$,
$$\Pr\left[ \tilde{q}_t \in A \mid \tilde{q}_{t-1} \in A \right] \leq \frac{e^{\epsilon_t}}{1 + e^{\epsilon_t}}.$$