Information Aggregation in Exponential Family Markets

02/22/2014 · by Jacob Abernethy, et al.

We consider the design of prediction market mechanisms known as automated market makers. We show that we can design these mechanisms in the mold of exponential family distributions, a popular and well-studied class of probability distributions in statistics. We give a full development of this relationship and explore a range of benefits. We draw connections between the information aggregation of market prices and the belief aggregation of learning agents that rely on exponential family distributions. We develop a natural analysis of the market behavior as well as the price equilibrium under the assumption that traders exhibit risk aversion according to exponential utility. We also consider similar aspects under alternative models, such as when traders are budget constrained.


1 Introduction

Prediction markets are aggregation mechanisms that allow market prices to be interpreted as predictive probabilities for an event. Each trader in the market is assumed to have some private information that he uses to make a prediction on the outcome of the event. Traders report their beliefs by buying and selling securities whose ultimate payoff depends on the future outcome. This affects the state of the market, thus updating the predictive probabilities for the event. Further, since trades occur sequentially, each trader can observe all past trades in the market and update his beliefs accordingly. In this sense the market prices, which are in effect the prices at which the marginal trader is willing to buy or sell the available securities, can be interpreted as an aggregate “consensus probability forecast” of the event in question.

Much of the work on prediction market design has focused heavily on structural properties of the mechanism: incentive compatibility, the market maker's loss, the available liquidity, and the fluctuations of prices as a function of trading volume, to name a few. Absent from much of the literature is a corresponding semantics of the market behavior or the observed prices. That is, how can we interpret the equilibrium market state when we have a number of traders with diverse beliefs on the underlying state of the world? In what sense is the market an aggregation mechanism? Do price changes relate to our usual Bayesian notion of information incorporation via posterior updating?

In the present work we show that a number of classical statistical tools can be leveraged to design a prediction market framework in the mold of exponential family distributions; we show that this statistical framework leads to a number of attractive properties and interpretations. Common concepts in statistics—including entropy maximization, log loss, and Bayesian inference—relate to natural aspects of our class of mechanisms. In particular, the central objects in our market framework can be interpreted via concepts used to define exponential families:

  • the market’s payoff function corresponds to the sufficient statistics of the distribution;

  • the vector of outstanding shares in the market corresponds to the natural parameter vector of the distribution;

  • the market prices correspond to mean parameters;

  • the market’s cost function corresponds to the distribution’s log-partition function.

We begin in Section 2 with a discussion of scoring rules based on exponential family distributions, and we show how the framework leads to a variety of scoring rules for continuous outcome spaces. We turn our attention to market design in Section 3 and give a full description of our proposed mechanisms. In addition to showing the syntactic relationship between exponential families and prediction markets, we explore a number of rich semantic implications as well. In particular, we show that our formulation allows us to analyze the evolution of the market under various models of trader behavior:

  • Trader behavior varies depending on how traders assimilate information; for example, we may model our agents as Bayesians or as frequentists. In Section 4 we consider traders that use a conjugate prior to update their beliefs, and we study how their trades affect the market state.

  • In Section 5 we consider risk-averse agents that optimize their bets according to exponential utility. In this case we can characterize precisely how a single trader interacts with the market, as well as the equilibrium reached given multiple traders; this result is achieved via a potential game argument. The eventual market state is a weighted combination of traders’ beliefs and the initial state; the weights are proportional to risk aversion parameters.

  • In Section 6 we consider budget-limited traders who are constrained in how much they can influence the market. We analyze the market under these circumstances; we are able to show that traders with good information can expect to profit, and that their influence over the market state increases over time, whereas malicious traders have limited impact on the market.

Related Work. The notion of an exponential family distribution is fundamental to this paper. For comprehensive introductions to these distributions, see [4, 32]. Exponential families are intimately tied to the notions of log loss and entropy, but can be generalized to other types of convex losses and information, as shown by Grünwald and Dawid [18], who also make a connection to scoring rules.

Scoring rules are a measure of prediction accuracy, and we are concerned here with scoring rules for statistic expectations, typically over infinite outcome spaces. Such rules have been characterized by Savage [29]; see also [15, 23]. Our rules are of course special cases of this characterization, but it appears the range of elegant scoring rules that arise from exponential families has not been appreciated. Indeed, Gneiting and Raftery [17] observe that specific instances of scoring rules for continuous outcomes are lacking, and survey various possibilities.

In a seminal paper, Hanson [19] showed how to form a prediction market based on a sequentially-shared scoring rule, and specifically proposed the logarithmic market scoring rule (LMSR) based on log loss for finite outcome spaces [20]. The markets we introduce are direct generalizations of the LMSR to continuous outcomes, but take the form of cost-function based markets as introduced by Chen and Pennock [6]. Gao et al. [16] and Chen et al. [7] also consider extending various market makers to infinite outcome spaces.

Prediction markets are known to perform well in practice [28, 27]. However, a sound theory for interpreting trader behavior and market prices is an ongoing area of study [33]. At one extreme, agents are assumed myopic and risk-neutral, implying they move the market state to their belief [8]. At the other extreme, agents are strategic and the market fully incorporates all information [25].

We are not aware of any works that consider risk-averse agents within cost-function based markets. However, risk aversion is a fundamental component of mathematical finance and portfolio optimization, and there are close connections between the notion of a cost function and that of a convex risk measure [13, 12]. Indeed, they arise from the same axioms, as noted by Othman and Sandholm [26]. We see the potential to draw more on the mathematical finance literature to take risk aversion into account, as prediction markets are simply single-period financial markets [14, Part I]. We also note that connections between machine learning and market mechanisms have been explored in [30].

2 Generalized Log Scoring Rules

We consider a measurable space $(\mathcal{O}, \mathcal{F})$ consisting of a set of outcomes $\mathcal{O}$ together with a $\sigma$-algebra $\mathcal{F}$. An agent or expert has a belief over potential outcomes taking the form of a probability measure $P$ absolutely continuous with respect to a base measure $\nu$. (Recall that a measure $P$ is absolutely continuous with respect to $\nu$ if $P(A) = 0$ for every $A$ for which $\nu(A) = 0$.) In essence the base measure $\nu$ restricts the support of $P$. In our examples $\nu$ will typically be a restriction of the Lebesgue measure for continuous outcomes or the counting measure for discrete outcomes. Throughout we represent the belief as the corresponding density $p$ with respect to $\nu$. Let $\Delta_\nu$ denote the set of all such probability densities.

We are interested in eliciting information about the agent's belief, in particular expectation information. Let $\phi : \mathcal{O} \to \mathbb{R}^d$ be a vector-valued random variable or statistic, where $d$ is finite. The aim is to elicit $\mathbb{E}_p[\phi(\omega)]$, where $\omega$ is the random outcome. A scoring rule is a device for this purpose. Let $\mathcal{M} = \{\mathbb{E}_p[\phi] : p \in \Delta_\nu\}$ be the set of realizable statistic expectations. A scoring rule $S : \mathcal{M} \times \mathcal{O} \to \mathbb{R}$ pays the agent $S(\mu, \omega)$ according to how well its report $\mu$ agrees with the eventual outcome $\omega$. The following definition is due to Lambert et al. [23].

Definition 2. A scoring rule $S$ is proper for statistic $\phi$ if for each $\mu \in \mathcal{M}$ and $p \in \Delta_\nu$ with expected statistic $\mu$, we have for all $\mu' \in \mathcal{M}$

(1)  $\mathbb{E}_p[S(\mu, \omega)] \geq \mathbb{E}_p[S(\mu', \omega)].$

Given a proper scoring rule $S$, any affine transformation $\alpha S(\mu, \omega) + f(\omega)$ of the rule, with $\alpha > 0$ and $f$ an arbitrary real-valued function of the outcomes, again yields a proper scoring rule, termed equivalent [9, 17]. Throughout we will implicitly apply such affine transformations to obtain the clearest version of the scoring rule. We will also focus on scoring rules where inequality (1) is strict to avoid trivial cases such as constant scoring rules.

Classically, scoring rules take in the entire density rather than just some statistic, and incentive compatibility must hold over all of $\Delta_\nu$. When the outcome space is large or infinite, it is not feasible to directly communicate $p$, so the definition allows for summary information of the belief.

Note that Definition 2 places only mild information requirements on the part of the agent to ensure truthful reporting. Because condition (1) holds for all $p$ consistent with expectation $\mu$, it is enough for the agent to simply know the latter, and not the complete density, to be properly incentivized. However, the agent must also agree with the support of the density as implicitly defined by the base measure $\nu$.

When the outcome space is finite we recover classical scoring rules by using the statistic $\phi$ that maps an outcome $\omega$ to a unit vector with a 1 in the component corresponding to $\omega$. The expectation of $\phi$ is then exactly the probability mass function.

2.1 Proper Scoring from Maximum Entropy

Our starting point for designing proper scoring rules is the classic logarithmic scoring rule for eliciting probabilities in the case of finite outcomes. This rule is simply $S(p, \omega) = \log p(\omega)$, namely we take the log likelihood of the reported density at the eventual outcome. To generalize the rule to expected statistics rather than full densities, we consider a subset of densities $\mathcal{P} \subseteq \Delta_\nu$. If there is a bijection between the sets $\mathcal{P}$ and $\mathcal{M}$, then we say that $\mathcal{M}$ parametrizes $\mathcal{P}$, and write $p_\mu$ for the density mapping to $\mu$. Given such a family parametrized by the relevant statistics, the generalized log scoring rule is then

(2)  $S(\mu, \omega) = \log p_\mu(\omega).$

Even though the log score is only applied to densities from $\mathcal{P}$, according to Definition 2 it must work over all densities in $\Delta_\nu$. It turns out this is possible if $\mathcal{P}$ is chosen appropriately, drawing on a well-known duality between maximum likelihood and maximum entropy [18].

Exponential Families

We let $p_\mu$ be the maximum entropy distribution with expected statistic $\mu$. Specifically, it is the solution to the following program. (We assume that the minimum is finite and achieved for all $\mu \in \mathcal{M}$. Some care is needed to ensure this holds for specific statistics and outcome spaces. For example, taking outcomes to be the real numbers, there is no maximum entropy distribution with a given mean (one can take densities tending towards the uniform distribution over the reals), but there is always a solution if we constrain both the mean and variance.)

(3)  $\min_{p \in \Delta_\nu} \int_{\mathcal{O}} p(\omega) \log p(\omega)\, d\nu(\omega) \quad \text{subject to} \quad \int_{\mathcal{O}} \phi_i(\omega)\, p(\omega)\, d\nu(\omega) = \mu_i, \quad i = 1, \dots, d,$

where the objective function is the negative entropy of the distribution. Note that the explicit set of constraints in (3) is linear, whereas the objective is convex. We let $T^*$ be the optimal value function of (3), meaning $T^*(\mu)$ is the negative entropy of the maximum entropy distribution with expected statistics $\mu$.

It is well-known that solutions to (3) are exponential family distributions, whose densities with respect to $\nu$ take the form

(4)  $p_\theta(\omega) = \exp\big( \langle \theta, \phi(\omega) \rangle - T(\theta) \big).$

The density is stated here in terms of its natural parametrization $\theta$, where $\theta$ arises as the vector of Lagrange multipliers associated with the linear constraints in (3). The term $T(\theta)$ essentially arises as the multiplier for the normalization constraint (the density must integrate to 1), and so ensures that (4) is normalized:

(5)  $T(\theta) = \log \int_{\mathcal{O}} \exp\big( \langle \theta, \phi(\omega) \rangle \big)\, d\nu(\omega).$

The function $T$ is known as the log-partition or cumulant function corresponding to the exponential family. Its domain is $\Theta = \{\theta \in \mathbb{R}^d : T(\theta) < +\infty\}$, called the natural parameter space. The exponential family is regular if $\Theta$ is open—almost all exponential families of interest, and all those we consider in this work, are regular. The family is minimal if there is no $a \neq 0$ such that $\langle a, \phi(\omega) \rangle$ is a constant over $\mathcal{O}$ ($\nu$-almost everywhere); minimality is a property of the associated statistic $\phi$, usually called the sufficient statistic in the literature.

The following proposition collects the relevant results on regular exponential families; proofs may be found in Wainwright and Jordan [32, Prop. 3.1–3.2, Thm. 3.3–3.4]; see also Banerjee et al. [2, Lem. 1, Thm. 2]. A convex function $f$ is of Legendre type if it is proper, closed, strictly convex and differentiable on the interior of its domain, and $\|\nabla f(x_i)\| \to +\infty$ along any sequence $(x_i)$ converging to a point on the boundary of the domain.

Proposition 2.1

Consider a regular exponential family with minimal sufficient statistic. The following properties hold:

  1. $T$ and $T^*$ are of Legendre type, and $(T)^* = T^*$ (equivalently, $(T^*)^* = T$).

  2. The gradient map $\nabla T$ is one-to-one and onto the interior of $\mathcal{M}$. Its inverse is $\nabla T^*$, which is one-to-one and onto the interior of $\Theta$.

  3. The exponential family distribution with natural parameter $\theta$ has expected statistic $\nabla T(\theta)$.

  4. The maximum entropy distribution with expected statistic $\mu$ is the exponential family distribution with natural parameter $\nabla T^*(\mu)$.

In the above $T^*$ denotes the convex conjugate of $T$, which here can be evaluated as $T^*(\mu) = \sup_{\theta \in \Theta}\ \langle \theta, \mu \rangle - T(\theta)$. Similarly, $T(\theta) = \sup_{\mu \in \mathcal{M}}\ \langle \theta, \mu \rangle - T^*(\mu)$.
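To make the duality concrete, here is a minimal numerical sketch (ours, not from the paper) for the family generated by the statistic $\phi(\omega) = \omega$ over the non-negative reals with Lebesgue base measure, i.e., the exponential distributions, where every quantity in Proposition 2.1 has a standard closed form:

```python
import numpy as np

# Exponential-distribution family: phi(w) = w on [0, inf), Lebesgue base measure.
# Closed forms: T(theta) = -log(-theta) on Theta = (-inf, 0);
# negative entropy T*(mu) = -1 - log(mu) on M = (0, inf).
T       = lambda theta: -np.log(-theta)   # log-partition function
grad_T  = lambda theta: -1.0 / theta      # mean map: natural -> mean parameter
T_star  = lambda mu: -1.0 - np.log(mu)    # convex conjugate (negative entropy)
grad_Ts = lambda mu: -1.0 / mu            # inverse map: mean -> natural parameter

theta = -0.4
mu = grad_T(theta)                                    # = 2.5
assert np.isclose(grad_Ts(mu), theta)                 # items 2-4: mutual inverses
assert np.isclose(T(theta) + T_star(mu), theta * mu)  # Fenchel equality at the pair
print(mu, T(theta), T_star(mu))
```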

Proper Log Scoring

We are now in a position to analyze the log scoring rule under exponential family distributions. From our discussion so far, we have that an exponential family density can be parametrized either by the natural parameter $\theta$, or by the mean parameter $\mu$, and that the two are related by the invertible gradient map $\mu = \nabla T(\theta)$. We will write $p_\theta$ or $p_\mu$ depending on the parametrization used.

The following observation is crucial. Let $p$ be a density (not necessarily from an exponential family) with expected statistic $\mu$, let $p_\mu$ be the exponential family density with the same expected statistic, and let $\mu'$ be an alternative report. Then from (4) note

(6)  $\mathbb{E}_p[\log p_{\mu'}(\omega)] = \langle \theta', \mu \rangle - T(\theta'),$

where $\theta' = \nabla T^*(\mu')$ is the natural parameter for the exponential family with statistic $\mu'$. We see from this that the expected log score only depends on the expectation of the underlying density, not the full density, which is how we can achieve proper scoring according to Definition 2.

Theorem 6. Consider the logarithmic scoring rule $S(\mu, \omega) = \log p_\mu(\omega)$ defined over a set of densities $\mathcal{P}$ parametrized by $\mathcal{M}$. The scoring rule is proper if and only if $\mathcal{P}$ is the exponential family with statistic $\phi$.

Proof. Let $\mu$ be the agent's true belief and $\mu'$ an alternative report, and let $p$ be a density consistent with $\mu$. Let $\theta = \nabla T^*(\mu)$ and $\theta' = \nabla T^*(\mu')$, and note that $T^*(\mu) = \langle \theta, \mu \rangle - T(\theta)$. We have

(7)  $\mathbb{E}_p[S(\mu,\omega)] - \mathbb{E}_p[S(\mu',\omega)] = \big( \langle \theta, \mu \rangle - T(\theta) \big) - \big( \langle \theta', \mu \rangle - T(\theta') \big) = T^*(\mu) - T^*(\mu') - \langle \theta', \mu - \mu' \rangle.$

The latter is positive by the strict convexity of $T^*$, which shows that the log score is proper. For the converse, assume the defined log score is proper. By the Savage characterization of proper scoring rules for expectations (see Gneiting and Raftery [17, Thm. 1] and Savage [29]), we must have

$\log p_\mu(\omega) = G(\mu) + \langle \nabla G(\mu), \phi(\omega) - \mu \rangle$

for some strictly convex function $G$. Let $\theta = \nabla G(\mu)$, so that $\mu = \nabla G^*(\theta)$, and let $T = G^*$. Then the above can be written as

$p_\mu(\omega) = \exp\big( \langle \theta, \phi(\omega) \rangle - (\langle \theta, \mu \rangle - G(\mu)) \big) = \exp\big( \langle \theta, \phi(\omega) \rangle - T(\theta) \big),$

which shows that $p_\mu$ takes the form of an exponential family.

As further intuition for the result, note that (7) is the definition of the Bregman divergence with respect to the strictly convex function $T^*$, written $D_{T^*}(\mu, \mu')$. Therefore we have

$\mathbb{E}_p[S(\mu,\omega)] - \mathbb{E}_p[S(\mu',\omega)] = D_{T^*}(\mu, \mu') = D_T(\theta', \theta),$

where the last equality is a well-known identity relating the Bregman divergences of $T$ and $T^*$. The equation states that the agent's regret from misreporting its mean parameter does not depend on the full density $p$, only the mean $\mu$.

2.2 Examples: Moments over the Real Line

Theorem 6 leads to a straightforward procedure for constructing scoring rules for expectations. Define the relevant statistic, and consider the maximum entropy (equivalently, exponential family) distribution consistent with the agent's reported mean $\mu$. The scoring rule compensates the agent according to the log likelihood of the eventual outcome under this distribution. The interpretation is that the agent is only providing partial information about the underlying density, so the principal first infers a full density according to the principle of maximum entropy, and then scores the agent using the usual log score.

An advantage of this generalization of the log score is that, for many domains (multi-dimensional included) and expectations of interest, it leads to novel closed-form scoring rules. By examining the log densities of various exponential families, we can for instance obtain scoring rules for several different combinations of the arithmetic, geometric, and harmonic means, as well as higher order moments. The following examples illustrate the construction.

As base measure we take the Lebesgue measure restricted to $[0, +\infty)$, and we consider the statistic $\phi(\omega) = \omega$, so that we are simply eliciting the mean. The maximum entropy distribution with a given mean $\mu$ is the exponential distribution, and taking its log density gives the scoring rule

(8)  $S(\mu, \omega) = -\omega/\mu - \log \mu.$

We stress that although this rule is derived from the exponential distribution, Theorem 6 implies that it elicits the mean of any distribution supported on the non-negative reals (e.g., Pareto, lognormal). Indeed, it is easy to see that the expected score (8) depends only on the mean of the agent's belief because it is linear in $\omega$. As a generalization of this example, the maximum entropy distribution for the $r$-th moment with respect to the same base measure is the Weibull distribution with shape parameter $r$. Taking its log density leads to the following equivalent scoring rule:

(9)  $S(\mu, \omega) = -\Gamma(1 + 1/r)^r\, \omega^r / \mu^r - r \log \mu,$

where $\Gamma$ denotes the gamma function (the extension of the factorial to the reals), and the report $\mu$ is the mean of the maximum entropy distribution, a bijective transformation of the $r$-th moment. We have not found either scoring rule (8) or (9) in the literature.

As a base measure we now take the Lebesgue measure over the real numbers $\mathbb{R}$. We are interested in eliciting the mean $\mu$ and variance $\sigma^2$, so as a statistic we take $\phi(\omega) = (\omega, \omega^2)$, for which $\mathbb{E}_p[\phi] = (\mu, \mu^2 + \sigma^2)$. The maximum entropy distribution for a given mean and variance is the Gaussian, whose log density gives the scoring rule

(10)  $S\big( (\mu, \sigma^2), \omega \big) = -\frac{(\omega - \mu)^2}{2\sigma^2} - \frac{1}{2} \log \sigma^2.$

Again, we stress that this scoring rule elicits the mean and variance of any density over the real numbers, not just those of a normal distribution. The construction easily generalizes to a multi-dimensional outcome space by taking the log density of the multivariate normal:

(11)  $S\big( (\mu, \Sigma), \omega \big) = -\tfrac{1}{2} (\omega - \mu)^\top \Sigma^{-1} (\omega - \mu) - \tfrac{1}{2} \log \det \Sigma.$

Here the statistics being elicited are the mean vector $\mu$ and the covariance matrix $\Sigma$. These scoring rules have been studied by Dawid and Sebastiani [10] as rules that only depend on the mean and variance of the reported density. They note that these rules are weakly proper (because they do not distinguish between densities with the same first and second moments), but do not make the point that knowledge of the full density is not necessary on the part of the agent.

The Gaussian example above illustrates an important point about parametrizations of the elicited expectations. The variance $\sigma^2$ cannot be written as an expectation $\mathbb{E}_p[\phi(\omega)]$ for any statistic $\phi$, because the mean enters the definition of the variance but is not available when $\phi$ is defined (indeed it is elicited in tandem with the variance). (This is an intuitive but far from formal explanation for the fact that the dimension of the message space, or elicitation complexity, for eliciting the variance is at least 2 [23].) Instead one must use the first two uncentered moments $\mathbb{E}_p[\omega]$ and $\mathbb{E}_p[\omega^2]$. These are in bijection with $\mu$ and $\sigma^2$, so the resulting scoring rule can be re-written in terms of the latter. Therefore, it is possible to elicit not just expectations but also bijective transformations of expectations.
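As a quick sanity check on the claim that (8) elicits the mean of an arbitrary distribution on the non-negative reals, the following Monte Carlo sketch (our illustration; the lognormal belief and grid search are arbitrary choices) maximizes an empirical estimate of the expected score and recovers the belief's true mean:

```python
import numpy as np

def score(mu, w):
    # Generalized log score (8): log density of an exponential distribution
    # with mean mu, evaluated at outcome w.
    return -w / mu - np.log(mu)

rng = np.random.default_rng(0)
w = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)  # belief is NOT exponential
true_mean = np.exp(0.5**2 / 2)                        # lognormal mean, ~1.1331

grid = np.linspace(0.5, 2.0, 301)
expected = [score(mu, w).mean() for mu in grid]
print(grid[int(np.argmax(expected))], true_mean)      # maximizer sits at the mean
```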

3 Exponential Family Markets

In a single-agent setting, a scoring rule is used to elicit the agent's belief. In a multi-agent setting, a prediction market can be used to aggregate the beliefs of the agents. In his seminal paper, Hanson [19] introduced the idea of a market scoring rule, which inherits the appealing properties of both scoring rules and prediction markets in order to perform well whether trading is thin or thick. In this section, we adapt the generalized log scoring rule to a market scoring rule, which leads to markets with simple closed-form cost functions for many statistics of interest.

3.1 Prediction Market

In a prediction market an agent's expected belief is elicited indirectly through the purchase and sale of contingent claim securities. Under this approach, each component $\phi_i$ of the statistic is interpreted as the payoff function of a security; that is, a single share of security $i$ pays off $\phi_i(\omega)$ when $\omega$ occurs. Thus if the portfolio of shares held by the agent is $r \in \mathbb{R}^d$, where entry $r_i$ corresponds to the number of shares of security $i$, then the payoff to the agent when $\omega$ occurs is evaluated by taking the inner product $\langle r, \phi(\omega) \rangle$.

As a concrete example, recall that in the classic finite-outcome case the statistic has a component for each outcome $\omega'$, with $\phi_{\omega'}(\omega) = 1$ if $\omega = \omega'$ and 0 otherwise. Therefore the corresponding security pays 1 dollar if outcome $\omega'$ occurs. (These are known as Arrow-Debreu securities.) In the first example of Section 2.2 the one-dimensional statistic is $\phi(\omega) = \omega$, corresponding to a security whose payoff is linear in the outcome. (This amounts to a futures contract.)

The standard way to implement a prediction market in the literature, due to Chen and Pennock [6], is via a centralized market maker. The market maker maintains a convex, differentiable cost function $C : \mathbb{R}^d \to \mathbb{R}$, where $C(\theta)$ records the revenue collected when the vector of outstanding shares is $\theta$. The cost to an agent of purchasing portfolio $r$ under a market state of $\theta$ is $C(\theta + r) - C(\theta)$, and therefore the instantaneous prices of the securities are given by the gradient $\nabla C(\theta)$.

A risk-neutral agent will choose to acquire shares up to the point where, for each security, expected payoff equals marginal price. Formally, if an agent with belief $p$ acquires portfolio $r$, moving the market state vector to $\theta + r$, then we must have

(12)  $\nabla C(\theta + r) = \mathbb{E}_p[\phi(\omega)].$

In this way, by its choice of $r$, the agent reveals that its expected belief is $\nabla C(\theta + r)$. We stress that this observation relies on the assumptions that 1) the agent is risk-neutral, 2) the agent does not incorporate the market's information into its own beliefs, and 3) the agent is not budget constrained. We will examine relaxations of each assumption in later sections.
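The mechanics are easiest to see in the classic finite-outcome case, where the statistics are outcome indicators and the cost function below is the familiar LMSR. This minimal Python sketch (ours; the three-outcome numbers are hypothetical) exhibits a risk-neutral trade satisfying (12):

```python
import numpy as np

def cost(theta):
    # LMSR cost C(theta) = log sum_i exp(theta_i): the log-partition function
    # for indicator statistics under the counting base measure.
    m = theta.max()
    return m + np.log(np.exp(theta - m).sum())

def prices(theta):
    # Instantaneous prices are the gradient of the cost: a probability vector.
    e = np.exp(theta - theta.max())
    return e / e.sum()

theta = np.zeros(3)                      # initial state: uniform prices
belief = np.array([0.5, 0.3, 0.2])       # risk-neutral trader's belief
r = np.log(belief) - theta               # any r with prices(theta + r) = belief
print(prices(theta + r))                 # prices now equal the belief, as in (12)
print(cost(theta + r) - cost(theta))     # amount the trader paid for portfolio r
# Note: theta is only determined up to a constant shift of all coordinates;
# shifting changes the cost and the guaranteed payoff by the same amount.
```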

3.2 Information-Theoretic Interpretation

In the remainder of this paper we focus on the following cost function, which arises from the “generalized” logarithmic market scoring rule (LMSR):

(13)  $C(\theta) = \log \int_{\mathcal{O}} \exp\big( \langle \theta, \phi(\omega) \rangle \big)\, d\nu(\omega).$

This is of course exactly the log-partition function (5) for the exponential family with sufficient statistic $\phi$, and we recover the classic LMSR using outcome indicator vectors as statistics. Because an agent would never select a portfolio with infinite cost, the effective domain (i.e., the possible vectors of outstanding shares) of $C$ is $\Theta$, which gives an economic interpretation to the natural parameter space of an exponential family.

The correspondence between the cost function (13) and the log-partition function (5) suggests the following interpretation. The market maker maintains an exponential family distribution over the outcome space, parametrized by share vectors that lie in $\Theta$. When an agent buys shares, it moves the distribution's natural parameter so that the market prices match its beliefs; in other words, the market's mean parametrization matches the agent's expectation.

There is a well-known duality between scoring rules and cost-function based markets [1, 19]. To see this in our context, recall from (6):

$\mathbb{E}_p[\log p_{\mu'}(\omega)] = \langle \theta', \mu \rangle - T(\theta'),$

where $\mu$ is the agent's expected belief and $\mu'$ the agent's report. The expected log score from reporting $\mu'$ is exactly the same as the expected payoff from buying portfolio $\theta'$ of shares (assuming an initial market state of 0), as $\langle \theta', \mu \rangle$ is the expected revenue and $T(\theta') = C(\theta')$ is the cost. As in Section 2, this reasoning relies on the assumption of risk-neutrality, not on any specific form for the agent's belief.

The agent's expected profit from moving the share vector from $\theta$ to $\theta'$ is

$\mathbb{E}_p\big[ \langle \theta' - \theta, \phi(\omega) \rangle \big] - C(\theta') + C(\theta) = D_T(\theta, \tilde\theta) - D_T(\theta', \tilde\theta),$

recalling (7), where $\tilde\theta$ is the natural parameter corresponding to the agent's expectation. Now Banerjee et al. [3] have observed (among others) that the Kullback-Leibler divergence between two exponential family distributions is the Bregman divergence, with respect to the log-partition function, between their natural parameters. The maximal expected profit, attained by moving the state to $\tilde\theta$, is therefore the KL divergence between the exponential family distribution corresponding to the agent's expectation and the market's distribution, a well-known property of the classical LMSR [20].
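The KL-Bregman identity invoked here is easy to check numerically; the sketch below (ours) does so for the Bernoulli family, whose log-partition function is $T(\theta) = \log(1 + e^\theta)$:

```python
import numpy as np

T  = lambda th: np.log1p(np.exp(th))       # Bernoulli log-partition function
dT = lambda th: 1.0 / (1.0 + np.exp(-th))  # mean map (sigmoid)

def bregman_T(x, y):
    # Bregman divergence D_T(x, y) of the log-partition function.
    return T(x) - T(y) - dT(y) * (x - y)

def kl_bernoulli(p, q):
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

th1, th2 = 0.3, -1.1
print(kl_bernoulli(dT(th1), dT(th2)), bregman_T(th2, th1))  # the two agree
```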

3.3 Examples: Real Line and the Sphere

Let us now revisit our scoring rule examples from Section 2 in the context of prediction markets. The relevant entities now are the payoff function, the effective domain of shares, and the cost function.

We first consider outcomes over the positive reals and set up a market for the expected outcome, consisting of a single security that pays off $\phi(\omega) = \omega$. The log partition function of the exponential distribution leads to the following cost function:

$C(\theta) = -\log(-\theta).$

The effective domain is $\Theta = (-\infty, 0)$. This means the market must start with a negative number of outstanding shares for the security, and the number of shares must stay negative. The market maker need not explicitly enforce this, because by the Legendre property of $C$ the cost tends to $+\infty$ as the outstanding shares approach the boundary, which is straightforward to see in this example.
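A minimal sketch of this market (ours; the numbers are hypothetical) makes the domain restriction visible:

```python
import numpy as np

# Single security paying phi(w) = w over outcomes in [0, inf).
cost  = lambda theta: -np.log(-theta)   # defined only for theta < 0
price = lambda theta: -1.0 / theta      # market's estimate of the expected outcome

theta = -2.0                            # initial state: price 0.5
r = 1.5                                 # buying r shares moves the state to -0.5
print(cost(theta + r) - cost(theta))    # amount paid: 2 log 2
print(price(theta + r))                 # new price: 2.0
# cost(theta) blows up as theta -> 0-, so outstanding shares stay negative.
```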

We next consider outcomes over the real line and set up a market with securities corresponding to the first two uncentered moments (i.e., agents are betting on the return and the volatility). The securities are defined by the payoffs $\phi(\omega) = (\omega, \omega^2)$. The log partition function of the normal distribution, under its natural parametrization, leads to the following cost function:

$C(\theta_1, \theta_2) = -\frac{\theta_1^2}{4 \theta_2} + \frac{1}{2} \log \frac{\pi}{-\theta_2}.$

The effective domain is $\Theta = \mathbb{R} \times (-\infty, 0)$. Again, we have here an instance where it is not possible for the number of outstanding shares of the second security to exceed 0. However, an arbitrary amount of the securities can be sold short, which corresponds to increasing the variance of the market's estimate.

As another example, let the outcome space be the $d$-dimensional unit sphere. This setting was considered by Abernethy et al. [1], who provide a cost function implicitly defined through a variational characterization. The maximum entropy approach leads to an alternative. We have a security for each of the $d$ dimensions, and security $i$ simply pays off $\omega_i$, where $\omega$ is the unit-norm outcome. The maximum entropy distribution over the sphere with such sufficient statistics is the von Mises-Fisher distribution. The log partition function corresponds to

$C(\theta) = \log \frac{(2\pi)^{d/2}\, I_{d/2 - 1}(\|\theta\|)}{\|\theta\|^{d/2 - 1}},$

where $I_s$ refers to the modified Bessel function of the first kind and order $s$; see Banerjee et al. [3] for an explanation of these quantities. The effective domain of the cost is the positive orthant in $\mathbb{R}^d$. The mean parametrization of the von Mises-Fisher distribution gives a generalized log scoring rule for the expected outcome components, but it is unwieldy and involves several special functions.

4 Bayesian Traders with Linear Utility

In the standard model of cost-function based prediction markets, a sequence of myopic, risk-neutral agents arrive and trade in the market [6, 8]. As we saw in Section 3.1, such a trader moves the prices to its own expectation $\mathbb{E}_p[\phi]$. However, this means that the market does not perform meaningful aggregation of agents' beliefs, as the final prices are simply the final agent's expectation.

In this section we examine the aggregation behavior of the market when agents are Bayesian and take into account the current market state when forming their beliefs. This requires more structure on their beliefs. For this section and the remainder of the paper, we will assume that agents have exponential family beliefs.

The exponential families framework is well-suited to reasoning about Bayesian updates. As before, let the data distribution be given by $p_\theta(\omega) = \exp( \langle \theta, \phi(\omega) \rangle - T(\theta) )$, where $T$ is the log partition function and $\phi$ are the sufficient statistics. Instead of direct beliefs about the data distribution, the agent maintains a conjugate prior over the parameters $\theta$. Every exponential family admits a conjugate prior of the form

$\pi(\theta \mid \mu_0, n_0) \propto \exp\big( n_0 \langle \theta, \mu_0 \rangle - n_0 T(\theta) \big).$

Note that this is also an exponential family with natural parameter $(n_0 \mu_0, n_0)$, where $\mu_0 \in \mathcal{M}$ and $n_0$ is a positive integer. The sufficient statistic maps $\theta$ to $(\theta, -T(\theta))$, and the log partition function is defined as the normalizer as usual. For a complete treatment of exponential family conjugate priors, see for instance Barndorff-Nielsen [4]. Now Diaconis and Ylvisaker [11, Thm. 2] and Jewell [22] have shown that

(14)  $\mathbb{E}_\pi\big[ \nabla T(\theta) \big] = \mu_0,$

meaning that $\mu_0$ is the mean of the expected statistic under the (prior or posterior) distribution $\pi$. Thus, it is helpful to think of the prior as being based on a ‘phantom’ sample of size $n_0$ and mean $\mu_0$. Suppose now that the agent observes an empirical sample of size $n$ with mean statistic $\bar\mu$. By a standard derivation [see 11], the posterior conjugate prior parameters become $\big( (n_0 \mu_0 + n \bar\mu)/(n_0 + n),\ n_0 + n \big)$, and the posterior expectation (14) evaluates to

(15)  $\mathbb{E}_\pi\big[ \nabla T(\theta) \mid \text{sample} \big] = \frac{n_0 \mu_0 + n \bar\mu}{n_0 + n}.$

Thus the posterior mean is a convex combination of the prior and empirical means, and their relative weights depend on the phantom and empirical sample sizes.

Consider Bayesian agents maintaining an exponential family conjugate prior over the data model's natural parameters (equivalently, over the expected security payoffs). Each agent has access to a private sample of the data of size $n$ with mean statistic $\bar\mu$. If $t$ agents have arrived before to trade, then the current market prices $\nabla T(\theta)$ correspond to the phantom sample, and the phantom sample size is $n_0 + tn$. After forming the posterior (15) with these substitutions, the (risk-neutral) agent purchases shares to move the current market share vector to

$\nabla T^{-1}\left( \frac{(n_0 + tn)\, \nabla T(\theta) + n \bar\mu}{n_0 + (t+1) n} \right).$

As a result, the final market prices under this behavior are a weighted average of the agents' mean parameters and the initial market prices (a simple average when $n_0 = n$). We note that to facilitate such belief updating, the market should post the number of trades since initialization.
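A small sketch of the posterior computation (ours; all numbers hypothetical, and we pass the phantom sample size as a single argument):

```python
import numpy as np

def posterior_mean(mu0, n0, xbar, n):
    # Posterior expectation (15): convex combination of phantom and empirical means.
    return (n0 * mu0 + n * xbar) / (n0 + n)

# Market prices act as a phantom sample of size n0 + t*n; the trader also
# holds a private sample of size n with mean statistic xbar.
phantom_size, market_mean = 6, 1.2
n, xbar = 2, 2.0
target = posterior_mean(market_mean, phantom_size, xbar, n)
print(target)   # the risk-neutral Bayesian trader moves the market prices here
```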

5 Risk-Averse Traders with Exponential Utility

In this section we relax the standard assumption that agents in the market are risk-neutral. We show that with sufficient extra structure on the agents' beliefs and utilities, the market performs a clean aggregation of the agents' beliefs via a simple weighted average. Assume that the agent has an exponential utility function for wealth $W$:

(16)  $u(W) = -e^{-aW}.$

Here $a > 0$ controls the risk aversion: the agent's aversion grows as $a$ increases, and as $a$ tends to 0 we approach linear utility (risk-neutrality). Specifically, $a$ is the Arrow-Pratt coefficient of absolute risk aversion, and exponential utilities of the form (16) are the unique utilities that exhibit constant absolute risk aversion [31, Chap. 11].

If wealth $W$ is distributed according to a probability measure $P$, then the certainty equivalent of this random amount of wealth is defined as

$\mathrm{CE}(W) = u^{-1}\big( \mathbb{E}_P[u(W)] \big) = -\tfrac{1}{a} \log \mathbb{E}_P\big[ e^{-aW} \big].$

Suppose as before that the agent's belief over outcomes takes the form of a density $p$ with respect to base measure $\nu$. There is a close relationship between the log-partition function and the certainty equivalent under exponential utility [see 5].

Lemma 5. The certainty equivalent of the agent's profit, under exponential utility, when acquiring shares $r$ under a market state of $\theta$ is

(17)  $\mathrm{CE}(r) = -\tfrac{1}{a}\, T_p(-ar) - C(\theta + r) + C(\theta),$

where $T_p$ is the log partition function (5) with a base measure of $p\, d\nu$. Furthermore, if the agent's belief is an exponential family density with natural parameter $\tilde\theta$, we have

$T_p(r') = T(\tilde\theta + r') - T(\tilde\theta),$

where $T$ is the usual log partition function with base measure $\nu$.

Proof. Explicitly, the certainty equivalent of the profit $\langle r, \phi(\omega) \rangle - C(\theta + r) + C(\theta)$ is

$-\tfrac{1}{a} \log \int_{\mathcal{O}} \exp\big( -a \langle r, \phi(\omega) \rangle \big)\, p(\omega)\, d\nu(\omega) - C(\theta + r) + C(\theta) = -\tfrac{1}{a}\, T_p(-ar) - C(\theta + r) + C(\theta).$

For the second part of the result, we have

$T_p(r') = \log \int_{\mathcal{O}} e^{\langle r', \phi(\omega) \rangle}\, p_{\tilde\theta}(\omega)\, d\nu(\omega) = \log \int_{\mathcal{O}} e^{\langle \tilde\theta + r', \phi(\omega) \rangle - T(\tilde\theta)}\, d\nu(\omega) = T(\tilde\theta + r') - T(\tilde\theta),$

where the last equality follows from the form (4) of the density and the definition (5) of $T$.

Recall that for the generalized LMSR, the cost function is exactly the log partition function $T$. We are therefore led to the following understanding of a risk-averse agent's behavior in such a market.

Theorem 5. Suppose an agent has exponential utility with coefficient $a$ and exponential family beliefs with natural parameter $\tilde\theta$. In the generalized LMSR market with current market state $\theta$, the agent's optimal trade moves the state vector to

(18)  $\frac{1}{1+a}\, \tilde\theta + \frac{a}{1+a}\, \theta.$

Proof. The agent's optimal trade maximizes its expected utility, or equivalently the certainty equivalent. From Lemma 5 and the fact that $C = T$, the agent maximizes

$\mathrm{CE}(r) = -\tfrac{1}{a} \big( T(\tilde\theta - ar) - T(\tilde\theta) \big) - T(\theta + r) + T(\theta).$

This objective is strictly concave, from the strict convexity of $T$. The optimum is therefore characterized by the first-order condition $\nabla T(\tilde\theta - ar) = \nabla T(\theta + r)$. As the gradient map is one-to-one, this is solved by equating the arguments, which leads to $r = (\tilde\theta - \theta)/(1 + a)$ and hence to (18).

Note that as $a$ tends to 0, we approach risk neutrality and the agent moves the share vector all the way to its private estimate $\tilde\theta$. As $a$ grows larger (the agent grows more risk averse) the agent makes smaller trades to reduce its exposure, and the final state stays closer to the current state $\theta$. Update (18) implies that, under the conditions of the theorem, a market that receives a sequence of myopic traders aggregates their natural parameters in the form of an exponentially weighted moving average. The final market estimates (i.e., prices) are obtained by applying $\nabla T$ to this average.
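Update (18) is a one-line computation; the sketch below (ours, with hypothetical one-dimensional states) shows the two limiting regimes:

```python
def exp_utility_update(theta, theta_belief, a):
    # Update (18): optimal post-trade state for an exponential-utility trader.
    return (theta_belief + a * theta) / (1.0 + a)

theta, belief = -2.0, -0.5
for a in (1e-6, 1.0, 10.0):
    print(a, exp_utility_update(theta, belief, a))
# a -> 0 recovers the risk-neutral move to the belief; large a keeps the
# state near theta, i.e., the trader barely trades.
```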

Liquidity Adjustment

In practice the centralized market maker allows itself some control over the liquidity in the market, which captures how responsive prices are to trades. To adjust liquidity we consider the parametrized cost $C_b(\theta) = \tfrac{1}{b} C(b\theta)$. Here $b > 0$ is construed as the inverse liquidity, or price responsiveness. A larger setting of $b$ means fewer shares need to be bought to reach the same prices. (The liquidity adjustment to the cost function takes the same form as the risk-aversion adjustment to the exponential utility in (16). In convex analysis, this transformation is known as the perspective function [21, p. 90].)

In the context of the generalized LMSR we write $T$ rather than $C$, where $T$ is the log partition function, with liquidity-adjusted version $T_b(\theta) = \tfrac{1}{b} T(b\theta)$. Let $\tilde\mu$ be the agent's mean belief with corresponding natural parameter $\tilde\theta$. Recall that a risk-neutral agent moves the share vector so that the prices match its mean parameter. Therefore, define the target shares as $\bar\theta = (\nabla T_b)^{-1}(\tilde\mu)$. The target shares and natural parameter are related by $\nabla T_b(\theta) = \nabla T(b\theta)$, so we have

(19)  $\bar\theta = \tilde\theta / b.$

Higher price responsiveness means fewer shares must be bought to make the market prices match the agent's expectation, so the natural parameter is scaled down accordingly. With a liquidity adjustment, the analysis of Theorem 5 can be extended and yields the following result, where as before $\theta$ and $\nabla T_b(\theta)$ are the market's outstanding shares and prices respectively. Under the conditions of Theorem 5 and an inverse liquidity of $b$, the agent's optimal trade moves the state vector to

(20)  $\frac{b}{a+b}\, \bar\theta + \frac{a}{a+b}\, \theta = \frac{1}{a+b}\, \tilde\theta + \frac{a}{a+b}\, \theta.$

According to (20), as $b$ grows large the agent moves the market state closer to the target shares, rather than to its true natural parameter. Note that the target shares themselves depend on $b$ by (19), but the update can be directly written in terms of the agent's beliefs as in the right-hand side of (20).
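A sketch of update (20) under the same assumptions (ours; hypothetical numbers):

```python
def liquidity_update(theta, theta_belief, a, b):
    # Update (20): with inverse liquidity b, the trader moves partway toward
    # the target shares theta_belief / b rather than theta_belief itself.
    return (theta_belief + a * theta) / (a + b)

theta, belief, a = -2.0, -0.5, 1.0
for b in (1.0, 5.0, 50.0):
    new = liquidity_update(theta, belief, a, b)
    print(b, new, new - belief / b)   # gap to the target shares shrinks with b
# b = 1 recovers update (18).
```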

5.1 Repeated Trading and the Effective Belief

In previous sections we have analyzed trader behavior assuming it is the trader's first entry into the market. We now pose the question: how will a trader reason about a possible future investment when he holds an existing portfolio? In the context of a trader possessing an exponential family belief together with exponential utility, we show that we can explicitly analyze how an agent incorporates an existing portfolio. The key conclusion is that a trader will reason about a future investment simply as though he had updated his belief and had no prior investment.

Suppose an exponential utility agent has an exponential family belief parametrized by natural parameter $\tilde\theta$. Based on this belief, let $s$ be the vector of shares the agent has purchased on first entry into the market. On a subsequent entry into this market with market state $\theta$, his optimal purchase $r$ is given by the solution of

$\nabla T\big( \tilde\theta - a(s + r) \big) = \nabla T(\theta + r).$

Then if $\hat\theta = \tilde\theta - as$ is the effective belief, the trader's optimal purchase is given by $r = (\hat\theta - \theta)/(1 + a)$, moving the share vector to $\frac{1}{1+a} \hat\theta + \frac{a}{1+a} \theta$, which is a convex combination of the effective belief and the current market state.

Theorem 5.1. Suppose an exponential utility maximizing trader with utility parameter $a$ and belief $\tilde\theta$ makes a purchase $s$ in a market. On subsequently re-entering the market, he will behave identically to an exponential utility maximizing trader with belief $\tilde\theta - as$ and no prior exposure in the market.

Theorem 5.1 implies that financial exposure can be equivalently understood as changing the privately held beliefs.
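The effective-belief reduction can be checked directly; in the sketch below (ours, hypothetical numbers), a trader who re-enters immediately after an optimal trade, with the state unchanged, finds no profitable further trade:

```python
def effective_belief(theta_belief, a, holdings):
    # Section 5.1: a trader holding portfolio s acts like a fresh trader
    # whose belief has natural parameter theta_belief - a * s.
    return theta_belief - a * holdings

theta0, belief, a = -2.0, -0.5, 1.0
s1 = (belief - theta0) / (1 + a)        # first trade, per (18)
theta1 = theta0 + s1                    # market state after the trade
hat = effective_belief(belief, a, s1)
s2 = (hat - theta1) / (1 + a)           # optimal re-entry trade at the same state
print(hat, s2)                          # s2 = 0: no incentive to trade again
```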

5.2 Equilibrium Market State for Exponential Utility Agents

We have shown that every exponential-utility maximizing trader picks the share vector so that the eventual market state can be represented as a convex combination of the current market state and the natural parameter of his (exponential family) belief distribution. In this section we will compute the equilibrium state in an exponential family market with multiple such traders.

We draw on a well-known result from game theory regarding the class of potential games. We say a function $\Phi$ is at a local optimum if changing any single coordinate of its argument does not increase the value of $\Phi$.

Theorem 5.2 (Monderer and Shapley [24]). Let $u_i(s_i, s_{-i})$ be the utility function of the $i$-th trader given strategies $s = (s_i, s_{-i})$. If there exists a potential function $\Phi$ such that

$u_i(s_i, s_{-i}) - u_i(s_i', s_{-i}) = \Phi(s_i, s_{-i}) - \Phi(s_i', s_{-i})$ for all $i$, $s_i$, $s_i'$, and $s_{-i}$,

then $s$ is a Nash equilibrium if and only if $\Phi$ is at a local optimum.

In the exponential family market, the cost function $C$ is identical to the log partition function $T$ defined in (5). Let $S = (s_1, \dots, s_k)$ be the matrix of share vectors purchased by the $k$ traders in the market at equilibrium. Let $\theta^0$ be the initial market state, $\tilde\theta_i$ the natural parameter of trader $i$'s belief distribution, and $a_i$ his risk aversion parameter. Define a potential function as

$\Phi(S) = -T\Big( \theta^0 + \sum_i s_i \Big) - \sum_i \tfrac{1}{a_i}\, T\big( \tilde\theta_i - a_i s_i \big).$

Rather than working directly with the utilities of every trader, we will work with the log of their utility values. (The potential function analysis still applies under any monotonically increasing transformation of the traders' utility functions.) Now the rescaled log-utility of trader $i$ is, up to terms that do not depend on $s_i$,

$-\tfrac{1}{a_i}\, T\big( \tilde\theta_i - a_i s_i \big) - T\Big( \theta^0 + \sum_j s_j \Big),$

which differs from $\Phi(S)$ only by terms independent of $s_i$. We can now apply Theorem 5.2, hence the equilibrium state is obtained by jointly maximizing $\Phi$; the first-order condition for each $i$ is

$\nabla T\Big( \theta^0 + \sum_j s_j \Big) = \nabla T\big( \tilde\theta_i - a_i s_i \big), \quad \text{i.e.,} \quad s_i = \frac{\tilde\theta_i - \theta^{\mathrm{eq}}}{a_i}.$

This leads to the following expression for the final market state:

$\theta^{\mathrm{eq}} = \frac{\theta^0 + \sum_i \tilde\theta_i / a_i}{1 + \sum_i 1/a_i}.$

We see that the equilibrium state is a convex combination of the initial market state and all agent beliefs, with the latter weighted according to risk tolerance $1/a_i$.
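The closed-form equilibrium is simple to compute and verify against the traders' first-order conditions; a sketch (ours, with hypothetical scalar states and beliefs):

```python
import numpy as np

def equilibrium_state(theta0, beliefs, aversions):
    # Equilibrium of Section 5.2: convex combination of the initial state and
    # the traders' beliefs, weighted by the risk tolerances 1 / a_i.
    w = 1.0 / np.asarray(aversions)
    return (theta0 + w @ np.asarray(beliefs)) / (1.0 + w.sum())

theta0 = -2.0
beliefs = [-0.5, -1.0, -4.0]
aversions = [1.0, 2.0, 0.5]
th_eq = equilibrium_state(theta0, beliefs, aversions)
s = (np.asarray(beliefs) - th_eq) / np.asarray(aversions)  # first-order conditions
print(th_eq, np.isclose(theta0 + s.sum(), th_eq))          # purchases add up
```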

6 Budget-limited Aggregation

In this section, we consider the evolution of the market state when traders are budget-limited. We assume that the traders trade in multiple instances of the market. As before, the market state is interpreted as an exponential family distribution over the outcome space, with the share vector as its natural parameter. Consistent with the connections drawn in Section 2 and throughout, we measure the error in prediction using the standard log loss.

We show that traders with faulty information can only impose a limited amount of additional loss on the market's prediction. Further, since informative traders experience an expected increase in budget, they will eventually become unconstrained and able to carry out unrestricted trades. Taken together, this means that while the market suffers limited damage from ill-informed traders, it is also able to make use of all the information from informative traders in the long run.

Budget-limited trades

Let $B$ be the budget of a trader in the market. Suppose that with an infinite budget, the trader would have moved the market state from $\theta$ to $\tilde\theta$, where $\tilde\theta$ represents his true belief. Now suppose further that $C(\tilde\theta) - C(\theta) > B$; that is, the trader's budget does not allow for purchasing enough shares to move the market state to his belief. In this case, we want to budget-limit the trader's influence on the market state.

Let the current market state be given by $\theta$ and let the final market state be $\theta' = \lambda \tilde\theta + (1 - \lambda)\theta$, where $\lambda \in [0, 1]$ is the largest value such that the cost $C(\theta') - C(\theta)$ of moving the market state from $\theta$ to $\theta'$ is at most his budget $B$; the move to $\theta'$ is called his budget-limited trade.
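Since the cost along the segment from $\theta$ to $\tilde\theta$ is continuous, the budget-limited $\lambda$ can be found by bisection; a sketch (ours) using the finite-outcome LMSR cost as a stand-in:

```python
import numpy as np

def lmsr_cost(theta):
    m = theta.max()
    return m + np.log(np.exp(theta - m).sum())

def budget_limited_state(theta, theta_belief, budget, iters=60):
    # Largest lam in [0, 1] such that moving to
    # lam * theta_belief + (1 - lam) * theta costs at most the budget.
    def over(lam):
        state = lam * theta_belief + (1 - lam) * theta
        return lmsr_cost(state) - lmsr_cost(theta) > budget
    if not over(1.0):
        return theta_belief              # the budget never binds
    lo, hi = 0.0, 1.0                    # invariant: cost(lo) <= budget < cost(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if over(mid) else (mid, hi)
    return lo * theta_belief + (1 - lo) * theta

theta = np.zeros(3)
belief = np.array([3.0, 0.0, -3.0])      # desired (unconstrained) final state
print(budget_limited_state(theta, belief, budget=0.25))
```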

Limited Damage

We will now quantify the error in prediction that the market maker might have to endure as a result of ill-informed entities entering the market. We assume that these entities trade in multiple instances of the market; thus the exposure of the market maker is over several rounds. The log loss function for $\theta$ shares held is defined as

$L(\theta; \omega) = -\log p_\theta(\omega) = T(\theta) - \langle \theta, \phi(\omega) \rangle.$

The loss induced on the market by an uninformative trader is bounded by his initial budget. To see this, first consider the change in budget of a trader over multiple rounds of the prediction market. Let his budget at rounds $t$ and $t+1$ be $B_t$ and $B_{t+1}$ respectively. The change in budget for a trader moving the market state from $\theta_t$ to $\theta_{t+1}$ with outcome $\omega$ is

$B_{t+1} - B_t = \langle \theta_{t+1} - \theta_t, \phi(\omega) \rangle - \big( C(\theta_{t+1}) - C(\theta_t) \big) = L(\theta_t; \omega) - L(\theta_{t+1}; \omega) = I_t,$

where $I_t$ is called the myopic impact of the trader in round $t$. Thus, the myopic impact captures the incremental gain in prediction due to the trader in a round and is equal to the change in his budget in that round.

Since the market evolves so that the budget of any trader never falls below zero, the total myopic impact over $k$ rounds caused by the trader is $\sum_{t=1}^{k} I_t = B_k - B_0 \geq -B_0$.
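The accounting identity between budget changes and log-loss improvements can be verified directly; a sketch (ours, with hypothetical states and indicator statistics):

```python
import numpy as np

def lmsr_cost(theta):
    m = theta.max()
    return m + np.log(np.exp(theta - m).sum())

def log_loss(theta, outcome):
    # L(theta; w) = C(theta) - theta[w] for indicator statistics.
    return lmsr_cost(theta) - theta[outcome]

theta_t  = np.array([0.2, -0.1, 0.0])
theta_t1 = np.array([0.8, -0.3, 0.1])
outcome = 0
budget_change = (theta_t1 - theta_t)[outcome] - (lmsr_cost(theta_t1) - lmsr_cost(theta_t))
myopic_impact = log_loss(theta_t, outcome) - log_loss(theta_t1, outcome)
print(np.isclose(budget_change, myopic_impact))   # True: the two coincide
```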

An interesting aspect of this bound is that the log loss can be quantified in the same units as the traders' budgets.

Budget of Informative Traders

We now characterize the expected change in budget for an informative trader.

Lemma 6. Let $\theta$ be the current market state. Suppose that an informative trader with belief distribution parametrized by $\tilde\theta$ moves the market state to the budget-limited state $\theta' = \lambda \tilde\theta + (1 - \lambda)\theta$. Then the expectation (over the trader's belief) of the trader's profit is strictly positive whenever his budget is positive and his belief differs from the previous market position $\theta$.

Proof. Let the cost function be equal to the log partition function $T$ of the belief distribution. The payoff is given by the sufficient statistics $\phi$. Then, the trader's expected net payoff is given by

$\mathbb{E}_{\tilde\theta}\big[ \langle \theta' - \theta, \phi(\omega) \rangle \big] - T(\theta') + T(\theta) = D_T(\theta, \tilde\theta) - D_T(\theta', \tilde\theta) \geq \lambda\, D_T(\theta, \tilde\theta),$

where $D_T$ is the Bregman divergence based on $T$. The inequality holds since $D_T(\cdot, \tilde\theta)$ is convex in its first argument and vanishes at $\tilde\theta$, so along the segment from $\theta$ to $\tilde\theta$ we have:

$D_T\big( \lambda \tilde\theta + (1 - \lambda)\theta,\ \tilde\theta \big) \leq (1 - \lambda)\, D_T(\theta, \tilde\theta).$

A trader who adjusts the market state (so that $\lambda > 0$ and $\tilde\theta \neq \theta$) may therefore expect a positive profit of at least $\lambda D_T(\theta, \tilde\theta)$.

We note one important aspect of Lemma 6: the expectation is taken with respect to each trader’s belief at the time of trade, rather than with respect to the true distribution. This is needed because we have made no assumptions about the optimality of the traders’ belief updating procedure. If we assume that the traders’ belief formation is optimal, then this growth result will extend to the true distribution as well.

Given a continuous density, the probability that a trader will form exactly the same beliefs as the current market position is 0, and thus each trader will have positive expected profit on almost all sequences of observed samples and beliefs. This result suggests that, eventually, every informative trader will have the ability to influence the market state in accordance with his beliefs, without being budget limited.

Notice that Lemma 6 only required that the market state to which the trader moves be representable as a convex combination of the current market state and his belief. This means that the result holds for exponential utility traders aiming to maximize their utility, by Theorem 5. In this case, the trader who moves the market state can expect his profit to be positive and at least $\frac{1}{1+a} D_T(\theta, \tilde\theta)$, where $a$ is the exponential utility parameter. When the cost function is adjusted to $T_b$ with an inverse liquidity parameter $b$ as in Section 5, the trader receives an expected payoff of at least $\frac{b}{a+b} D_{T_b}(\theta, \bar\theta)$, where $\bar\theta = \tilde\theta / b$ are the target shares.

References

  • Abernethy et al. [2013] Abernethy, J., Chen, Y., and Vaughan, J. W. 2013. Efficient market making via convex optimization, and a connection to online learning. ACM Transactions on Economics and Computation 1, 2, 12:1–12:39.
  • Banerjee et al. [2005a] Banerjee, A., Dhillon, I. S., and Ghosh, J. 2005a. Clustering with Bregman divergences. Journal of Machine Learning Research 6, 1705–1749.
  • Banerjee et al. [2005b] Banerjee, A., Dhillon, I. S., Ghosh, J., and Sra, S. 2005b. Clustering on the unit hypersphere using von Mises-Fisher distributions. Journal of Machine Learning Research 6, 1345–1382.
  • Barndorff-Nielsen [1978] Barndorff-Nielsen, O. 1978. Information and Exponential Families in Statistical Theory. Wiley Publishers.
  • Ben-Tal and Teboulle [2007] Ben-Tal, A. and Teboulle, M. 2007. An old-new concept of convex risk measures: The optimized certainty equivalent. Mathematical Finance 17, 3, 449–476.
  • Chen and Pennock [2007] Chen, Y. and Pennock, D. M. 2007. A utility framework for bounded-loss market makers. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence (UAI). 49–56.
  • Chen et al. [2013] Chen, Y., Ruberry, M., and Wortman Vaughan, J. 2013. Cost function market makers for measurable spaces. In Proceedings of the 14th ACM conference on Electronic commerce.
  • Chen and Vaughan [2010] Chen, Y. and Vaughan, J. 2010. A new understanding of prediction markets via no-regret learning. In Proceedings of the 11th ACM Conference on Electronic Commerce (EC). 189–198.
  • Dawid [1998] Dawid, A. P. 1998. Coherent measures of discrepancy, uncertainty and dependence with applications to Bayesian predictive experimental design. Tech. Rep. 139, University College London, Dept. of Statistical Science.
  • Dawid and Sebastiani [1999] Dawid, A. P. and Sebastiani, P. 1999. Coherent dispersion criteria for optimal experimental design. Annals of Statistics 27, 65–81.
  • Diaconis and Ylvisaker [1979] Diaconis, P. and Ylvisaker, D. 1979. Conjugate priors for exponential families. Annals of Statistics 7, 2, 269–281.
  • Föllmer and Knispel [2011] Föllmer, H. and Knispel, T. 2011. Entropic risk measures: Coherence vs. convexity, model ambiguity and robust large deviations. Stochastics and Dynamics 11, 333–351.
  • Föllmer and Schied [2002] Föllmer, H. and Schied, A. 2002. Convex measures of risk and trading constraints. Finance and Stochastics 6, 4, 429–447.
  • Föllmer and Schied [2004] Föllmer, H. and Schied, A. 2004. Stochastic Finance: An Introduction in Discrete Time. de Gruyter Studies in Mathematics. Walter de Gruyter.
  • Frongillo [2013] Frongillo, R. M. 2013. Eliciting means of distributions. In Eliciting Private Information from Selfish Agents. University of California, Berkeley, Chapter 4. PhD Thesis.
  • Gao et al. [2009] Gao, X., Chen, Y., and Pennock, D. M. 2009. Betting on the real line. In Proceedings of the 5th International Workshop on Internet and Network Economics (WINE). 553–560.
  • Gneiting and Raftery [2007] Gneiting, T. and Raftery, A. 2007. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association 102, 477, 359–378.
  • Grünwald and Dawid [2004] Grünwald, P. and Dawid, A. P. 2004. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. Annals of Statistics 32, 4, 1367–1433.
  • Hanson [2003] Hanson, R. 2003. Combinatorial information market design. Information Systems Frontiers 5, 1, 105–119.
  • Hanson [2007] Hanson, R. 2007. Logarithmic market scoring rules for modular combinatorial information aggregation. Journal of Prediction Markets 1, 1, 3–15.
  • Hiriart-Urruty and Lemaréchal [2000] Hiriart-Urruty, J.-B. and Lemaréchal, C. 2000. Fundamentals of Convex Analysis. Grundlehren Text Editions. Springer.
  • Jewell [1974] Jewell, W. S. 1974. Credible means are exact Bayesian for exponential families. Astin Bulletin 8, 1, 77–90.
  • Lambert et al. [2008] Lambert, N. S., Pennock, D. M., and Shoham, Y. 2008. Eliciting properties of probability distributions. In Proceedings of the 9th ACM Conference on Electronic Commerce. 129–138.
  • Monderer and Shapley [1996] Monderer, D. and Shapley, L. S. 1996. Potential games. Games and Economic Behavior 14, 1, 124–143.
  • Ostrovsky [2012] Ostrovsky, M. 2012. Information aggregation in dynamic markets with strategic traders. Econometrica 80, 6, 2595–2647.
  • Othman and Sandholm [2011] Othman, A. and Sandholm, T. 2011. Liquidity-sensitive automated market makers via homogeneous risk measures. In Internet and Network Economics. Springer, 314–325.
  • Pennock et al. [2001] Pennock, D. M., Lawrence, S., Giles, C. L., Nielsen, F. A., et al. 2001. The real power of artificial markets. Science 291, 5506, 987–988.
  • Pennock and Sami [2007] Pennock, D. M. and Sami, R. 2007. Computational aspects of prediction markets. In Algorithmic Game Theory, N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani, Eds. Cambridge University Press, Chapter 26.
  • Savage [1971] Savage, L. J. 1971. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association 66, 783–801.
  • Storkey [2011] Storkey, A. J. 2011. Machine learning markets. In Proceedings of AI and Statistics (AISTATS). 716–724.
  • Varian [1992] Varian, H. R. 1992. Microeconomic Analysis. W. W. Norton and Company.
  • Wainwright and Jordan [2008] Wainwright, M. J. and Jordan, M. I. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning 1, 1–305.
  • Wolfers and Zitzewitz [2006] Wolfers, J. and Zitzewitz, E. 2006. Interpreting prediction market prices as probabilities. Tech. rep., National Bureau of Economic Research.