Robust Multi-agent Counterfactual Prediction

04/03/2019, by Alexander Peysakhovich, et al.

We consider the problem of using logged data to make predictions about what would happen if we changed the `rules of the game' in a multi-agent system. This task is difficult because in many cases we observe actions individuals take but not their private information or their full reward functions. In addition, agents are strategic, so when the rules change, they will also change their actions. Existing methods (e.g. structural estimation, inverse reinforcement learning) make counterfactual predictions by constructing a model of the game, adding the assumption that agents' behavior comes from optimizing given some goals, and then inverting observed actions to learn each agent's underlying utility function (a.k.a. type). Once the agent types are known, making counterfactual predictions amounts to solving for the equilibrium of the counterfactual environment. This approach imposes heavy assumptions such as rationality of the agents being observed, correctness of the analyst's model of the environment/parametric form of the agents' utility functions, and various other conditions to make point identification possible. We propose a method for analyzing the sensitivity of counterfactual conclusions to violations of these assumptions. We refer to this method as robust multi-agent counterfactual prediction (RMAC). We apply our technique to investigating the robustness of counterfactual claims for classic environments in market design: auctions, school choice, and social choice. Importantly, we show RMAC can be used in regimes where point identification is impossible (e.g. those which have multiple equilibria or non-injective maps from type distributions to outcomes).







1 Introduction

All markets have rules and some rules work better than others (Roth and Peranson, 1999; Roth et al., 2005; Abdulkadiroğlu et al., 2005; Klemperer, 2002; Porter et al., 2003). Figuring out which rules yield good outcomes is the bread and butter of market design, an interdisciplinary field focused on the engineering of effective rules (Roth, 2002). Good market design is particularly important for businesses which make their livelihoods as platforms (e.g. internet ad auctions, ride sharing, dating sites). A key challenge for market designers is to observe an existing set of rules at work and make a counterfactual statement about how outcomes would change if the rules changed (Bottou et al., 2013; Athey, 2015).

The multi-agent counterfactual question is difficult for two reasons. First, participants are strategic. An agent’s optimal action can change due to changes in the rules of the game, and often, can change when other agents change what they are doing. Second, agents have private information that is not known to the designer so even knowledge of the rules, and ability to compute optimal actions, is insufficient to estimate counterfactual outcomes. For example, if we observed data from a series of first-price sealed bid auctions, we could not assume that agents would continue to bid the same way if we changed the auction format to second price with a reserve.

The technique of structural estimation deals with these issues by assuming that observed actions are coming from a multi-agent system that, through repetition or other forces, has come to equilibrium. Further, it is often assumed that once changes are made, the system will again equilibrate. This means the counterfactual question becomes asking how equilibria change as we make design changes. A downside of the standard structural approach is that it requires strong assumptions that are not always completely true in practice. For example, this process requires assuming that agents are optimizing their utility given the behavior of others so that an analyst can infer underlying ‘taste’ parameters from agent actions (Berry et al., 1995; Athey and Nekipelov, 2010). It is well known, however, that human decisions do not always obey the axioms of utility maximization (Camerer et al., 2011) and that both mistakes and biases can persist even when there is ample opportunity for learning (Erev and Roth, 1998; Fudenberg and Peysakhovich, 2016).

Our main contribution is to propose a method which allows an analyst to see how robust their counterfactual conclusions are to relaxations of the assumptions of rationality and correct specification of the model. We first show that the counterfactual estimation problem can be written as a game which we call a revelation game. When the standard assumptions are satisfied the revelation game has a unique equilibrium. Looking at the set of ε-equilibria of the revelation game is equivalent to relaxing these assumptions.

To apply this idea in practice we need to solve for particular equilibria of the revelation game: the ‘worst’ and ‘best’ elements of the ε-equilibrium set with respect to some evaluation function (e.g. revenue). These equilibria form the upper and lower bounds for our robust multi-agent counterfactual prediction (RMAC). Varying ε gives the analyst a measure of how confident they can be in their inferences as they relax how strictly the standard assumptions hold. In addition, RMAC can be applied even when standard assumptions about point identification do not hold (e.g. when there are multiple equilibria or when the data is consistent with multiple type distributions) to compute optimistic and pessimistic counterfactual predictions.

The RMAC bounds are different from standard uncertainty bounds (e.g. the standard error of a maximum likelihood estimator). Statistical uncertainty bounds (i.e. standard errors) reflect variance introduced by access to only finite data but still assume the underlying model is completely correct. On the other hand, our robustness bounds are intended to measure error that can come from the analyst using a model that is precisely incorrect but approximately true.

We show that computing the RMAC bounds exactly is a difficult problem: it is NP-hard even for 2-player Bayesian games. We propose a first-order method based on fictitious play applied to the revelation game, which we refer to as revelation game fictitious play (RFP), to compute the RMAC bounds.

We apply RFP to generate RMAC in three domains of interest: auctions, matching, and social choice. In each of them we find that some counterfactual predictions are much more robust than others. Variation in even these simple cases suggests that RMAC can be a useful addition to the toolbox of structural estimation.

1.1 Related Work

Our work is closely related to the notion of partial identification (PI) (Manski, 2003). The main idea behind partial identification is that many statistical models are only able to recover a set of parameters consistent with the data, not a single point estimate. The PI literature focuses on models where this ‘identified set’ can be extracted easily. The adversarial revelation game is strongly related in that the equilibrium relaxation we employ makes the counterfactual predictions a set rather than a point. Our optimization procedure finds this set’s worst (in terms of some evaluation function) and best elements and returns them.

Existing work in the field of market design has used econometric techniques to estimate counterfactuals in specific applications. Some existing work focuses on the econometrics of auctions and deriving underlying valuations (types) from bid behavior (Athey and Nekipelov, 2010; Chawla et al., 2017) or payment profiles. Agarwal (2015) focuses on using data from medical residency matches to infer the underlying benefit to a young doctor from a particular residency. These approaches are, like ours, designed with the goal of answering counterfactual questions. However, while they allow for measures of statistical uncertainty, they do not allow analysts to check for robustness of conclusions to violations of assumptions. Haile and Tamer (2003) consider using ‘incomplete’ models of auctions to provide some form of robustness but, like much of the literature on the econometrics of auctions (and unlike RMAC), their approach requires hand-deriving estimators specifically tailored to the auction at hand.

Since the pioneering work of Myerson (1981) there has been a large subfield of game theory dedicated to designing mechanisms that optimize some quantity (e.g. seller revenue). Myerson-style results are useful because they give closed-form solutions to optimal auction design; however, this comes at a high informational cost. For example, they often require the auctioneer to know the distribution of types (valuations) in the population. These strong assumptions are relaxed in robust mechanism design (Bergemann and Morris, 2005), automated mechanism design (Conitzer and Sandholm, 2002), and recent work using deep learning methods to approximate optimal mechanisms (Dütting et al., 2017; Feng et al., 2018). Optimal mechanism design is related to, but different from, the RMAC problem as it typically assumes access to at least some direct information about the distribution of types, whereas our main problem is to robustly infer the underlying types from observed actions. However, these problems are related and combining insights from these literatures with RMAC is an interesting direction for future work.

There is recent interest in relaxing equilibrium assumptions in structural models. For example, Nekipelov et al. (2015) consider replacing equilibrium assumptions with the assumption that individuals are no-regret learners. This, again, gives a set-valued solution concept which can be worked out explicitly for the special case of auctions. Given the prominence of no-regret learning in algorithmic game theory, a natural extension of the work in this paper is to consider expanding RMAC to learning as a solution concept.

2 Bayesian Games

We consider the standard one-shot Bayesian game setup. There are n players, each of which has a type θ_i drawn from an unknown distribution F. This type is assumed to represent their preferences and private information. For example, in the case of auctions this type describes each player's valuation for each object.

Definition 1.

A game Γ has a set of actions A_i for each player i, with generic element a_i. After each player chooses their action, player i receives utility u_i(a_1, …, a_n; θ_i).

We will be interested in systems that come to a stable state, and we will use the concept of Bayesian Nash equilibrium. We denote a strategy for player i in game Γ as a mapping s_i which takes as input the type θ_i and outputs an action a_i ∈ A_i. As is standard, for a vector x of variables (strategies, actions, types, etc.), one for each player, we let x_i be the variable for player i and x_{−i} be the vector for everyone other than i.

Definition 2.

A Bayesian Nash equilibrium is a strategy profile s such that for each player i, all possible types θ_i for that player which have positive probability under F, and any other strategy s′_i, we have

E_{θ_{−i}}[u_i(s_i(θ_i), s_{−i}(θ_{−i}); θ_i)] ≥ E_{θ_{−i}}[u_i(s′_i(θ_i), s_{−i}(θ_{−i}); θ_i)].

The Bayesian Nash equilibrium (BNE) assumption can be motivated by, for example, assuming that repeated play (with rematching) has led learning agents to converge to such a state (Fudenberg and Levine, 1998; Dekel et al., 2004; Hartline et al., 2015). Importantly, BNE states that players’ actions are optimal given the distribution of partners they could play, not necessarily that they are optimal at each realization of the game with types fixed.

For the purposes of lightening notation, from here on we will deal with games where every player's action set is the same set A and every player's type is drawn iid from F.
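As a concrete check of Definition 2, consider the textbook two-player first-price auction with values drawn iid from U[0,1], whose symmetric BNE bids half one's value, b(v) = v/2. The Monte Carlo sketch below (our own illustration, not the paper's code) verifies the best-response property numerically at one type:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-player first-price auction, values iid U[0,1].
# The standard symmetric BNE bids half one's value: b(v) = v/2.
def bne_bid(value):
    return value / 2.0

def expected_utility(my_value, my_bid, n_samples=200_000):
    """Monte Carlo expected utility of `my_bid` when the opponent
    follows the equilibrium strategy (exact ties ignored: measure zero)."""
    opp_bids = bne_bid(rng.uniform(0.0, 1.0, size=n_samples))
    wins = my_bid > opp_bids
    return float(np.mean(wins * (my_value - my_bid)))

# BNE requires the equilibrium bid to beat nearby deviations in expectation.
v = 0.8
u_eq = expected_utility(v, bne_bid(v))           # win prob 0.8, margin 0.4
u_up = expected_utility(v, bne_bid(v) + 0.05)    # win more often, earn less
u_down = expected_utility(v, bne_bid(v) - 0.05)  # earn more, win less often
```

Both deviations lower expected utility, consistent with b(v) = v/2 being a best response to itself.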

3 The Revelation Game as a Counterfactual Estimator

Given the formal setup above, we now turn to answering our main question:

Question 1.

Suppose we have a dataset D of actions played in Γ. What can we say about what would happen if we changed the underlying game to Γ′?

Formally, when we say that we change the game to Γ′ we mean that the action set changes to A′ and the utility functions change to u′. Γ′ remains a Bayesian game, so the definitions and notation above continue to apply.

As a concrete example: in the case of online advertising auctions, D will contain a series of auctions with bids taken by different participants. We may wish to ask: what would happen if we changed the auction format?

We now discuss a set of assumptions typically made either implicitly or explicitly when analysts apply equilibrium based structural models to estimate a counterfactual:

Assumption 1 (Equilibrium).

Data D is drawn from a BNE of Γ, and play in Γ′ will form a BNE.

Assumption 2 (Identification).

For any possible distribution of types and associated BNE there does not exist another distribution of types and BNE that induces the same distribution of actions.

Assumption 3 (Uniqueness in Γ′).

Given F, there is a unique BNE in Γ′.

If the assumptions are satisfied then we can use D to answer the counterfactual question. By Assumption 1 each action in D is optimal against the distribution of actions implied by F. If D is large enough then it approximates the true distribution of actions implied by F and Γ. By Assumption 2, there is a unique F and BNE that give rise to this distribution, and we can use various methods to find them. Once we have F we can solve for the equilibrium in Γ′, which is unique by Assumption 3, using any number of methods, and we are done.

We now show this procedure is equivalent to solving for the Nash equilibrium in a modified game which we refer to as a revelation game. (We are indebted to Jason Hartline, who pointed out in an earlier version of this work that our optimization problem can be thought of as equilibrium finding, which makes the exposition much simpler.) We do not consider that agents will actually play this game; rather, we will show that this proxy game is a useful abstraction for doing robust counterfactual inference.

The revelation game has |D| players, one for each element of D. We refer to these as data-players to avoid confusion with the players in Γ and Γ′. Each data-player d knows that the analyst has a random variable D of actions from the equilibrium of Γ. D includes data-player d's own true equilibrium action, but the other actions are ex-ante unknown. Each data-player has a true type θ_d which is unknown to the analyst; the types of the other data-players are unknown to d, but it is commonly known that they are drawn from the distribution F.

Each data-player d makes a decision: they report a type θ̂_d and an action â_d for the counterfactual game Γ′. They are paid as follows: first, let D_{−d} denote the random variable of the actions of the other data-players that the analyst will observe. Now we define the D-Regret of data-player d as the regret, under the reported type, of the action a_d actually observed in the data:

D-Regret_d(θ̂_d) = max_a E[u(a, D_{−d}; θ̂_d)] − E[u(a_d, D_{−d}; θ̂_d)].

We define the Γ′-Regret of data-player d as the regret of the reported counterfactual action against the reports of the other data-players:

Γ′-Regret_d(θ̂_d, â_d) = max_{a′} E[u′(a′, â_{−d}; θ̂_d)] − E[u′(â_d, â_{−d}; θ̂_d)].

The revelation game is a Bayesian game where each data-player tries to minimize a loss given by the max of the two above regrets:

L_d(θ̂_d, â_d) = max( D-Regret_d(θ̂_d), Γ′-Regret_d(θ̂_d, â_d) ).

Since most game theory definitions (e.g. equilibria) use utility maximization, rather than loss minimization, we will also sometimes use the notation u^rev_d = −L_d.
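The loss above can be made concrete with a small sketch of our own (not the paper's code): Γ is a first-price auction, Γ′ a second-price auction, bids are discretized, and the expectations are taken over the empirical actions of the other data-points. All function names are illustrative:

```python
import numpy as np

def fp_utility(my_bid, opp_bid, my_type):
    # payoff in the original game: first-price auction (strict-win tie rule)
    return (my_type - my_bid) if my_bid > opp_bid else 0.0

def sp_utility(my_bid, opp_bid, my_type):
    # payoff in the counterfactual game: second-price auction
    return (my_type - opp_bid) if my_bid > opp_bid else 0.0

def d_regret(reported_type, observed_bid, opp_bids, grid):
    """D-Regret: how much better the best bid would have done than the
    data-player's observed bid, under the reported type."""
    u_obs = np.mean([fp_utility(observed_bid, b, reported_type) for b in opp_bids])
    u_best = max(np.mean([fp_utility(a, b, reported_type) for b in opp_bids])
                 for a in grid)
    return u_best - u_obs

def cf_regret(reported_type, cf_action, cf_opp_actions, grid):
    """Counterfactual-game regret of the reported action."""
    u_obs = np.mean([sp_utility(cf_action, b, reported_type) for b in cf_opp_actions])
    u_best = max(np.mean([sp_utility(a, b, reported_type) for b in cf_opp_actions])
                 for a in grid)
    return u_best - u_obs

def revelation_loss(reported_type, observed_bid, cf_action,
                    opp_bids, cf_opp_actions, grid):
    # the revelation-game loss is the max of the two regrets
    return max(d_regret(reported_type, observed_bid, opp_bids, grid),
               cf_regret(reported_type, cf_action, cf_opp_actions, grid))

grid = [i / 20 for i in range(21)]   # bids discretized on [0, 1]
opp_bids = [0.1, 0.3, 0.6]           # empirical actions of other data-points
loss = revelation_loss(0.5, 0.25, 0.5, opp_bids, opp_bids, grid)
```

Note that truthful bidding in the second-price counterfactual incurs zero counterfactual regret, as expected from dominant-strategy truthfulness.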

Given these definitions, we can show the following property:

Theorem 1.

If Assumptions 1-3 are satisfied then the revelation game has a unique BNE in which each data-player reveals their true type and true counterfactual action.

We leave the proof of the theorem to the Appendix. With this result in hand, we now discuss how to modify the revelation game to make our counterfactual predictions robust.

4 Robust Multi-agent Counterfactual Inference

In reality, Assumptions 1-3 above are rarely satisfied exactly, and we would like to see how robust our conclusions are to violations of them. In particular, we are interested in allowing agents to not be perfectly rational, not requiring identification to hold strictly, and allowing for our model of agents’ reward functions to be misspecified. In addition, all modeling makes one further important assumption:

Assumption 4 (Specification).

Γ and Γ′ include the correct specifications of individuals’ reward functions.

which, like the others, is rarely completely true in practice.

To relax all of these assumptions we will consider the concept of ε-equilibrium:

Definition 3.

For ε ≥ 0, an ε-Bayesian Nash equilibrium is a strategy profile s such that for each player i, all possible types θ_i for that player which have positive probability under F, and any other strategy s′_i, we have

E_{θ_{−i}}[u_i(s_i(θ_i), s_{−i}(θ_{−i}); θ_i)] ≥ E_{θ_{−i}}[u_i(s′_i(θ_i), s_{−i}(θ_{−i}); θ_i)] − ε.

Allowing for ε-BNE in the revelation game means that we are also allowing for ε-BNE in Γ and Γ′. Using this formulation is equivalent to relaxing assumptions about rationality or correct specification in our structural models. ε-equilibria can arise because agents are imperfect optimizers (but are able to learn to avoid actions that cause large regret) or because the utility functions in Γ or Γ′ are slightly incorrect (and individuals reach an equilibrium corresponding to some other reward function).

However, like many instances of partial identification (Manski, 2003), ε-BNE is a set-valued solution concept. Rather than enumerate the whole set, we will consider particular boundary equilibria:

We assume the existence of an evaluation function V which gives a scalar evaluation of the counterfactual outcome that the analyst cares about. We overload notation and let V(σ) be the expected value of V given a mixed strategy σ. Common examples of evaluation functions used in the mechanism design literature include revenue, efficiency, fairness, envy, stability, strategy-proofness, or some combination of them (Roth and Sotomayor, 1992; Guruswami et al., 2005; Budish, 2011; Caragiannis et al., 2016).

We will consider the maximal and minimal elements of the ε-BNE set with respect to V. Formally:

Definition 4.

The ε-pessimistic counterfactual prediction of V is

V_ε^− = inf { V(σ) : σ is an ε-BNE of the revelation game }.

The ε-optimistic prediction V_ε^+ replaces the inf with a sup. The ε-RMAC bounds are the values of V attained at the pessimistic and optimistic predictions.

Figure 1 summarizes the idea behind RMAC. Standard structural assumptions imply a one-to-one mapping between observed distributions and underlying types, followed by a one-to-one mapping between underlying types and counterfactual behavior. Assuming only ε-equilibrium makes both of these mappings one-to-many, and the RMAC bounds select the most optimistic and pessimistic counterfactual distributions consistent with these mappings.

Figure 1: Standard structural assumptions allow us to map an observed distribution of equilibrium actions into an underlying distribution of types and then map this distribution of types into a distribution of counterfactual actions. This is represented by points and solid arrows. Assuming ε-equilibrium in the original game means we now map a distribution of actions to many possible distributions of types. Assuming ε-equilibrium in the counterfactual game implies another one-to-many mapping from underlying type distributions to counterfactual action distributions. RMAC bounds, represented by the dashed arrows, are the most optimistic and pessimistic choices of counterfactual actions (with respect to some evaluation function V) from this set.

5 Computing Equilibria of the Revelation Game

In practice, we can replace the random variable D of the revelation game with its sample analogue, the observed data. From here forward D will refer to the sample data. Unfortunately, we can derive a quite negative complexity result for computing ε-RMAC bounds exactly:

Theorem 2.

It is NP-hard to compute the robust counterfactual estimate even if each data-point has only a single feasible type and there are only two data points. It is also NP-hard even if there is no objective function, there is a finite number of feasible types, and Γ′ has only two players.

The proof follows from reductions between the revelation game and known NP-hard problems, and we leave it to the Appendix. Importantly, NP-hard does not mean impossible, and the RMAC bounds can be computed for small instances using a mathematical program. We give this program in the Appendix for the case where we only consider pure-strategy ε-BNE. In addition, we show that for the special case where Γ′ is a two-player game, we can solve for the RMAC bounds using a mixed integer program.

5.1 Revelation Game Fictitious Play

Given that computing RMAC bounds exactly is intractable beyond the simplest cases, we propose to adapt the fictitious play algorithm (Brown, 1951) to compute the optimistic and pessimistic equilibria of the revelation game. We refer to this as Revelation Game Fictitious Play (RFP).

RFP works as follows. For notation, let θ̂_d^t be the estimated type for data point d at iteration t and â_d^t be the estimated counterfactual action at iteration t. Recall the definition of the revelation game utility as u^rev_d = −L_d, the negative of the larger of the two regrets.

As with standard fictitious play, at each time step each data-player reports a type-action pair. They observe the choices of others and update their choice to the one that minimizes (or maximizes) V out of the set of ε-best responses to the current history of play (when ε = 0, RFP simply chooses the best response to the current history, breaking ties randomly). The pseudocode is shown in Algorithm 1.

Input: D, Γ, Γ′, ε, pessimism/optimism
if pessimistic then V′ ← V
if optimistic then V′ ← −V
Randomly initialize (θ̂_d⁰, â_d⁰) for each data-point d
while not converged do
     Let μ_θ^t(d) be the historical distribution of θ̂_d for each data-point d
     Let μ_a^t(d) be the historical distribution of â_d for each data-point d
     Let σ^t be the strategy defined by the historical distribution of (θ̂_d, â_d) for each data-point d
     Let the set of low-regret revelation game actions be B_ε(d) = {(θ, a) : L_d((θ, a), σ^t_{−d}) ≤ min_{(θ′, a′)} L_d((θ′, a′), σ^t_{−d}) + ε}
     Breaking ties randomly, update guesses for each datapoint: (θ̂_d^{t+1}, â_d^{t+1}) ∈ argmin_{(θ, a) ∈ B_ε(d)} V′
Algorithm 1: Revelation Fictitious Play
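RFP builds on classical fictitious play. As a minimal self-contained illustration of the inner loop (plain fictitious play on rock-paper-scissors, not the full RFP, which additionally tracks a type-action pair per data-point and breaks ties in the ε-best-response set by V):

```python
import numpy as np

# Row player's payoffs in rock-paper-scissors (zero-sum, symmetric).
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]], dtype=float)

def fictitious_play(n_iters=20_000, seed=0):
    rng = np.random.default_rng(seed)
    counts = np.ones(3)  # historical action counts (uniform prior)
    for _ in range(n_iters):
        beliefs = counts / counts.sum()
        values = PAYOFF @ beliefs                 # expected payoff per action
        best = np.flatnonzero(np.isclose(values, values.max()))
        action = rng.choice(best)                 # break ties randomly
        counts[action] += 1                       # symmetric self-play update
    return counts / counts.sum()

freqs = fictitious_play()
```

Actions cycle, but the empirical frequencies approach the unique mixed equilibrium (1/3, 1/3, 1/3), the classic illustration that fictitious play converges in frequencies for zero-sum games.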

It is well-known that fictitious play converges in 2-player zero-sum games and in potential games, while it may cycle in general. Nonetheless, a well-known result states that if fictitious play converges, then it converges to a Nash equilibrium (Fudenberg and Levine, 1998).

We now show an analogous result for RFP: if RFP converges, then it converges to an ε-BNE and locally minimizes V in the sense that no unilateral deviation by a single data-player in the revelation game that is a strict ε-best response leads to a smaller V.

Recall that we use h^t to denote a history of behavior through iteration t. We denote by σ(h^t) the mixed strategy implied by that history. As with standard fictitious play, we consider convergence of σ(h^t):

Definition 5.

RFP converges to a mixed strategy σ* if σ(h^t) → σ* as t → ∞.

We use the following notion of local optimality (analogously defined for optimistic V):

Definition 6.

A mixed ε-BNE σ* of the revelation game is locally V-optimal if

V(σ*) ≤ V(σ′_d, σ*_{−d})

for any data-player d and unilateral deviation σ′_d whose regret against σ*_{−d} is strictly less than ε.

Note the strict inequality on the value of the deviation: we do not show robustness to lower V at deviations that lie exactly on the boundary of the ε-best-response set at convergence. The reason is that there may be deviations whose regret is strictly greater than ε at every finite t but converges to ε from above, so they enter the best-response set only in the limit.

Theorem 3.

If RFP converges to σ*, then σ* is an ε-BNE of the revelation game and is locally V-optimal.

We relegate the proof to the Appendix. The argument is a fairly straightforward extension of standard fictitious play results to the revelation game.

An important question is whether RFP can be guaranteed to converge in particular classes of Bayesian games. We leave the theoretical study of RFP (or other learning algorithms in the revelation game) to future work and focus the rest of the paper on empirical evaluation.

6 Experiments

We now turn to constructing RMAC bounds for classic problems in market design including auctions, school choice, and social choice.

6.1 RMAC in Auctions

Figure 2: RMAC revenue predictions using data drawn from the equilibrium of a first-price auction, for various counterfactual auction formats. The RMAC robustness bounds, even with small ε, are much larger than the standard error bounds (grey ribbon around the RMAC line) estimated from multiple replicates.

As our first evaluation, we consider the study of counterfactual revenue in auctions. We set Γ to be a first-price auction with types drawn uniformly and bids in the same interval, discretized at a fixed step. As our counterfactual games we consider second-price auctions with varying reserves r (a reserve price in an auction is a price floor: individuals cannot win the auction if they bid below the reserve; in addition, in the case of second-price auctions, the price paid by the winner is the max of r, as long as r is less than the winning bid, and the second-highest bid) and first-price auctions with a different number of players.
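The second-price-with-reserve outcome rule described above can be sketched as follows (our own illustration; tie-breaking and bidder priorities are ignored for simplicity):

```python
def second_price_with_reserve(bids, r):
    """Sealed-bid second-price auction with reserve price r.
    Returns (winner_index_or_None, price)."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = ranked[0]
    if bids[winner] < r:
        return None, 0.0               # no bid clears the reserve: no sale
    second = bids[ranked[1]] if len(bids) > 1 else 0.0
    # winner pays the larger of the reserve and the second-highest bid
    return winner, max(r, second)
```

For example, with bids (0.9, 0.4) and reserve 0.5 the high bidder wins and pays the reserve, since the second-highest bid is below it.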

We use counterfactual expected revenue as our evaluation function V. We set the domain of possible types equal to the bid domain. (In our experiments we found that the choice of initial hypothesis space mattered a great deal: allowing a larger upper bound let some extreme types be set fairly far above the true values. Thus, incorporating analyst priors is an important part of RMAC, and the addition of other forms of regularization that can reflect these priors is an important future research direction.)

We generate data by first sampling 1000 independent types and their actions from the closed-form first-price equilibrium, using these actions as D. We then use D to compute ε-RMAC predictions for several levels of ε. Figure 2 shows our results, with (small) error bars shown as standard deviations of the statistic over replicates.
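The data-generating step can be sketched as follows. For iid U[0,1] values the standard closed-form symmetric first-price BNE is b(v) = (n−1)v/n; the choice n = 2 and the grid step of 0.05 here are our assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_dataset(n_players=2, n_obs=1000, grid_step=0.05):
    """Sample types iid from U[0,1] and bid according to the symmetric
    first-price BNE b(v) = (n-1)/n * v, snapped to a discrete bid grid."""
    types = rng.uniform(0.0, 1.0, size=n_obs)
    bids = (n_players - 1) / n_players * types
    bids = np.round(bids / grid_step) * grid_step  # discretize the bids
    return types, bids

types, bids = generate_dataset()
```

With n = 2 the bids shade values by half, so all observed bids land on grid points in [0, 0.5].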

Figure 3: In-depth analysis of how RMAC changes counterfactual estimates. The top panel shows estimated types: RMAC pushes the entire distribution up or down, and in this special case the extent of the downward shift is not affected by the counterfactual game. This happens in auctions because in RMAC the type regret is determined by the original game and the data, and lower valuations guarantee lower revenue in the counterfactual game. The bottom panel shows RMAC-generated counterfactual strategies for various counterfactual auctions.

We see that in auctions, even slight changes to ε can lead to large changes in revenue. In particular, relative to the average expected utility accrued to the auction winner, the ε values we consider correspond only to slight misoptimization/misspecification. However, even this small ε still gives quite wide revenue bounds.

To see the logic behind this lack of robustness, consider the pessimistic estimate, in which the data is drawn from an ε-equilibrium where individuals are overbidding in the original game and underbidding in the counterfactual game.

Assuming a uniform bid distribution, an individual's regret for (unilaterally) deviating by δ is on the order of δ² in either a first- or second-price auction. However, if all individuals decrease their bid by δ, expected revenue will decrease by approximately δ. Therefore, we expect a worst-case ε-equilibrium in the counterfactual game to decrease revenue by roughly √ε. In addition, there will be a similar decrease in revenue from the shift in types inferred from the original game.
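Under the simplifying assumption that the opponent's bid is uniform on [0, B] in a two-bidder first-price auction (a sketch of our own, not the paper's derivation), the second-order regret logic can be spelled out:

```latex
u(b) = \frac{b}{B}\,(v - b), \qquad b^{*} = \frac{v}{2}, \qquad
u(b^{*}) - u(b^{*} - \delta)
  = \frac{v^{2}/4 - \left(v^{2}/4 - \delta^{2}\right)}{B}
  = \frac{\delta^{2}}{B}.
```

A unilateral shade of δ therefore costs only order-δ² regret, so it stays within ε whenever δ is of order √ε; but if every bidder shades by δ together, revenue moves by order δ = √ε.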

The top panel of Figure 3 plots the RMAC estimated types as a function of true type and we can see that the type distribution is fairly uniformly shifted down. As a robustness check we can also see that this downward shift is not affected by the counterfactual game. This is not a general property of the RMAC estimator, and is specific to this case of auctions where revenue will be monotonic in counterfactual bid and counterfactual bid will be monotonic in type.

The worst-case scenario is compounded by the assumption that the equilibrium that obtains in the counterfactual will be the one where these same individuals slightly underbid. We can see the RMAC type-contingent counterfactual strategies plotted in the bottom panel of Figure 3. Error ribbons reflect upper and lower percentiles taken over multiple replicates, with wide bands appearing when reserves are set high, since any bid below the reserve always achieves a payoff of 0 and so individuals are indifferent between those bids.

6.1.1 RMAC Without Point Identification

We now discuss how RMAC can be useful for situations where point identification of a structural model is not guaranteed. This can happen when there are multiple equilibria in Γ or when the mapping from type distributions to equilibrium distributions in Γ is not injective. In such situations there will be multiple solutions to a maximum likelihood estimator and no guarantees about which one will be output by the procedure. On the other hand, RMAC bounds will still be well defined, and if we choose a small enough ε they will be close to the worst- and best-case full equilibria.

We illustrate this by considering counterfactual prediction where Γ is a second-price auction with a reserve r, with the same simulation parameters as above (as D we use truthful reports). Γ is dominant-strategy truthful for all types, but the payoff to any bid below the reserve is always 0, so any type below the reserve can rationalize any bid in this interval. This means that the type distribution is not point identified from an action distribution. We apply RMAC to this situation with the counterfactual question of what would happen if we changed the reserve r.

Figure 4 shows the results. We see in the left panel that RMAC bounds for reserves below the original reserve are very wide, whereas bounds for reserves above the original are smaller, since the type censoring appears only on one side.

The right panel shows that here, unlike in the auction experiments above, the choice of Γ′ does affect type estimation. When the counterfactual reserve is low, the pessimistic RMAC pushes previously unidentified types down to create the worst-case equilibrium. When the counterfactual reserve is set very high, low types do not bid above the reserve even in the optimistic equilibria, and so types which were not identified in the original game remain unidentified and their guesses are chosen arbitrarily.

Figure 4: Results for data drawn from a second-price auction with a reserve, where the counterfactual question involves changing the reserve. RMAC is well defined even when the inverse problem is not identified because multiple types are consistent with the same observed actions. The maximum likelihood solution (red line) simply picks a random type from among all equally likely ones. RMAC bounds reflect the lack of identification in the original game, as they are quite large for counterfactual reserves less than the original reserve. In the right panel we see that in this situation, unlike in the example above, the choice of counterfactual game does affect the estimated underlying types.

6.2 RMAC in School Choice

We move to another commonly studied domain: school choice. Here the problem is to assign items (schools) to agents (students). Agents have preferences over schools, report them, and the output of the mechanism is an assignment.

We look at two real-world school choice mechanisms. The first is the Boston mechanism (Abdulkadiroğlu et al., 2005). In Boston, each student reports a rank-order list, and the mechanism maximizes the number of first-choice assignments it can make. Once it has done this, it maximizes the number of second-choice assignments, and so on. The second is the random serial dictatorship (RSD) mechanism (Abdulkadiroğlu and Sönmez, 1998). Here students are each given a random number and sorted; the first in line chooses their favorite school, the second chooses their favorite among what's left, and so on.
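RSD is simple enough to sketch directly; here is a minimal implementation of our own for the 3-student, 3-school setting (the school labels, preference profile, and seed are purely illustrative):

```python
import random

def random_serial_dictatorship(reported_prefs, schools, rng):
    """reported_prefs: one rank-order list per student.
    Returns a dict mapping student index -> assigned school."""
    order = list(range(len(reported_prefs)))
    rng.shuffle(order)                 # draw a random priority order
    remaining = set(schools)
    assignment = {}
    for student in order:
        # student picks their favorite school among those still available
        choice = next(s for s in reported_prefs[student] if s in remaining)
        assignment[student] = choice
        remaining.remove(choice)
    return assignment

rng = random.Random(0)
prefs = [["A", "B", "C"]] * 3          # identical preferences, as in the text
out = random_serial_dictatorship(prefs, ["A", "B", "C"], rng)
```

With identical reported preferences the random priority order fully determines who gets which school, which is why truthful reporting is a dominant strategy here.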

Figure 5: RMAC intervals for the change in social welfare and change in truthfulness from changing school choice mechanisms. Dark and light curves are for upper and lower percentiles of estimated intervals over replicates with different sampled D. The presence of multiple type distributions consistent with a given action distribution in Boston means that even for small ε, RMAC bounds can be quite wide for Boston to RSD.

The main tradeoff in practical school choice comes from balancing the total social welfare achieved by the mechanism against its strategy-proofness. RSD (and other algorithms like student-proposing deferred acceptance) make it a dominant strategy for each agent to report their true type. This means that participants in real-world implementations of such mechanisms do not need to spend cognitive effort on guessing what others might do or searching for information: they can simply tell the truth and go on with their day. On the other hand, equilibria of the Boston mechanism can be more efficient in terms of allocating schools to students, but the mechanism is not strategy-proof (Mennle and Seuken, 2015; Abdulkadiroğlu et al., 2011).

We consider a problem with 3 students and 3 schools. For both mechanisms the action space is the set of rank-order lists, i.e. permutations over the schools.

We consider a hypothesis space of types that are permutations of a fixed utility vector: individuals receive the highest utility for their first choice, less for their second, and least for their third. We consider the case where all individuals have identical preferences. We take these types, generate an equilibrium under Boston, construct a dataset, and ask the counterfactual question: what would happen if we switched to RSD? We also generate actions from the RSD equilibrium and ask: what would happen if we switched to Boston?

We look at two counterfactual quantities inspired by discussion of these mechanisms: the overall social welfare of the allocation and the truthfulness of the strategies (i.e. whether types report their true values). We plot the estimated change in welfare and truthfulness from moving from one mechanism to the other. In other words, we perform the kind of exercise that a practicing market designer might actually do when trying to convince a school district to change mechanisms.

Note that in the case of ‘Boston to RSD’ at ε = 0 the standard structural assumptions are not satisfied, as multiple type distributions are consistent with the observed actions. Given our utility space, even though everyone has the same preferences, the same types may choose different actions (i.e. play a mixed strategy), since it can be better to be assured of one's second choice than to take a lottery over all three schools; thus some proportion of individuals will misreport. However, such an action profile is also consistent with an equilibrium of truthful types with different preferences. Since the types are not identified from the observed actions, structural estimation using maximum likelihood has multiple optima with different counterfactual values. However, RMAC with small ε will produce an interval that covers both possible type distributions.

Figure 5 shows that going from Boston to RSD can create more truthfulness in the best case but in the pessimistic case has no effect (because actions were already truthful). This transition also tends to lead to welfare decreases, although not always. For example, if all players have identical preferences, all mechanisms provide the same welfare. Moving from RSD to Boston always decreases truthfulness and does not change welfare in our situation (since in our simulations all students have the same true preferences).

6.3 RMAC in Social Choice

As our last study we move to the domain of social choice. We consider the standard example of a group of individuals choosing a common ideal point. We assume individuals have a type, have single-peaked preferences, and receive a loss that grows with the distance between their type and the point chosen for the group. We consider groups of 11 individuals participating in one of three mechanisms; in each mechanism individuals report a number. In the mean mechanism the mean of the reports is chosen; in the median mechanism the median is chosen. In neither the median nor the mean mechanism are side payments made.

Figure 6: The median and VCG mechanisms are dominant strategy truthful, but they have very different robustness properties. The mean mechanism is not well identified, as types outside a narrow interval all report extreme values in equilibrium; however, RMAC bounds are defined for this case as well.

In the VCG mechanism the mean is chosen and individuals pay the mechanism their externality on everyone else (i.e. the difference in the others' total utility between choosing the mean that includes an individual's report and the one that excludes it). As in the auction experiments we discretize the types and actions on a grid. We sample types, calculate their optimal action in the mechanism, and run the revelation game. For the counterfactual valuation we look for the most right- or left-shifted type distributions that are consistent with the observed data.
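The three mechanisms are simple to state in code. The sketch below is our own illustration under a quadratic-loss assumption; the VCG payment follows the externality definition above (others' total loss with the agent's report minus without it).

```python
def mean_mech(reports):
    """Mean mechanism: the outcome is the average report."""
    return sum(reports) / len(reports)

def median_mech(reports):
    """Median mechanism: the outcome is the median report (odd n)."""
    return sorted(reports)[len(reports) // 2]

def vcg_mean(reports, loss):
    """Mean mechanism with VCG payments: each agent pays the
    externality their report imposes on everyone else."""
    x = mean_mech(reports)
    payments = []
    for i in range(len(reports)):
        others = reports[:i] + reports[i + 1:]
        x_without = mean_mech(others)
        # others' total loss with i's report minus without it
        harm = sum(loss(x, r) - loss(x_without, r) for r in others)
        payments.append(harm)
    return x, payments
```

For example, with reports `[0.0, 0.5, 1.0]` and quadratic loss, the middle agent's report does not move the mean, so their VCG payment is zero, while the extreme agents pay a positive externality.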

Figure 6 shows our results. First, we can see that at ε = 0 the mean mechanism is not identified, since in equilibrium a whole range of types chooses the extreme actions. However, even with small ε the solution becomes unique.

Even though both the median and VCG mechanisms are dominant strategy truthful, they have very different robustness properties. In the median mechanism a deviation from truthful reporting, in particular for extreme types, is not very costly, since it can only affect the outcome if the deviator is pivotal. In VCG, on the other hand, any deviation also changes the price one has to pay to the mechanism, thus changing the ways types can deviate under RMAC.
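The pivotality point can be seen in a two-line demonstration (our own toy example, assuming the simple median rule for an odd number of reports): a non-pivotal extreme type can misreport arbitrarily without changing the outcome, so such deviations incur zero regret.

```python
def median_report(reports):
    """Median mechanism for an odd number of reports."""
    return sorted(reports)[len(reports) // 2]

# Truthful profile with one extreme type at 0.1.
truthful = [0.1, 0.5, 0.9]
outcome = median_report(truthful)

# The extreme (non-pivotal) type deviates all the way to 0.0:
# the median, and hence the outcome, is unchanged, so the
# deviation is costless for that type.
deviated = [0.0, 0.5, 0.9]
assert median_report(deviated) == outcome
```

This is exactly why RMAC bounds for the median mechanism can be wide even though truth-telling is a dominant strategy.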

7 Conclusion

Structural estimation is an important approach to counterfactual prediction. We have introduced RMAC as a way of dealing with situations where the standard structural assumptions of correct specification, equilibrium play, and point identification are not met. We have used the revelation game as an estimator for counterfactual quantities and adapted standard fictitious play to solve for pessimistic and optimistic equilibria of the revelation game.
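For readers unfamiliar with the underlying solver, vanilla fictitious play on a two-player matrix game looks as follows. This is a generic textbook sketch, not the paper's RFP variant: each player best-responds to the empirical distribution of the opponent's past actions, and for zero-sum games the empirical frequencies converge to equilibrium.

```python
import numpy as np

def fictitious_play(payoffs_a, payoffs_b, iters=2000):
    """Vanilla two-player fictitious play.  payoffs_a[i, j] is player A's
    payoff when A plays i and B plays j; payoffs_b has the same shape.
    Returns the empirical mixed strategies of both players."""
    counts_a = np.ones(payoffs_a.shape[0])  # smoothed action counts
    counts_b = np.ones(payoffs_a.shape[1])
    for _ in range(iters):
        # best response to opponent's empirical action distribution
        br_a = np.argmax(payoffs_a @ (counts_b / counts_b.sum()))
        br_b = np.argmax((counts_a / counts_a.sum()) @ payoffs_b)
        counts_a[br_a] += 1
        counts_b[br_b] += 1
    return counts_a / counts_a.sum(), counts_b / counts_b.sum()
```

On matching pennies the empirical frequencies approach the unique mixed equilibrium (1/2, 1/2). RMAC replaces the payoffs here with revelation-game (regret-based) payoffs and solves for the pessimistic and optimistic objectives.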

There are many possible extensions to our work. We have assumed that deviations from optimal behavior can be arbitrary but must incur low regret. It is well known that in many situations deviations from rational behavior are not random but rather systematic and can even incur large regret (Camerer et al., 2011). An important extension of our work is to incorporate theoretical models from behavioral economics into RMAC predictions (Fudenberg and Levine, 2006; Ragain and Ugander, 2016; Peysakhovich and Ugander, 2017; Peysakhovich, 2019).

Our method applies a vanilla version of fictitious play, but it is well known that modifications to standard learning algorithms can lead to large changes in real-world performance, especially in multi-agent settings (Conitzer and Sandholm, 2007; Syrgkanis et al., 2015; Kroer et al., 2015). Thus, it is worth exploring algorithms other than RFP to solve for RMAC bounds. In addition, we assume access to a tabular and discrete representation of the counterfactual game, and an important future direction is to expand these ideas to more complex multi-agent environments, for example those involving multiple steps and planning (Shu and Tian, 2018). Such extensions would naturally require multi-agent learning algorithms that can handle function approximation, such as those based on deep learning (Heinrich and Silver, 2016; Dütting et al., 2017; Lowe et al., 2017; Feng et al., 2018; Brown et al., 2018).

We have looked at predicting counterfactual behavior in the kinds of environments studied by market designers. However, the general question of how other agents would respond to something (e.g. a behavior or a change in environment) is an important problem for agent design, in particular for learning whether a particular partner (or partners) is attempting to cooperate or compete (Littman, 2001; Kleiman-Weiner et al., 2016; Lerer and Peysakhovich, 2017; Shum et al., 2019). Expanding RMAC to such situations is another important future direction.


  • Abdulkadiroğlu et al. (2011) Atila Abdulkadiroğlu, Yeon-Koo Che, and Yosuke Yasuda. 2011. Resolving conflicting preferences in school choice: The "Boston mechanism" reconsidered. American Economic Review 101, 1 (2011), 399–410.
  • Abdulkadiroğlu et al. (2005) Atila Abdulkadiroğlu, Parag A Pathak, Alvin E Roth, and Tayfun Sönmez. 2005. The Boston public school match. American Economic Review 95, 2 (2005), 368–371.
  • Abdulkadiroğlu and Sönmez (1998) Atila Abdulkadiroğlu and Tayfun Sönmez. 1998. Random serial dictatorship and the core from random endowments in house allocation problems. Econometrica 66, 3 (1998), 689–701.
  • Agarwal (2015) Nikhil Agarwal. 2015. An empirical model of the medical match. American Economic Review 105, 7 (2015), 1939–78.
  • Athey (2015) Susan Athey. 2015. Machine learning and causal inference for policy evaluation. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, 5–6.
  • Athey and Nekipelov (2010) Susan Athey and Denis Nekipelov. 2010. A structural model of sponsored search advertising auctions. In Sixth ad auctions workshop, Vol. 15.
  • Bergemann and Morris (2005) Dirk Bergemann and Stephen Morris. 2005. Robust mechanism design. Econometrica 73, 6 (2005), 1771–1813.
  • Berry et al. (1995) Steven Berry, James Levinsohn, and Ariel Pakes. 1995. Automobile prices in market equilibrium. Econometrica: Journal of the Econometric Society (1995), 841–890.
  • Bottou et al. (2013) Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X Charles, D Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. 2013. Counterfactual reasoning and learning systems: The example of computational advertising. The Journal of Machine Learning Research 14, 1 (2013), 3207–3260.
  • Brown (1951) George W Brown. 1951. Iterative solution of games by fictitious play. Activity analysis of production and allocation 13, 1 (1951), 374–376.
  • Brown et al. (2018) Noam Brown, Adam Lerer, Sam Gross, and Tuomas Sandholm. 2018. Deep Counterfactual Regret Minimization. arXiv preprint arXiv:1811.00164 (2018).
  • Budish (2011) Eric Budish. 2011. The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes. Journal of Political Economy 119, 6 (2011), 1061–1103.
  • Camerer et al. (2011) Colin F Camerer, George Loewenstein, and Matthew Rabin. 2011. Advances in behavioral economics. Princeton university press.
  • Caragiannis et al. (2016) Ioannis Caragiannis, David Kurokawa, Hervé Moulin, Ariel D Procaccia, Nisarg Shah, and Junxing Wang. 2016. The unreasonable fairness of maximum Nash welfare. In Proceedings of the 2016 ACM Conference on Economics and Computation. ACM, 305–322.
  • Chawla et al. (2017) Shuchi Chawla, Jason D Hartline, and Denis Nekipelov. 2017. Mechanism Redesign. arXiv preprint arXiv:1708.04699 (2017).
  • Conitzer and Sandholm (2002) Vincent Conitzer and Tuomas Sandholm. 2002. Complexity of mechanism design. In Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., 103–110.
  • Conitzer and Sandholm (2007) Vincent Conitzer and Tuomas Sandholm. 2007. AWESOME: A general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents. Machine Learning 67, 1-2 (2007), 23–43.
  • Conitzer and Sandholm (2008) Vincent Conitzer and Tuomas Sandholm. 2008. New complexity results about Nash equilibria. Games and Economic Behavior 63, 2 (2008), 621–641.
  • Dekel et al. (2004) Eddie Dekel, Drew Fudenberg, and David K Levine. 2004. Learning to play Bayesian games. Games and Economic Behavior 46, 2 (2004), 282–303.
  • Dütting et al. (2017) Paul Dütting, Zhe Feng, Harikrishna Narasimhan, and David C Parkes. 2017. Optimal auctions through deep learning. arXiv preprint arXiv:1706.03459 (2017).
  • Erev and Roth (1998) Ido Erev and Alvin E Roth. 1998. Predicting how people play games: Reinforcement learning in experimental games with unique, mixed strategy equilibria. American economic review (1998), 848–881.
  • Feng et al. (2018) Z Feng, H Narasimhan, and DC Parkes. 2018. Optimal auctions through deep learning. AAMAS (2018).
  • Fudenberg and Levine (1998) Drew Fudenberg and David K Levine. 1998. The theory of learning in games. Vol. 2. MIT press.
  • Fudenberg and Levine (2006) Drew Fudenberg and David K Levine. 2006. A dual-self model of impulse control. American economic review 96, 5 (2006), 1449–1476.
  • Fudenberg and Peysakhovich (2016) Drew Fudenberg and Alexander Peysakhovich. 2016. Recency, records, and recaps: Learning and nonequilibrium behavior in a simple decision problem. ACM Transactions on Economics and Computation (TEAC) 4, 4 (2016), 23.
  • Guruswami et al. (2005) Venkatesan Guruswami, Jason D Hartline, Anna R Karlin, David Kempe, Claire Kenyon, and Frank McSherry. 2005. On profit-maximizing envy-free pricing. In Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and Applied Mathematics, 1164–1173.
  • Haile and Tamer (2003) Philip A Haile and Elie Tamer. 2003. Inference with an incomplete model of English auctions. Journal of Political Economy 111, 1 (2003), 1–51.
  • Hartline et al. (2015) Jason Hartline, Vasilis Syrgkanis, and Eva Tardos. 2015. No-regret learning in Bayesian games. In Advances in Neural Information Processing Systems. 3061–3069.
  • Heinrich and Silver (2016) Johannes Heinrich and David Silver. 2016. Deep reinforcement learning from self-play in imperfect-information games. arXiv preprint arXiv:1603.01121 (2016).
  • Kleiman-Weiner et al. (2016) Max Kleiman-Weiner, Mark K Ho, Joseph L Austerweil, Michael L Littman, and Joshua B Tenenbaum. 2016. Coordinate to cooperate or compete: abstract goals and joint intentions in social interaction. In COGSCI.
  • Klemperer (2002) Paul Klemperer. 2002. What really matters in auction design. Journal of economic perspectives 16, 1 (2002), 169–189.
  • Kroer et al. (2015) Christian Kroer, Kevin Waugh, Fatma Kilinç-Karzan, and Tuomas Sandholm. 2015. Faster first-order methods for extensive-form game solving. In Proceedings of the Sixteenth ACM Conference on Economics and Computation. ACM, 817–834.
  • Lerer and Peysakhovich (2017) Adam Lerer and Alexander Peysakhovich. 2017. Maintaining cooperation in complex social dilemmas using deep reinforcement learning. arXiv preprint arXiv:1707.01068 (2017).
  • Littman (2001) Michael L Littman. 2001. Friend-or-foe Q-learning in general-sum games. In ICML, Vol. 1. 322–328.
  • Lowe et al. (2017) Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. In Advances in Neural Information Processing Systems. 6379–6390.
  • Manski (2003) Charles F Manski. 2003. Partial identification of probability distributions. Springer Science & Business Media.
  • Mennle and Seuken (2015) Timo Mennle and Sven Seuken. 2015. Trade-offs in school choice: comparing deferred acceptance, the naïve and the adaptive Boston mechanism.
  • Myerson (1981) Roger B Myerson. 1981. Optimal auction design. Mathematics of operations research 6, 1 (1981), 58–73.
  • Nekipelov et al. (2015) Denis Nekipelov, Vasilis Syrgkanis, and Eva Tardos. 2015. Econometrics for learning agents. In Proceedings of the Sixteenth ACM Conference on Economics and Computation. ACM, 1–18.
  • Peysakhovich (2019) Alexander Peysakhovich. 2019. Reinforcement learning and inverse reinforcement learning with system 1 and system 2. Proceedings of AAAI-AI Ethics and Society (2019).
  • Peysakhovich and Ugander (2017) Alexander Peysakhovich and Johan Ugander. 2017. Learning context-dependent preferences from raw data. In Proceedings of the 12th workshop on the Economics of Networks, Systems and Computation. ACM, 8.
  • Porter et al. (2003) David Porter, Stephen Rassenti, Anil Roopnarine, and Vernon Smith. 2003. Combinatorial auction design. Proceedings of the National Academy of Sciences 100, 19 (2003), 11153–11157.
  • Ragain and Ugander (2016) Stephen Ragain and Johan Ugander. 2016. Pairwise choice Markov chains. In Advances in Neural Information Processing Systems. 3198–3206.
  • Roth (2002) Alvin E Roth. 2002. The economist as engineer: Game theory, experimentation, and computation as tools for design economics. Econometrica 70, 4 (2002), 1341–1378.
  • Roth and Peranson (1999) Alvin E Roth and Elliott Peranson. 1999. The redesign of the matching market for American physicians: Some engineering aspects of economic design. American economic review 89, 4 (1999), 748–780.
  • Roth et al. (2005) Alvin E Roth, Tayfun Sönmez, et al. 2005. A kidney exchange clearinghouse in New England. American Economic Review 95, 2 (2005), 376–380.
  • Roth and Sotomayor (1992) Alvin E Roth and Marilda Sotomayor. 1992. Two-sided matching. Handbook of game theory with economic applications 1 (1992), 485–541.
  • Shu and Tian (2018) Tianmin Shu and Yuandong Tian. 2018. M^ 3RL: Mind-aware Multi-agent Management Reinforcement Learning. arXiv preprint arXiv:1810.00147 (2018).
  • Shum et al. (2019) Michael Shum, Max Kleiman-Weiner, Michael L Littman, and Joshua B Tenenbaum. 2019. Theory of Minds: Understanding Behavior in Groups Through Inverse Planning. arXiv preprint arXiv:1901.06085 (2019).
  • Syrgkanis et al. (2015) Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E Schapire. 2015. Fast convergence of regularized learning in games. In Advances in Neural Information Processing Systems. 2989–2997.

8 Appendix

8.1 A Mathematical Program for the General Revelation Game

We now present a mathematical program for solving the revelation game exactly for small instances. Throughout we will treat the counterfactual objective as a black box, assumed to be representable in the same class as the mathematical program it is stated within. Similarly, we will assume that the regret functions are representable within the given class. If these assumptions do not hold, then the problem will of course be harder than the stated class of mathematical programs.

Throughout the section we will abuse notation slightly in the name of readability and write the expected utility of an action given the distribution over actions taken by the other players, given the action assignment of the data-players.

First, we give a mathematical program for solving the general case of the revelation game. Here we let the decision variables be vectors of action and type choices, since this formulation is guaranteed to have a pure-strategy BNE:


The first constraint in (1) is an equilibrium constraint over the actions, and therefore the general problem is a mathematical program with equilibrium constraints (MPEC). Thus the general program is quite hard. If we assume that the feasible sets are nonempty and convex, and each player's utility is a concave function of their choice of action, then we can formulate the problem as a variational inequality problem:


where the operator stacks the gradients of the players' utilities for the given choice of types.

8.2 A Mixed Integer Program for the Two-Player Case

Next, we give a mixed integer program (MIP) for the special case where the game has only two players, but where we may have an arbitrary finite number of data points. Furthermore, for this MIP we assume that the type space is discrete and finite, as is the action space.

The program has a Boolean variable for each pair of data point and type, indicating whether that data point takes on that type. For each pair of data point and action we have a variable indicating the probability the data point puts on the action (we could make these variables Boolean instead in order to compute a pure-strategy solution, but pure-strategy solutions are not guaranteed to exist when types are discrete).

We also have the following ε-BNE-enforcing variables: one represents the utility achieved by each type under the computed solution, a slack variable denotes the inoptimality of each action when taken by each type, and an indicator variable denotes whether each action is played by any data-player taking each type. The idea of the MIP is to ensure that inoptimality is bounded by ε whenever any data-player chooses a type and puts nonzero probability on an action.


Note that since the type space is finite we can preprocess it and remove all infeasible types, and thus we do not need to enforce this constraint in the MIP.
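The ε-BNE condition the MIP enforces can be verified for any candidate solution with a simple regret computation. The sketch below is our own illustration, assuming the expected utility of each action against the fixed opponent profile has already been computed; `max_regret` and its arguments are hypothetical names.

```python
def max_regret(utilities, strategy):
    """Return the largest regret over actions played with positive
    probability.  utilities[i][a] is player i's expected payoff for
    action a against the fixed opponent profile; strategy[i][a] is the
    probability player i puts on action a.  A profile is an eps-BNE
    exactly when this value is at most eps."""
    worst = 0.0
    for u, s in zip(utilities, strategy):
        best = max(u)
        for payoff, prob in zip(u, s):
            if prob > 0:  # only played actions are constrained
                worst = max(worst, best - payoff)
    return worst
```

For instance, a player mixing equally between a payoff-1.0 action and a payoff-0.9 action (while an unused action pays 0.0) has regret 0.1, so the profile satisfies the constraint for any ε ≥ 0.1.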

8.3 Proofs of Theorems

Proof of Theorem 1.

The proof relies heavily on the fact that the revelation game's utility function is defined with respect to regret, not the original utility function. Suppose that a data-player has true type but reports a different type. In revelation-game BNE this report must have zero regret. But this violates the identification assumption, since we could then construct a new type distribution in which we reassign the player's type to the reported one while keeping the same distribution over actions as part of a BNE. Thus in revelation-game BNE the reported distribution over types must be the true one. Now we can use the uniqueness assumption to infer that each data-player reports their true type, as well as their action in the unique BNE of the game given the true type distribution. If they report any other action they must have nonzero regret, or they would violate the uniqueness assumption. ∎

Proof of Theorem 2.

The first statement is by reduction from the problem of computing a maximum-social-welfare Nash equilibrium, which is NP-hard (Conitzer and Sandholm, 2008). We set the counterfactual game equal to the original game, and the objective equal to the negative social welfare of the actions in that game. For each agent in the Nash equilibrium problem we instantiate a data point and construct the game such that each data point can only take on the type corresponding to that agent's payoffs (this is easily done by making every other type have nonzero regret). Now we set ε = 0. A solution to the RMAC problem then corresponds to a social-welfare maximizing Nash equilibrium of the original game.

The second statement is by reduction from the problem of checking whether a pure-strategy BNE exists, which is NP-complete (Conitzer and Sandholm, 2008). Consider a symmetric game that we wish to find a pure-strategy BNE for, and let it be the counterfactual game. For each type in the game we instantiate a data point such that only that type is feasible. Now the distribution over types in the RMAC problem equals that of the original game, and so the equilibria are in correspondence. ∎

Proof of Theorem 3.

First we show that the limit is an ε-BNE. Consider the sequence of play in question, and for each player consider the strategy implied by the history of play up to each time.

Suppose the limit is not an ε-BNE. Then there exist a data-player and two revelation game actions, both in the support of that player's limit strategy, with the following payoff difference

for some positive constant.

Now pick a time such that for all later times we have

where the constant is the number of pure strategy profiles. Such a time exists since by assumption the sequence converges and utilities are bounded. We then have

Thus, after this iteration we no longer select the inferior action, since it is not within the set of best responses. This follows from the above algebra and the fact that the revelation-game utility is bounded above by zero (since it is the negative maximum regret).

But this implies that the empirical frequency of that action converges to zero.

This is a contradiction, since we assumed the action is in the support of the limit strategy.

Now we prove local optimality.

Suppose we do not have local optimality. Then there exist a data-player and revelation game actions such that


for some positive constant.

Since the expected value of the objective is continuous in the empirical distribution, there exists a neighborhood of the limit in which the inequality holds for all sufficiently large times.

Now pick a time such that for all later times the better action is in the ε-best-response set for the player. Such a time is guaranteed to exist by continuity of the utilities and regrets in the empirical distribution. But then best responses never select the inferior action afterwards, which is a contradiction. ∎