Mechanism Design with Informational Punishment

01/04/2022
by Benjamin Balzer, et al.
University of Technology Sydney

We introduce informational punishment to the design of mechanisms that compete with an exogenous status quo: A signal designer can publicly communicate with all players even if some decide not to communicate with the designer. Optimal informational punishment ensures that full participation in the mechanism is optimal even if any single player can publicly enforce the status-quo mechanism. Informational punishment restores the revelation principle, is independent of the mechanism designer's objective, and operates exclusively off the equilibrium path. Informational punishment is robust to refinements and applies in informed-principal settings. We provide conditions that make it robust to opportunistic signal designers.


1 Introduction

Economics aims to improve the efficiency of the institutions that govern strategic parties. If those parties hold asymmetric information, the first welfare theorem fails: efficiency may be unattainable under any institution. In these cases, mechanism design becomes a powerful tool. Mechanism design characterizes all outcomes that the parties' strategic incentives permit. Via the revelation principle, a simple direct revelation mechanism suffices to determine the best results we can hope for under any institution. In reality, however, new institutions often replace a status-quo mechanism.

Changing to the new institution may require consent; if parties veto the proposal, the status quo prevails. The strategic incentives behind a party's decision to veto the new institution mirror those that shape her behavior within any given institution. Vetoing a proposal can be strategic, too: a party may publicly veto the mechanism only to signal her private information.

This paper considers a setting where parties may veto a proposed mechanism and disclose their veto to others. If the proposal fails, parties revert to the play of a default game. The optimal mechanism in such settings may involve on-path rejections (Celik:11); the revelation principle fails. As a result, complexity increases, and the mechanism design approach loses part of its power. We show, however, that if parties can store information and commit to releasing it at a later date, the revelation principle holds, and we can readily apply the tools of mechanism design. We refer to this technology as informational punishment. The information is stored and released in the event of a deviation; its purpose is to punish the deviator through the release.

Informational punishment is a simple yet powerful tool. It requires only that each party has access to a signaling device that garbles the party's information and conceals the realization of that garbling for some time. The threat to release information later suffices to discipline others and ensures full participation. Informational punishment does not interfere with the design of the mechanism itself; a decentralized implementation is straightforward. Moreover, informational punishment is robust both to equilibrium refinements and to restrictions on the space of available mechanisms. In addition, it applies to informed-principal problems and is immune to a designer who suffers from informational opportunism.

Examples of our environment abound. Parties in a legal conflict can coordinate to settle through an arbitration mechanism. However, each party can unilaterally enforce the default game of a trial. Political parties can solve gridlocks through a bargaining procedure. However, they can refuse to cooperate and enter a stalemate until the next elections. Firms can work together to determine the standard of the industry. However, they can also rely on a non-cooperative standards war. Countries can negotiate free trade agreements. However, if one government refuses to engage, they fall back on the WTO trade regime.

In all of these cases, vetoing the mechanism may signal a party's ability in the default game. The other parties interpret the signal and adjust their behavior. The change in behavior influences the default game's outcome. Moreover, if vetoing signals information, then participating signals information too. Thus, if we cannot rule out on-path rejections of the mechanism, computing the optimal mechanism requires solving the default game for every combination of vetoing and participating parties, a cumbersome computational problem.

Informational punishment relaxes the computational burden. It restores full participation on the equilibrium path yet affects only the players' outside options. Incentives inside the mechanism remain unchanged. Thus, we relax participation constraints without affecting incentive constraints. Moreover, the off-path event of informational punishment solves a specific, well-defined information design problem: min-max the deviator's continuation payoff over Bayes-plausible information structures.

Informational punishment works because a signal realization about party $i$ has two effects. The first effect is direct and distributional: the other parties update expected payoffs because $i$'s private information is payoff-relevant. The second effect is indirect and behavioral. A party's strategy is a function of her information set; altering information alters the continuation strategy. Via equilibrium reasoning, the party's change in behavior alters the behavior of the other parties too. Informational punishment exploits that channel.

Related Literature.

Signaling through vetoes in mechanisms dates back to Cramton:95. Like them, we consider a problem in which a proposed mechanism competes with a status-quo game. In line with their model, we assume that ratification is public—agents learn who vetoes the mechanism and who does not. Unlike them, we are not interested in refining the mechanism space to those that survive ratification without further ado.

Instead, and closer to Celik:11, we assume perfect Bayesian equilibrium as our solution concept, take the space of mechanisms as given, and are interested in the set of outcomes that can arise in such a setting. In contrast to Celik:11's setting, where "equilibrium rejection" may be optimal, we allow parties to engage in informational punishment. We show that equilibrium rejection is of no concern once informational punishment is available. The set of outcomes that we can implement with full participation contains those we can implement with equilibrium rejection. (TAN2007383 and DEQUIEDT2007302 are other examples of default games threatening participation.)

gerardi2007sequential and correia2017trembling propose an alternative approach in settings with veto-constrained mechanisms. Instead of signaling information, they consider mechanisms that "tremble": even if all parties decide to participate, the mechanism breaks down with a small probability and invokes the default game. Such trembling can bring the mechanism arbitrarily close to the full-participation optimum. However, these mechanisms rely on the assumption that a deviating party cannot credibly signal that it was her veto, and not the mechanism's tremble, that invoked the default game. We instead allow parties to announce their veto publicly, which makes trembles insufficient to overcome the veto problem. Celik:13 use reciprocal mechanisms to circumvent the equilibrium-rejection problem. The main difference from our approach is that parties make a public revelation on the equilibrium path through their mechanism proposal. Depending on the environment, these revelations can interfere with incentive compatibility. Because informational punishments affect only off-path events, these concerns are absent; the set of implementable outcomes nests that in Celik:13. The reverse is not the case due to, for example, our weaker commitment assumption.

Our work on standard-setting organizations Balzer:15 applies informational punishment outside mechanism design. In that previous paper, we consider a setting where the available mechanisms contain only efficient take-it-or-leave-it offers with fixed shares. In Balzer:15 informational punishment enlarges the class of environments where full participation is feasible, yet informational punishment cannot guarantee full participation. The reason is that the space of available mechanisms is too small. In the current paper, we take a broader view and complement Balzer:15. On the one hand, we define the minimal set of available mechanisms in the designer’s toolbox such that an optimal full-participation mechanism exists. On the other hand, we show how informational punishment simplifies the mechanism designer’s task, particularly when the set of available mechanisms is large. Unlike those in Balzer:15, our results in this paper readily apply to various design problems. Moreover, we show that—given our minimal conditions—common restrictions to the designer’s problem do not affect the power of informational punishment. Specifically, we consider equilibrium refinement concepts Cho:87,Cramton:95,grossman1986perfect, informed-principal problems Myerson:83informed, and informational opportunism MartimortDequiedt15.

2 Setup

Players and Information Structure.

There are $N$ players, indexed by $i \in \{1, \dots, N\}$. Each player $i$ has a private type $\theta_i \in \Theta_i$, and $\Theta_i$ is compact. The state $\theta = (\theta_1, \dots, \theta_N)$ is distributed according to a commonly known distribution function $\pi$, the prior information structure. Let $\Theta = \times_i \Theta_i$, and define the marginal $\pi_i$ of $\pi$ with support $\Theta_i$.

An information structure $\mu$ is a commonly known joint distribution over the state $\theta$. The only restriction we impose on $\mu$ is that it is absolutely continuous with respect to $\pi$, that is, $\mu \ll \pi$. Given $\mu$, a player $i$'s belief about the other players' types is the conditional distribution $\mu(\theta_{-i} \mid \theta_i)$, where $\mu_i$ is the marginal of $\mu$ on $\Theta_i$. Let $\mathcal{M}$ be the set of all information structures $\mu$ that are expansions of $\pi$. That is, $\mu \in \mathcal{M}$ if and only if there exists a random variable $\sigma$ which maps types into distributions of signals such that the realization $s$, together with $\pi$ and $\sigma$, implies $\mu$ via Bayes' rule.
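
To illustrate the expansion condition, the following minimal sketch (Python; the binary type and all probabilities are hypothetical choices for illustration) applies Bayes' rule to a simple garbling and verifies that the induced posteriors average back to the prior, which is the Bayes-plausibility property characterizing membership in $\mathcal{M}$.

    import numpy as np

    # Hypothetical binary type for one player: prior probability of the "high" type.
    prior_high = 0.6

    # A signaling device: probability of sending signal "h" conditional on each type.
    # (Illustrative numbers; any conditional distribution works.)
    p_signal_h = {"high": 0.9, "low": 0.2}

    # Total probability of each signal realization.
    prob_h = prior_high * p_signal_h["high"] + (1 - prior_high) * p_signal_h["low"]
    prob_l = 1 - prob_h

    # Posterior belief that the type is "high" after each realization (Bayes' rule).
    post_h = prior_high * p_signal_h["high"] / prob_h
    post_l = prior_high * (1 - p_signal_h["high"]) / prob_l

    # Bayes plausibility: the signal-weighted average of posteriors equals the prior,
    # so the induced information structure is an expansion of the prior.
    assert np.isclose(prob_h * post_h + prob_l * post_l, prior_high)
    print(post_h, post_l)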

Basic Outcomes, Decision Rules, and Payoffs.

There is an exogenously given set of basic outcomes, $A$, with generic element $a$. Player $i$ values outcomes according to a Bernoulli utility function $u_i(a, \theta)$ defined over $A \times \Theta$.

We represent the rules of a game by a decision rule $x: \Theta \to \Delta(A)$, where $\Delta(A)$ is the set of all distribution functions over the outcome space $A$. Each rule is a mapping from type reports $\hat\theta$ to a distribution over outcomes represented by the distribution function $x(\cdot \mid \hat\theta)$.

Status quo.

The status quo is an exogenous game of incomplete information. We assume an equilibrium in that game exists for any information structure and take the equilibrium selection as given. For any information structure $\mu$, the status quo induces a decision rule $x^{sq}_\mu$. Under $\mu$, the expected utility of a truthfully reporting player $i$ with type $\theta_i$ is

$$U_i^{sq}(\theta_i; \mu) = \mathbb{E}_{\mu}\!\left[ u_i\!\left( x^{sq}_\mu(\theta), \theta \right) \,\middle|\, \theta_i \right]$$
$$\geq \mathbb{E}_{\mu}\!\left[ u_i\!\left( x^{sq}_\mu(\hat\theta_i, \theta_{-i}), \theta \right) \,\middle|\, \theta_i \right]$$

almost everywhere conditional on $\theta_i$, that is, for $\pi$-almost every report $\hat\theta_i \in \Theta_i$. The second line follows because $x^{sq}_\mu$ is incentive compatible under information structure $\mu$ ($\mu$-IC henceforth): truthful reporting is optimal for all types of all players given $\mu$.

The existence of an equilibrium under every $\mu$ implies that the collection of possible status-quo outcomes, $\{x^{sq}_\mu\}_{\mu \in \mathcal{M}}$, with each $x^{sq}_\mu$ being $\mu$-IC, is well-defined.

Mechanism.

The mechanism is an alternative to the status quo. Any mechanism is a game of incomplete information represented by a decision rule. The collection of decision rules is $\mathcal{X}$. Given a mechanism $x \in \mathcal{X}$ and an information structure $\mu$, we define each player's optimal reporting strategy $r_i(\theta_i)$ and collect players' reports in $r(\theta)$. An equilibrium of $x$ implements the decision rule $x \circ r$, which is $\mu$-IC. (Although any decision rule in $\mathcal{X}$ represents a direct revelation mechanism, a truthful implementation may not be guaranteed. Indeed, $x$ is shorthand for all game forms in which reports are a player's actions. The equilibrium play of each such game form under $\mu$ then induces some $\mu$-IC decision rule.)

The set of available mechanisms, $\Gamma \subseteq \mathcal{X}$, may be restricted by legal or institutional constraints, or particular outcomes may simply be infeasible. We assume two minimal requirements on the set of available mechanisms:

  1. for every information structure $\mu$, the status-quo rule $x^{sq}_\mu$ is in $\Gamma$, and

  2. $\Gamma$ is closed under convex combinations, that is, if $x, x' \in \Gamma$, then for any (possibly report-dependent) weight $\alpha: \Theta \to [0,1]$ it holds that $\alpha x + (1-\alpha) x' \in \Gamma$.

The first property implies that the mechanism can replicate the status quo. The second property implies that if two games (1 and 2) are among the available mechanisms, so is the game in which game 1 is played for specific type reports and game 2 for the remaining type reports.
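
As a concrete illustration of the closure requirement, the sketch below (Python; the outcomes, rules, and weights are hypothetical) combines two decision rules with a report-dependent weight and checks that the result is again a mapping from type reports to probability distributions over outcomes.

    # Two hypothetical decision rules over three outcomes, indexed by a type report
    # in {0, 1}; each entry is a probability distribution over outcomes.
    rule_1 = {0: [1.0, 0.0, 0.0], 1: [0.5, 0.5, 0.0]}
    rule_2 = {0: [0.0, 0.0, 1.0], 1: [0.0, 1.0, 0.0]}

    # A report-dependent weight: play rule_1 for report 0 and rule_2 for report 1,
    # mirroring "game 1 for specific type reports and game 2 for the rest".
    alpha = {0: 1.0, 1: 0.0}

    def convex_combination(r1, r2, weight):
        """Return the decision rule that mixes r1 and r2 with a report-dependent weight."""
        return {
            report: [weight[report] * p1 + (1 - weight[report]) * p2
                     for p1, p2 in zip(r1[report], r2[report])]
            for report in r1
        }

    mixed = convex_combination(rule_1, rule_2, alpha)
    # Each row still sums to one, so the mixture is itself a decision rule.
    assert all(abs(sum(dist) - 1.0) < 1e-9 for dist in mixed.values())
    print(mixed)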

Apart from these requirements, we do not restrict $\Gamma$. Instead, we allow both for a classical mechanism design setting and for the possibility that the designer's set of mechanisms is exogenously limited. The latter includes pure 'mediation' within the status quo.

Informational Punishment.

We assume all players have access to a signaling device $\sigma$. The $N$-dimensional random variable $\sigma = (\sigma_1, \dots, \sigma_N)$ maps type reports into realizations in a signal space $S = S_1 \times \dots \times S_N$. We denote the realization of $\sigma$ by $s$ and that of element $\sigma_i$ by $s_i$.

Timing.

First, players learn their types and observe the mechanism and the signaling device $\sigma$. Second, they simultaneously send a message to $\sigma$. Third, players simultaneously decide whether to veto the mechanism. If at least one player vetoes the mechanism, the set of vetoing players becomes common knowledge, and the signal realization $s$ becomes public. Players use that information to update to an information structure $\mu$, and the status quo implements $x^{sq}_\mu$. If players unanimously ratify the mechanism, they report to the mechanism, which implements its decision rule.
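
The timing can be summarized by the following minimal sketch of the grand game (Python; all stage functions are hypothetical stubs that encode only the order of moves, not any particular mechanism, device, or status quo).

    import random

    def grand_game(types, device, mechanism, status_quo, veto_decision):
        """One pass through the grand game's timing; all arguments are hypothetical callables."""
        # 1. Players know their types and observe the proposed mechanism and device.
        reports_to_device = {i: t for i, t in types.items()}          # 2. messages to the device
        vetoes = {i for i, t in types.items() if veto_decision(i, t)}  # 3. ratification stage
        if vetoes:
            # The set of vetoing players and the signal realization become public,
            # beliefs update, and the status quo is played.
            realization = device(reports_to_device, vetoes)
            return status_quo(vetoes, realization)
        # Unanimous ratification: players report to the mechanism, which implements its rule.
        return mechanism(types)

    # A toy run with trivial stand-ins for each stage.
    outcome = grand_game(
        types={1: "high", 2: "low"},
        device=lambda reports, vetoes: random.choice(["s1", "s2"]),
        mechanism=lambda types: ("mechanism outcome", types),
        status_quo=lambda vetoes, s: ("status-quo outcome", sorted(vetoes), s),
        veto_decision=lambda i, t: False,
    )
    print(outcome)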

Solution Concept and Veto Beliefs.

We consider all combinations of mechanism and signaling device that are implementable as a perfect Bayesian equilibrium (PBE) of the grand game, using the definition from Fudenberg:88.

We use veto information structures, $\mu^v$: the information structure that arises after an observed veto but before the realization $s$ becomes public. PBE implies that $\mu^v$ is derived from equilibrium strategies via Bayes' rule for any veto that occurs with positive probability and for any set of vetoing players. In addition, all but first-node off-path beliefs on deviators follow Bayes' rule. The remaining off-path beliefs are arbitrary. (In our setting, a player is observed to deviate at most once. Off-path belief cascades, see sugaya2017revelation, are thus not possible in our model.)

3 Analysis

3.1 Main Result

An optimal full-participation mechanism may not exist absent informational punishment, even if $\Gamma$ is large. Consider the case in which all players participate on the equilibrium path. A deviator who vetoes guarantees herself the prior belief about the other players through that deviation. At the same time, the deviator is punished most severely by the "worst" off-path belief assigned to her. If the deviator's outside option exhibits concavities in the information structure, full participation without informational punishment is not always optimal.

With informational punishment, the designer can relax the deviator's participation constraint without relying on on-path rejections and (possibly) beyond what is implementable with rejections.

Proposition 1.

It is without loss of generality to focus on mechanisms that ensure full participation if informational punishment is available.

Proof.

The proof is constructive. Take any mechanism in $\Gamma$ and a veto equilibrium in which the mechanism is vetoed with positive probability on the equilibrium path. We first characterize the decision rule of the veto equilibrium. Then, we show it can be implemented with full participation using informational punishment.

Let $v(\theta)$ be the probability that the mechanism is vetoed given type profile $\theta$. Moreover, $v_i(\theta_i)$ is the likelihood that type $\theta_i$ vetoes on the equilibrium path. If players mix regarding their veto decision, the set of players that vetoed, $V$, might be random. After a veto, players observe the set of players that vetoed and update to information structure $\mu^V$. Outcomes realize according to $x^{sq}_{\mu^V}$. Taking expectations over all realizations of $V$, the ex-ante expected continuation game conditional on a veto is a lottery defined over all $x^{sq}_{\mu^V}$. The weight $q(V)$ is the on-path likelihood that a veto is caused by the set $V$ and not by any other set. Because each $x^{sq}_{\mu^V} \in \Gamma$ and $\Gamma$ is closed under convex combinations, the lottery implies a decision rule in $\Gamma$.

Conditional on no veto, the information structure is $\mu^a$, and $x^a$ is the decision rule.

The grand game implements a $\pi$-IC decision rule $\bar{x}$, the veto-probability-weighted combination of the veto lottery and $x^a$. Again, $\bar{x} \in \Gamma$ because $\Gamma$ is closed under convex combinations.

We now construct a signaling device such that the mechanism $\bar{x}$ is implementable under full participation. By construction, $\bar{x}$ is feasible and $\pi$-IC. What remains is to show that no player has an incentive to veto $\bar{x}$.

We construct the following signaling device: upon a veto, it releases the public information of the veto equilibrium with the corresponding probability and an uninformative realization otherwise. When observing off-path behavior (i.e., a veto) by player $i$, each other player believes that $i$ has randomized uniformly over the entire type space when reporting to the device. Thus, she disregards $s_i$. We choose the off-path belief on $i$ identical to the belief attached to $i$ after observing her unilateral veto in the veto equilibrium.

No player has an incentive to veto the mechanism. If a player vetoes the mechanism, the signaling device provides her with the same lottery over information structures that she expects from a veto in the veto equilibrium. Participation, in turn, gives the same outcome as the veto equilibrium. No player can improve on the outcome of the veto equilibrium by vetoing $\bar{x}$.

Truthful reporting to the signaling device is a best response because the device is on-path payoff-irrelevant. Thus, under the constructed device an equilibrium with full participation in $\bar{x}$ exists that implements the same outcome as the veto equilibrium. ∎
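
The key step of the construction, replicating the veto-equilibrium lottery over information structures after an off-path veto, can be sketched as follows (Python; the lottery entries are hypothetical placeholders).

    import random

    # Hypothetical veto-equilibrium data: after a veto, continuation information
    # structure "mu_A" arises with probability 0.3 and "mu_B" with probability 0.7.
    veto_lottery = [("mu_A", 0.3), ("mu_B", 0.7)]

    def punishment_realization(lottery):
        """Draw the public realization released after an off-path veto so that the
        deviator faces exactly the lottery over information structures she would
        have faced in the original veto equilibrium."""
        structures, probs = zip(*lottery)
        return random.choices(structures, weights=probs, k=1)[0]

    print(punishment_realization(veto_lottery))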

Optimal Informational Punishment

Suppose the task is to design the optimal mechanism under some objective. Proposition 1 shows that a mechanism with informational punishment and full participation can replicate the outcome of any mechanism with on-path rejection. However, it may not be immediate from Proposition 1 how that property simplifies the mechanism design problem. Here, we outline the implied simplification. Informational punishment is most effective if the signaling device conditions on the deviator's identity. That is, $\sigma^V$ is the signaling device used if the set of vetoing players is $V$. Specifically, $\sigma^i$ is the mapping from reports to realizations used if (only) player $i$ vetoes the mechanism.

Informational punishment separates the full-participation problem from the mechanism design problem. The reason is straightforward in light of the proof of Proposition 1. Informational punishment affects the participation constraints only. Moreover, by Proposition 1 it is without loss to restrict attention to optimal mechanisms in which vetoes are off-path events. Therefore, we can limit our attention to unilateral vetoes when constructing the optimal mechanism.

What remains is to find the $\sigma^i$ that punishes deviator $i$ the most. If player $i$ vetoes, all other players hold an off-path belief about player $i$'s type and use the induced information structure to update on any signal sent by $\sigma^i$. Let $p(\mu)$ be the probability that information structure $\mu$ prevails if player $i$ vetoes the mechanism. To solve for the optimal $\sigma^i$, we can solve for the lottery $p$ over information structures that $\sigma^i$ induces. Formally, we solve the following simple information design problem. (See, e.g., Bergemann:13 and references therein for more information on information design problems.)

$$\min_{p} \; \sum_{\theta_i} \lambda(\theta_i) \, \mathbb{E}_{p}\!\left[ U_i^{sq}(\theta_i; \mu) \right] \quad \text{s.t.} \quad \mathbb{E}_{p}\!\left[ \mu_{-i} \right] = \pi_{-i}, \qquad \mu_i = \mu^v_i \;\; \text{for all } \mu \in \operatorname{supp}(p),$$

where the $\lambda(\theta_i) \geq 0$ with $\sum_{\theta_i} \lambda(\theta_i) = 1$ are weights corresponding to the Lagrangian multipliers of the binding participation constraints in the mechanism design problem (see Jullien00). (In many standard environments, $\lambda(\theta_i) = 1$ for some $\theta_i$, i.e., only the participation constraint of one type binds. Moreover, for many default games, the solution of the information design problem is independent of $\lambda$. Such a situation occurs in settings where there is a 'worst' information structure for all types of the deviator. This property holds, in particular, in games that feature strategic complements or strategic substitutes.) The first constraint implies that every information structure in the support of $p$ results from some feasible signal: the lottery is Bayes-plausible. The second constraint means that $\sigma^i$ cannot reveal information about the deviator who vetoed the mechanism: the belief on the deviator, $\mu^v_i$, is constant. Players do not observe whether the deviator has deviated only at the ratification stage or already before that. Therefore, they attach a single belief to the deviator's type distribution.

The solution to the information design problem relaxes player $i$'s participation constraint the most. Thus, we can determine each player's least binding participation constraint player by player. Once participation constraints under informational punishment are determined, we can design the mechanism taking each player's participation constraint as exogenously given. Using the same arguments as in the concavification literature (Aumann:95), optimal informational punishment convexifies the deviation payoffs and thus minimizes the gains from a veto.
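
For intuition on this information design step, the following sketch (Python with SciPy; the payoff function, posterior grid, and prior are hypothetical placeholders) computes a punishment lottery over public posteriors about a non-deviating player's binary type that minimizes the deviator's expected continuation payoff subject to Bayes plausibility. Keeping the belief about the deviator constant is automatic here because the lottery concerns only the other player's type; the concave payoff is exactly the kind of shape that informational punishment convexifies.

    import numpy as np
    from scipy.optimize import linprog

    # Grid of candidate public posteriors about the non-deviating player's type.
    posteriors = np.linspace(0.0, 1.0, 21)
    prior = 0.5  # current belief before the punishment signal realizes

    # Hypothetical continuation payoff of the deviator in the default game as a
    # function of the public posterior; its concavity is what the punishment exploits.
    def deviator_payoff(mu):
        return 4.0 * mu * (1.0 - mu)

    c = np.array([deviator_payoff(mu) for mu in posteriors])  # objective: expected payoff

    # Bayes plausibility: the chosen lottery over posteriors must average to the prior,
    # and the probabilities must sum to one.
    A_eq = np.vstack([posteriors, np.ones_like(posteriors)])
    b_eq = np.array([prior, 1.0])

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * len(posteriors))
    print("worst-case expected deviation payoff:", res.fun)
    print("support of the optimal punishment lottery:",
          [(round(mu, 2), round(p, 2)) for mu, p in zip(posteriors, res.x) if p > 1e-6])

With these illustrative numbers, the optimal lottery fully reveals the non-deviator's type (posteriors 0 and 1 with equal weight), pushing the deviation payoff down to the lower convex envelope of the payoff function at the prior.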

3.2 Other Environments

This section shows that our results extend straightforwardly to more complex settings. We begin by looking at refined equilibrium concepts. Then we consider informed-principal problems. Finally, we reduce the designer’s commitment.

Refinements

Proposition 1 assumes that the designer can freely pick off-path beliefs under the PBE restriction. She can select any first-node off-path belief of the continuation game. Depending on the context and the application, such an equilibrium selection may not be reasonable. Using a refinement could instead make on-path vetoes unavoidable because it limits the designer's equilibrium choice set in the first place. (correia2017trembling provides additional discussion of this issue and of how it may interfere with the design of a mechanism.)

Our second finding is that Proposition 1 is robust to most common refinements. Specifically, whenever we refine the equilibrium concept according to a criterion in {ratifiability, perfect sequential equilibrium, intuitive criterion}, full participation remains optimal.

Proposition 2.

Suppose the solution concept is perfect Bayesian equilibrium with one of these refinement concepts, and informational punishment is available. Then, focusing on mechanisms that imply full participation is without loss of generality.

Proof.

Ratifiability requires full participation in the mechanism and therefore holds trivially, as the designer can always choose a degenerate signaling device. It thus suffices to show full participation under the refinements {perfect sequential equilibrium, intuitive criterion}.

Consider the veto equilibrium used in the proof of Proposition 1. Suppose this equilibrium satisfies the refinement.

We show that the full-participation equilibrium constructed in the proof of Proposition 1 satisfies the same refinement criterion. Two aspects are crucial. On-path (expected) outcomes, and those that are off-path but can be reached by a unilateral deviation, are identical between the two equilibria for every state. First, take any state in which the mechanism is unanimously accepted in both equilibria. Then both outcomes coincide, and so does the credibility of the beliefs. Second, consider a state in which the mechanism is rejected in the veto equilibrium with positive probability. For the same state, suppose that the mechanism is rejected in the full-participation equilibrium, an off-path event. The resulting off-path belief on the deviator coincides with the on-path belief on the same player in the veto equilibrium.

Thus, the constructed off-path beliefs put positive mass only on types that weakly prefer to deviate, while no such type strictly prefers to deviate. Hence, any off-path belief on the deviator is credible in the sense of grossman1986perfect, and the off-path beliefs do not violate the intuitive criterion. ∎

Informed-Principal Problems

The informed-principal environment assumes that a privately informed player proposes the mechanism which should replace the status quo.

Formally, instead of a non-strategic third party, one of the players, say player 0, proposes a mechanism as an alternative to the default game. The setting becomes an informed-principal problem. The remaining players are the agents.

A key concept for solving informed-principal problems is inscrutability (see Myerson:83informed). It states that it is without loss to assume that the informed principal, player 0, selects a mechanism that does not allow the other players to learn about the principal's type from the proposed mechanism. That is, inscrutability means it is without loss to restrict attention to pooling solutions in which each principal type offers the same mechanism.

The default game can depend non-linearly on beliefs about player 0's type. Consequently, the principle of inscrutability might fail. Player 0 may have strict incentives to signal private information via the mechanism proposal, thereby relaxing the other players' participation constraints. The following result states that these concerns are irrelevant if informational punishment is available.

Proposition 3.

The principle of inscrutability holds if informational punishment is available.

Proof.

Consider an equilibrium of the grand game in which different types of player 0 propose different mechanisms $x$. Let $G \subseteq \Gamma$ be the set of mechanisms that are proposed with strictly positive probability. Let $\rho(x \mid \theta_0)$ denote the probability that player 0's type $\theta_0$ proposes mechanism $x$.

Consider the case in which at least one type of one player vetoes some $x \in G$ on the equilibrium path. We refer to this equilibrium as the separate-and-veto equilibrium. Recall that if $x$ is vetoed, some rule in $\Gamma$ results. Let the probability that $x$ is vetoed be $v(x)$. Moreover, $v_i(x)$ is the probability that player $i$ vetoes $x$. The separate-and-veto equilibrium implements a $\pi$-IC decision rule, $\bar{x}$, and $\bar{x} \in \Gamma$ because $\Gamma$ is closed under convex combinations.

We prove the existence of the following equilibrium. All types of player 0 propose $\bar{x}$ and every player accepts it. This equilibrium leads to the $\pi$-IC decision rule $\bar{x}$. We construct a signaling device $\sigma$ to support acceptance of $\bar{x}$. Let $\phi$ be some invertible function. For player 0, we construct the signal $\sigma_0$ with support $\phi(G)$ and probabilities given by $\rho$. For any other player, let the signal replicate the public information of the separate-and-veto equilibrium with the corresponding probability and remain uninformative with the remaining probability. Whenever a player vetoes, a signal realizes according to $\sigma$. Thus, the reason why no player rejects $\bar{x}$ is the same as in the proof of Proposition 1. The only difference is that the signaling function also replicates the principal's potential signaling-by-mechanism-choice behavior, captured by $\sigma_0$. ∎

Informational Opportunism

Our design approach on the side of the signal assumes that the signaling device is fixed. In other words, the signal designer has commitment power. However, if the signal is created by passing information to a third party, e.g., a journalist, then that assumption may not always hold. In such a setting, the person who keeps the information may be a strategic player. In addition, she may be unable to commit either to the signaling device or to reporting its outcome. MartimortDequiedt15 refer to such a setting as informational opportunism.

To adequately address the situation with informational opportunism, we need to take a stance on two aspects. First, how large is the signal designer’s commitment power? Second, does the designer’s objective change once she sees a deviation? Is it credible for her to use the information to punish the deviator?

An extreme form of informational opportunism occurs if the signal designer chooses the signal realization, $s$, after receiving the reports, rather than committing to the signaling device, $\sigma$. In that case, the designer cannot commit to a mapping from reports to realizations. Instead, the signal becomes a cheap-talk announcement. The results of Propositions 1, 2, and 3 trivially do not hold without any commitment power, even if the signal designer wants to punish the deviator.

Instead, we focus on a signal designer whose action space is still the set of signaling devices. We make the following assumption.

Assumption 1 (No Fabricated Data).

The choice of signaling device, $\sigma$, becomes public together with its realization, $s$.

Under Assumption 1, the signal designer can back up her claims by providing evidence. Indeed, all interested players can see the designer's method of reaching her conclusion, $s$. Within this setting, informational opportunism is best seen by considering the timing of the grand game. In our baseline model, the signal designer commits to $\sigma$ before players decide about vetoing. She is thus an impartial third party and not an interested player in the grand game. In contrast, we now assume that the designer is an interested player in the game.

First, we assume that the signal designer can commit to an objective: to punish a potential deviator. However, she cannot commit to her signaling device at the beginning of the game. Instead, she picks $\sigma$ after players have made their acceptance decision and after the designer has elicited the information. That is, the signal designer faces ex-post incentive constraints.

Definition 1 (Informational Opportunism).

The designer of the signaling device suffers from informational opportunism if she cannot commit to a signaling device before players’ participation decision.

We are interested in situations in which our previous results are immune to informational opportunism.

Definition 2 (Immunity).

A result is immune to informational opportunism if it is implementable by a signal designer who suffers from informational opportunism.

Allowing for informational opportunism comes at a cost. We cannot make a general statement on immunity. However, we state a set of definitions that restricts the environment. These restrictions allow us to state Proposition 4, which determines a condition for immunity.

We identify a signal designer by her type. The designer’s type is the information she has elicited from the (participating) players. Observe that any signal designer type could fully reveal her type by choice of the appropriate signaling device.

Yet, if the signal designer chooses not to reveal her type, the interpretation of the realization $s$ depends not only on $\sigma$. It also depends on the belief that players form about the signal designer. To form that belief, players observe $\sigma$. Each $\sigma$ triggers a belief which, together with $s$, leads to a lottery over information structures. Different designer types potentially have different preference rankings over lotteries for a given signal designer objective. We assume, however, that preferences are aligned and all types share the same order. Thus, there is a common understanding of which information structures better achieve the desired goal.

To achieve immunity to informational opportunism, we impose two properties on the environment. First, the players' types are distributed independently. Second, the signal designer has aligned preferences across her types.

Definition 3 (Aligned Preferences).

Fix arbitrary distributions over a collection of players' types. Let $F$ and $F'$ be two (possible) distributions of the remaining player's type. The signal designer has aligned preferences if every designer type prefers $F$ to $F'$ whenever $F$ first-order stochastically dominates $F'$.

Finally, we define an extreme notion of the desire to separate.

Definition 4 (Unraveling Pressure).

A signal designer faces unraveling pressure under signaling device $\sigma$ if she strictly prefers verifying her type to the lottery induced by $\sigma$.

Suppose types are independently distributed, and preferences are aligned. In that case, the absence of unraveling pressure is necessary and sufficient to guarantee that a signaling device is implementable, as the following proposition shows.

Proposition 4.

Suppose players’ types are independently distributed, and the signal designer’s preferences are aligned. Then, a signaling device is implementable under informational opportunism if and only if no signal designer type faces unraveling pressure.

Proof.

The "only if" direction follows from Definition 4. If a signal designer type faces unraveling pressure, she prefers to reveal her type rather than follow the signaling device $\sigma$.

For the "if" direction, consider a signaling device $\sigma$. Assume player $i$ has vetoed the mechanism. The signal designer has elicited the information from the non-deviating players. We want to show that no designer type has an incentive to announce a different device than $\sigma$.

Suppose a signal designer type deviates by announcing a device $\sigma'$ that does not verify her type. Players observe the deviation and its realization $s'$. Using these objects, they form off-path beliefs about the types of all players. The symmetry of PBE and the independence of players' types imply the following: any subset of players has identical beliefs about those not in that subset.

The off-path beliefs on the signal designer's type are restricted only by the signaling function $\sigma'$. If a realization occurs with probability zero given a type, then players exclude that type from the set of possible signal designer types. Denote the set of not-excluded types by $T$. The distribution over $T$ is arbitrary. That is, for every realization, there always exists an off-path belief about the deviating designer that rationalizes it.

By assumption, $\sigma'$ does not verify the deviating designer's type. Thus, $T$ contains more than one type. Types have aligned preferences. Thus, we can find a signal designer type in $T$ such that a degenerate belief on that type makes every other designer type worse off compared to revealing her own type. No unraveling pressure then implies that no type benefits from the deviation. Hence $\sigma$ is implementable under informational opportunism. ∎

The immunity of our results to informational opportunism is a straightforward corollary of Proposition 4. Moreover, Proposition 4 provides a simple way to test for immunity given a candidate device $\sigma$.

Corollary 1.

Suppose players' types are independently distributed, and the designer's preferences are aligned. Propositions 1, 2, and 3 are immune to informational opportunism if no signal designer type faces unraveling pressure under $\sigma$.
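
A minimal sketch of such a test (Python; the type labels and payoff numbers are purely hypothetical): for each designer type, compare her value of the lottery induced by the candidate device with her value of fully revealing her type; the device passes whenever no type strictly prefers revelation.

    # Hypothetical value to each designer type of (a) the lottery induced by the
    # candidate signaling device and (b) fully revealing her own type.
    value_of_device = {"t1": 0.7, "t2": 0.5, "t3": 0.9}
    value_of_revealing = {"t1": 0.6, "t2": 0.5, "t3": 0.8}

    def no_unraveling_pressure(device_value, reveal_value, tol=1e-9):
        """True if no designer type strictly prefers verifying her type to the device's lottery."""
        return all(reveal_value[t] <= device_value[t] + tol for t in device_value)

    # With these made-up numbers the condition of Proposition 4 holds,
    # so the candidate device would be immune to informational opportunism.
    print(no_unraveling_pressure(value_of_device, value_of_revealing))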

We conclude our discussion by addressing a weaker notion of informational opportunism.

The signal designer may commit to a signaling device but may choose to conceal the realization. We refer to this as weak informational opportunism.

Definition 5 (Weak Informational Opportunism).

The designer of the signaling function suffers from weak informational opportunism if she can commit to a signaling function at the beginning of the game but not to the disclosure of the realization $s$.

It is straightforward to see that immunity to informational opportunism implies immunity to weak informational opportunism. In addition, we can drop the no-unraveling-pressure condition. The reason is that, with aligned preferences, there is a common worst realization across signal designer types. If a signal designer hides information off the equilibrium path, an off-path belief assuming the worst (hidden) realization punishes every designer type the most. Consequently, using the standard unraveling arguments from the persuasion literature (see, e.g., Milgrom:81, grossman1981), no designer has an incentive to hide her information. Moreover, the result applies even if players are unaware of the signal designer's objective.

Corollary 2.

Suppose players' types are independently distributed, and the designer's preferences are aligned. Propositions 1, 2, and 3 are immune to weak informational opportunism even when players have ambiguity over the signal designer's objective function.

The reason for Corollary 2 is that if the signal designer's type preferences are aligned, then, given any objective, there is a common worst signal realization. Not revealing the signal realization leads to an arbitrary off-path belief. Thus, even if the designer's objective is unclear, parties can coordinate in some PBE on an off-path belief that puts all probability mass on the worst signal.

4 Final Remarks

Mechanism design can facilitate policy advice. It provides a simple benchmark that informs us about what is theoretically possible. The power of mechanism design derives from its computational simplicity: invoking the revelation principle, we can derive strong results even in complex environments.

However, suppose the environment is such that the designer cannot control the entire strategic setting, and parties can block the implementation of the mechanism. In that case, the revelation principle for the part the designer controls can fail. The reason is that parties can use their veto power to signal private information strategically. To obtain the desired benchmarks, we would have to either restrict the environment to settings in which signals through vetoes are non-profitable or delve into complex case distinctions.

In this paper, we argue that restricting to the setting in which strategic vetoes are of no concern is without loss provided that parties have access to a tool we call informational punishment. Informational punishment allows parties to store information for some time and release a garbled version of it in case of a deviation. Furthermore, we show that through informational punishment, we can transform every environment in which strategic vetoes are relevant into an equivalent setting in which they are not. Thus, we can—without loss—restrict ourselves to full-participation mechanisms under an (appropriate) outside option.

Our results go beyond classical applications of mechanism design. We derive a minimal condition on the available mechanism space that determines whether informational punishment guarantees full participation at an optimum. Informational punishment works off the equilibrium path, does not affect incentive compatibility directly, and allows for publicly verifiable rejections. We can implement informational punishment through a centralized signal, through the designer of the mechanism (who could also be an informed principal), or in a decentralized way through the parties individually. Furthermore, informational punishment is robust to various additional constraints on the setting.