Simple versus Optimal Contracts

08/10/2018 ∙ by Paul Dütting, et al. ∙ Technion ∙ Stanford University

We consider the classic principal-agent model of contract theory, in which a principal designs an outcome-dependent compensation scheme to incentivize an agent to take a costly and unobservable action. When all of the model parameters---including the full distribution over principal rewards resulting from each agent action---are known to the designer, an optimal contract can in principle be computed by linear programming. In addition to their demanding informational requirements, such optimal contracts are often complex and unintuitive, and do not resemble contracts used in practice. This paper examines contract theory through the theoretical computer science lens, with the goal of developing novel theory to explain and justify the prevalence of relatively simple contracts, such as linear (pure commission) contracts. First, we consider the case where the principal knows only the first moment of each action's reward distribution, and we prove that linear contracts are guaranteed to be worst-case optimal, ranging over all reward distributions consistent with the given moments. Second, we study linear contracts from a worst-case approximation perspective, and prove several tight parameterized approximation bounds.


1 Introduction

Classic contract theory.

Many fundamental economic interactions can be phrased in terms of two parties, a principal and an agent, where the agent chooses an action and this imposes some (negative or positive) externality on the principal. Naturally, the principal will want to influence which action the agent chooses. This influence will often take the form of a contract, in which the principal compensates the agent contingent on either the actions or their outcomes. The more challenging and realistic scenario is the one in which the principal cannot directly observe the agent's chosen action; instead, she can only observe a stochastic outcome that results from it.

For example, consider a salesperson working for a company producing a range of products with different revenue margins. The salesperson chooses the amount of effort spent on promoting the various products. The company may not be able to directly observe effort, but can presumably track the number of orders the salesperson generates. Assuming this number is correlated with the salesperson's actions (the harder he works, the more sales of higher-margin products he generates), it may make sense for the company to base his pay on sales—i.e., to put him on commission—to induce him to expend the appropriate level of effort. (We shall address the agent as male and the principal as female.) Another example is the interaction between a car owner and an insurance company. The car owner's behavior influences the risks his car is exposed to. If costs are borne only by the insurance company, the owner might not bother to maintain the asset properly (e.g., might park in a bad neighborhood). This is an example of the well-known moral hazard problem. Typically, such bad behavior is difficult to contract against directly. But because this behavior imposes an externality on the insurance company, companies have an interest in designing contracts that guard against it.

Both of these examples fall under the umbrella of the hidden action principal-agent model, arguably the central model of contract theory, which in turn is an important and well-developed area within microeconomics. (For example, the 2016 Nobel Prize in economics was awarded to Oliver Hart and Bengt Holmström for their contributions to contract theory. The prize announcement stated: “Modern economies are held together by innumerable contracts… [T]ools created by Hart and Holmström are valuable to the understanding of real-life contracts”.) Perhaps surprisingly, this area has received far less attention from the theoretical computer science community than auction and mechanism design, despite its strong ties to optimization, and potential applications ranging from crowdsourcing platforms [23], through blockchain-based smart contracts [14], to incentivizing quality healthcare [8].

The model.

Every instance of the model can be described by a set of n actions and a set of m outcomes. In the salesperson example, the actions are the levels of effort and the outcomes are the revenues from ordered products. As in this example, we usually identify the (abstract) outcomes with the (numerical) rewards associated with them. The agent chooses an action i, unobservable to the principal, which incurs a cost c_i for the agent, and results in a distribution over the outcomes whose expected reward we denote by R_i. The reward of the realized outcome is awarded to the principal.

The principal designs a contract that specifies a payment t_j to the agent for every outcome j (since the outcomes, unlike the actions, are observable to the principal). This induces an expected payment T_i for every action i. The agent then chooses the action that maximizes his expected utility T_i − c_i over all actions (“incentive compatibility”), or opts out of the contract if no action with nonnegative expected utility exists (“individual rationality”).

As the design goal, the principal wishes to maximize her expected payoff: the expected reward minus the agent's expected payment, both with respect to the action chosen by the agent. Contract design therefore translates into the following optimization problem: given the reward of every outcome and the distribution and cost of every action, find a payment vector t = (t_1, …, t_m) that maximizes the principal's expected payoff, where the action chosen by the agent is incentive compatible and individually rational. We focus on the limited liability case, in which the contract's payments are constrained to be non-negative (i.e., money only flows from the principal to the agent). (Without some such assumption there is a simple but unsatisfying optimal solution for the principal when the agent is risk-neutral: simply sell the project to the agent, at a price just below the maximum expected welfare that the agent can generate by choosing an action. The agent may as well accept, and then select the welfare-maximizing action, and the principal pockets essentially the full welfare. This solution is incompatible with practical principal-agent settings; e.g., a salesperson does not typically buy the company from its owner.) A detailed description of the model appears in Section 2.

Optimal contracts and their issues.

It is straightforward to solve the optimization problem associated with finding the optimal contract maximizing the principal's expected payoff, by solving one linear program per action. Each linear program minimizes the expected payment to the agent subject to the constraint that he prefers this action to any other—including opting out of the contract—and subject to the payments all being non-negative. The best of these linear programs (achieving the highest expected payoff for the principal) gives the best action to incentivize and an optimal contract incentivizing it. However, this is not the end of the story: this approach can result in contracts that are nothing like the contracts used in practice. (A similar issue arises in auction theory: linear programs can be used to characterize optimal auctions, which often turn out to be impractically complicated and unintuitive; see, e.g., [21].) Example 1 demonstrates this critique—it is not clear how to interpret the optimal contract nor how to justify it to a non-expert. The optimal payment scheme in this example is not even monotone, i.e., a better outcome for the principal can result in a lower payment for the agent! In the salesperson example, this would create an incentive for the salesperson to manipulate the outcome, for example by hiding or canceling orders.

Example 1.

There are outcomes , and actions with the following outcome distributions and costs: , , , , and . The LP-based approach shows that the optimal contract in this case incentivizes action with payments . The analysis appears for completeness in Appendix A.1.
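
To make the LP-based approach concrete, here is a minimal Python sketch (ours, not the paper's): it solves one LP per action with scipy's linprog and keeps the action whose implementation yields the highest expected payoff. The instance below is hypothetical; Example 1's own parameters are not reproduced here.

```python
# Sketch: optimal contract via one LP per action (hypothetical instance, not Example 1's).
import numpy as np
from scipy.optimize import linprog

r = np.array([0.0, 4.0, 10.0])            # outcome rewards r_1 <= ... <= r_m
p = np.array([[1.0, 0.0, 0.0],            # p[i][j] = Pr[outcome j | action i]
              [0.5, 0.3, 0.2],
              [0.1, 0.3, 0.6]])
c = np.array([0.0, 1.0, 2.0])             # action costs (action 0 has zero cost)

n, m = p.shape
best_payoff, best_t, best_i = -np.inf, None, None

for i in range(n):
    # Minimize the expected payment p[i].t subject to the IC constraints
    #   p[i].t - c[i] >= p[k].t - c[k]  for every other action k,
    # and limited liability t >= 0 (IR follows from the zero-cost action).
    A_ub = np.array([p[k] - p[i] for k in range(n) if k != i])
    b_ub = np.array([c[k] - c[i] for k in range(n) if k != i])
    res = linprog(c=p[i], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m)
    if res.success:                        # infeasible LP means the action is not implementable
        payoff = p[i] @ r - res.fun        # expected reward minus expected payment
        if payoff > best_payoff:
            best_payoff, best_t, best_i = payoff, res.x, i

print("best action to incentivize:", best_i)
print("optimal payments per outcome:", np.round(best_t, 3))
print("principal's expected payoff:", round(best_payoff, 3))
```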

Linear contracts as an alternative.

Perhaps the simplest non-trivial contracts are linear contracts, in which the principal commits to paying the agent an α-fraction of the realized reward (i.e., payments are linear in the outcomes). Unlike optimal contracts, linear contracts are the most ubiquitous contract form in practice. (To our knowledge, the only other common contract form according to the economics literature is “debt contracts,” which are similar to linear contracts except with zero payments for a set of the lowest outcomes [22]. Our results do not qualitatively change for such contracts—see Appendix F. We thank S. Matthew Weinberg for bringing this contract form to our attention.) They are conceptually simple and easy to explain; payments are automatically monotone; and the agent is guaranteed non-negative utility with probability one. From an optimization standpoint, however, they can be suboptimal even in simple settings, as the next example demonstrates:

Example 2.

There are outcomes , and actions and with and , respectively. The optimal contract incentivizes action with payments , resulting in expected payoff of for the principal. The maximum expected payoff of any linear contract is  (regardless of which action is incentivized).
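
For contrast, linear contracts can be evaluated directly: under parameter α the agent picks an action maximizing α·R_i − c_i, and the principal keeps (1 − α)·R_i. The sketch below (ours, on a hypothetical two-action instance rather than Example 2's) finds the best α by a simple grid search and compares it to the full-welfare benchmark.

```python
# Sketch: best linear contract by grid search (hypothetical instance, not Example 2's).
import numpy as np

R = np.array([1.0, 5.0])                  # expected rewards of the two actions
c = np.array([0.0, 2.0])                  # action costs

def principal_payoff(alpha):
    u = alpha * R - c                     # agent's expected utility per action
    if u.max() < 0:                       # no individually rational action: agent opts out
        return 0.0
    ic = np.flatnonzero(np.isclose(u, u.max()))          # utility-maximizing actions
    i = max(ic, key=lambda k: (1 - alpha) * R[k])        # ties broken for the principal
    return (1 - alpha) * R[i]

alphas = np.linspace(0.0, 1.0, 10001)
payoffs = [principal_payoff(a) for a in alphas]
best = int(np.argmax(payoffs))
print(f"best linear contract found: alpha ~ {alphas[best]:.3f}, payoff ~ {payoffs[best]:.3f}")
print(f"full-welfare benchmark: {max(R - c):.3f}")
```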

Simple versus optimal contracts in the economic literature.

The complexity critique of optimal contracts is well known to economists. The dominant paradigm in the economics literature for addressing this is to justify simple contracts such as linear ones on robustness grounds. (For example, in their classic paper on linear contracts in dynamic environments, Holmström and Milgrom [24] write: “It is probably the great robustness of linear rules based on aggregates that accounts for their popularity.”) Several works have attempted to characterize linear contracts as optimally robust in various max-min senses—see, e.g., [15, 13]; for a survey see [12]; for concurrent work see [18, 41] and Appendix C.2.

Perhaps most notable among these is the recent work of Carroll [11]. In this work, a key change to the standard principal-agent model is introduced: the set of actions available to the agent is completely unknown to the principal. Because in this new setting no guarantee is possible, Carroll relaxes the model by adding the assumption that some set of actions is fully known to the principal (that is, she is fully aware of the distributions these actions induce over outcomes as well as of their costs). See [11, p. 546] for “[o]ne way to make sense of this combination of non-quantifiable and quantifiable uncertainty”, i.e., completely known and completely unknown actions. The main result is that a linear contract is max-min optimal in the worst case over all possible action sets that contain the known set (and are otherwise arbitrary). Carroll sees the main contribution not in a literal interpretation of the model, but rather in finding a “formalization of a robustness property of linear contracts—a way in which one can make guarantees about the principal's payoff with very little information” [12, Sec. 2.1].

1.1 Our Contributions

Our main goal is to initiate the study of simple contracts and their guarantees through the lens of theoretical computer science. We utilize tools and ideas from worst-case algorithmic analysis and prior-independent auction design to make contributions in two main directions, both justifying and informing the use of linear contracts.

Max-min robustness of linear contracts.

Our first contribution is a new notion of robustness for linear contracts. Our robustness result fits within the family of max-min characterizations championed by economic theory, but our setup and assumptions are more natural from an optimization perspective. Instead of assuming that there are completely unknown actions available to the agent alongside a fully known action set, we assume that the principal has partial knowledge of all actions: she knows their costs and has moment information (namely, the expectations) about their reward distributions. This is similar in spirit to previous work in algorithmic game theory on prior-independent parametric auctions, which sought max-min robust mechanisms in the worst case over all distributions with known first and second moments (see [2], and also the robust optimization approach of [7]). Our result thus offers an alternative formulation of the robustness property of linear contracts, in a natural model of moment information that is easy to interpret.

Theorem (See Section 4).

For every outcome set, action set, action costs, and expected action rewards, a linear contract maximizes the principal’s worst-case expected payoff, where the worst case is over all reward distributions with the given expectations.

Approximation guarantees.

Our second contribution is to conduct the first study of simple contracts from an approximation perspective. Studying the worst-case approximation guarantees of classic microeconomic mechanisms—linear contracts in this case—has been a fruitful approach in other areas of algorithmic game theory. Applying this approach, we achieve a complete and tight analysis of the approximation landscape for linear contracts. For each of the four main parameters of the principal-agent model—the number of actions n, the number of outcomes m, the range of expected rewards, and the range of costs—we give tight approximation guarantees in that parameter, which apply uniformly across the other three parameters:

Theorem (See Section 5).

Let ρ denote the worst-case ratio between the expected principal payoff under an optimal contract and under the best linear contract. Then

  • (a) Among principal-agent settings with n actions, ρ = Θ(n).

  • (b) Among settings where the ratio of the highest and lowest expected rewards is bounded, ρ admits a tight bound as a function of that ratio (stated in Section 5).

  • (c) Among settings where the ratio of the highest and lowest action costs is bounded, ρ admits a tight bound as a function of that ratio (stated in Section 5).

  • (d) Among settings with m outcomes, ρ can be arbitrarily large in the worst case.

The upper bounds summarized in the above theorem hold even with respect to the strongest-possible benchmark of the optimal expected welfare (rather than merely the optimal principal expected payoff); they thus answer the natural question of how much of the “first-best” welfare a linear contract can extract. The matching lower bounds in the above theorem all apply even when we add a standard regularity condition to the principal-agent settings, called the monotone likelihood ratio property (MLRP) (see Appendix E.1). In Appendix F we show an extension of our lower bounds to all monotone (not necessarily linear) contracts: we show that among principal-agent settings with actions, the worst-case ratio between the expected principal payoff under an optimal contract and under the best monotone contract can be .

Discussion.

We view parts (a)–(c) of the above theorem as surprisingly positive results. A priori, it is completely unclear which of the many parameters of principal-agent settings, if any, governs the performance of simple contracts. Our results show that there is no finite approximation bound that holds uniformly over all of the model parameters, but that we can obtain the next best thing: by fixing just one of the model’s ingredients (either the number of actions, the range of the outcomes, or the range of the costs, as preferred), it is possible to obtain an approximation guarantee that holds uniformly over all other parameters. Our theorem shows that linear contracts are far from optimal only when the number of actions is large, and there is a huge spread in expected rewards, and there is a huge spread of action costs. Few if any of the most popular instantiations of the principal-agent model have all three of these properties.

1.2 Further Related Work

Contract theory is one of the pillars of microeconomic theory. We refer the interested reader to the classic papers of [38, 20, 35, 25], the excellent introductions of [10, 40], the comprehensive textbooks [29, 9] (see also [30, Chapters 13-14]), and the scientific background to the 2016 Nobel Prize in Economics [36].

Computational approaches to contract design.

To our knowledge, decidedly computational approaches to contract design have appeared so far only in the work of [6] (see also follow-ups [4, 5]), the work of [3], and the work of [23] (see also follow-up [28]). The first paper [6] initiates the study of a related but different model known as combinatorial agency, in which combinations of agents replace the single agent in the classic principal-agent model. The challenge in the new model stems from the need to incentivize multiple agents, while the action structure of each agent is kept simple (effort/no effort). The focus of this line of work is on complex combinations of agents' efforts influencing the outcomes, and how these determine the subsets of agents to contract with. The second paper [3] introduces a notion of contract complexity based on the number of different payments specified in the contract, and studies this complexity measure in an n-player normal-form game framework. In their framework there are no hidden actions, making our model very different from theirs. The third paper [23] develops a model of dynamic contract design: in each sequential round, the principal determines a contract, an agent arrives and chooses an action (effort level), and the principal receives a reward. Agents are drawn from an unknown prior distribution that dictates their available actions. The problem thus reduces to a multi-armed bandit variant with each arm representing a potential contract. The main focus of this line of work is on implicitly learning the underlying agent distribution to minimize the principal's regret over time.

(Non)relation to signaling.

Since one of the main features of the principal-agent model is the information asymmetry regarding the chosen action (the agent knows it while the principal is oblivious), and due to the “principal” and “agent” terminology, on a superficial level contract theory may seem closely related to signaling [39, 31, 32]. This is not the case, and the relationship is no closer than that between auction theory (screening) and signaling. As Dughmi [16] explains, the heart of signaling is in creating the right information structure, whereas the heart of contract design is in setting the right payment scheme. (“There are two primary ways of influencing the behavior of self-interested agents: by providing incentives, or by influencing beliefs. The former is the domain of traditional mechanism design, and involves the promise of tangible rewards such as […] money. The latter […] involves the selective provision of payoff-relevant information to agents through strategic communication” [16, p. 1].) Put differently, in signaling, it is the more-informed party that faces an economic design problem; in hidden-action contract theory, it is the less-informed party (i.e., the principal). For more on signaling from a computational perspective the reader is referred to [19, 34, 17].

Concurrent work on algorithmic delegation.

Two recent papers [26, 27] study algorithmic aspects of another loosely related but distinct problem called optimal delegation [1]. In this problem, a principal has to search for and decide upon a solution, and wishes to delegate the search to an agent with misaligned incentives regarding which solution to choose. Crucially, there are no monetary transfers, making the problem very different from contract design.

2 The Hidden Action Principal-Agent Model

The principal-agent model.

An instance is described by a set of n actions and a set of m outcomes.


  • Outcomes: We identify the j-th outcome, for every j ∈ {1, …, m}, with its reward r_j to the principal, and assume w.l.o.g. that the outcomes are increasing, i.e., r_1 ≤ r_2 ≤ … ≤ r_m.

  • Actions: Each action i is a pair (p_i, c_i), in which p_i is a distribution over the m outcomes and c_i is a cost. The agent chooses an action i and bears the cost c_i, whereas the principal receives a random reward drawn from p_i. Crucially, the action is hidden: the principal observes the realized outcome but not the action i itself.

Notation and terminology.

Denote by p_{i,j} the probability that action i leads to outcome j; we assume w.l.o.g. that every outcome has some action leading to it with positive probability. Denote the expected outcome (i.e., expected reward to the principal) from action i by R_i = Σ_j p_{i,j} · r_j. The difference R_i − c_i is the expected welfare from choosing action i. When an action is indexed by i, we write R_i and c_i for brevity.
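
In code, these quantities are one-liners; the sketch below (with made-up numbers) computes the expected rewards R_i and welfares R_i − c_i from the probability matrix.

```python
# Sketch: expected rewards and welfare from the instance data (made-up numbers).
import numpy as np

r = np.array([0.0, 6.0, 10.0])            # outcome rewards
p = np.array([[0.8, 0.2, 0.0],            # p[i][j]: probability that action i leads to outcome j
              [0.2, 0.5, 0.3]])
c = np.array([0.0, 1.5])                  # action costs

R = p @ r                                 # expected reward R_i of each action
print("expected rewards R_i:", R)         # [1.2, 6.0]
print("expected welfare R_i - c_i:", R - c)   # [1.2, 4.5]
```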

Standard assumptions.

Unless stated otherwise we assume:

  A1. There are no “dominated” actions, i.e., every two actions have distinct expected outcomes, and the action with the higher expected outcome also has the higher cost.

  A2. There is a unique action with maximum expected welfare R_i − c_i.

  A3. There is a zero-cost action, i.e., an action i with c_i = 0.

Assumption A1 means there is no action with lower expected outcome and higher cost than some other action, although we emphasize that there can be an action with lower welfare and higher cost (in fact, incentivizing the agent to avoid such actions is a source of contract complexity). Our main results in Sections 4-5 do not require this assumption (see Section 5 for details). Assumptions A2 and A3 are for the sake of expositional simplicity. In particular, Assumption A3 means we can assume the agent does not reject a contract with nonnegative payments, since there is always an individually rational choice of action; alternatively, individual rationality could have been imposed directly.

Contracts.

A contract defines a payment scheme t with a payment (transfer) t_j from the principal to the agent for every outcome j. We denote by T_i = Σ_j p_{i,j} · t_j the expected payment for action i. Note that the payments are contingent only on the outcomes, as the actions are not observable to the principal. The requirement that t_j is nonnegative for every j is referred to in the literature as limited liability [11], and it plays the same role as the standard risk-aversion assumption in ruling out trivial solutions where a contract is not actually required [20]. Limited liability (or its parallel, agent risk aversion) is the second crucial feature of the classic principal-agent model, in addition to the actions being hidden from the principal.

Implementable actions.

The agent's expected utility from action i given payment scheme t is T_i − c_i. The agent chooses an action that is: (i) incentive compatible (IC), i.e., maximizes his expected utility among all actions; and (ii) individually rational (IR), i.e., has nonnegative expected utility (if there is no IR action the agent refuses the contract). We adopt the standard assumption that the agent tie-breaks among IC, IR actions in favor of the principal. (The idea is that one could perturb the payment schedule slightly to make the desired action uniquely optimal for the agent. For further discussion see [10, p. 8].) We say a contract implements action i if, given its payment scheme t, the agent chooses i; if such a contract exists we say i is implementable.
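
The agent's behavior just described is easy to spell out; the following sketch (ours, with a hypothetical instance and payment scheme) applies the IC and IR conditions and the principal-favorable tie-breaking.

```python
# Sketch: the action implemented by a given payment scheme t (hypothetical instance).
import numpy as np

def implemented_action(p, r, c, t):
    """Return the agent's chosen action under payments t, or None if he opts out."""
    T = p @ t                                     # expected payment T_i per action
    u = T - c                                     # agent's expected utility per action
    if u.max() < 0:
        return None                               # no individually rational action
    ic = np.flatnonzero(np.isclose(u, u.max()))   # IC (and IR) actions
    payoff = p @ r - T                            # principal's expected payoff per action
    return int(max(ic, key=lambda i: payoff[i]))  # ties broken in favor of the principal

r = np.array([0.0, 6.0, 10.0])
p = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.5, 0.3]])
c = np.array([0.0, 1.5])
t = np.array([0.0, 2.0, 4.0])                     # nonnegative payments (limited liability)
print("implemented action:", implemented_action(p, r, c, t))
```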

Optimal contracts and LPs.

The principal seeks an optimal contract: a payment scheme t that maximizes her expected payoff R_i − T_i, where i is the action implemented by the contract (i.e., i is both IC and IR for the agent, with ties broken to maximize the expected payoff of the principal). Notice that summing the agent's expected utility T_i − c_i and the principal's expected payoff R_i − T_i yields the contract's expected welfare R_i − c_i. A contract's payment scheme thus determines both the size of the pie (the expected welfare) and how it is split between the principal and the agent.

Given an action i, the linear program (LP) appearing in Appendix A either finds a payment scheme that implements i at minimum expected payment to the agent, or establishes that i is not implementable. The optimal contract can be found by solving n such LPs—one for each action—and comparing the principal's expected payoff in each case after paying the agent. The LP and its dual can also be used to characterize implementable actions and to bound the number of strictly positive payments needed by an optimal contract—see Appendices A.2-A.3 for details.

Linear/monotone contracts.

In addition to optimal contracts we consider the following simple classes.

Definition 1.

A contract is linear if the payment scheme is a linear function of the rewards, i.e., t_j = α · r_j for every outcome j. We refer to α ≥ 0 as the linear contract's parameter; nonnegativity is due to limited liability.

A natural generalization is a degree-d polynomial contract, in which the payment scheme is a nonnegative degree-d polynomial function of the rewards. If d = 1 we get an affine contract, with payments of the form t_j = α · r_j + β; such contracts play a role in Section 4. Linear and affine contracts are monotone:

Definition 2.

A contract is monotone if its payments are nondecreasing in the outcomes, i.e., t_j ≤ t_{j'} whenever r_j ≤ r_{j'}.

Max-min evaluation and approximation.

We apply two approaches to evaluate the performance of simple contracts: max-min in Section 4, and approximation in Section 5. We now present the necessary definitions, starting with the max-min approach.

Definition 3.

A distribution-ambiguous action is a pair (R_i, c_i), in which R_i is the action's expected outcome and c_i is its cost. A distribution p_i over the outcomes is compatible with distribution-ambiguous action (R_i, c_i) if its expectation is R_i, i.e., Σ_j p_{i,j} · r_j = R_i.

Definition 4.

A principal-agent setting is ambiguous if it has m outcomes and n distribution-ambiguous actions, and there exist distributions over the outcomes compatible with the actions.

(A setting with only two outcomes cannot be ambiguous, since the expectation determines the distribution; moreover, in that case the conundrum of “optimal but complex” vs. “suboptimal but ubiquitous” never arises, as the optimal contract has a simple form—see Appendix G.)

In ambiguous settings, it is appropriate to apply a worst-case performance measure to evaluate contracts:

Definition 5.

Given an ambiguous principal-agent setting, a contract's worst-case expected payoff is its infimum expected payoff to the principal over all distributions compatible with the known expected outcomes R_1, …, R_n.

We follow [11] in making the following assumption, which simplifies but does not qualitatively affect the results in Section 4. (As explained in [11, Footnote 2], Assumption A4 is simply an additive normalization of the principal's payoffs. Without this assumption, a robustly optimal contract would take a slightly different form. Further justification for assuming a zero outcome is that the principal may have ambiguity not just with respect to the action distributions but also as to her possible rewards, and she prefers a contract robust to the possibility (however slim) of receiving a zero reward.)

  A4. In ambiguous principal-agent settings, the zero outcome belongs to the outcome set, i.e., r_1 = 0.

In Section 5, we are interested in bounding the potential loss in the principal's expected payoff if she is restricted to use a linear contract. Formally, let S be a family of principal-agent settings. For a setting s ∈ S, denote by OPT(s) the optimal expected payoff to the principal with an arbitrary contract, and by LIN(s) the best possible expected payoff with a linear contract (we omit s from the notation where clear from context). We seek to bound the worst-case ratio ρ = sup_{s ∈ S} OPT(s) / LIN(s).

3 Properties and Geometry of Linearly-Implementable Actions

Our goal in this section is to establish a geometric characterization of linearly-implementable actions and to derive from it several useful consequences. See Appendix B for details and missing proofs.

Definition 6.

In a principal-agent setting, an action is linearly-implementable if there exists a linear contract with parameter α ∈ [0, 1] that implements it. (The restriction to α ≤ 1 is w.l.o.g.)

Let denote the number of linearly-implementable actions, and let denote the set of such actions. Index the actions in in order of their expected outcomes, i.e., for , . We now define two different mappings, and in Lemma 1 establish their equivalence.

  • [topsep=1ex,itemsep=0ex,parsep=0ex]

  • Linear-implementability mapping . Denote by the mapping of to either the action implemented by the linear contract with parameter  (observe there is at most one such action under our assumptions—for completeness we state and prove this in Appendix B), or to if there is no such action. So mapping is onto  by definition. Denote by the smallest such that action is implemented by a linear contract with parameter , then is the smallest such that .

  • Upper envelope mapping . For every action , consider the line and let denote the segment between and ; these segments appear in Figure 1, where the -axis represents the possible values of from 0 to 1. Take the upper envelope of the segments and consider its nonnegative portion. Let be the mapping from to either if the upper envelope is negative at , or to the action whose segment forms the upper envelope at otherwise. If there is more than one such action, let map to the one with the highest expected outcome .

Our main structural insight in this section is that the upper envelope mapping precisely captures linear implementability:

Lemma 1.

For every , .

Figure 1: Linearly-implementable actions via upper envelope.

Lemma 1 has three useful implications: (1) The actions appear on the upper envelope in the order in which they are indexed (i.e., sorted by increasing expected outcome); (2) These actions are also sorted by increasing welfare, i.e., ; (3) The smallest that incentivizes action (which we refer to as ) is the same that makes the agent indifferent between action and action . We denote this “indifference ” by and observe that (Observation 6 in Appendix B). Using this notation, we can rewrite the third implication as: for every .

The three implications above are formulated as Corollaries 4, 5 and 6, respectively, and appear with their proofs in Appendix B. We shall use Lemma 1 in Sections 4-5 and its corollaries in Section 5.
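
The upper-envelope view also yields a direct procedure for computing the linearly-implementable actions and the best linear contract: the agent's utility under a linear contract with parameter α is the line α·R_i − c_i, so the candidate values of α are 0, 1, and the pairwise indifference points. The sketch below (ours, with made-up numbers) is an illustration of this geometry, not the paper's code.

```python
# Sketch: linearly-implementable actions via the upper envelope of the lines
# alpha -> alpha * R[i] - c[i] over alpha in [0, 1] (made-up instance).
import numpy as np

R = np.array([1.0, 4.0, 10.0])            # expected rewards, increasing
c = np.array([0.0, 1.0, 6.0])             # action costs, increasing

def implemented(alpha):
    """Action implemented by the linear contract alpha (ties broken for the principal)."""
    u = alpha * R - c
    if u.max() < 0:
        return None
    ties = np.flatnonzero(np.isclose(u, u.max()))
    return int(max(ties, key=lambda i: (1 - alpha) * R[i]))

# Candidate alphas: 0, 1, and every pairwise indifference point in [0, 1].
cands = {0.0, 1.0}
for i in range(len(R)):
    for j in range(i + 1, len(R)):
        a = (c[j] - c[i]) / (R[j] - R[i])
        if 0.0 <= a <= 1.0:
            cands.add(float(a))

# Smallest alpha implementing each action; the best linear contract is attained at one
# of these breakpoints, since the payoff (1 - alpha) * R decreases in alpha while the
# implemented action stays fixed.
alpha_min = {}
for a in sorted(cands):
    i = implemented(a)
    if i is not None and i not in alpha_min:
        alpha_min[i] = a

print("linearly-implementable actions and their smallest alpha:", alpha_min)
best_i = max(alpha_min, key=lambda i: (1 - alpha_min[i]) * R[i])
print("best linear contract: action", best_i,
      "with alpha =", round(alpha_min[best_i], 3),
      "and principal payoff", round((1 - alpha_min[best_i]) * R[best_i], 3))
```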

4 Robust Optimality of Linear Contracts

In this section we establish a robust optimality result for linear contracts. All deferred proofs appear in Appendix C. Our main result in this section is that a linear contract maximizes the principal’s expected payoff in ambiguous settings, in the worst case over the unknown distributions.

Theorem 1 (Robust optimality).

For every ambiguous principal-agent setting, an optimal linear contract has maximum worst-case expected payoff among all limited liability contracts.

In Theorem 1, “an optimal linear contract” is well-defined: For a (non-ambiguous) principal-agent setting, a linear contract is optimal if it maximizes the principal’s expected payoff over all linear contracts. For an ambiguous principal-agent setting, a linear contract has the same expected payoff to the principal over all compatible distributions, thus an optimal one is still defined as maximizing the principal’s expected payoff (see also Corollary 7 in Appendix B). In the remainder of the section we prove Theorem 1.

4.1 Main Lemma for Robust Optimality

The key step in the proof of Theorem 1 is to show that we may restrict the search for optimally robust contracts to affine contracts.

Lemma 2.

Consider an ambiguous principal-agent setting. For every limited liability contract with payment scheme t, there exist compatible distributions and an affine contract with parameters (α, β), such that the affine contract's expected payoff is at least that of contract t for those distributions.

Proof.

The payment scheme maps the outcomes to payments . Consider the two extreme outcomes and their corresponding payments . We begin by defining simple compatible distributions whose support is the extreme outcomes, as follows. For every distribution-ambiguous action , set (this is a valid probability since ; otherwise there could not have been compatible distributions). Set and let the other probabilities be zero. The expected outcome of distribution is . The defined distributions already enable us to prove the lemma for the case of :

Claim 1.

Lemma 2 holds for the case of .

Proof of Claim 1.

A proof of this claim appears in Appendix C.

Assume from now on that . If is affine, this means that its slope parameter must be nonnegative. Similarly, we can write and plug in our assumption that to get , and so by limited liability (), must also be nonnegative. Thus if is affine, Lemma 2 holds. We focus from now on on the case that is non-affine; this guarantees the existence of a point as appears in Figures 2 and 2. We argue this formally and then proceed by case analysis.

Claim 2.

If is non-affine, there exists an index such that the points , and on the Euclidean plane are non-collinear.

Proof of Claim 2.

A proof of this claim appears in Appendix C.

We introduce the following notation – denote the line between and by , the line between and by , and the line between and by (see Figures 2 and 2). We denote the parameters of line by and (i.e., and ). These naturally give rise to a corresponding affine contract. As argued above, since we have , and since and we have , so the affine contract has nonnegative parameters.

Recall that the support of compatible distributions is the endpoints of . We define alternative compatible distributions , whose support is either the endpoints of or of , as follows: For every distribution-ambiguous action , if set (by assumption this is a valid probability), and . If set and . All other probabilities are set to zero. Observe that in either case, the expected outcome of distribution is . With the two sets of compatible distributions and at hand, the analysis proceeds by addressing separately the two cases depicted in Figures 2 and 2.

Figure 2: Case distinction between the two configurations used in the proof: in the first, the intermediate point lies strictly above the line; in the second, it lies strictly below.

The following simple observation will be useful in the case analysis:

Observation 1.

Consider a contract and two outcomes ; let be the line determined by points and . If action induces an outcome distribution over the support with expectation , then its expected payment to the agent is .

Claim 3.

Lemma 2 holds for the case that is strictly above .

Proof of Claim 3.

We prove the claim by showing that the affine contract with parameters (corresponding to ) has at least as high expected payoff to the principal as that of contract for distributions , which are defined as follows: Let be the action incentivized by the affine contract with parameters . Set and for every . We argue that for distributions as defined, contract will also incentivize action , but at a (weakly) higher expected payment to the agent compared to the affine contract.

Consider first the affine contract. By definition, for every action with expected outcome , the expected payment to the agent for choosing is . Now consider contract . For every action where , by Observation 1 the expected payment to the agent taken over is , i.e., the same as in the affine contract. For action , by Observation 1 the expected payment to the agent taken over is either or . In either case, since and are above , the expected payment (weakly) exceeds . Since maximizes the agent’s expected utility in the affine contract, and the expected payment for only increases in contract while staying the same for other actions, this completes the proof of Claim 3. ∎

Claim 4.

Lemma 2 holds for the case that is strictly below .

Proof of Claim 4.

A proof of this claim appears in Appendix C.

This completes the analysis of the two cases depicted in Figure 2, thus proving Lemma 2. ∎

4.2 Proof of Theorem 1

Having established Lemma 2, Theorem 1 is now easy to prove using the following observation:

Observation 2.

Consider an affine contract with parameters (α, β), where β ≥ 0. For any compatible distributions, the expected payoff to the principal from the affine contract is at most the expected payoff from the linear contract with parameter α.

Corollary 1.

For every affine contract with parameters (α, β), there is a linear contract with (weakly) higher worst-case expected payoff.

Proof of Theorem 1.

Consider an ambiguous principal-agent setting. For every limited liability contract , by Lemma 2 there exist compatible distributions and an affine contract with parameters such that the worst-case expected payoff of contract is at most the expected payoff of the affine contract for distributions . But the expected payoff of an affine contract is identical for all compatible distributions, and so for every limited liability contract there exists an affine contract with parameters and higher worst-case expected payoff. By Corollary 1, the optimal linear contract has even higher worst-case expected payoff, completing the proof. ∎

5 Approximation Guarantees of Linear Contracts

In this section we study linear contracts and their approximation guarantees. We show tight bounds on the approximation guarantee of linear contracts in all relevant parameters of the model. Our upper bounds in fact apply to the stronger benchmark of optimal welfare. Our lower bounds continue to hold under standard regularity assumptions (see Appendix E.1).

Tight approximation guarantee in number of actions.

Our first pair of results provides tight bounds on the approximation guarantee of linear contracts, as parametrized by the number of actions n.

Theorem 2.

Consider a principal-agent setting with actions and linearly-implementable actions. Then the multiplicative loss in the principal’s expected payoff from using a linear contract rather than an arbitrary one is at most .

Theorem 3.

For every and , there is a principal-agent setting with actions and outcomes, such that the multiplicative loss in the principal’s expected payoff from using a linear contract rather than an arbitrary one is at least .
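
As an informal sanity check (not part of the paper's analysis), one can compare the optimal contract against the best linear contract on small random instances; by Theorems 2 and 3 the gap is governed by the number of actions. The instance generator and solver below are our own choices, and the observed ratios are illustrative only.

```python
# Sketch: observed ratio between the optimal and the best linear contract on
# random instances (illustrative only; the generator is our own assumption).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, trials = 4, 5, 200

def opt_payoff(p, r, c):
    """Optimal (unrestricted) contract payoff via one LP per action."""
    best = 0.0
    for i in range(n):
        A = np.array([p[k] - p[i] for k in range(n) if k != i])
        b = np.array([c[k] - c[i] for k in range(n) if k != i])
        res = linprog(c=p[i], A_ub=A, b_ub=b, bounds=[(0, None)] * m)
        if res.success:
            best = max(best, p[i] @ r - res.fun)
    return best

def best_linear_payoff(p, r, c):
    """Best linear contract payoff, checked at the pairwise indifference points."""
    R = p @ r
    cands = {0.0, 1.0}
    for i in range(n):
        for j in range(n):
            if R[j] != R[i]:
                a = (c[j] - c[i]) / (R[j] - R[i])
                if 0.0 <= a <= 1.0:
                    cands.add(float(a))
    best = 0.0
    for a in cands:
        u = a * R - c
        if u.max() >= 0:
            i = max(np.flatnonzero(np.isclose(u, u.max())), key=lambda k: R[k])
            best = max(best, (1 - a) * R[i])
    return best

worst = 0.0
for _ in range(trials):
    p = rng.dirichlet(np.ones(m), size=n)                                  # random distributions
    r = np.sort(rng.uniform(0.0, 10.0, size=m))                            # increasing rewards
    c = np.concatenate(([0.0], np.sort(rng.uniform(0.0, 3.0, size=n-1))))  # zero-cost action
    lin = best_linear_payoff(p, r, c)
    if lin > 1e-9:
        worst = max(worst, opt_payoff(p, r, c) / lin)

print(f"largest observed optimal-to-linear ratio over {trials} random instances "
      f"with n = {n} actions: {worst:.2f}")
```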

Tight approximation guarantee in range of expected rewards.

Our second pair of results is parametrized by the range of the expected outcomes, normalized so that R_i ≥ 1 for every action i. Consider bucketing the actions by their expected outcomes into buckets with ranges [1, 2), [2, 4), etc. Let k be the number of non-empty buckets.

Theorem 4.

Consider a principal-agent setting such that for every action , its expected outcome is . The multiplicative loss in the principal’s expected payoff from using a linear contract rather than an arbitrary one is at most .

Corollary 2.

For every range , there is a principal-agent setting with for which , such that the multiplicative loss in the principal’s expected payoff from using a linear contract rather than an arbitrary one is at least

Tight approximation guarantee in range of costs.

Our final pair of results concerns the costs. As in the case of expected outcomes, suppose costs are normalized such that for all , consider bucketing these into buckets etc., and let be the number of non-empty buckets.

Theorem 5.

Consider a principal-agent setting such that for every action , its cost is . The multiplicative loss in the principal’s expected payoff from using a linear contract rather than an arbitrary one is at most .

Corollary 3.

For every range such that , there is a principal-agent setting with for which , such that the multiplicative loss in the principal’s expected payoff from using a linear contract rather than an arbitrary one is at least .

Additional tight bounds and optimality among all monotone contracts.

In Appendix E we strengthen Theorem 3 by showing that it applies even if , thus implying that the approximation ratio as parametrized by the number of outcomes can be unbounded. In Appendix F we show that the approximation guarantee of provided by linear contracts is asymptotically optimal among all monotone contracts.

5.1 Proofs for Selected Approximation Guarantees

To give a gist of our techniques, we now give the proofs of the upper and lower bounds in the number of actions (Theorems 2 and 3), and in the range of rewards (Theorem 4 and Corollary 2).

The upper bound in Theorem 5, the stronger version of Theorem 3 that holds for , and the strengthening of Theorem 3 to monotone contracts pose further technical challenges, and the proofs are deferred to Appendices D, E and F, respectively.

Notation.

Recall that denotes the set of linearly-implementable actions, indexed such that their expected outcomes are increasing, i.e., and . Note that by Assumption A1, , and recall that this does not imply that .

5.2 Proofs of Upper Bounds in Theorem 2 and Theorem 4

The key tools in proving the upper bounds are the following observation and the two lemmas below, which rely on the geometric insights from Section 3.

Observation 3.

Consider two actions such that has higher expected outcome and (weakly) higher welfare, i.e., and . Let . Then

(1)
Proof.

Since we have . Using that we get . So we can write , where the equality follows from our definition of . Hence, , as required. ∎

Below we shall apply Observation 3 to actions . In this context, the intuition behind the observation is as follows: Consider the linear contract with parameter . By Observation 6, in this contract the agent is indifferent among actions and . The left-hand side of (1) is the increase in expected welfare by switching to action from . For the agent to get the same expected utility from and , the principal must get this entire welfare increase as part of her expected payoff. The right-hand side of (1) is the principal’s expected payoff, and so the inequality holds.

The next lemma uses Observation 3 to upper-bound the expected welfare of the -th linearly-implementable action.

Lemma 3.

For every and linearly-implementable action ,

Proof.

The proof is by induction on . For , recall that by definition, and it trivially holds that . Now assume the inequality holds for , i.e., (*). By Corollary 5, the welfare of is at least that of , and we know that . We can thus apply Observation 3 to actions and get (**). Adding inequality (*) to (**) completes the proof for .∎

The next lemma shows an upper bound on the payoff that the principal can achieve with an optimal (unconstrained) contract.

Lemma 4.

Consider a principal-agent setting with linearly-implementable action set . The expected payoff of an optimal (not necessarily linear) contract is at most .

Proof.

In a linear contract with parameter , the agent’s expected utility for any action is its welfare . Thus an action is implemented by such a contract if and only if it maximizes welfare among all actions . By Corollary 4, and so must be the welfare-maximizing action. In every contract, the IR property ensures that the agent’s expected payment covers the cost of the implemented action , and so the principal’s expected payoff is always upper-bounded by . We conclude that , as required. ∎

With Lemma 3 and Lemma 4 at hand it is now relatively straightforward to show the upper bounds.

Proof of Theorem 2.

To prove the approximation guarantee of claimed in the theorem, observe that

(2)

where the first inequality holds by Lemma 4, the second inequality holds by Lemma 3, and the equality holds by Corollary 6.

Proof of Theorem 4.

Recall the bucketing of actions in by their expected outcome. For every bucket , let be the action with the highest in the bucket, and let be the action with the lowest . The bucketing is such that . Since the actions in are ordered by their expected outcome, then and are increasing in , and . For the last bucket we have that , and by Lemma 4 this implies .

Consider the subset of linearly-implementable actions . These will play a similar role in our proof as actions  in the proof of Theorem 2. Let . Observation 3 applies to actions in , and so we can apply a version of Lemma 3 to get

(3)

Our goal is now to upper-bound the right-hand side of (3).

Claim 5.

Proof of Claim 5.

Assume for contradiction that . By definition of we have that . Substituting we get

(4)

Since the expected outcomes of actions in are strictly increasing, it holds that , and so replacing with the larger in (4) gives . By Corollary 6, , and so the right-hand side equals . We conclude that , i.e., in a linear contract with parameter , action has higher expected utility for the agent than action . But by definition, parameter implements action , and so we have reached a contradiction. ∎

Applying Claim 5 to (3) we get the following chain of inequalities:

where the strict inequality follows from .

5.3 Proofs of Lower Bounds in Theorem 3 and Corollary 2

We conclude this section by showing matching lower bounds for the upper bounds established above.

Proof of Theorem 3.

For every , consider a family of principal-agent instances , each with actions and outcomes. For every , the th action has , i.e., deterministically leads to the th outcome . Every principal-agent instance is thus a full information setting in which the outcome indicates the action, and for which MLRP (Definition 8) holds. We define action ’s expectation (equal to outcome ) and its cost recursively:

where and .

We establish several useful facts about instance : For every , it is not hard to verify by induction that

(5)
(6)

Observe that the actions are ordered such that , , and are strictly increasing in . Plugging the values in (5) into we get

(7)

is also strictly increasing in .

Let (resp., ) denote the optimal expected payoff from an arbitrary (resp., linear) contract in the principal-agent setting . In a full information setting, the principal can extract the maximum expected welfare. This can be achieved by paying only for the outcome that indicates the welfare-maximizing action, and only enough to cover its cost. From (6) we thus get

In Lemma 7 in Appendix D, we analyze linear-implementability in the setting , showing that for every . Thus from (7) it follows that

completing the proof. ∎

Proof of Corollary 2.

We use the same construction as in the proof of Theorem 3, but set . Since , it indeed holds that . We know can achieve at least while can’t do better than , completing the proof. ∎

6 Conclusion

One of the major contributions of theoretical computer science to economics has been the use of approximation guarantees to systematically explore complex economic design spaces, and to identify “sweet spots” of the design space where there are plausibly realistic solutions that simultaneously enjoy rigorous performance guarantees. For example, in auction and mechanism design, years of fruitful work by dozens of researchers has clarified the power and limitations of ever-more-complex mechanisms in a wide range of settings. Contract theory presents another huge opportunity for expanding the reach of the theoretical computer science toolbox, and we believe that this paper takes a promising first step in that direction.

Acknowledgments

The authors wish to thank Michal Feldman and S. Matthew Weinberg for enlightening conversations.

This research was supported by the Israel Science Foundation (ISF grant No. 11/633), and has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 740282).

Appendix A Linear Programming Formulation and Implications

The LP for incentivizing action i at minimum expected payment has payment variables t_1, …, t_m, which by limited liability must be nonnegative, and IC constraints ensuring that the agent's expected utility from action i is at least his expected utility from any other action. Note that by Assumption A3, there is no need for an IR constraint to ensure that the expected utility is nonnegative. The LP is:

(8)   min  Σ_j p_{i,j} · t_j
      s.t. Σ_j p_{i,j} · t_j − c_i ≥ Σ_j p_{i',j} · t_j − c_{i'}   for every action i' ≠ i,
           t_j ≥ 0                                                 for every outcome j.

The dual of LP (8) has nonnegative variables λ_{i'}, one for every action i' other than i:

(9)   max  Σ_{i' ≠ i} λ_{i'} · (c_i − c_{i'})
      s.t. Σ_{i' ≠ i} λ_{i'} · (p_{i,j} − p_{i',j}) ≤ p_{i,j}      for every outcome j,
           λ_{i'} ≥ 0                                              for every action i' ≠ i.
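
For completeness, here is a small numerical check (ours, on a hypothetical instance) that LP (8) and its dual (9) attain the same optimal value, as strong LP duality guarantees.

```python
# Sketch: numerical check of strong duality between LP (8) and its dual (9)
# on a hypothetical instance.
import numpy as np
from scipy.optimize import linprog

r = np.array([0.0, 4.0, 10.0])
p = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.3, 0.2],
              [0.1, 0.3, 0.6]])
c = np.array([0.0, 1.0, 2.0])
n, m = p.shape
i = 2                                      # the action we want to incentivize
others = [k for k in range(n) if k != i]

# Primal (8): min p_i . t  s.t.  (p_k - p_i) . t <= c_k - c_i for k != i,  t >= 0.
primal = linprog(c=p[i],
                 A_ub=np.array([p[k] - p[i] for k in others]),
                 b_ub=np.array([c[k] - c[i] for k in others]),
                 bounds=[(0, None)] * m)

# Dual (9): max sum_k lam_k (c_i - c_k)  s.t.  sum_k lam_k (p_ij - p_kj) <= p_ij for all j.
# linprog minimizes, so the objective is negated.
dual = linprog(c=-np.array([c[i] - c[k] for k in others]),
               A_ub=np.array([[p[i, j] - p[k, j] for k in others] for j in range(m)]),
               b_ub=p[i],
               bounds=[(0, None)] * len(others))

print("primal optimum (min expected payment):", round(primal.fun, 4))
print("dual optimum:                         ", round(-dual.fun, 4))
```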