Making an Appraiser Work for You

04/23/2018 ∙ by Shani Alkoby, et al.

In many situations, an uninformed agent (UA) needs to elicit information from an informed agent (IA) who has unique expertise or knowledge related to some opportunity available to the UA. In many of those situations, the correctness of the information cannot be verified by the UA, and therefore it is important to guarantee that the information-elicitation mechanism incentivizes the IA to report truthfully. This paper presents and studies several information-elicitation mechanisms that guarantee truthful reporting, differing in the type of costs the IA incurs in producing and delivering the information. With no such costs, the expense for the UA in eliciting truthful information is positive but arbitrarily small. When information-delivery is costly, the extra expense for the UA (above the unavoidable cost of delivery) is arbitrarily small. Finally, when the information-production is costly, as long as the cost is sufficiently small, truthful information elicitation is possible and the extra expense for the UA (above the unavoidable cost of production) is arbitrarily small.


1. Introduction

Often you own a potentially valuable object, such as an antique, a jewel, a used car or a land-plot, but do not know its exact value and cannot calculate it yourself. There are various scenarios in which you may need to know the exact object value. For example: (a) You intend to sell the object and want to know how much to ask when negotiating with potential buyers. (b) You want to know how much to invest in an insurance policy covering that object. (c) The object is a part of an inheritance you manage on behalf of your co-heirs, and you want to prove to them that you manage it appropriately. (d) The object is a land-plot that might contain oil, and you want to know whether to invest in developing it. (e) You are a firm and are required by law to include the value of assets you own in your periodic report. (f) You are a government auctioning a public asset, and want to publish an accurate value-estimate in order to attract more firms to participate in the auction. Moreover, you may be required by law (or by political pressure from your voters) to obtain and disclose its true value, to avoid accusations of corruption.

A common solution in these situations is to buy the desired information from an agent with expertise in evaluating similar objects (Alkoby et al., 2017; Moldovanu and Tietzel, 1998; Alkoby et al., 2014). Examples of such agents are: antique-shop owners, jewelers, car dealers, or oil firms that own nearby land-plots and can thus drill and estimate the prospects of finding oil in your plot. The problem is that, in many cases, the information is not verifiable: the information buyer (henceforth "the principal") cannot tell if the information received is correct. This results in a strong incentive for the agent to provide an arbitrary value whenever extracting the true value is costly or requires some effort, knowing the principal will not be able to tell the difference. For example, if an antique dealer gives you a low appraisal for an antique object, and you sell it for that low value, you will never know that you were scammed.

Even if the true value can be verified later on (e.g., due to unsuccessful drilling for oil), this might be too late — the damage due to using the wrong value might be irreversible and the agent might be too far away to be punished. Our goal is thus to —

— develop mechanisms that obtain the true value of an object by incentivizing an agent to compute and report it, even when it is costly for the agent, and even when the information is unverifiable by the principal.

The literature on information elicitation usually makes one of two assumptions: either the information is verifiable by the principal, or there are two or more information agents, so that reports can be compared with peer reports (Faltings and Radanovic, 2017). We study a more challenging setting in which the information is unverifiable, and yet there is only a single agent who can provide it. At first glance this seems impossible: how can the agent be incentivized to report truthfully if there is no other source of information for comparison? We overcome this impossibility by allowing, with some small probability, the transfer of the object to the agent for some fee, as an alternative means of compensation (instead of directly paying the agent for the information). This is applicable as long as the agent gains value from owning the object, i.e., he is both capable of evaluating the object and able to benefit from owning it. This is quite common in real life. For example, both the antique-shop owner and the car dealer, who play the role of the agent in the motivating settings above, can provide a true valuation for the car/antique based on their expertise, and can also benefit from owning it (e.g., for resale). Similarly, the oil firm that owns nearby plots has access to relevant information enabling it to calculate the true value of the plot in question, and will also benefit from owning that plot.

We use this principle of selling the object to the agent with a small probability to design a mechanism for eliciting the wanted information. Our analysis proves that the mechanism is truthful whenever the information-computation cost is sufficiently small relative to the object's expected value; the exact threshold depends on the prior distribution of the value. For example, when the object value is distributed uniformly, the mechanism is truthful whenever the computation cost is less than one quarter of the expected object value, which is quite a realistic assumption (Section 3).

While our mechanism allows the principal to learn the true information, this information does not come for free: the principal “pays” for it by the risk of selling the object to the agent for a price lower than its value. Our next goal, then, is to minimize the principal’s loss subject to the requirement of true information elicitation. We show that our mechanism parameters can be tuned such that the principal’s expected loss is only slightly more than its computation cost. This is an optimal guarantee, since the principal could not get the information for less than its computation cost even if she had the required expertise herself (Section 4).

We then show how the mechanism can be augmented to handle several extensions of the basic model: the case in which the object is divisible, the case in which the delivery of information is costly, the case in which the principal and the agent have different valuations for the object, and the case in which the cost the agent incurs when computing the information is unknown to the principal (Section 5).

In addition to our main mechanism, we present two alternative mechanisms that differ in their privacy considerations, in the sequence of roles (in the resulting Stackelberg game), and in the nature of the decisions made by the different players. Interestingly, despite their differences, all three mechanisms are equivalent in terms of the guarantees they provide. This leads us to conjecture that these guarantees are the best possible (Section 6).

Previous related work is surveyed in Section 7. Discussion and suggested extensions for future work are given in Section 8.

2. The Model

There is a principal who needs to know the true value of an object she owns or a business opportunity available to her. The monetary value of this object or opportunity for the principal is denoted by v. While the principal does not know v, she has a prior probability distribution on v, denoted by F, defined over a bounded interval [v_min, v_max] of non-negative values.

The principal can interact with a single agent. Initially, the agent too does not know v and knows only the prior distribution F. However, the agent has a unique ability to compute v, by incurring some cost C. Initially we assume that the cost C is common knowledge; in Section 5.4 we relax this assumption.

The principal's goal is to incentivize the agent to compute and reveal the true value v. However, the principal cannot verify the reported value and has no other sources of information besides the agent, so the incentives cannot depend directly on whether the report is correct.

Similar to the common value setting studied extensively in auction theory (Kagel and Levin, 1986), our model assumes that the value of the object to both the principal and the agent is the same. A realistic example of this setting is when both the principal and the agent are oil firms: the principal owns an oil field but does not know its value, while the agent owns nearby fields and can gain information about the oil field from its nearby drills. In Section 5.3 we relax this assumption and allow the two values to be different.

The mechanism-design space available to the principal includes transferring the object/opportunity to the agent, as well as offering and/or requesting a payment to/from the agent. The principal has the power to commit to the mechanism rules, i.e., the principal is assumed to be truthful. The challenge is to design a mechanism that will incentivize truthfulness on the side of the agent too.

The agent is assumed to be risk-neutral and to have quasi-linear utility. That is, the utility of the agent from getting the object for a certain price is the object's value minus the price paid. If the agent calculates v, then the cost C is subtracted from his utility too.

The primary goal of the principal is to elicit the exact value v from the agent. Subject to this, she wants to minimize her expected loss, defined as the object's value (if it is transferred to the agent) minus the payments received.

3. Truthful Value-Elicitation Mechanism

The mechanism most commonly used in practice for eliciting information is to pay the agent the cost (plus some profit margin) in money. However, when information is not verifiable, monetary payment alone cannot incentivize the agent to actually incur the cost of calculation and report the true value.

Instead, in our mechanism, the principal "pays" the agent by transferring the object to the agent with some small probability. The mechanism guarantees the agent's truthfulness, meaning that, under the right conditions (detailed below), a rational agent will choose to incur all costs related to computing the correct value, and report it truthfully. The mechanism is presented as Mechanism 1. It is parametrized by a small positive constant ε and a probability distribution represented by its cumulative distribution function G.

  1. The principal secretly selects a price p at random, distributed in the following way:

    • With probability ε, p is distributed uniformly in [v_min, v_max];

    • With probability 1 − ε, p is distributed like G.

  2. The agent bids a value b.

  3. The principal reveals p and then:

    • If p ≤ b, the principal gives the object to the agent, and the agent pays p to the principal.

    • If p > b, no transfers nor payments are made.

Mechanism 1 Parameters: ε: a constant, G: a cdf.
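
To make the mechanism concrete, here is a minimal Python sketch of a single round of Mechanism 1. The names (run_mechanism_1, price_sampler, v_max) and the uniform support are illustrative assumptions, not the paper's own code; the sketch only mirrors the three steps listed above.

```python
import random

def run_mechanism_1(agent_bid, price_sampler, epsilon=0.01, v_max=1.0):
    """One round of Mechanism 1 (sketch).

    agent_bid     -- the value b reported by the agent
    price_sampler -- draws a price from the cdf G (the mechanism's parameter)
    epsilon       -- probability of drawing the price uniformly instead of from G
    v_max         -- assumed upper bound of the object's value range
    Returns (object_sold, payment).
    """
    # Step 1: the principal secretly draws the reserve price p.
    if random.random() < epsilon:
        p = random.uniform(0.0, v_max)   # uniform component (for strict truthfulness)
    else:
        p = price_sampler()              # component distributed like G
    # Step 3: transfer the object iff the bid covers the secret price.
    if p <= agent_bid:
        return True, p                   # object sold, agent pays p
    return False, 0.0                    # no transfer, no payment

# Example: G is a point mass at the prior mean (the step-function cdf used later).
if __name__ == "__main__":
    expected_value = 0.5                 # E[v] for an assumed uniform prior on [0, 1]
    sold, payment = run_mechanism_1(agent_bid=0.7,
                                    price_sampler=lambda: expected_value)
    print(sold, payment)
```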

The underlying idea is to make the agent "feel like" a bidder in a Vickrey auction. For the agent, the random price p is just like the second price in a Vickrey auction; therefore, if the agent knows v, it is optimal for him to bid b = v. The challenge is to show that it is optimal for the agent to actually calculate v. This crucially depends on the selection of the cdf G. Below we prove a necessary and sufficient condition for the existence of an appropriate G. We assume throughout the analysis that ε is positive but infinitesimally small (i.e., ε → 0).

Theorem 3. There exists a cdf G with which Mechanism 1 is truthful, if-and-only-if C ≤ E[(v − E[v])+]. One cdf with which the mechanism is truthful in this case is the step function:

G*(x) = 0 for x < E[v],   G*(x) = 1 for x ≥ E[v].   (1)

Before proving the theorem, we illustrate its practical applicability with some examples.

Example. v is distributed uniformly in [0, v_max]. Then E[(v − E[v])+] = v_max/8, so Mechanism 1 is applicable iff C ≤ v_max/8 — the object's appraisal cost should be less than one quarter of the object's expected value E[v] = v_max/2. With the function in (1), Mechanism 1 selects p in the following way: with probability ε, p is selected uniformly at random from [0, v_max]; with probability 1 − ε, p = E[v] = v_max/2.

Example. v has a symmetric triangular distribution in [0, v_max], with mean E[v] = v_max/2. Here, Mechanism 1 is applicable iff C ≤ v_max/12, i.e., the appraisal cost should be less than one sixth of the object's expected value.

Both these conditions are realistic, since usually the cost of appraising an object is at least one order of magnitude less than the object's expected value. For example, used cars usually cost tens of thousands of dollars (even very cheap ones cost at least $2000), and the cost of a pre-purchase car inspection generally ranges from $100 to $200. Similarly, a used engagement ring typically costs thousands of dollars, while hourly rates for diamond-ring appraisals range from $50 to $150.

  • If v is distributed uniformly in [0, v_max], then Mechanism 1 is applicable iff C ≤ v_max/8.

  • If v is a discrete random variable that equals v1 with probability 1 − q and v2 > v1 with probability q, then Mechanism 1 is applicable iff C ≤ q(1 − q)(v2 − v1).

  • If v is distributed exponentially with mean m, then the mechanism is applicable iff C ≤ m/e (where e ≈ 2.718 is Euler's number), which can be quite high: about 37% of the expected value.

It is reasonable to assume that the cost of calculating an object's value is at least one order of magnitude lower than its maximum possible value (for example, the cost of appraising a car is at most several hundreds of dollars while the maximum possible value of a car might be hundreds of thousands). Thus, the mechanism is applicable in the uniform and exponential cases, and also in the discrete case whenever the two possible values are sufficiently far apart.
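
All of the examples above instantiate the same threshold: the largest cost for which a suitable cdf exists, which in the reconstruction used here equals E[(v − E[v])+] under the prior. The following sketch estimates this quantity by Monte Carlo for a few illustrative priors; the specific distributions and parameters are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

def threshold(samples):
    """Monte-Carlo estimate of E[(v - E[v])^+], the largest admissible cost C."""
    m = samples.mean()
    return np.maximum(samples - m, 0.0).mean()

v_max = 1.0
priors = {
    "uniform[0,1]":         rng.uniform(0.0, v_max, N),
    "triangular[0,1]":      rng.triangular(0.0, v_max / 2, v_max, N),
    "two-point {0.2,0.8}":  rng.choice([0.2, 0.8], size=N, p=[0.5, 0.5]),
    "exponential(mean 1)":  rng.exponential(1.0, N),
}

for name, samples in priors.items():
    t = threshold(samples)
    print(f"{name:22s} E[v]={samples.mean():.3f}  max admissible C ~ {t:.3f}")
    # uniform -> ~0.125 (= E[v]/4), triangular -> ~0.083 (= E[v]/6),
    # two-point -> 0.5*0.5*(0.8-0.2) = 0.15, exponential -> ~1/e = 0.368 (= E[v]/e)
```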

Proof of Theorem 3.

The agent has essentially two possible strategies. We call them, following Faltings and Radanovic (2017), "cooperative" and "heuristic":

  • In the cooperative strategy, the agent computes v and uses the result to determine a bid b.

  • In the heuristic strategy, the agent does not compute v, and determines his bid b based only on the prior F.

The agent will use the cooperative strategy iff its expected utility exceeds the expected utility of the heuristic strategy by more than the cost C. Therefore, in the following paragraphs we calculate the expected utility of the agent in each strategy, showing that under the condition given in the theorem there always exists a cdf G for which this holds, and that otherwise no such cdf exists. For the formal analysis we denote by Ḡ the integral of G: Ḡ(x) = ∫ G(t) dt, taken from v_min up to x.

In the cooperative strategy, the agent gets the object iff p ≤ b, and then his utility is v − p. Therefore his expected utility, as a function of his bid b, is:

∫ from v_min to b of (v − p) g(p) dp.

(Footnote 1: We consider only cdfs G that are continuous and differentiable almost everywhere, so the density g = G' is well-defined almost everywhere. In points in which G is discontinuous (i.e., has a jump), g can be defined using Dirac's delta function.)

The integrand is positive iff p < v. Therefore the expression is maximized when b = v, and hence it is a strictly dominant strategy for the agent to bid b = v. We assume that ε ≈ 0 (Footnote 2: If ε = 0, then reporting the true v is only a weakly-dominant strategy: the agent never gains from reporting a false value, but may be indifferent between a false and the true value. For example, if the true value lies below the lowest price in the support of G, then the agent is indifferent between reporting the true value and reporting any other value that is also below the lowest price, since in both cases he loses the object with probability 1. Making ε even slightly above 0 prevents this indifference and makes reporting v strictly better than any other strategy. However, to attain this strict truthfulness, it is sufficient to have ε arbitrarily small. Hence, in the following analysis we assume for simplicity that ε = 0.), so that the agent's gain from bidding v is approximately ∫ from v_min to v of (v − p) g(p) dp. By integrating by parts, one can see that this expression equals Ḡ(v). Hence, before knowing v, the expected utility of the agent from the cooperative strategy, denoted U_coop, is:

U_coop = E[Ḡ(v)],

where E[·] denotes expectation taken over the prior F.

In the heuristic strategy, the agent's expected utility as a function of the bid b is:

∫ from v_min to b of (E[v] − p) g(p) dp.

The integrand is positive iff p < E[v], so it is a strictly dominant strategy for the agent to bid b = E[v]. In this case, his expected gain (again taking ε ≈ 0), denoted U_heur, is:

U_heur = Ḡ(E[v]).

The net utility of the agent from being cooperative rather than heuristic is the difference:

D(G) := U_coop − U_heur = E[Ḡ(v)] − Ḡ(E[v]).   (2)

The mechanism is truthful iff D(G) ≥ C, i.e., iff the net utility of the agent from being cooperative is at least the cost of being cooperative. Therefore, it remains to show that there is a cdf G satisfying D(G) ≥ C, if-and-only-if C ≤ E[(v − E[v])+]. This is equivalent to showing that E[(v − E[v])+] is the maximum possible value of D(G) over all cdfs G. This is a non-trivial maximization problem since we have to maximize over a set of functions. We first present an intuitive solution and then a formal solution.

Intuitively, to maximize D(G) we have to make Ḡ(E[v]) as small as possible, and subject to that, make E[Ḡ(v)] as large as possible. The smallest possible value of Ḡ(E[v]) is 0, so we let Ḡ(E[v]) = 0. Therefore we must have G(x) = 0 for all x < E[v]. Now, to make E[Ḡ(v)] as large as possible, we must let Ḡ increase at the largest possible speed from E[v] onwards; therefore we must make its derivative G as large as possible, so we let G(x) = 1 for all x ≥ E[v]. All in all, the optimal G is the step function G* of (1), which gives D(G*) = E[(v − E[v])+] as claimed.
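
The intuitive argument can be checked numerically: restricting attention to step-function cdfs with threshold x (a deterministic price x), the net benefit of cooperation is D(x) = E[(v − x)+] − (E[v] − x)+, and it should peak at x = E[v]. The sketch below does this for an assumed uniform prior; it is an illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.uniform(0.0, 1.0, 500_000)   # assumed prior on the value
Ev = v.mean()

def D(x):
    """Net benefit of the cooperative over the heuristic strategy when the
    price is fixed at x (a step-function cdf with threshold x)."""
    coop = np.maximum(v - x, 0.0).mean()   # E[(v - x)^+]
    heur = max(Ev - x, 0.0)                # (E[v] - x)^+
    return coop - heur

xs = np.linspace(0.0, 1.0, 1001)
best = max(xs, key=D)
print(f"E[v]     = {Ev:.3f}")
print(f"argmax D = {best:.3f}  (should be close to E[v])")
print(f"max D    = {D(best):.3f} (should be close to E[(v - E[v])^+] = 0.125)")
```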

To prove this formally, we use mathematical tools that have been previously used in the analysis of revenue-maximizing mechanisms (Manelli and Vincent, 2007; Hart and Nisan, 2017). In particular, we use Bauer’s maximization principle:

In a convex and compact set, every linear function has a maximum value, and it is attained in an extreme point of the set — a point that is not the midpoint of any interval contained in the set.

Denote by 𝒢 the set of all cumulative distribution functions with support contained in [v_min, v_max]. 𝒢 is a convex set, since any convex combination of cdfs is also a cdf. Moreover, it is compact with respect to the sup-norm topology (see Manelli and Vincent (2007)). The objective function D is linear. Therefore, to find its maximum value it is sufficient to consider the extreme points of 𝒢. We claim that the only extreme points of 𝒢 are 0-1 step functions — functions G for which G(x) is either 0 or 1 for all x. Indeed, suppose that G is not a step function, so there is some x for which 0 < G(x) < 1. Then one can construct two different elements of 𝒢, by shifting a small amount of probability mass in opposite directions around x, such that G is the midpoint of the segment between them; hence G is not an extreme point of 𝒢.

Therefore, it is sufficient to maximize D on cdfs of the following form, for some parameter x*:

G_x*(x) = 0 for x < x*,   G_x*(x) = 1 for x ≥ x*.

Integrating yields Ḡ_x*(x) = (x − x*)+, so by (2):

D(G_x*) = E[(v − x*)+] − (E[v] − x*)+.

To find the x* that maximizes D(G_x*), we take its derivative with respect to x*:

  • The derivative of the leftmost term, E[(v − x*)+], is −Pr[v > x*], which is always between −1 and 0.

  • The derivative of the rightmost term, (E[v] − x*)+, is −1 when x* < E[v] and 0 when x* > E[v].

Therefore, D(G_x*) is increasing when x* < E[v] and decreasing when x* > E[v]. Therefore its maximum is attained for x* = E[v], and it equals E[(v − E[v])+], as claimed. ∎

4. Minimizing the principal’s loss

The function G* from the proof of Theorem 3 allows the principal to elicit true information for a large range of costs. However, the information does not come for free: the principal "pays" for the information by the possibility of giving away the object. As stated earlier, obtaining the information is mandatory. Hence, the principal naturally seeks to minimize the loss resulting from giving away the object. In other words, from the set of all cdfs with which Mechanism 1 is truthful, the principal would like to choose a single cdf (possibly different from G*) that minimizes her loss.

The principal loses utility whenever the agent gets the object, i.e., whenever the agent's bid is at least the price p. In this case, the principal's net loss is v − p. Therefore, when the agent is cooperative (and ε ≈ 0), the expected loss of the principal, denoted Loss(G), is:

Loss(G) = E[ ∫ from v_min to v of (v − p) g(p) dp ].

A simple calculation shows that this expression equals E[Ḡ(v)], which is exactly U_coop — the utility of the agent from playing the cooperative strategy. This is not surprising, as the agent and the principal are playing a zero-sum game.

To induce cooperative behavior, the net utility D(G) must equal C + δ, for some δ > 0. Therefore the principal's loss must be at least C + δ too. Fortunately, the principal can attain this loss even for δ arbitrarily close to 0. We define:

G_λ := λ·G* + (1 − λ)·H,   where λ = (C + δ)/E[(v − E[v])+] and H is a cdf putting all its mass on prices above v_max.   (3)

With this G_λ, Mechanism 1 selects p as follows. With probability λ, p is selected using the function G* from (1), which guarantees the agent an expected utility of E[(v − E[v])+]. With probability 1 − λ, the principal chooses p so large that the agent never gets the object. Therefore the expected utility of the agent is λ·E[(v − E[v])+] = C + δ. Consequently the loss of the principal is C + δ too. When δ → 0, the principal's loss approaches the theoretic lower bound C — she obtains the information for only slightly more than its computation cost.

The probability that the principal has to sell the object is quite low in realistic settings. For example, consider the case mentioned in Example 3, where v is uniform in [0, v_max]. Suppose the object is a car. Typical values are C = $200 and v_max = $40,000. Here E[(v − E[v])+] = v_max/8 = $5,000, so λ ≈ 200/5,000 = 4%. So the information is computed and delivered truthfully with probability 1, whereas with probability 98% the principal keeps the car (and incurs no loss), and with probability 2% the principal sells the car for its expected value of $20,000 (and loses the difference v − $20,000).
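
A small calculation sketch of the car example, under the assumptions used above: value uniform on [0, $40,000], computation cost C = $200, and the mixed cdf of (3) with δ taken to 0. The 4% mixing probability and the 2% sale probability quoted in the text follow from these assumptions.

```python
# Car example (assumed numbers): v ~ Uniform[0, v_max], computation cost C.
v_max = 40_000.0
C = 200.0

E_v = v_max / 2                      # expected value of the car
threshold = v_max / 8                # E[(v - E[v])^+] for a uniform prior
lam = C / threshold                  # mixing probability lambda (taking delta -> 0)

p_sale = lam * 0.5                   # sale happens iff the G* branch is chosen AND v >= E[v]
expected_loss = lam * threshold      # ~ C; equals the agent's expected utility

print(f"lambda (prob. of using G*)  = {lam:.2%}")               # ~4%
print(f"probability the car is sold = {p_sale:.2%}")            # ~2%
print(f"principal's expected loss   = ${expected_loss:,.0f}")   # ~$200
```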

An interesting special case is C = 0, i.e., the agent already knows v. In this case, the principal does not need to know the prior distribution and can simply use a price concentrated just below the maximum possible value:

G_δ(x) = 0 for x < v_max − δ,   G_δ(x) = 1 for x ≥ v_max − δ.   (4)

If δ > 0, the agent's net utility is positive, so the mechanism is truthful; when δ → 0, the principal's loss approaches 0.

5. Variants and Extensions

The proposed mechanism can be augmented in various ways to support extensions of the underlying model.

5.1. Divisible objects

So far we assumed the object is indivisible, so it is either sold entirely to the agent or not at all. In general the object may be divisible. For example, it may be possible to sell only a part of an oil field, or only sell shares in the field's future profits. In this case, instead of using the function G_λ of (3), we can run Mechanism 1 using the function G* of (1), but if the object needs to be sold in step 3, only a fraction λ of it is actually sold. The analysis of this mechanism is the same: cooperation is still a dominant strategy for the agent whenever C ≤ E[(v − E[v])+], and the principal's loss is still C + δ. The advantage is that a risk-averse agent may prefer to buy a fraction λ of the object with certainty, rather than buy the entire object with probability λ. Moreover, a risk-averse principal may prefer to keep a fraction 1 − λ of the object with certainty, rather than risk selling the entire object with probability λ.

5.2. Cost of information delivery

So far we assumed that the information production is costly, but once the information is available, its delivery is free. In general, the information delivery might also be costly. For example, the agent might have to write a detailed report about how v was calculated, and have the report signed by the firm's accountants. The agent incurs the cost of delivery whether the delivered information is true or false. The principal can handle this case by promising to pay the agent the delivery cost if the agent participates in Mechanism 1. This makes the strategic situation of the agent identical to the situation analyzed in Section 3. Mechanism 1 is still truthful whenever C ≤ E[(v − E[v])+]. The principal's loss is the sum of the production and delivery costs, which is the smallest loss possible, since the agent's expenses must be covered.

5.3. Different values

So far we assumed that the object's value is the same for the principal and the agent. In general the values might be different. For example, suppose the principal is the owner of a used car and the agent is a car dealer. While the dealer certainly has a positive value for owning the car, it may be lower (in case the dealer already has several cars of the same model) or higher (in case the dealer can fix the car at a reduced cost) than its value for the car owner.

Let v_P be the object's value for the principal and v_A its value for the agent. The agent's utility is calculated as before, using v_A instead of v:

U_coop = E[Ḡ(v_A)],   U_heur = Ḡ(E[v_A]).   (5)

Therefore, using a cdf that is a step function at E[v_A], Mechanism 1 is still truthful and elicits v_A, as long as the condition of Theorem 3 holds on v_A, i.e.:

C ≤ E[(v_A − E[v_A])+].

If v_A and v_P are correlated, then the principal can use the knowledge of v_A to gain some knowledge about v_P.

However, in contrast to the common-value setting, the game here is no longer zero-sum — the principal's loss does not equal the agent's utility, so it may be larger or smaller than C. The principal's expected loss is now:

Loss(G) = E[ ∫ from v_min to v_A of (v_P − p) g(p) dp ].   (6)

So, when v_P > v_A the principal's loss is larger than the agent's utility, and when v_P < v_A it is smaller.

Suppose we want the agent's net utility to be at least C + δ, for some δ > 0. Then, the principal's minimization problem is:

minimize   E[ ∫ from v_min to v_A of (v_P − p) g(p) dp ]
subject to   E[Ḡ(v_A)] − Ḡ(E[v_A]) ≥ C + δ.

This is still a problem of minimizing a linear goal over a convex set of functions, so the minimum is still attained in the extreme points of the set. However, finding the extreme points and minimizing over them is much harder. We leave it to future work. (Footnote 3: To get an idea of the loss magnitude, consider a special case in which v_P and v_A are fully correlated: suppose there is a constant r such that v_P = r·v_A. Suppose also, for the sake of the example, that v_A is distributed uniformly in [0, v_max]. Suppose the principal uses Mechanism 1 with the G_λ of (3). Then, using the expressions in the text body, we find that the principal's loss is at most (C + δ)·(3r − 2). So when r = 1 the principal's loss is exactly C + δ (which may be very near C), but when r > 1 the loss is more than C and when r < 1 the loss is less than C, as can be expected. It is interesting that the loss (when the cost is fixed) is a linear function of r. We do not know if this is true in general.)

5.4. Unknown Cost of Computation

So far, we assumed that the principal knows the cost incurred by the agent. This assumption is realistic in many cases. For example, when the object is a car, the mechanic can reveal its condition by running a set of standard checks that consume a known amount of time, so their cost can be reasonably estimated. However, in some cases the cost might be known only to the agent. In this subsection we assume that the principal only knows a prior distribution on the cost C, given by a pdf f_C and a cdf F_C, with bounded support whose maximum is c_max. For simplicity we consider here the common-value setting, v_P = v_A = v.

If the principal must get the information at all costs (e.g., due to regulatory requirements), then she can simply run Mechanism 1 with the cdf of (3), taking C = c_max. This ensures that the agent calculates and reports the true information whenever c_max ≤ E[(v − E[v])+], and the principal's loss is c_max + δ.

However, in some cases the principal might think that c_max is too much to pay for the information. In this case, it is useful for the principal to quantify her utility from obtaining the information. We denote the principal's utility from knowing the information by W, and assume that it is measured in the same monetary units as the loss of Section 4. In other words, the principal's loss is:

  • her expected loss from running the mechanism, as in Section 4 — when she elicits the true value using Mechanism 1 with some cdf G;

  • W — when she does not elicit the true value.

If W < c_max, it is definitely not optimal for the principal to use Mechanism 1 with the cdf of (3) taking C = c_max. What should the principal do in this case?

To gain insight on this situation, we compare it to bilateral trading. In standard bilateral trading, a single consumer wants to buy a physical product from a single producer; in our setting, the principal is the consumer, the agent is the producer, and the “product” is information. This is like bilateral trading, with the additional difficulty that the consumer cannot verify the “product” received.

The case when the production cost is unknown in bilateral trading was studied by Baron and Myerson (1982). They define the virtual cost function of the producer by:

φ(C) = C + F_C(C)/f_C(C)

(it is analogous to the virtual valuation function used in Myerson's optimal auction theory). By Myerson's theorem, the expected loss of the consumer in any truthful mechanism equals the expected virtual cost of the producer in that mechanism. The "loss" of the consumer from not buying the product is her utility from having this product, which we denote by W. Therefore, to minimize her expected loss, the consumer should buy the product if-and-only-if φ(C) ≤ W.

Under standard regularity assumptions on F_C, the virtual cost function φ is increasing in C. In that case, the optimal mechanism for the consumer is to make a take-it-or-leave-it offer to buy the product for a price of:

P* = min(φ⁻¹(W), c_max).   (7)

With this mechanism, the producer agrees to sell the product iff C ≤ P*, which occurs iff φ(C) ≤ W.

We now return to our original setting, in which the "product" is the information about an object's value. We emphasize that there are two values: the value of the object itself for both parties, which we denoted by v, and the value for the principal of knowing v, which we denote here by W. We assume that the object's value v and the cost C of computing it are independent random variables.

Similarly to the setting of Baron and Myerson (1982), the principal has to ensure that the agent sells the information iff φ(C) ≤ W, which happens iff C ≤ P*. Analogously to equation (3), we define:

G_W := λ_W·G* + (1 − λ_W)·H,   where λ_W = P*/E[(v − E[v])+] and H puts all its mass on prices above v_max.

The principal has to run Mechanism 1 using G_W as the cdf. As explained after equation (3), this gives the agent an expected net utility of P*, so the agent will agree to participate in the mechanism iff C ≤ P*. This decision rule of the agent is in fact the one that maximizes the expected utility of the principal.

Example. Suppose the cost C is distributed uniformly in [0, c_max]. Then, the virtual cost function is φ(C) = 2C, so φ⁻¹(W) = W/2 and P* = min(W/2, c_max).

Consider first a physical product. If W ≥ 2·c_max, then the consumer offers c_max, the producer always sells, and the consumer's loss is c_max. If W < 2·c_max, then the consumer offers W/2 and the producer sells only if his cost is less than W/2. This happens with probability W/(2·c_max), so the consumer's expected loss is W − W²/(4·c_max), which is less than W.

Now suppose that the "product" is information about an object's value. Suppose that, a priori, the object's value v is distributed uniformly in [0, v_max]. As calculated in Example 3, in this case E[(v − E[v])+] = v_max/8. We make the realistic assumption that c_max ≤ v_max/8 (the maximum possible cost for appraising an object is less than a quarter of the expected value of the object). Therefore the following expression defines a valid probability:

λ_W := min(W/2, c_max) / (v_max/8).

The principal should run Mechanism 1 with the cdf G_W defined above. The expected net utility of the agent from participating is min(W/2, c_max).

If W ≥ 2·c_max, then min(W/2, c_max) = c_max, so the agent always participates, and the principal always obtains the information for an expected loss of c_max.

If W < 2·c_max, then min(W/2, c_max) = W/2, and it might be higher or lower than the actual cost C. If C ≤ W/2 then the agent participates and the principal obtains the information for an expected loss of W/2; if C > W/2 then the agent refuses to participate and the principal does not obtain the information, so her loss is W. All in all, the principal's expected loss is W − W²/(4·c_max), which is less than W.
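
A sketch of the unknown-cost case with assumed numbers: the cost is uniform on [0, c_max], the value is uniform on [0, v_max], and W denotes the principal's utility from knowing the value. The closed-form losses follow the uniform-cost example above; the particular numbers are illustrative assumptions.

```python
def expected_loss(W, c_max):
    """Principal's expected loss when the computation cost is Uniform[0, c_max]
    and her utility from knowing the value is W (delta taken to 0)."""
    if W >= 2 * c_max:
        # Net utility c_max offered to the agent: he always participates.
        return c_max
    # Net utility W/2 offered: the agent participates only if his cost C <= W/2.
    p_participate = (W / 2) / c_max
    return p_participate * (W / 2) + (1 - p_participate) * W   # = W - W**2 / (4 * c_max)

# Assumed numbers for illustration.
c_max, v_max = 300.0, 40_000.0
assert c_max <= v_max / 8            # validity condition for a uniform value prior

for W in (200.0, 500.0, 1_000.0):
    print(f"W = {W:7.0f}  ->  expected loss = {expected_loss(W, c_max):.1f}")
```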

6. Alternative Mechanisms

In addition to Mechanism 1, we developed two alternative mechanisms for solving the same problem — eliciting a true value from a single information agent.

In Mechanism 2 below, the price of the object is not determined by the principal but rather calculated as a function of the agent’s bid, similarly to a first-price auction.

  1. The agent bids a value b.

  2. With probability G(b), the agent buys the object for the price

    b − Ḡ(b)/G(b),

    where Ḡ is the integral of G: Ḡ(x) = ∫ G(t) dt, taken from v_min up to x.

Mechanism 2 Parameters: ε: a constant, G: a cdf.
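
Below is a minimal Python sketch of Mechanism 2, with the price rule as reconstructed above: an agent who bids b buys the object with probability G(b) and pays b − Ḡ(b)/G(b), the expected Mechanism-1 price conditioned on it being at most b. The function names and the example cdf are illustrative assumptions, not the paper's code.

```python
import numpy as np

def mechanism_2(bid, G, G_bar, rng=np.random.default_rng()):
    """One round of Mechanism 2 (sketch).

    G     -- cdf of the price distribution, G(x)
    G_bar -- its integral, G_bar(x) = int_0^x G(t) dt
    Returns (object_sold, payment)."""
    q = G(bid)
    if q <= 0.0:
        return False, 0.0
    price = bid - G_bar(bid) / q        # reconstructed price rule (see text)
    sold = rng.random() < q             # transparent lottery with probability G(b)
    return (True, price) if sold else (False, 0.0)

# Example with G uniform on [0, 1]: G(x) = x, G_bar(x) = x^2 / 2.
if __name__ == "__main__":
    sold, pay = mechanism_2(bid=0.7,
                            G=lambda x: min(max(x, 0.0), 1.0),
                            G_bar=lambda x: min(max(x, 0.0), 1.0) ** 2 / 2)
    print(sold, pay)   # if sold, the payment is 0.7 - 0.245/0.7 = 0.35
```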

In Mechanism 3 below, there is no bid at all — the principal publicly posts a price P and the agent decides whether to buy the object at this price or not.

  1. The principal publicly posts the price P.

  2. The principal asks the agent whether he wants to buy the object for P or not.

  3. If the agent says yes, then with probability q he buys the object from the principal for P. Otherwise the object is not sold.

  4. If the object is not sold in step 3, then the principal runs Mechanism 1 with the function of equation (4).

Mechanism 3 Parameters: P — a constant, q — a probability.

The three mechanisms are apparently different in various aspects, such as the role of the different players in the underlying Stackelberg game (leader vs. follower), and whether or not there is a requirement for secrecy (Mechanism 1 requires keeping the random price p secret until after the bid, while Mechanism 2 needs no secrecy). Interestingly, they are equivalent in the conditions they impose on the cost C and in the principal's loss. We present a proof sketch below. Note that we consider the general case of different valuations of the agent and the principal (as in Subsection 5.3) — the agent's value is v_A and the principal's value is v_P.

In Mechanism 2, the agent's expected utility for bidding b is:

G(b)·(v_A − b) + Ḡ(b).

The agent calculates the optimal bid by solving an optimization problem. The derivative w.r.t. b is:

g(b)·(v_A − b).

Since G is an increasing function, g is non-negative, so this expression is positive when b < v_A and negative when b > v_A. Therefore the expected utility of the agent is maximized by bidding b = v_A. In this case his expected utility is:

Ḡ(v_A).

Similarly, when the agent does not compute v_A, his utility is optimized by bidding b = E[v_A], which gives him a utility of:

Ḡ(E[v_A]).

These utilities are exactly as in Mechanism 1 and Equation (5). Therefore, Theorem 3 is valid as-is for Mechanism 2, and the mechanism is applicable iff C ≤ E[(v_A − E[v_A])+], using the same cdf G* of (1).

Moreover, the principal's loss is exactly as in equation (6). Therefore the principal has to solve the same optimization problem for minimizing the loss, and the minimal loss is the same. In particular, in the common-value setting v_P = v_A, the principal's loss can be made arbitrarily close to the information-production cost C.

In Mechanism 3, in step 2, the agent has to decide whether to buy the object or not. Calculating the true value v_A may help the agent decide:

  • If the agent calculates v_A, he buys the object (with probability q) iff v_A > P, so his expected utility is q·E[(v_A − P)+] − C.

  • Otherwise, he buys the object iff E[v_A] > P, so his expected utility is q·(E[v_A] − P)+.

If the agent decides to calculate v_A and the object is not sold in step 3, then in step 4 the situation is similar to the case mentioned at the end of Section 4 — the agent already knows the information, so the cost of calculating it now is 0. Therefore, at step 4 the principal elicits the true information for almost zero additional loss. Define the following function (depending on the mechanism parameters P and q), together with its integral Ḡ as before:

G(x) := 0 for x < P,   G(x) := q for x ≥ P.

With this definition, the agent's utilities when computing / not computing v_A are exactly as in equation (5), and the principal's loss in step 2 is the same as in equation (6). So Theorem 3 is valid, and the mechanism is truthful iff C ≤ E[(v_A − E[v_A])+], by taking P = E[v_A] and q = (C + δ)/E[(v_A − E[v_A])+].

Additionally, Mechanism 1 itself can be extended by adding a probability of sale — before actually selling the object to the agent for the price p, the principal tosses a biased coin with success probability q (where q is a fixed parameter), and makes the sale only in case of success. This extension is actually already supported by the current mechanism: for any cdf G and parameter q, we can create a new cdf by putting a probability mass of 1 − q on values larger than v_max and a probability mass of q on the original G. This attains exactly the same effect as a sale with probability q, since with probability 1 − q, the price will be so high that the object will never be sold. Hence, Theorem 3 applies to this extension too, so even with this generalization, the mechanism works iff C ≤ E[(v − E[v])+].

The fact that several different mechanisms lead to the same applicability condition and loss expressions leads us to conjecture that these results hold universally.

Conjecture.

(a) There exists a mechanism for truthfully eliciting a single agent's value v_A, if-and-only-if:

C ≤ E[(v_A − E[v_A])+].

(b) In any mechanism for truthfully eliciting v_A from a single agent, the principal's loss is at least:

min over G of E[ ∫ from v_min to v_A of (v_P − p) g(p) dp ],

where the minimization is over all cumulative distribution functions G satisfying E[Ḡ(v_A)] − Ḡ(E[v_A]) ≥ C.

While our main goal in presenting three inherently different mechanisms is primarily theoretical (supporting our conjecture), there are practical reasons to prefer some of them in specific cases. For example, an advantage of Mechanism 2 is that it does not require a mediator. Mechanism 1 requires a mediator to keep the reservation value p secret and reveal it only after the agent's bid (and this recurs in Mechanism 3, since it requires the principal to also run Mechanism 1 in its entirety). The mediator might collude with the agent and reveal the reservation value to him before the bid, allowing him to bid just above it and win the object without providing any real information. The mediator might also collude with the principal and reveal a false reservation value after hearing the bid b. In Mechanism 2 no such problems arise.

Additionally, Mechanism 1 requires the agent to believe that the principal really draws the price from the advertised distribution (or, alternatively, to use a third party for running the lottery). In Mechanism 2 the lottery is much simpler: a probability G(b) is calculated in a transparent manner, and the object is sold with this probability. Such a lottery can be carried out transparently in front of the agent, so no trust is required.

On the other hand, Mechanism 1 has the advantage that the optimal strategy of bidding the true value is more intuitive. In Mechanism 2, once the agent knows his value v_A, he needs to solve a complex optimization problem in order to calculate the optimal bid and realize that it equals v_A. In contrast, in Mechanism 1, once the agent knows v_A, it is easy to realize that it is optimal to bid v_A: by bidding higher he might buy the object at a price higher than its value, and by bidding lower he might miss buying the object at a price lower than its value.

7. Related Work

Mechanisms by which an uninformed agent tries to elicit information from an informed agent are as old as King Solomon's judgment (I Kings 3). About two centuries ago, the German poet Goethe invented a mechanism for eliciting the value of his new book from his publisher (Moldovanu and Tietzel, 1998), but without considering the computation cost.

Various new mechanisms for truthful information elicitation have been studied in recent years, including mechanisms based on proper scoring rules (Armantier and Treich, 2013; Hossain and Okui, 2013), the Bayesian truth serum (Prelec, 2004; Barrage and Lee, 2010; Weaver and Prelec, 2013) and its variants (Offerman et al., 2009; Witkowski and Parkes, 2012), the peer truth serum (Radanovic et al., 2016), the ESP game, credit-based mechanisms (Hajaj et al., 2015), and similar output-agreement mechanisms (Waggoner and Chen, 2013). See also Kong and Schoenebeck (2019) for a recent unifying framework for several different kinds of mechanisms. Faltings and Radanovic (2017) provide a comprehensive survey of such mechanisms in the computer science literature. They classify them into two categories:

The principle underlying all truthful mechanisms is to reward reports according to consistency with a reference. (1) In the case of verifiable information, this reference is taken from the ground truth as it will eventually be available. (2) In the case of unverifiable information, it will be constructed from peer reports provided by other agents.

The present paper provides a third category: the information is unverifiable, and yet there is a single agent to elicit it from. (Footnote 4: When there are many agents, our problem becomes easier. For example, with three agents the following mechanism is possible: (a) Offer each agent to sell you the information for a price slightly above the computation cost. (b) Collect the reports of all agreeing agents. (c) If one report is not identical to at least one other report, then file a complaint against this agent and send her to jail. This creates a coordination game in which the focal point is to reveal the true value, as in the ESP game. In our setting there is a single agent, so this trick is not possible.)

Interactions between an informed agent and an uninformed principal have also been studied extensively in economics. A typical setting is that the agent is a seller holding an object and the principal is a buyer wanting that object (contrary to our setting, where the principal is the object owner). In some settings, the agent is a manager of a firm and the principal is a potential investor. Common to all cases is that the agent holds information that may affect the utility of the principal, and the question is if and how the agent can be induced to disclose this information.

The seminal work of Akerlof (1970) shows that, when information is not verifiable and is not guaranteed to be correct (as in our setting), the incentive of the agent to provide false information might lead to complete market failure. In contrast, if the information is ex-post verifiable (i.e., the agent can hide information but cannot present false information), then market forces may be sufficient to push the agent to voluntarily disclose his information (Grossman and Hart, 1980; Grossman, 1981; Hajaj and Sarne, 2017). Mechanisms for information elicitation have been developed for settings where the information is verifiable (Hart et al., 2016), partially verifiable (Green and Laffont, 1986; Glazer and Rubinstein, 2004) or verifiable at a cost (Ben-Porath et al., 2014; Moscarini and Smith, 2001; Wiegmann et al., 2010; Emek et al., 2014).

An additional line of research assumes that the information is unverifiable; however, if it is purchased, it is always correct. Moreover, the goal in that line of work is to maximize the agent's revenue rather than to minimize the principal's loss (Babaioff et al., 2012; Alkoby and Sarne, 2017; Alkoby et al., 2015; Sarne et al., 2014; Alkoby and Sarne, 2015).

Our work is also related to contract theory (Bolton et al., 2005), in which a principal tries to incentivize an agent to choose an action that is favorable to the principal. There, while the principal cannot observe the agent’s action, she can observe the (probabilistic) outcome of his action. In contrast, in our setting the principal has no way of knowing whether or not the agent calculated the true information.

Our work is motivated by government auctions for oil and gas fields. A lease for mining oil/gas is put to a first-price sealed-bid auction. One of the participating firms owns a nearby plot and can, by drilling in its own plot, compute relevant information about the potential value of the auctioned plot. Hendricks et al. (1994) show that, in equilibrium, the informed firm underbids and gains an information rent, while the uninformed firms have zero expected profit. Porter (1995) provides empirical evidence supporting this conclusion from almost 40 years of auctions by the US government; it indicates that information asymmetry causes the government to lose about 30% of the potential revenue. As a solution, Hendricks et al. (1993) suggest excluding the informed firm from the auction and inducing it to reveal the information by promising it a fixed percentage of the auction revenues. However, they note that in practice it may be impossible to exclude a firm from a government auction. Our mechanism provides a different solution: the government (the principal) can use our mechanism to elicit the information from the informed firm (the agent). Then it can release the information to the other firms and thereby remove the information asymmetry.

The decision rule in our Theorem 3 is somewhat similar to the ones used in optimal stopping problems, e.g., the one derived by Weitzman (1979) for Pandora's Box problem. While the latter considers a single player and has no strategic aspect, our model considers a strategic setting. Still, the essence of the decision is similar, as it considers the marginal expected profit from knowing the true value (as opposed to acting based on the best value found so far in Pandora's problem, or on the expected value in our case).

Mechanism 1 uses the reservation-price concept, which can be found in the literature on auctions where agents have to incur a cost for learning their own value. For example, Hausch and Li (1993) discuss an auction for a single item with a common value. Each bidder incurs a cost for participating in the auction, and an additional cost for estimating the value of the item. Persico (2000) discusses an extended model where the bidders can pay to make their estimate of the value more informative.

8. Discussion and Future Work

Information-providers are now ubiquitous, enabling people and agents to acquire information of various sorts. As self-interested agents, information providers typically seek to maximize their revenue. This is where the failure of direct payment becomes apparent, especially when the information provided is non-verifiable. The importance of the mechanisms provided and analyzed in the paper is therefore in their guarantee for truthfulness in the information elicitation process.

An important challenge for future work is to study the theoretical limitations of the setting studied in this paper — a single information agent and unverifiable information. In particular, we conjecture that any truthful mechanism for this setting must sell the object with a positive probability, although we have not yet proved this formally. Settling the conjecture in Section 6 is an interesting challenge too. Some other directions for future work are:

Unknown distribution of value

Mechanism 1 requires calculating E[v], which requires knowledge of the prior distribution of v. When the distribution is not known, truthfulness can be guaranteed only when C = 0, since in this case the price cdf can be chosen independently of the distribution (see the end of Section 4). It is interesting whether true information can be elicited by a prior-free mechanism also for C > 0.

Risk-averse agents

The current model assumes that the agent is risk-neutral, so that his utility from a random mechanism is the expectation of its value. It is interesting to check what happens in the common case in which the agent is risk-averse. Suppose there is an increasing function u that maps the agent's monetary gain to his utility, with u(0) = 0 (no gain means no utility) and u'(x) > 0 for all x (more gain always means more utility). When the agent is risk-neutral (as we assumed so far), u' is constant; when the agent is risk-averse, u' is decreasing. Then, when the agent is cooperative and computes v, he gets the object iff p ≤ b, and then his utility is u(v − p). Therefore his expected utility, as a function of his bid b, is:

∫ from v_min to b of u(v − p) g(p) dp.

Since u(x) is positive iff x is positive, the integrand is positive iff p < v. Therefore the expression is maximized when b = v, and hence it is still a strictly dominant strategy for the agent to bid v. This is encouraging, since it means that at least the second part of Mechanism 1 (revealing the value after it is computed) remains truthful regardless of the agent's risk attitude. However, computing the agent's utility in the cooperative vs. the heuristic strategy is much harder.
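
The claim that bidding the true value remains dominant for a risk-averse agent can be sanity-checked numerically. The sketch below uses an assumed concave utility u(x) = 1 − exp(−x) (so u(0) = 0 and u is increasing with decreasing marginal utility) and an assumed uniform price distribution, and confirms that the agent's expected utility, as a function of his bid, peaks at the true value.

```python
import numpy as np

def expected_utility(bid, v, u, n_grid=10_000, v_max=1.0):
    """E[u(v - p) * 1{p <= bid}] for a price p uniform on [0, v_max] (assumed)."""
    p = np.linspace(0.0, v_max, n_grid)
    gains = np.where(p <= bid, u(v - p), 0.0)
    return gains.mean()

u = lambda x: 1.0 - np.exp(-x)       # concave, increasing, u(0) = 0 (risk-averse)
v_true = 0.6                         # the value the agent computed

bids = np.linspace(0.0, 1.0, 501)
best_bid = max(bids, key=lambda b: expected_utility(b, v_true, u))
print(f"best bid = {best_bid:.3f}  (should be close to the true value {v_true})")
```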

Different effort levels

In our setting, the agent has only two options: either calculate the accurate value, or not calculate it at all. When the cost C exceeds the net benefit of calculating (the difference in equation (2)), it is optimal for the agent not to calculate the value at all. Then, it is optimal for the agent to bid E[v]. It is never optimal for the agent to bid any other inaccurate value.

In more realistic settings, the agent may have three or more options. For example, it is possible that the agent can, by incurring a lower cost, get an inaccurate estimate of v (e.g., the agent learns some value x such that the true value is distributed uniformly in an interval of known width around x). Then, the analysis of the agent's behavior becomes more complex, since there are more paths by which the agent may arrive at the true value: he may decide to incur the full cost C from the start, or incur only the lower cost and, after observing the result, decide whether to incur the additional cost needed to compute the exact value. The principal's goal is to learn the true value at the end, regardless of how many intermediate calculations are done by the agent. It is interesting to characterize the mechanisms that let the principal attain this goal.

9. Acknowledgments

This paper benefited a lot from discussions with the participants of the industrial engineering seminar in Ariel University, the game theory seminar in Bar Ilan University, the game theory seminar in the Hebrew University of Jerusalem and the Israeli artificial intelligence day. We are particularly grateful to Sergiu Hart and Igal Milchtaich for their helpful mathematical ideas. This research was partially supported by the Israel Science Foundation (grant No. 1162/17).

References

  • Akerlof (1970) George A Akerlof. 1970. The market for "lemons": Quality uncertainty and the market mechanism. The quarterly journal of economics (1970), 488–500.
  • Alkoby and Sarne (2015) Shani Alkoby and David Sarne. 2015. Strategic Free Information Disclosure for a Vickrey Auction. In International Workshop on Agent-Mediated Electronic Commerce and Trading Agents Design and Analysis. Springer, 1–18.
  • Alkoby and Sarne (2017) Shani Alkoby and David Sarne. 2017. The Benefit in Free Information Disclosure When Selling Information to People.. In AAAI. 985–992.
  • Alkoby et al. (2015) Shani Alkoby, David Sarne, and Sanmay Das. 2015. Strategic free information disclosure for search-based information platforms. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 635–643.
  • Alkoby et al. (2014) Shani Alkoby, David Sarne, and Esther David. 2014. Manipulating information providers access to information in auctions. In Technologies and Applications of Artificial Intelligence. Springer, 14–25.
  • Alkoby et al. (2017) Shani Alkoby, David Sarne, and Igal Milchtaich. 2017. Strategic Signaling and Free Information Disclosure in Auctions.. In AAAI. 319–327.
  • Armantier and Treich (2013) Olivier Armantier and Nicolas Treich. 2013. Eliciting beliefs: Proper scoring rules, incentives, stakes and hedging. European Economic Review 62 (2013), 17 – 40. https://doi.org/10.1016/j.euroecorev.2013.03.008
  • Babaioff et al. (2012) Moshe Babaioff, Robert Kleinberg, and Renato Paes Leme. 2012. Optimal mechanisms for selling information. In Proceedings of the 13th ACM Conference on Electronic Commerce. ACM, 92–109.
  • Baron and Myerson (1982) David P. Baron and Roger B. Myerson. 1982. Regulating a Monopolist with Unknown Costs. Econometrica (1982), 911–930. https://doi.org/10.2307/1912769
  • Barrage and Lee (2010) Lint Barrage and Min Sok Lee. 2010. A penny for your thoughts: Inducing truth-telling in stated preference elicitation. Economics letters 106, 2 (2010), 140–142.
  • Ben-Porath et al. (2014) Elchanan Ben-Porath, Eddie Dekel, and Barton L Lipman. 2014. Optimal allocation with costly verification. The American Economic Review 104, 12 (2014), 3779–3813.
  • Bolton et al. (2005) Patrick Bolton, Mathias Dewatripont, et al. 2005. Contract theory. MIT press.
  • Emek et al. (2014) Yuval Emek, Michal Feldman, Iftah Gamzu, Renato Paes Leme, and Moshe Tennenholtz. 2014. Signaling schemes for revenue maximization. ACM Transactions on Economics and Computation 2, 2 (2014), 5.
  • Faltings and Radanovic (2017) Boi Faltings and Goran Radanovic. 2017. Game Theory for Data Science: Eliciting Truthful Information. Synthesis Lectures on Artificial Intelligence and Machine Learning 11, 2 (2017), 1–151.
  • Glazer and Rubinstein (2004) Jacob Glazer and Ariel Rubinstein. 2004. On optimal rules of persuasion. Econometrica 72, 6 (2004), 1715–1736.
  • Green and Laffont (1986) Jerry R Green and Jean-Jacques Laffont. 1986. Partially verifiable information and mechanism design. The Review of Economic Studies 53, 3 (1986), 447–456.
  • Grossman (1981) Sanford J Grossman. 1981. The informational role of warranties and private disclosure about product quality. The Journal of Law and Economics 24, 3 (1981), 461–483.
  • Grossman and Hart (1980) Sanford J Grossman and Oliver D Hart. 1980. Disclosure laws and takeover bids. The Journal of Finance 35, 2 (1980), 323–334.
  • Hajaj et al. (2015) Chen Hajaj, John P Dickerson, Avinatan Hassidim, Tuomas Sandholm, and David Sarne. 2015. Strategy-proof and efficient kidney exchange using a credit mechanism. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
  • Hajaj and Sarne (2017) Chen Hajaj and David Sarne. 2017. Selective opportunity disclosure at the service of strategic information platforms. Autonomous Agents and Multi-Agent Systems 31, 5 (2017), 1133–1164.
  • Hart et al. (2016) Sergiu Hart, Ilan Kremer, and Motty Perry. 2016. Evidence Games with Randomized Rewards. Technical Report. working paper.
  • Hart and Nisan (2017) Sergiu Hart and Noam Nisan. 2017. Approximate revenue maximization with multiple items. Journal of Economic Theory 172 (2017), 313–347.
  • Hausch and Li (1993) Donald B Hausch and Lode Li. 1993. A common value auction model with endogenous entry and information acquisition. Economic Theory 3, 2 (1993), 315–334.
  • Hendricks et al. (1993) Kenneth Hendricks, Robert H Porter, and Guofu Tan. 1993. Optimal selling strategies for oil and gas leases with an informed buyer. The American Economic Review 83, 2 (1993), 234–239.
  • Hendricks et al. (1994) Kenneth Hendricks, Robert H Porter, and Charles A Wilson. 1994. Auctions for oil and gas leases with an informed bidder and a random reservation price. Econometrica: Journal of the Econometric Society (1994), 1415–1444.
  • Hossain and Okui (2013) Tanjim Hossain and Ryo Okui. 2013. The binarized scoring rule. Review of Economic Studies 80, 3 (2013), 984–1001.
  • Kagel and Levin (1986) John H Kagel and Dan Levin. 1986. The winner’s curse and public information in common value auctions. The American economic review (1986), 894–920.
  • Kong and Schoenebeck (2019) Yuqing Kong and Grant Schoenebeck. 2019. An Information Theoretic Framework For Designing Information Elicitation Mechanisms That Reward Truth-telling. ACM Trans. Econ. Comput. 7, 1, Article 2 (Jan. 2019), 33 pages. https://doi.org/10.1145/3296670
  • Manelli and Vincent (2007) Alejandro M Manelli and Daniel R Vincent. 2007. Multidimensional mechanism design: Revenue maximization and the multiple-good monopoly. Journal of Economic theory 137, 1 (2007), 153–185.
  • Moldovanu and Tietzel (1998) Benny Moldovanu and Manfred Tietzel. 1998. Goethe’s Second‐Price Auction. Journal of Political Economy 106, 4 (1 Aug. 1998), 854–859. https://doi.org/10.1086/250032
  • Moscarini and Smith (2001) Giuseppe Moscarini and Lones Smith. 2001. The optimal level of experimentation. Econometrica 69, 6 (2001), 1629–1644.
  • Offerman et al. (2009) Theo Offerman, Joep Sonnemans, Gijs Van de Kuilen, and Peter P Wakker. 2009. A Truth Serum for non-Bayesians: Correcting Proper Scoring Rules for Risk Attitudes. The Review of Economic Studies 76, 4 (2009), 1461–1489.
  • Persico (2000) Nicola Persico. 2000. Information acquisition in auctions. Econometrica 68, 1 (2000), 135–148.
  • Porter (1995) Robert H Porter. 1995. The Role of Information in US Offshore Oil and Gas Lease Auctions. Econometrica (1995), 1–27.
  • Prelec (2004) Dražen Prelec. 2004. A Bayesian Truth Serum for Subjective Data. Science 306, 5695 (2004), 462–466.
  • Radanovic et al. (2016) Goran Radanovic, Boi Faltings, and Radu Jurca. 2016. Incentives for Effort in Crowdsourcing Using the Peer Truth Serum. ACM Trans. Intell. Syst. Technol. 7, 4, Article 48 (March 2016), 28 pages. https://doi.org/10.1145/2856102
  • Sarne et al. (2014) David Sarne, Shani Alkoby, and Esther David. 2014. On the choice of obtaining and disclosing the common value in auctions. Artificial Intelligence 215 (2014), 24–54.
  • Waggoner and Chen (2013) Bo Waggoner and Yiling Chen. 2013. Information Elicitation Sans Verification. In Proceedings of the 3rd Workshop on Social Computing and User Generated Content (SC13).
  • Weaver and Prelec (2013) Ray Weaver and Drazen Prelec. 2013. Creating Truth-telling Incentives with the Bayesian Truth Serum. Journal of Marketing Research 50, 3 (2013), 289–302.
  • Weitzman (1979) Martin L Weitzman. 1979. Optimal search for the best alternative. Econometrica: Journal of the Econometric Society (1979), 641–654.
  • Wiegmann et al. (2010) Daniel D Wiegmann, Kelly L Weinersmith, and Steven M Seubert. 2010. Multi-attribute mate choice decisions and uncertainty in the decision process: a generalized sequential search strategy. Journal of mathematical biology 60, 4 (2010), 543–572.
  • Witkowski and Parkes (2012) Jens Witkowski and David C Parkes. 2012. A Robust Bayesian Truth Serum for Small Populations. In AAAI.