
Algorithmic Price Discrimination

12/12/2019
by   Rachel Cummings, et al.

We consider a generalization of the third degree price discrimination problem studied in Bergemann et al. (2015), where an intermediary between the buyer and the seller can design market segments to maximize any linear combination of consumer surplus and seller revenue. Unlike in Bergemann et al. (2015), we assume that the intermediary only has partial information about the buyer's value. We consider three different models of information, in increasing order of difficulty. In the first model, we assume that the intermediary's information allows him to construct a probability distribution of the buyer's value. Next we consider the sample complexity model, where we assume that the intermediary only sees samples from this distribution. Finally, we consider a bandit online learning model, where the intermediary can only observe past purchasing decisions of the buyer, rather than her exact value. For each of these models, we present algorithms to compute optimal or near-optimal market segmentations.


1 Introduction

Third degree price discrimination occurs when a seller uses auxiliary information about buyers to offer different prices to different populations, e.g., student and senior discounts for movie tickets. A modern version of this arises in the context of online platforms that match sellers and buyers. Here an intermediary observes information about buyers and may pass on some of this information to the seller to help him price discriminate. One natural example where price discrimination could be (and often is) used in practice is an ad exchange, which matches buyers and sellers of online ad impressions. Here a buyer is an advertiser, a seller is a publisher, and each impression is sold via an auction in which the seller sets a reserve price. The ad exchange commonly has additional data about the user viewing the impression or about the buyers, and could share some of this data with the seller before he sets the reserve price.

The seminal work of Bergemann et al. (2015) shows the following surprising result in such a setting. Usually, there is a tradeoff between social welfare, which is the value generated by the sale, and seller revenue. Seller revenue is maximized by setting an appropriate price. Social welfare is maximized by selling the item to the buyer as long as his value for the item is positive, but this generates 0 revenue for the seller. Almost magically, Bergemann et al. (2015) show that an intermediary can segment the market such that it not only maximizes social welfare, but also guarantees that the seller revenue doesn't change in the process. This shows that price discrimination can be used to benefit the customer, contrary to the belief that it exploits the customer, thus making it palatable.

While this is a strong result, it requires that the intermediary knows the buyer’s exact value, which is a very strong assumption, and is often not satisfied in practice. What is more reasonable is that the intermediary can estimate a personalized probability distribution once the buyer is seen. For instance, if the intermediary observes that the buyer is a student, it may estimate a lower willingness-to-pay, but is unlikely to know the buyer’s exact value. Realistically, the intermediary may wish to use machine learning techniques to estimate the personalized probability distribution for a new buyer based upon their observed characteristics and past market data.

In this paper, we analyze the power of third degree price discrimination in this setting where the intermediary has only a noisy signal of a buyer’s value.

1.1 Model and Results

The seller sells a single item, and there is a single buyer. We consider value distributions with finite support. We assume that the intermediary observes finitely many types of buyers; each type is associated with a different distribution over the values. We denote the set of values by V, the set of types by T, the distribution over values given a type t by D_t, and the distribution over types by μ. Given this, the mechanism proceeds as follows; it is illustrated in Figure 1.

  1. A segmentation is a pair (S, σ) of a segment set S and a segment map σ : T → Δ(S), where Δ(·) denotes the set of all probability distributions over a given domain. Once the intermediary decides on a segmentation, it is revealed to the seller.

  2. When a buyer arrives, her type t and value v are drawn from the prior distribution. The intermediary observes only her type t but not the value v.

  3. The intermediary draws a segment s from the distribution σ(t) and reveals it to the seller.

  4. On observing a segment s, the seller posts the monopoly price p_s for the value distribution conditioned on observing s.

  5. The buyer buys the item if and only if her value v ≥ p_s.

The model in Bergemann et al. (2015) is the special case where the type set is identical to the value set, and the distributions are point masses.

Figure 1: Timeline of a single round.
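As a concrete illustration, one round of the timeline above can be simulated as follows. This is a minimal sketch: the toy numbers, the mixture-with-uniform noise model, and all function names are our own illustrative assumptions, not the paper's notation.

```python
import random

# Toy instance (an assumption for illustration): values = types = {1, 2, 3},
# uniform prior over types; type t's value distribution puts probability
# 1 - eps on value t and spreads the remaining eps uniformly.
eps = 0.3
VALUES = TYPES = [1, 2, 3]
D = {t: {v: (1 - eps) * (v == t) + eps / 3 for v in VALUES} for t in TYPES}
mu = {t: 1 / 3 for t in TYPES}

def monopoly_price(dist):
    # Lowest revenue-maximizing price (ties broken in the buyer's favor).
    def revenue(p):
        return p * sum(q for v, q in dist.items() if v >= p)
    return max(sorted(dist), key=revenue)

# A trivial full-disclosure segmentation: the segment label is the type.
sigma = {t: {t: 1.0} for t in TYPES}

def posterior(s):
    # Value distribution conditioned on the seller observing segment s.
    w = {t: mu[t] * sigma[t].get(s, 0.0) for t in TYPES}
    z = sum(w.values())
    return {v: sum(w[t] * D[t][v] for t in TYPES) / z for v in VALUES}

def one_round(rng):
    t = rng.choices(TYPES, weights=[mu[u] for u in TYPES])[0]   # step 2
    v = rng.choices(VALUES, weights=[D[t][u] for u in VALUES])[0]
    s = rng.choices(list(sigma[t]), weights=list(sigma[t].values()))[0]
    p = monopoly_price(posterior(s))                            # step 4
    sold = v >= p                                               # step 5
    return sold, (p if sold else 0), (v - p if sold else 0)

sold, rev, surplus = one_round(random.Random(0))
```

With a different `sigma`, the same loop evaluates any other segmentation.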

A price is said to be a monopoly price if it maximizes seller revenue for a given distribution, and this revenue is called the monopoly revenue. We call the marginal distribution over values the prior distribution. The seller is always guaranteed at least the monopoly revenue for the prior distribution, since he can ignore the segment information and set a monopoly price.

The intermediary's objective is some given positive linear combination of seller revenue and consumer surplus. Consumer surplus is the expectation of the buyer's utility, which is v − p if the buyer with value v buys the item at price p, and is 0 otherwise. Of particular interest is the special case of maximizing consumer surplus alone. We consider three informational models of increasing difficulty for the intermediary, and show the following results.

Bayesian:

The intermediary and the seller know the value-type distributions: D_t for all types t, and the type distribution μ. We show that the optimal segmentation can be computed using a linear program (LP). The range of achievable values for consumer surplus and revenue depends on the distribution, and one may not always be able to achieve the full consumer surplus as in Bergemann et al. (2015). Some other nice properties may not hold either; see Appendix A for examples.

Sample Complexity:

The intermediary and the seller observe a batch of signal-value pairs sampled from the underlying distribution. We are interested in the number of samples required to get a good approximation. We first construct a distribution for which no bounded number of samples is sufficient. The value distributions in this example satisfy both boundedness and regularity, which are standard assumptions in the sample complexity of mechanism design. This points to a further limitation on what such an intermediary can do: with noisy signals, the distributions cannot be arbitrary. Motivated by this, we identify a property of the distributions, which we call MHR-like, and show (via an algorithmic construction) that a polynomial number of samples is sufficient. This is the most technically challenging part of the paper, and most of the main body focuses on it.

Online Learning:

The intermediary must learn the segmentation online using only bandit feedback from the buyer's decision to purchase or not at the seller's chosen price. The last step of the timeline depicted in Figure 1 is modified in this setting so that the intermediary and seller only observe the buyer's purchase decision, not her value. Here we give no-regret learning algorithms. Clearly, we need certain assumptions on the seller's behavior for any nontrivial result; there is not much we can do if the seller picks prices randomly all the time. Our assumptions can accommodate natural no-regret learning algorithms on the seller side, including the Upper-Confidence-Bound (UCB) algorithm and the Explore-then-Commit (ETC) algorithm.
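For intuition, a seller running UCB over a small grid of prices, with purchase-only feedback, can be sketched as follows. This is an illustrative textbook UCB, not the paper's algorithm; all names and the toy value distribution are our own assumptions.

```python
import math
import random

def ucb_seller(prices, buyer_value, rounds, rng):
    """Seller repeatedly posts a price chosen by UCB; the only feedback
    is whether the (freshly drawn) buyer purchased at that price."""
    counts = {p: 0 for p in prices}
    rev = {p: 0.0 for p in prices}
    total = 0.0
    for t in range(1, rounds + 1):
        def index(p):
            if counts[p] == 0:
                return float('inf')          # try every price once
            mean = rev[p] / counts[p]
            # Confidence bonus scaled by the maximum price (revenue range).
            return mean + max(prices) * math.sqrt(2 * math.log(t) / counts[p])
        p = max(prices, key=index)
        v = buyer_value(rng)                 # buyer's value, never observed
        r = p if v >= p else 0.0             # bandit feedback: revenue only
        counts[p] += 1
        rev[p] += r
        total += r
    return total / rounds

rng = random.Random(1)
# Uniform value over {1, 2, 3}: the best fixed price is 2 (revenue 4/3).
avg = ucb_seller([1, 2, 3], lambda r: r.choice([1, 2, 3]), 5000, rng)
```

The average per-round revenue approaches the monopoly revenue 4/3 as the number of rounds grows.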

1.2 Contributions to the Sample Complexity of Mechanism Design

Pioneered by Balcan et al. (2005), Elkind (2007), and Dhangwatnotai et al. (2015), and formalized by Cole and Roughgarden (2014), the sample complexity of mechanism design, in particular of the revenue maximization problem, has been a focal point in algorithmic game theory in the last few years: Morgenstern and Roughgarden (2015); Balcan et al. (2016); Devanur et al. (2016); Morgenstern and Roughgarden (2016); Hartline and Taggart (2019); Cai and Daskalakis (2017); Gonczarowski and Nisan (2017); Gonczarowski and Weinberg (2018); Huang et al. (2018b); Guo et al. (2019).

This paper adds to the literature on the sample complexity of mechanism design in two ways. The first is conceptual: we formulate the first sample complexity problem from the viewpoint of an intermediary rather than the seller, and for the task of designing information dispersion rather than allocations and payments. We show impossibility results for the general case and, more importantly, identify sufficient conditions under which we derive positive algorithmic results.

Conceptually new models often lead to new technical challenges. Our second contribution is an algorithmic ingredient that tackles such a new challenge. Let us start with a thought experiment: consider a more powerful intermediary who knows the true distributions; the seller, however, still acts according to some beliefs formed from the observed samples. Does the problem become trivial? Can the intermediary simply run the optimal segmentation w.r.t. the true distributions and expect near optimal outcomes?

The answers turn out to be negative. Consider a segment for which there are two prices p1 and p2, such that p1 is the monopoly price with a sale probability close to 1, while p2 gets near-optimal revenue with a sale probability close to 0. If the intermediary includes this segment, however, the seller's beliefs may overestimate the revenue of p2 and/or underestimate that of p1 and, thus, deduce that p2 is the monopoly price instead of p1. As a result, the resulting social welfare may be much smaller than what the intermediary expects from the true distributions.

This example shows that, unlike existing works on the sample complexity of mechanism design, where the difficulties arise purely from the learning perspective of the problem, our problem presents an extra challenge from the uncertainties in the seller’s behavior due to his inaccurate beliefs.

Intuitively, the intermediary would like to convert the optimal segmentation w.r.t. the true distributions into a more robust version, such that for any approximately accurate beliefs that the seller may have, the resulting objective is always close to optimal. We will refer to this procedure as robustification, and the result as the robustified segmentation.

2 Bayesian Model

We start with an example through which we will illustrate the main ideas in this section.

Example 1.

The example is parameterized by a noise level ε ∈ [0, 1]. The value set is identical to the type set: V = T = {1, 2, 3}, with a uniform prior over types. Each type t corresponds to the distribution D_t that places probability 1 − ε on the value t and spreads the remaining ε uniformly over all three values.

When ε = 1, all the D_t's equal the uniform prior, and no non-trivial segmentation is possible. At the other extreme, when ε = 0, each D_t is a point mass at t, which is the Bergemann et al. (2015) model.

Simplex View.

As observed by Bergemann et al. (2015), the key idea is to identify segments with probability distributions over values. The only thing that matters about a segment s is the posterior distribution on values conditioned on the intermediary choosing s. Since V is finite, it is easier to think of Δ(V) as the unit simplex in the appropriate dimension; a segment is then simply a point in this simplex. Further, all that matters about a segmentation is the distribution over these points as observed by the seller, i.e., it is sufficient to specify a distribution over the simplex. The only constraint on this distribution is that its expectation must equal the prior distribution over values, which is itself a point on the simplex.

Going further, it is sufficient to only consider some special points on the simplex. We denote these special points by e_A, for a subset A ⊆ V: e_A is the equal-revenue distribution with support equal to the set A, i.e., the distribution supported on A such that the revenue v · Pr[value ≥ v] is the same for all v ∈ A. These special points partition the simplex into regions P_v, one for each v ∈ V: each distribution in P_v is such that v is a monopoly price for it.
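The equal-revenue points just described can be computed directly; this is a sketch with our own function names, using exact rational arithmetic.

```python
from fractions import Fraction

def equal_revenue_point(values, support):
    """Distribution on `values` (a list), supported on `support`, such
    that p * Pr[value >= p] is the same for every price p in `support`."""
    support = sorted(support)
    lo = Fraction(support[0])
    # Equal revenue at every support price p forces Pr[value >= p] = lo / p.
    tail = {p: lo / Fraction(p) for p in support}
    probs = {v: Fraction(0) for v in values}
    for i, p in enumerate(support):
        nxt = tail[support[i + 1]] if i + 1 < len(support) else Fraction(0)
        probs[p] = tail[p] - nxt
    return probs

V = [1, 2, 3]
e123 = equal_revenue_point(V, [1, 2, 3])   # (1/2, 1/6, 1/3)
e23 = equal_revenue_point(V, [2, 3])       # (0, 1/3, 2/3)
```

For example, e123 gives revenue 1 at every price in {1, 2, 3}, so all three prices are monopoly prices.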

Figure 2: Simplex view for Example 1. (a) Partition of the simplex into the regions P_v, together with the uniform distribution over values. (b) Convex hull of the types' distributions for two noise levels; the solid inner triangle and the dashed triangle correspond to two different values of ε.

We now describe this through Example 1. Figure 2(a) shows the unit simplex and the points e_A for the subsets A ⊆ V. The red region is P_1, blue is P_2, and green is P_3. The uniform distribution over values is represented by the center of the simplex. An optimal segmentation with no noise (when ε = 0) corresponds to representing this point as a convex combination of the vertices of the blue polytope, for instance:

(1/3, 1/3, 1/3) = (2/3) · e_{1,2,3} + (1/6) · e_{2,3} + (1/6) · e_{2}.

This corresponds to a segmentation with three segments: with probability 2/3, the segment is the equal-revenue distribution over {1, 2, 3}, on which the seller prices at 1; with probability 1/6, it is the equal-revenue distribution over {2, 3}, priced at 2; and with probability 1/6, it is the point mass at 2, priced at 2.

Bergemann et al. (2015) consider segmentations that consist only of vertices of P_{p*}, where p* is a monopoly price of the prior distribution (the blue region in Figure 2(a)). They assume that ties for the monopoly price are broken in favor of the lowest price, which is the lowest value in the support. This implies that the item is always sold, thus maximizing social welfare.

Generalization of the Simplex View.

We extend this simplex view to our model. When the number of types is at most the number of values and the type distributions are non-degenerate, we can continue to consider the simplex on the set of values, as we have done so far. This will be the case for Example 1. A more general view is to consider the simplex on the set of types. Most of the intuition extends to this view, although geometrically the picture is somewhat different. (This is the view we use in the proofs; the simplex on values is used just for illustration.)

The noisy case is depicted in Figure 2(b). The main difference from the previous picture is that we are not allowed to choose arbitrary points on the simplex for our segmentation. Instead, we are restricted to points in the convex hull of the D_t's, viewed as points on the simplex. For large enough noise, the figure shows that this hull is contained entirely inside the blue region. Thus, no matter what segmentation is used, the seller always sets the monopoly price of 2; segmentation is therefore useless. For smaller noise, the figure shows that segmentation is possible because the hull intersects all three regions P_1, P_2, P_3.

We introduce some notation now. Any segmentation induces a distribution over segments, and a posterior distribution over values for each segment s, which, abusing notation, we also denote by s. For any distribution x, let Rev(x) denote its monopoly revenue, and let Surplus(x) denote the consumer surplus when the seller sets the monopoly price for x. Our goal is to find a segmentation that maximizes a linear combination of revenue and consumer surplus, i.e., for some parameter α ∈ [0, 1], maximize α · E_s[Rev(s)] + (1 − α) · E_s[Surplus(s)].

From now on, we let Δ denote the set of probability distributions over the types. We first formalize the claim that segmentation schemes correspond to probability distributions over Δ with a given expectation. The proofs in this section are deferred to Appendix B.

Lemma 2.1.

Let each segment s correspond to the point in the simplex given by its posterior distribution over types. There is a 1:1 correspondence between segmentations and probability distributions λ over the simplex such that the expectation is the prior distribution over types, i.e.,

E_{x∼λ}[x] = μ.   (1)

Using this lemma, we switch our design space to probability distributions λ that satisfy (1). We use the two representations interchangeably.

We now partition the simplex into regions P_v, one for each value/price v ∈ V, such that v is a monopoly price for any segment in P_v. For any distribution x and any price p, let Rev_p(x) denote the revenue of price p on distribution x, and Surplus_p(x) the consumer surplus. For any v ∈ V, define:

P_v = { x : Rev_v(x) ≥ Rev_p(x) for all prices p }.

Since the revenue function is linear in x, the set P_v is the intersection of the simplex and a polytope defined by linear constraints. Further, if we restrict our domain to points x ∈ P_v, then Rev_v(x) and Surplus_v(x) are linear functions of x.

We next observe that this implies that it is sufficient to choose at most one point from each P_v. The idea is that we can replace the distribution conditioned on a region by its expectation.

Lemma 2.2.

There is an optimal segmentation such that the distribution λ is supported on at most one point from each P_v, i.e., on a finite set of the form { x_v : v ∈ V }.

Using this lemma, we now show that the following linear program (LP) captures the optimal segmentation. The variables are scaled segments x̂_v, one for each v ∈ V, where x̂_v is the point x_v scaled by the probability of its segment. We denote by P̂_v the region that is the convex hull of P_v and the origin; Rev_v and Surplus_v extend naturally to the x̂_v's by linearity.

maximize   Σ_v [ α · Rev_v(x̂_v) + (1 − α) · Surplus_v(x̂_v) ]   (2)
s.t.   Σ_v x̂_v = μ,   and   x̂_v ∈ P̂_v for all v ∈ V.
Theorem 2.1.

We can find an optimal segmentation in polynomial time by solving LP (2).
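To make the LP concrete, here is a sketch for the full-information special case (types identical to values, point-mass distributions) with the pure consumer-surplus objective (α = 0). The variable layout, the scaled-segment encoding y[p][v], and the toy instance are our own illustrative choices, not the paper's exact LP.

```python
import numpy as np
from scipy.optimize import linprog

# y[p][v] is the (unnormalized) mass that the segment priced at p puts
# on value v; the constraints force p to be a weakly optimal price for
# its segment, and the segments must average to the prior.
V = [1, 2, 3]
prior = [1 / 3, 1 / 3, 1 / 3]
n = len(V)
idx = lambda pi, vi: pi * n + vi

# Objective: maximize total consumer surplus sum_{p, v >= p} (v - p) y[p][v].
c = np.zeros(n * n)
for pi, p in enumerate(V):
    for vi, v in enumerate(V):
        if v >= p:
            c[idx(pi, vi)] = -(v - p)       # linprog minimizes

# Monopoly constraints: every deviation price q earns at most p's revenue.
A_ub, b_ub = [], []
for pi, p in enumerate(V):
    for qi, q in enumerate(V):
        if q == p:
            continue
        row = np.zeros(n * n)
        for vi, v in enumerate(V):
            if v >= q:
                row[idx(pi, vi)] += q
            if v >= p:
                row[idx(pi, vi)] -= p
        A_ub.append(row)
        b_ub.append(0.0)

# Segments average to the prior, coordinate by coordinate.
A_eq = np.zeros((n, n * n))
for vi in range(n):
    for pi in range(n):
        A_eq[vi, idx(pi, vi)] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=prior,
              bounds=(0, None))
max_surplus = -res.fun
```

For the uniform prior over {1, 2, 3}, the optimum is 2/3: the full welfare of 2 minus the uniform monopoly revenue of 4/3, matching the Bergemann et al. (2015) guarantee.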

3 Sample Complexity Model

We scale the values to be in [0, 1]. This treatment simplifies the notation in the proofs, and separates the two roles of the maximum value: the scale of the values (less interesting, since it always appears with the same degree in the bounds) and the number of possible values. To translate the bounds into the original scaling, rescale by the maximum value everywhere. We further assume the type distribution to be uniform to simplify the discussion. This is w.l.o.g. up to duplication of types.

Following standard notation in algorithmic mechanism design, we refer to the sale probability of a price as its quantile. We will consider the revenue curve in quantile space, where the x and y coordinates are the quantile of a price and its revenue, respectively.
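In code, the quantile and the revenue curve of a discrete distribution look as follows (a small sketch with our own names):

```python
def quantile(dist, p):
    """Sale probability of price p: Pr[value >= p], for dist = {value: prob}."""
    return sum(pr for v, pr in dist.items() if v >= p)

def revenue_curve(dist):
    """(quantile, revenue) points of the support prices, sorted by quantile."""
    return sorted((quantile(dist, p), p * quantile(dist, p)) for p in dist)

# The uniform distribution over {1/2, 1} has a flat revenue curve:
# both support prices are monopoly prices.
curve = revenue_curve({0.5: 0.5, 1.0: 0.5})
```

Flat stretches of this curve are exactly the ties between monopoly prices that drive the impossibility result below.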

3.1 Model and Results

Intermediary:

The intermediary has access to the value distributions of the different types only in the form of i.i.d. samples per type. He chooses a segmentation based on these samples, and then the chosen segmentation is evaluated on a freshly drawn type-value pair, i.e., the test sample. The expectation of the objective is taken over the random realization of the samples per type as well as the test sample, and potentially the randomness in the choice of the segmentation.

Buyer:

The buyer bids truthfully since the seller effectively posts a take-it-or-leave-it price.

Seller:

We need to further define how the seller acts. Consider the following candidate models:

  1. The seller knows the value distributions exactly. Hence, given the segmentation and the realized segment, which induces a mixture of the value distributions of different types, the seller posts the monopoly price of the mixture.

  2. The seller can access the same set of samples per type, and believes that the value distributions are the empirical distributions, i.e., the uniform distributions over the corresponding samples. Hence, he posts the monopoly price of the mixture of empirical distributions.

  3. The seller further has access to other sources of samples.

  4. The seller further has access to other sources of prior knowledge.

This is a non-exhaustive list of the many potential models that are, in our opinion, equally well-motivated, depending on the actual application. Is there a unifying model that allows us to study all these settings in one shot and obtain non-trivial positive results?

To this end, this paper considers the following overarching model (the subscript s indicates that these variables are associated with the seller):

For some ε_s > 0, the seller forms beliefs D̃_t, one per type t, such that for any type the Kolmogorov-Smirnov distance between D̃_t and D_t is at most ε_s, i.e., for any value v, v's quantiles w.r.t. D̃_t and D_t differ by at most ε_s. Then, he posts the monopoly price of the mixture of the beliefs.

The choice of ε_s is based on a standard concentration-plus-union-bound argument on the empirical distributions over the samples that the intermediary can access. In other words, we assume that the seller's beliefs are at least as good as what could have been estimated using the intermediary's samples. All the aforementioned candidate models are special cases of ours.
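The belief condition above can be checked mechanically; this sketch (with our own names) computes the Kolmogorov-Smirnov distance between a true distribution and an empirical one built from samples.

```python
import random

def ks_distance(dist_a, dist_b):
    """Max difference of the quantiles (sale probabilities) over all
    values, for distributions given as {value: prob}."""
    values = set(dist_a) | set(dist_b)
    def q(dist, p):
        return sum(pr for v, pr in dist.items() if v >= p)
    return max(abs(q(dist_a, v) - q(dist_b, v)) for v in values)

def empirical(samples):
    return {v: samples.count(v) / len(samples) for v in set(samples)}

rng = random.Random(0)
true = {1: 0.5, 2: 0.3, 3: 0.2}
samples = rng.choices(list(true), weights=list(true.values()), k=2000)
eps_s = ks_distance(true, empirical(samples))   # small for 2000 samples
```

By the DKW inequality, eps_s shrinks at roughly the rate 1/sqrt(number of samples), which is what drives the choice of ε_s above.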

We start with an impossibility result for general value distributions. See Appendix A for details.

Theorem 3.1.

If the value distributions are allowed to have multiple monopoly prices whose social welfare differ by a constant, e.g., the uniform distribution over {1/2, 1}, no algorithm can obtain a good approximation using a bounded number of samples.

(Discrete) MHR-like Distributions.

Given the above impossibility result, which relies on value distributions that have multiple monopoly prices whose respective values of social welfare are vastly different, we intuitively need the value distributions to be unimodal and far from having a plateau. The family of continuous monotone hazard rate (MHR) distributions, a standard family in the literature, has all the properties that we need, except that its members are continuous. They are unimodal, since they have concave revenue curves in quantile space (folklore). In fact, their revenue curves in quantile space are strongly concave near the monopoly price (Huang et al., 2018b, Lemma 3.3). They also admit other useful properties: the optimal revenue is at least a constant fraction of the social welfare (Dhangwatnotai et al., 2015, Lemma 3.10); and the monopoly price has a sale probability lower bounded by a constant (Hartline et al., 2008, Lemma 4.1).

There is an existing notion of discrete MHR distributions by Barlow et al. (1963) that mimics the functional form of the continuous version. However, it loses some useful properties. In particular, it contains distributions that have two monopoly prices, e.g., the uniform distribution over {1/2, 1}, and as a result it still suffers from the impossibility result.

Instead, we define a family of (discrete) MHR-like distributions directly from the aforementioned benign properties of continuous MHR distributions. Hence, unlike the existing notion of discrete MHR distributions, our definition truly inherits the main features of continuous MHR distributions. We remark that the constants in the following definition are merely copied from the continuous counterparts; our results still hold asymptotically if they are replaced by other constants.

Definition 3.1 (MHR-like Distributions).

A discrete distribution is MHR-like if it satisfies:

  1. (Concavity) Its revenue curve is concave in the quantile space.

  2. (Strong concavity near monopoly price) For its monopoly price p* and any other price p, with quantiles q* and q respectively, the revenue gap between p* and p is at least a constant times (q − q*)² times the monopoly revenue.

  3. (Large monopoly sale probability) Its monopoly price's sale probability is at least 1/e.

  4. (Small revenue and welfare gap) Its monopoly revenue is at least a 1/e fraction of the social welfare.

The main difference between our MHR-like distributions and the notion in Barlow et al. (1963) is property 2 in Definition 3.1. An MHR-like distribution can be obtained by discretizing a continuous MHR distribution while ensuring that there is a gap between the optimal revenue and any sub-optimal one.
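The checkable parts of Definition 3.1 can be tested mechanically. In this sketch we assume the constant 1/e from the cited continuous-MHR lemmas for properties 3 and 4, and omit the strong-concavity property 2; all names are ours.

```python
import math

def is_mhr_like(dist, c=1 / math.e):
    """Check properties 1, 3, and 4 for dist = {value: prob}.
    The constant c = 1/e is an assumption borrowed from the continuous
    MHR lemmas; property 2 (strong concavity) is not checked here."""
    prices = sorted(dist)
    q = lambda p: sum(pr for v, pr in dist.items() if v >= p)
    # Property 1: the revenue curve is concave in quantile space, i.e.,
    # slopes between consecutive curve points are non-increasing.
    pts = sorted((q(p), p * q(p)) for p in prices)
    slopes = [(r2 - r1) / (q2 - q1)
              for (q1, r1), (q2, r2) in zip(pts, pts[1:]) if q2 > q1]
    if any(s2 > s1 + 1e-9 for s1, s2 in zip(slopes, slopes[1:])):
        return False
    p_star = max(prices, key=lambda p: p * q(p))     # a monopoly price
    welfare = sum(v * pr for v, pr in dist.items())
    # Property 3: large monopoly sale probability.
    # Property 4: monopoly revenue is a c fraction of the welfare.
    return q(p_star) >= c - 1e-9 and p_star * q(p_star) >= c * welfare - 1e-9

# A point mass is MHR-like; a distribution whose monopoly price sells
# with tiny probability is not.
```

For instance, `is_mhr_like({1: 0.9, 20: 0.1})` fails because the monopoly price 20 sells with probability only 0.1.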

We show that polynomially many samples are sufficient for learning a near-optimal segmentation, with only the mild assumption on the seller's behavior discussed earlier in the section.

Theorem 3.2.

With polynomially many i.i.d. samples, we can learn a segmentation that is optimal up to an additive ε, in time polynomial in the input size and 1/ε.

3.2 Robustification: Motivation and Definition

(a) True distribution
(b) Seller's belief
Figure 3: Plateau example. In a revenue curve in quantile space, the x and y coordinates are the quantile of a price and its revenue, respectively. On the left, the solid curve is the revenue curve of a segment w.r.t. the true distributions; the dotted curves are those of the type distributions mixed in the segment. On the right are the counterparts w.r.t. the seller's beliefs. Prices p1 and p2 are the monopoly prices of the segment w.r.t. the true distributions and the seller's beliefs, respectively.

Recall the thought experiment in Section 1. Consider a more powerful intermediary who has exact knowledge of the true distributions; the seller, however, still acts according to his approximately accurate beliefs. Further, recall the example where the problem remains nontrivial even when the intermediary has more power; we give more details below. Consider a segment for which there are two prices p1 and p2, such that p1 is the monopoly price with a quantile close to 1, while p2 gets close-to-optimal revenue with a quantile close to 0. Even though a single MHR-like distribution cannot have such a plateau, a mixture of MHR-like distributions can. (In fact, every distribution on V is a mixture of MHR-like distributions, because point masses are MHR-like.) See Figure 3(a) for an illustrative example. If the intermediary includes this segment, the seller's beliefs may overestimate the revenue of p2 and/or underestimate that of p1, and thus deduce that p2 is the monopoly price instead of p1. See Figure 3(b). The resulting social welfare may be much smaller than what the intermediary expects from the true distributions.

Intuitively, we would like to convert the optimal segmentation w.r.t. the true distributions into a more robust version, such that for any approximately accurate beliefs that the seller may have, the resulting social welfare and revenue are both close to optimal. As mentioned in Section 1, we will refer to this procedure as robustification, and the result as the robustified segmentation.

In the following definition, the subscripts of ε_s and ε indicate that they are the additive errors that the seller and the intermediary are aiming for, respectively. Further, it states that the robustified segmentation must keep all segments of the original version. We ignore insignificant segments because of technical difficulties in achieving the stated properties for them (if the expected value of a segment is tiny in the first place, all prices are near-optimal, so we cannot achieve robustness), and because their roles in the revenue and social welfare are negligible. For any significant segment, the first two conditions state that its weight and its mixture of types are preserved approximately; the third condition gives the desirable robustness against the uncertainties in the seller's behavior.

Definition 3.2 (Robustified Segmentation).

Suppose that (1) the original segmentation is represented by segments x_i with weights w_i, for i in some index set; and (2) p_i is an optimal price w.r.t. x_i for each i. For any ε_s and ε, a segmentation represented by segments x̃_i with weights w̃_i is an (ε_s, ε)-robustified segmentation if, for any i, either the segment x_i is insignificant in that its contribution to the expected value is tiny, or:

  1. (Weight preservation) w̃_i is close to w_i;

  2. (Mixture preservation) x̃_i is close to x_i; and

  3. (Robustness) no ε_s-optimal price w.r.t. x̃_i has a quantile smaller than that of p_i by a non-negligible margin.

The next lemma shows that the technical conditions in the definition of robustified segmentations indeed lead to robust bounds in terms of both social welfare and revenue and, therefore, of their linear combinations. The proof follows from straightforward calculations and is therefore deferred to Appendix C.1.

Lemma 3.1.

For any prices that are ε_s-optimal w.r.t. the respective segments in the robustified segmentation, the resulting social welfare and revenue are both close to those of the original segmentation.

3.3 Robustification: Algorithm

This subsection introduces an algorithm that finds such an (ε_s, ε)-robustified segmentation in polynomial time for any sufficiently large ε, i.e., for any ε and ε_s satisfying:

(3)
Lemma 3.2.

There is an algorithm that computes in polynomial time an (ε_s, ε)-robustified segmentation, for any ε and ε_s that satisfy Eqn. (3).

3.3.1 Proof Sketch of Lemma 3.2

(a) Robustify a significant segment
(b) Counterbalance to restore the centroid
Figure 4: Robustification.
Step 1:

Robustify the significant segments one by one, ignoring the centroid constraint. Any significant segment is represented by a point x and weight w, with an intended price p. In the simplex view, we want to move x slightly so that we end up far from all regions where price p gives a small consumer surplus. How do we find which direction to move towards? (Since we are in high dimension, we cannot rely on geometric intuition.)

The choice of direction relies on a structural result about mixtures of MHR-like distributions, stated as Lemma 3.3. The lemma promises that there exists a type t such that, for the distribution D_t, prices whose quantiles are smaller than that of p by a sufficient margin suffer a substantial revenue gap. Once we prove existence, we can find such a type by enumerating over all types and checking whether the property holds.

In the simplex view, see Figure 4(a). We want to move x (the red point) towards the vertex that corresponds to type t (to the green point). To do this, decrease the probability of mapping each type to this segment by a small multiplicative factor; then, increase the probability of mapping type t to it additively so as to restore the original weight. Clearly, this satisfies the mixture preservation condition.

Let p' be any price whose quantile is smaller than that of p by at least the margin above. The revenue gap between p and p' for D_t is substantial, and we have moved x towards D_t; therefore, the revenue gap between p and p' for the mixture is bounded away from zero as well. As a result, p' cannot be an ε_s-optimal price in the resulting segment. Thus, we have the robustness condition.
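Step 1's move in the simplex view is just a mixture update. A minimal sketch, where `delta` stands for the step size dictated by the lemma's margins (a name we introduce):

```python
def robustify_segment(mixture, t, delta):
    """mixture: {type: prob} describing a segment's mix of types.
    Returns (1 - delta) * mixture + delta * (point mass at t): every
    type's share shrinks multiplicatively, then type t's share is
    increased additively, so the total weight is preserved."""
    moved = {u: (1 - delta) * p for u, p in mixture.items()}
    moved[t] = moved.get(t, 0.0) + delta
    return moved

seg = {1: 0.5, 2: 0.3, 3: 0.2}
new_seg = robustify_segment(seg, 3, 0.1)   # {1: 0.45, 2: 0.27, 3: 0.28}
```

The returned point is a convex combination of the original segment and the vertex of type t, so it stays inside the feasible hull.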

Step 2:

See Figure 4(b). We will add a counterbalancing segment (yellow) to restore the centroid. After the first step, the centroid may be shifted slightly from its intended location, i.e., the middle of the simplex. Consider the line that crosses the intended centroid and the shifted one. Add a counterbalancing segment at its intersection with the boundary of the simplex on the opposite side of the shifted centroid, with an appropriate weight that restores the centroid. The required weight is small because the distance between the intended centroid and the counterbalancing segment (in fact, any point on the boundary of the simplex) is bounded from below by basic geometry.

Finally, the total weight may now slightly exceed 1. Normalize the weights of all segments to restore a total weight of 1. This decreases the weights of the segments only slightly and therefore satisfies the weight preservation condition.

We will show the formal algorithm and analysis in Appendix C.2.

3.3.2 Structural Lemma and the Proof Sketch

Lemma 3.3.

For any segment x that is significant in the sense that its expected value is non-negligible, with corresponding monopoly price p, there is a type t such that any price whose quantile w.r.t. D_t is smaller than that of p by a sufficient margin has a substantial revenue gap from p.

This is technically the most challenging part of the proof; we give the full proof in Appendix C.3. By concavity of the revenue curves of MHR-like distributions, it suffices to consider the inequality when the price is the smallest price whose quantile w.r.t. D_t is smaller than that of p by the required margin. Let this price be p'.

Recall the plateau example in Figure 3. From the picture, it is tempting to pick the type that corresponds to the "right-most" dotted revenue curve, as it has the desirable shape: the revenue rapidly decreases as the price increases beyond the monopoly price. There are several problems with this approach. First, the concept of a "right-most" revenue curve is underdefined. Is it the one with the smallest monopoly price? Or the one with the largest monopoly sale probability? Second, even if we find a type whose revenue curve has the desirable shape, it still may not prove Lemma 3.3. For example, it may not have a large enough optimal revenue in the first place and, thus, the right-hand side of the inequality in the lemma is negative.

Instead, we will prove the lemma by contradiction. Intuitively, the contradiction will be that there is a type such that the revenue curve of has a large plateau; this is not possible for MHR-like distributions. The assumption to the contrary guarantees that the revenue of is not much above that of any price between and . The following additional conditions formalize the ‘large plateau’ notion:

  1. Revenue between and is not much higher: ;

  2. The plateau is high, i.e., the revenue of is large: ;

  3. The plateau is wide, i.e., the quantiles of and differ by at least .

The rest of the subsection assumes to the contrary that the inequality in the lemma fails to hold for all types. Then, we use a probabilistic argument to show that there must be a type such that the distribution satisfies the conditions mentioned above, and argue that these lead to a contradiction.

Probabilistic Argument.

Consider sampling a type according to the mixture induced by the segment. We show that under the assumption to the contrary, the probability that Condition 1 is violated is less than (Lemma C.3). Further, we prove that the revenue of w.r.t. , which is optimal for this distribution, is at least an fraction of the social welfare w.r.t. (Lemma C.4). Then, by the assumption that this segment is significant, this is at least (Lemma C.5). Then, by a Markov-inequality-style argument, there is at least an probability that Condition 2 is satisfied (Lemma C.6). Putting these together, there is a positive chance that we sample a type that satisfies the first two conditions. Finally, we finish the argument by showing that the first two conditions actually imply the third one (Lemma C.7).

Contradiction.

The proof is a case-by-case analysis, so we present the bottleneck case, which forces the choice we made in Eqn. (3). This is when the monopoly price of type is smaller than both and . A complete proof that includes the other cases is deferred to Appendix C.3 (Lemma C.8).

The concavity, and strong concavity near the monopoly price, of MHR-like distributions, along with the fact that both and are larger than the monopoly price, imply that the revenue gap is at least the revenue of times the square of the quantile gap between the prices. Further, by the second and third conditions above, of having large revenue and a large quantile gap, the revenue gap between prices and is at least:

This is greater than by our choice of in Eqn. (3).

3.4 Proof of Theorem 3.2: Project, Optimize, and Robustify

  1. Construct empirical distributions ’s, , from samples.

  2. Find MHR-like empirical distributions ’s, , such that the Kolmogorov-Smirnov distances between them and the corresponding empirical distributions are small: .

  3. Find optimal segmentation w.r.t. MHR-like empirical distributions .

  4. Construct the robustified segmentation and return it.

Algorithm 1 Learn a (Robust) Segmentation from Samples

Finally, we show how to use the robustification technique to design an algorithm, presented as Algorithm 1, that learns a (robust) segmentation in the sample complexity model.

Algorithm.

As in existing work on the sample complexity of mechanism design, the algorithm starts by constructing the empirical distributions. Then, we project them back to the space of MHR-like distributions w.r.t. the Kolmogorov-Smirnov distance , i.e., the maximum difference in the quantile of any value. The feasibility of this step comes from the fact that the true distributions are MHR-like and satisfy the inequality. We explain in Appendix D how to compute, in polynomial time, approximate projections that relax the RHS of the inequality by a constant factor; other constants in our analysis need to be changed accordingly, but the bounds stay the same asymptotically. Further, we optimize the segmentation according to the MHR-like empirical distributions. Finally, we robustify the resulting segmentation using Lemma 3.2.
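To make the first two steps of Algorithm 1 concrete, here is a minimal sketch of constructing an empirical distribution from samples and measuring the Kolmogorov-Smirnov distance used by the projection step. The function names are illustrative, and the actual projection onto MHR-like distributions (Appendix D) is not reproduced here.

```python
import numpy as np

def empirical_quantiles(samples, support):
    """Step 1: the empirical distribution, represented by the quantile of
    each value in the support (here: the fraction of samples at least
    that value, i.e., the empirical sale probability)."""
    samples = np.asarray(samples, dtype=float)
    return np.array([(samples >= v).mean() for v in support])

def ks_distance(quantiles_a, quantiles_b):
    """The Kolmogorov-Smirnov distance: the maximum difference in the
    quantile of any value.  Step 2 projects the empirical distribution
    onto a nearby MHR-like distribution under this metric."""
    return float(np.max(np.abs(np.asarray(quantiles_a) - np.asarray(quantiles_b))))
```

For instance, samples (1, 2, 2, 3) on support (1, 2, 3) give empirical quantiles (1.0, 0.75, 0.25).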

Analysis.

Note that for any type , the distances between the true distribution and the seller’s belief , between and the empirical distribution , and between and the MHR-like empirical distribution are bounded by . Hence, the distance between the seller’s belief and the MHR-like distribution is at most by the triangle inequality and, thus, the same conclusion holds replacing types with mixtures induced from the segments. Therefore, for any segment in the segmentation chosen by the algorithm, the seller’s monopoly price w.r.t. her beliefs is a -optimal price w.r.t. the MHR-like empirical distributions. By Lemma 3.1, the performance of the algorithm is an -approximation comparing with the optimal w.r.t. the MHR-like empirical distributions.

It remains to show that the optimal w.r.t. the MHR-like empirical distributions is an -approximation to the optimal w.r.t. the true distributions. To do that, it suffices to find a good enough segmentation achieving this approximation. For this we once again resort to Lemma 3.2, in particular, the existence of an -robustified segmentation for the optimal segmentation w.r.t. the true distributions ’s. Note that the MHR-like empirical distributions ’s are at most away from the corresponding true distributions ’s, by the triangle inequality. Therefore, running on the MHR-like distributions, with a seller who posts the monopoly price w.r.t. the MHR-like distributions, gives an -approximation by Lemma 3.1.

4 Bandit Model

In the bandit model, the intermediary interacts with the seller and the buyer repeatedly for rounds for some positive integer , with the buyer’s type-value pair freshly sampled in each round. The goal is to maximize the cumulative objective over all rounds. There is a large variety of possible models depending on the modeling assumptions. Next, we explain our choices.

Intermediary’s Information:

The intermediary does not know the value distributions at the beginning and, therefore, must learn such information through the interactions in order to find a good enough segmentation. Further, in each round the intermediary observes only the purchase decision of the buyer, but not her value. This is similar to bandit feedback in online learning, hence the name of our model. We remark that the alternative model where the intermediary can observe the values, which corresponds to full-information feedback in online learning, easily reduces to the sample complexity model, as the intermediary may simply run the algorithm from the sample complexity model using the values observed in previous rounds as the samples.

Since the intermediary can observe the buyer’s type in each round, she can easily learn the type distribution through repeated interactions. To simplify the discussion, we will omit this less interesting aspect of the problem and assume that the type distribution is publicly known. Following the treatment in the previous models, we further assume that it is the uniform distribution.

Buyer’s Behavior:

We assume the buyer is myopic and, therefore, buys the item in each round if and only if the posted price is at most her value. In other words, the buyer does not take into account that her behavior in the current round may influence how the intermediary and the seller act in the future. The challenge of non-myopic buyers was partly addressed in the online auction problem by Huang et al. (2018a). Their techniques, however, do not directly apply to our problem.

Seller’s Information:

We assume that, like the intermediary, the seller does not have any information about the value distributions of the buyer at the beginning, and must learn such information through bandit feedback. With this assumption, we will investigate how to encourage the seller to explore on the intermediary’s behalf. What makes it challenging is that the seller’s objective (revenue) and the intermediary’s objective (e.g., social welfare) may not be aligned.

Seller’s Behavior:

Any algorithm by the intermediary must rely on some assumptions on the seller’s behavior to get a non-trivial performance guarantee. Informally, we need the seller to pick an (approximately) optimal price in terms of revenue when there is enough information for finding one; there is not much we can do if the seller simply ignores all information and picks prices randomly. On the other hand, we also need the seller to explore at a reasonable rate in order to learn the value distributions. If the seller had other sources of information that allowed her to estimate the distributions accurately, she might severely limit her exploration of prices whose confidence intervals suggest high potential (and high uncertainty). Our assumptions must disallow such strategies and ensure that the seller learns the distributions only by observing the buyer’s actions.

What are the mildest behavioral assumptions (on the seller) that allow the intermediary to have a non-trivial guarantee in bandit model?

Note that the seller herself faces an online learning problem with bandit feedback. Our model is driven by the exploration-exploitation dilemma from her viewpoint. First, we introduce the upper confidence bound (UCB) and the lower confidence bound (LCB) of the quantile of any value and any type given past observations in the form of (type, price, purchase decision)-tuples.

  1. For any value and any type , suppose there are past observations with type and price , among which the buyer purchases the item in observations. Then, for some constant that depends on the desired confidence level, let:

  2. Noting that quantiles are monotone, we define the UCB and LCB as follows:

  3. This further induces the UCB and LCB of the quantile of value w.r.t. each segment :

For some target average regret of the seller, we say that she exploits in a round if the segment and her price satisfy that:

Otherwise, we say that she explores. We assume that the seller is an -canonical learner in the sense that she exploits in all but at most an fraction of the rounds.

We give two examples of algorithms that satisfy this definition. First, the Upper-Confidence-Bound (UCB) algorithm satisfies it with . Further, consider the following simple Explore-then-Commit (ETC) algorithm, with . The seller explores in the first rounds by posting random prices. Since each price-type pair shows up with probability in each of these rounds, she learns the quantile of every price w.r.t. the value distribution of every type up to an additive error of . Then, in the remaining rounds, she can pick any price that is not obviously suboptimal, i.e., any price whose UCB is at least the LCB of every other price.
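The confidence bounds above can be illustrated as follows for a single type. The Hoeffding-style radius and the constant `c` are assumptions standing in for the formal definitions given earlier; the monotonicity step implements the observation that quantiles are monotone.

```python
import numpy as np

def raw_bounds(observations, values, c=1.0):
    """LCB/UCB on the quantile (sale probability) of each value, from
    (price, purchased) observations for a single type.  The Hoeffding-style
    radius c / sqrt(n) is an illustrative assumption, not the paper's
    exact definition."""
    lcb, ucb = [], []
    for v in values:
        outcomes = [bought for price, bought in observations if price == v]
        n = len(outcomes)
        if n == 0:                      # no data: trivial bounds
            lcb.append(0.0); ucb.append(1.0)
            continue
        q_hat = sum(outcomes) / n       # empirical sale probability
        radius = c / np.sqrt(n)
        lcb.append(max(0.0, q_hat - radius))
        ucb.append(min(1.0, q_hat + radius))
    return np.array(lcb), np.array(ucb)

def monotone_bounds(lcb, ucb):
    """Tighten the bounds using monotonicity: the sale probability is
    non-increasing in the value, so a UCB at a lower value also bounds
    every higher value, and symmetrically for the LCB."""
    ucb = np.minimum.accumulate(ucb)
    lcb = np.maximum.accumulate(lcb[::-1])[::-1]
    return lcb, ucb
```

The UCB and LCB of the quantile w.r.t. a segment are then simply the corresponding weighted averages of the per-type bounds under the segment's mixture.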

4.1 Algorithm

In each round, the algorithm seeks to place the intermediary in a win-win situation by maintaining a set of optimistic hypothetical value distributions for the types, together with a robustified version of the optimal segmentation w.r.t. the hypothetical distributions. If the seller indeed posts a price that is consistent with our optimistic hypothesis, we use the analysis from the previous section (Thm. 3.2) to show that the objective in this round is close to optimal. Otherwise, if the seller posts a price that is inconsistent with our optimistic hypothesis, we argue that there must be a sufficiently large gap between its UCB and LCB and, thus, the intermediary gets some useful new information.

There is a caveat, however, when the algorithm constructs the robustified segmentation: it needs to replace with some slightly larger parameter . In particular, let be such that if we define using Eqn. (3), replacing with , we have: . Solving it gives that and .

We show that for -canonical learners with a sufficiently small , which is satisfied by both aforementioned examples, we can get sublinear regret.

  1. Let be the set of MHR-like distributions with support such that the quantile of each value is between the corresponding UCB and LCB.

  2. Let , , be such that is maximized.

  3. Find , the optimal segmentation w.r.t. , via the algorithm in Section 2.

  4. Construct , a robustified version of , via Algorithm 3, using in place of .

Algorithm 2 Segmentation algorithm in the bandit model (in each round)
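Step 2 of Algorithm 2 can be sketched under a strong simplification: suppose "optimistic" means taking the largest monotone non-increasing quantile curve inside the confidence band. This ignores the restriction to MHR-like distributions and the segmentation steps (which are as in Sections 2 and 3), so it is only an illustration of the optimism principle.

```python
import numpy as np

def optimistic_quantiles(lcb, ucb):
    """Pick, within the [LCB, UCB] band, a monotone non-increasing
    quantile curve that is as large as possible.  This is a simplified
    stand-in for the optimistic hypothetical distributions; it assumes
    `lcb` is already monotone non-increasing."""
    q = np.minimum.accumulate(np.asarray(ucb, dtype=float))
    return np.maximum(q, lcb)

def monopoly_price(values, quantiles):
    """The revenue-maximizing price w.r.t. a quantile curve."""
    revenues = np.asarray(values, dtype=float) * np.asarray(quantiles)
    return values[int(np.argmax(revenues))]
```

Choosing the largest curve in the band is what places the intermediary in the win-win situation described above: either the seller's price is consistent with the optimistic hypothesis, or the true quantile is far below its UCB and the round yields useful information.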
Theorem 4.1.

Algorithm 2 gets at least per round on average, provided that the seller is an -canonical learner with .

If the seller explores in a round in which the corresponding UCB and LCB differ not just by but by at least , we call it a major exploration. We first upper bound the number of rounds that involve such major explorations in the following lemma.

Lemma 4.1.

The expected number of rounds that are major explorations is at most .

Proof.

Every time the seller makes a major exploration on some price in a round, say, in response to a segment represented by a point , the gap between the UCB and the LCB is at least . Then, the expected gap between and is also at least when type is sampled according to . This implies that, with probability at least , the realized type actually has a gap of at least between and . Note that for any type and any price , this cannot happen more than times by the definitions of and . Hence, the expected number of major explorations cannot exceed . ∎

We now prove Theorem 4.1.

Case 1: Seller picks an undesirable price.

Suppose the seller fails to pick a price that is at least -optimal w.r.t. the optimistically chosen distributions . Instead, she chooses a price . Then, either she is not exploiting in the sense of the definition in Section 4, which cannot happen in more than an fraction of the rounds, or the UCB of is at least the maximum UCB among all prices, less . This is larger than the expected revenue induced from the optimistically chosen distribution by at least , by the assumption that is not -optimal. Note that the latter is weakly larger than the LCB. Hence, we conclude that the UCB and LCB differ by at least , which means that this is a major exploration, which cannot happen in more than an fraction of the rounds (Lemma 4.1).

Case 2: Sale probability is lower than expected.

Next, consider the case when the seller picks a price that is indeed -optimal w.r.t. the optimistically chosen distributions , but the sale probability of the price given by the true distributions is smaller than that given by by more than . In this case, note that both sale probabilities are bounded between the UCB and LCB; we again conclude that there is a gap of at least between them. Hence, this is a major exploration, which cannot happen in more than an fraction of the rounds.

Case 3: Everything goes as expected.

Finally, consider the good case, when the seller indeed picks a price that is at least -optimal w.r.t. the optimistically chosen distributions , and the sale probability of the price given by the true distributions is at least that given by , less . Then, by Lemma 3.1, we get that the expected objective in this round is at least .

Since the first two cases cannot happen in more than an fraction of the rounds, the bound stated in Theorem 4.1 follows.

5 Further Related Work

The problem of price discrimination is closely related to screening in games of asymmetric information, pioneered by Spence (1973) and Courty and Hao (2000), where the less informed player moves first in hopes of combating adverse selection. In our setting, the seller wishes to screen buyers by charging different prices depending on the buyer’s value. The intermediary’s segmentation allows the seller to screen more effectively. The intermediary himself faces a signaling problem, as his choice of segmentation is effectively a signaling scheme to the seller.

As such, our work is related to the broad literature on signaling and information design, where a mediator designs the information structures available to the players in a game Bergemann and Morris (2016). A special case of this is known as Bayesian persuasion Kamenica and Gentzkow (2011); Dughmi and Xu (2016): an informed sender (here the intermediary) sends a signal about the state of the world to a receiver (here the seller), who must take an action that determines the payoff of both parties. The goals of the sender and receiver may not be aligned, so the sender must choose a signaling scheme such that the receiver’s best response still yields high payoff for the sender. See Dughmi (2017); Bergemann and Morris (2018) for surveys on these topics.

Our results for online learning are also related to work on iteratively learning prices Blum and Hartline (2005); Medina and Mohri (2014); Cesa-Bianchi et al. (2015); Paes Leme et al. (2016); Bubeck et al. (2017); Huang et al. (2018a). Both lines of work consider the seller’s problem of incentive design or learning, but do not have an intermediary or market segmentation component. Our model is also somewhat related to the literature on dynamic mechanism design, which considers the incentive guarantees of multi-round auctions where the same bidders may participate in multiple rounds. Bergemann and Valimaki (2010); Kakade et al. (2013); Pavan et al. (2014) gave truthful dynamic mechanisms for maximizing social welfare and revenue.

Our results are related to recent work on incentivizing exploration in a bandit model Frazier et al. (2014); Mansour et al. (2015, 2016). These papers typically model a myopic decision-maker in each round, and an informed non-myopic principal who can influence the decision-maker to explore rather than exploit. In our setting, the seller is the myopic decision-maker who sets prices, and the intermediary can influence that decision by changing the segmentation. The previous results do not directly apply to our setting, as an action corresponds to setting a price in the observed segment. Hence there are exponentially many actions, so one should not hope for polynomial run time or good regret guarantees by directly applying those results. Additionally, the intermediary chooses the segmentation but the observed segment is drawn randomly, so the intermediary cannot force the seller to play any particular action.

6 Future Work

We view our results as initiating a new line of work on algorithmic price discrimination under partial information. We believe there are many promising open problems left to be explored in this direction, and hope this paper inspires future work under other informational models and market environments. We now present some of the most interesting directions for future work.

Competitive Markets.

This paper and Bergemann et al. (2015) consider a monopolist seller, which is a good fit for something like an ad exchange. In many online marketplaces the sellers are in a competitive rather than a monopolistic setting. The products are differentiated so sellers can exert some pricing power, which still incurs deadweight loss. It would be very interesting to extend this theory of price discrimination to such competitive markets.

Strategic Buyers.

When a seller uses past buyer behavior in the form of auction bids or purchase decisions to decide future prices, and a buyer has repeated interactions with such a seller, the buyer may be incentivized to strategize. Even if each interaction in isolation is strategyproof, the buyer may forgo winning an earlier auction in order to get a lower price in the future. When each buyer represents an insignificant fraction of the entire market, techniques from differential privacy can address this issue Huang et al. (2018a). In this paper we ignore this strategic aspect and assume that buyers are myopic. It would be very interesting to get results analogous to Huang et al. (2018a), since their techniques do not readily extend to our model.

Worst Case Model.

In online learning, even for an arbitrary sequence of inputs, we can often get a regret guarantee matching that for an i.i.d. input sequence. In particular, this is true for a monopolistic seller learning an optimal price or an optimal auction Bubeck et al. (2017); Blum and Hartline (2005). It is tempting to conjecture that the same holds for our setting as well, but we run into difficulties even modeling the problem. How does the seller behave in such a scenario? In this paper, we modeled the seller’s behavior based on the underlying distribution. Defining a seller behavior model in the absence of such a distribution that is both reasonably broad and allows regret guarantees in the worst case is an interesting challenge.

References

  • M. Balcan, A. Blum, J. D. Hartline, and Y. Mansour (2005) Mechanism design via machine learning. In Proceedings of the 46th IEEE Symposium on Foundations of Computer Science, pp. 605–614. Cited by: §1.2.
  • M. Balcan, T. Sandholm, and E. Vitercik (2016) Sample complexity of automated mechanism design. In Advances in Neural Information Processing Systems, pp. 2083–2091. Cited by: §1.2.
  • R. E. Barlow, A. W. Marshall, F. Proschan, et al. (1963) Properties of probability distributions with monotone hazard rate. The Annals of Mathematical Statistics 34 (2), pp. 375–389. Cited by: §3.1, §3.1.
  • D. Bergemann, B. Brooks, and S. Morris (2015) The limits of price discrimination. American Economic Review 105 (3), pp. 921–57. Cited by: §A.1, §A.3, §A.3, Algorithmic Price Discriminationthanks: To appear in 31st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2020., §1.1, §1.1, §1, §2, §2, §6, Example 1.
  • D. Bergemann and S. Morris (2016) Information design, bayesian persuasion, and bayes correlated equilibrium. American Economic Review 106 (5), pp. 586–591. Cited by: §5.
  • D. Bergemann and S. Morris (2018) Information design: a unified perspective. Note: Cowles Foundation Discussion Paper No. 2075R3 Cited by: §5.
  • D. Bergemann and J. Valimaki (2010) The dynamic pivot mechanism. Econometrica 78 (2), pp. 771–789. Cited by: §5.
  • A. Blum and J. D. Hartline (2005) Near-optimal online auctions. In Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1156–1163. Cited by: §5, §6.
  • S. Bubeck, N. R. Devanur, Z. Huang, and R. Niazadeh (2017) Online auctions and multi-scale online learning. In Proceedings of the 18th ACM Conference on Economics and Computation, pp. 497–514. Cited by: §5, §6.
  • Y. Cai and C. Daskalakis (2017) Learning multi-item auctions with (or without) samples. In Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer Science, pp. 516–527. Cited by: §1.2.
  • N. Cesa-Bianchi, C. Gentile, and Y. Mansour (2015) Regret minimization for reserve prices in second-price auctions. IEEE Transactions on Information Theory 61 (1), pp. 549–564. Cited by: §5.
  • R. Cole and T. Roughgarden (2014) The sample complexity of revenue maximization. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pp. 243–252. Cited by: §1.2.
  • P. Courty and L. Hao (2000) Sequential screening. The Review of Economic Studies 67 (4), pp. 697–717. Cited by: §5.
  • N. R. Devanur, Z. Huang, and C. Psomas (2016) The sample complexity of auctions with side information. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing, pp. 426–439. Cited by: §1.2.
  • P. Dhangwatnotai, T. Roughgarden, and Q. Yan (2015) Revenue maximization with a single sample. Games and Economic Behavior 91, pp. 318–333. Cited by: §1.2, §3.1.
  • S. Dughmi and H. Xu (2016) Algorithmic bayesian persuasion. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing, pp. 412–425. Cited by: §5.
  • S. Dughmi (2017) Algorithmic information structure design: a survey. SIGecom Exchanges 15 (2), pp. 2–24. Cited by: §5.
  • E. Elkind (2007) Designing and learning optimal finite support auctions. In Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 736–745. Cited by: §1.2.
  • P. Frazier, D. Kempe, J. Kleinberg, and R. Kleinberg (2014) Incentivizing exploration. In Proceedings of the 15th ACM Conference on Economics and Computation, pp. 5–22. Cited by: §5.
  • Y. A. Gonczarowski and S. M. Weinberg (2018) The sample complexity of up-to- multi-dimensional revenue maximization. In Proceedings of the 59th IEEE Annual Symposium on Foundations of Computer Science, pp. 416–426. Cited by: §1.2.
  • Y. A. Gonczarowski and N. Nisan (2017) Efficient empirical revenue maximization in single-parameter auction environments. In Proceedings of the 49th Annual ACM Symposium on Theory of Computing, pp. 856–868. Cited by: §1.2.
  • C. Guo, Z. Huang, and X. Zhang (2019) Settling the sample complexity of single-parameter revenue maximization. In Proceedings of the 51st Annual ACM Symposium on Theory of Computing. Cited by: §A.4, Appendix D, §1.2.
  • J. Hartline, V. Mirrokni, and M. Sundararajan (2008) Optimal marketing strategies over social networks. In Proceedings of the 17th International Conference on World Wide Web, pp. 189–198. Cited by: §3.1.
  • J. Hartline and S. Taggart (2019) Sample complexity for non-truthful mechanisms. In Proceedings of the 20th ACM Conference on Economics and Computation, Cited by: §1.2.
  • Z. Huang, J. Liu, and X. Wang (2018a) Learning optimal reserve price against non-myopic bidders. In Advances in Neural Information Processing Systems, pp. 2038–2048. Cited by: §4, §5, §6.
  • Z. Huang, Y. Mansour, and T. Roughgarden (2018b) Making the most of your samples. SIAM Journal on Computing 47 (3), pp. 651–674. Cited by: §1.2, §3.1.
  • S. Kakade, I. Lobel, and H. Nazerzadeh (2013) Optimal dynamic mechanism design and the virtual-pivot mechanism. Operations Research 64 (4), pp. 837–854. Cited by: §5.
  • E. Kamenica and M. Gentzkow (2011) Bayesian persuasion. American Economic Review 101 (6), pp. 2590–2615. Cited by: §5.
  • Y. Mansour, A. Slivkins, V. Syrgkanis, and Z. S. Wu (2016) Bayesian exploration: incentivizing exploration in bayesian games. In Proceedings of the 17th ACM Conference on Economics and Computation, pp. 661–661. Cited by: §5.
  • Y. Mansour, A. Slivkins, and V. Syrgkanis (2015) Bayesian incentive-compatible bandit exploration. In Proceedings of the 16th ACM Conference on Economics and Computation, pp. 565–582. Cited by: §5.
  • A. M. Medina and M. Mohri (2014) Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of the 31st International Conference on Machine Learning, pp. 262–270. Cited by: §5.
  • J. H. Morgenstern and T. Roughgarden (2015) On the pseudo-dimension of nearly optimal auctions. In Advances in Neural Information Processing Systems, pp. 136–144. Cited by: §1.2.
  • J. Morgenstern and T. Roughgarden (2016) Learning simple auctions. In Conference on Learning Theory, pp. 1298–1318. Cited by: §1.2.
  • R. Paes Leme, M. Pal, and S. Vassilvitskii (2016) A field guide to personalized reserve prices. In Proceedings of the 25th International Conference on World Wide Web, pp. 1093–1102. Cited by: §5.
  • A. Pavan, I. Segal, and J. Toikka (2014) Dynamic mechanism design: A Myersonian approach. Econometrica 82 (2), pp. 601–653. Cited by: §5.
  • T. Roughgarden and O. Schrijvers (2016) Ironing in the dark. In Proceedings of the 17th ACM Conference on Economics and Computation, pp. 1–18. Cited by: Appendix D.
  • M. Spence (1973) Job market signaling. The Quarterly Journal of Economics 87 (3), pp. 355–374. Cited by: §5.

Appendix A Further Examples

A.1 Benefit of Segmentation

We now reproduce an example from Bergemann et al. (2015) that shows how a segmentation can eliminate deadweight loss.

Example 2.

The value set is identical to the type set , and each distribution is a pointmass at . The prior distribution over values/types is uniform. The monopoly reserve of the uniform prior distribution is . When the seller does not segment the market and posts the monopoly reserve price, revenue is , consumer surplus is , and the deadweight loss is .

Consider instead the following segmentation with where ; ; and . Recall that is the distribution of signals sent by the intermediary upon observing type . This signaling scheme generates three market segments, one corresponding to each of . The seller can compute the conditional distribution of values within each segment, and will post the monopoly price for that distribution. Within segment , the distribution of values is: with probability 1/2, with probability 1/6, and with probability 1/3. This happens to be the equal revenue distribution on values . Within segment , only buyers of types 2 or 3 will be present, since a buyer of type will never be mapped into this segment. The conditional distribution of values in this segment is: with probability 1/3 and with probability 2/3, which also happens to be the equal revenue distribution on values . Type is the only type with positive probability of being mapped to segment , so the value distribution in segment is a point mass on value 2.

Since each market segment has an equal revenue distribution of values, the seller can maximize his profit by posting any price in the support of that distribution. For the sake of this example, we will assume the seller breaks ties by posting the lowest optimal price (this can be strictly enforced using arbitrarily small perturbations). In market segment , the seller will post price , which will generate revenue of and consumer surplus of . In , the seller will post price , which will generate revenue of and consumer surplus of . Finally, in market segment , the seller will post price , which will generate revenue of and consumer surplus of .

Given the prior distribution of values, one can also compute the probability of the intermediary generating each market segment. In this case, segment is drawn with probability , segment is drawn with probability , and segment is drawn with probability . Combining these, we see that the expected revenue of this segmentation is , and the consumer surplus is . This segmentation has zero deadweight loss, which means that full economic efficiency is achieved.
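The equal-revenue structure of this example can be verified directly. Below is a small check using the conditional value distributions stated above, in exact fractions, with the lowest-price tie-breaking rule; the variable names `seg1`–`seg3` are illustrative labels for the three segments.

```python
from fractions import Fraction as F

def revenue(dist, price):
    """Expected revenue of posting `price` against a value distribution
    given as {value: probability}: price * Pr[value >= price]."""
    return price * sum(p for v, p in dist.items() if v >= price)

def consumer_surplus(dist, price):
    """Expected surplus of buyers who purchase: E[(value - price)^+]."""
    return sum((v - price) * p for v, p in dist.items() if v >= price)

def lowest_monopoly_price(dist):
    """Lowest revenue-maximizing price in the support (the tie-breaking
    rule assumed in the example)."""
    best = max(revenue(dist, v) for v in dist)
    return min(v for v in dist if revenue(dist, v) == best)

# Conditional value distributions of the three segments from the example.
seg1 = {1: F(1, 2), 2: F(1, 6), 3: F(1, 3)}  # equal-revenue on {1, 2, 3}
seg2 = {2: F(1, 3), 3: F(2, 3)}              # equal-revenue on {2, 3}
seg3 = {2: F(1, 1)}                          # point mass on value 2
```

Each segment yields the same revenue at every supported price (for instance, revenue 1 in the first segment at prices 1, 2, and 3), and at the lowest optimal price every buyer purchases, which is why the segmentation has zero deadweight loss.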

A.2 Noisy Types

Now consider the above example with noise. This example has been briefly discussed in Section 2. This subsection includes more details.

Example 3.

Given a noise level of , each type corresponds to the distribution given by

Note that when , all equal the uniform prior, and no further (non-trivial) segmentation is possible. At the other extreme, when each is a pointmass at , which corresponds to Example 1. For , it turns out that market segmentation cannot help at all. Any segmentation will result in the seller always setting the monopoly reserve price of , and the result is the same as no segmentation! As in Example 1, revenue from no segmentation is , consumer surplus is , and the deadweight loss is .

On the other hand, when , segmentation is possible, but it is no longer possible to achieve full economic surplus due to the noisy types. If the intermediary implemented the same segment map as in Example 1, the resulting conditional value distributions in each segment would no longer be equal revenue distributions, because types are no longer perfectly correlated with values. If the intermediary perfectly segments the market by types using the deterministic segment map , this will result in revenue , consumer surplus , and deadweight loss . Note that under the noiseless setting of Example 1, this segmentation would have allowed the seller to perfectly price discriminate, resulting in revenue , consumer surplus , and deadweight loss . These changes in economic outcome are a direct result of the fact that the type is only a noisy signal of the buyer’s value. For example, in market segment , the seller will still set monopoly price , but only a -fraction of the segment will have value and purchase the item.

A.3 Impossibility of Bergemann et al. (2015) Style Characterization

Recall the simplex view and Figure 2. In the setting of Bergemann et al. (2015), the segmentation only consists of points in where is the monopoly price of the prior distribution (which is the blue region in Figure 1(a)). As mentioned earlier, Bergemann et al. (2015) assume that ties for monopoly price are broken in favor of the lowest price. This implies that even though the market is now segmented, the price in each segment is at most . In other words, segmentation can only lower prices!

One could hope for a similar characterization even in the noisy signal case: after all, we can still write as a convex combination of points in the intersection of (the blue region) and (the convex hull of the type distributions, depicted as the triangles in the interior of the simplex in Figure 2(a)), as can be seen in Figure 2(b). Unfortunately, the following example shows that this is not without loss of generality. In particular, restricting to such segments may lead to a strictly smaller social welfare than otherwise.
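The restriction can be checked mechanically. The following Python sketch, with made-up distributions, writes a prior as a convex combination of two segment points and computes each point's monopoly price: the second segment's price exceeds the prior's monopoly price, i.e., it falls outside the blue region, which is exactly the kind of segment the characterization of Bergemann et al. (2015) would exclude.

```python
# Sketch of the "simplex view". Each market segment is a point of the
# probability simplex over values; a valid segmentation must average back
# to the prior. All numbers below are illustrative.

def monopoly_price(dist, values):
    """Revenue-maximizing price; ties broken toward the lower price."""
    best_p, best_rev = None, -1.0
    for p in sorted(values):
        rev = p * sum(q for v, q in zip(values, dist) if v >= p)
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p

values = (1.0, 2.0, 3.0)
prior = (0.5, 0.25, 0.25)        # pmf over values, in the same order
pstar = monopoly_price(prior, values)

# Two segments and their weights; the convex combination equals the prior.
segments = [((0.8, 0.1, 0.1), 0.5), ((0.2, 0.4, 0.4), 0.5)]
mix = tuple(sum(w * s[i] for s, w in segments) for i in range(3))
assert all(abs(a - b) < 1e-9 for a, b in zip(mix, prior))

# The second segment's monopoly price exceeds pstar, so this segmentation
# uses a point outside the blue region of the prior's monopoly price.
print(pstar, [monopoly_price(s, values) for s, _ in segments])
```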

(a) The two types correspond to the points and . The line joining them is the convex hull from which we can pick our segments.
(b) The solid line represents the social welfare as you move along the line from to . The dotted line corresponds to the social welfare of the corresponding convex combination of the two endpoints.
Figure 5: Example in Appendix A.3
Example 4.

In this example, there are two types, corresponding to the points and , i.e., type 1 is the uniform (equal revenue) distribution over , and type 2 is the point mass on 3. The distribution over types is still uniform, which once again gives the same prior distribution as before.

The difference now is that we can only pick points from the line joining these two points, as depicted in Figure 5(a). Figure 5(b) shows the social welfare obtained as we move along this line. The point mass on the left corresponds to , for which we assume that the seller picks a price of 1. The left segment corresponds to the blue region , and the right to the green region . Using only the blue region means using either the point mass on the left or the left segment. The dotted line in Figure 5(b) shows the social welfare obtained by taking a convex combination of the endpoints. This corresponds to the segmentation where the intermediary simply reveals the type that he observes. As can be seen from the figure, this is strictly better than restricting to points in .
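The welfare gap can be reproduced numerically. The sketch below assumes, for concreteness, that the support is {1, 2, 3} (the exact support is elided above), so the equal-revenue type has pmf (1/2, 1/6, 1/3). It evaluates the solid line (welfare at the mixture point under the seller's monopoly price) and the dotted line (the convex combination of the endpoint welfares) at a few mixing weights.

```python
# Sketch of Example 4: welfare along the segment between the equal-revenue
# type (pmf 1/2, 1/6, 1/3 over values 1, 2, 3 -- assumed support) and the
# point mass on 3. Ties are broken toward the lower price.

values = (1.0, 2.0, 3.0)
t1 = (0.5, 1/6, 1/3)      # equal-revenue distribution: p * Pr[v >= p] = 1
t2 = (0.0, 0.0, 1.0)      # point mass on 3

def welfare(dist):
    """Welfare realized when the seller posts the monopoly price."""
    best_p, best_rev = None, -1.0
    for p in values:
        rev = p * sum(q for v, q in zip(values, dist) if v >= p)
        if rev > best_rev:      # strict '>' breaks ties toward low prices
            best_p, best_rev = p, rev
    return sum(q * v for v, q in zip(values, dist) if v >= best_p)

for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    point = tuple((1 - lam) * a + lam * b for a, b in zip(t1, t2))
    on_line = welfare(point)                              # solid line
    reveal = (1 - lam) * welfare(t1) + lam * welfare(t2)  # dotted line
    print(f"lam={lam:.2f}  on_line={on_line:.3f}  reveal_type={reveal:.3f}")
```

For every interior mixing weight the dotted line lies strictly above the solid line, matching the figure: revealing the type beats any single point on the line.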

Nonetheless, we can add this as an additional constraint if so desired (at some loss in the objective). Our algorithm extends to handle this easily: just skip iterating over for prices .

A.4 Unbounded Sample Complexity in the General Case

So far we have assumed that the prior distribution is given to us as input and is common knowledge to all players. How does this happen? What if you only have samples from the distribution? How many samples do you need in order to get within an of the optimum? These questions have been studied quite intensively in the last few years under the heading of ‘sample complexity of auction design’. (See Section 5 for details on this line of work.) The optimal sample complexity of single-item auctions was resolved only recently by Guo et al. (2019). In this paper, we consider the sample complexity of market segmentation.

Unlike in auction design, here the seller sets the price to maximize revenue, but the intermediary’s objective may be something different, such as consumer surplus. For an equal revenue distribution, the samples will, with significant probability, statistically indicate that the high price is revenue-optimal. This remains true even for distributions that are “close to” equal revenue, for which the low price is strictly revenue-optimal. Higher prices correspond to lower social welfare, since the buyer is less likely to purchase the good; thus the segmentation based on the samples could have a much smaller consumer surplus than the optimal segmentation for the distribution. This is particularly problematic because, as we saw earlier, the optimal segmentation only picks segments with equal revenue distributions. (Recall that the vertices of the colored regions correspond to equal revenue distributions.) We demonstrate this via the following example.

Example 5.

Consider the distribution on where the probability of seeing is for some very small . We will make the segmentation problem trivial: there is only one type, i.e., the intermediary observes no signal. The monopoly price is 1, and consumer surplus is therefore ; this is trivially the optimal consumer surplus we can obtain via segmentation.

When drawing multiple samples from this distribution, there is a constant probability of seeing more 2s than 1s. In this case, the seller sets a monopoly price of 2, leading to a consumer surplus of 0. For any bounded function of , we can set small enough such that with samples, this happens with probability ; we therefore cannot hope to get to within of the optimum. This example shows that we need stronger assumptions on the input distributions than those used for single-item auctions. The standard assumptions there are regularity and boundedness, both of which are satisfied by the distribution in the example above.
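A quick simulation of this failure mode, with hypothetical parameters (the exact values in the example are elided above; we take values {1, 2} with Pr[2] = 1/2 − δ for a tiny δ, so the true monopoly price is 1):

```python
# Sketch of Example 5 with assumed parameters: values {1, 2} and
# Pr[2] = 1/2 - delta. The true monopoly price is 1, but with probability
# close to 1/2 the sample contains more 2s than 1s; the empirical monopoly
# price is then 2, and the resulting consumer surplus is 0.
import random

random.seed(0)
delta, n_samples, n_trials = 1e-6, 100, 10_000

def empirical_monopoly_price(samples):
    """Revenue-maximizing price on the empirical distribution;
    ties broken toward the lower price."""
    n = len(samples)
    rev1 = 1.0                                   # everyone buys at price 1
    rev2 = 2.0 * sum(1 for s in samples if s >= 2) / n
    return 1 if rev1 >= rev2 else 2

bad = 0
for _ in range(n_trials):
    samples = [2 if random.random() < 0.5 - delta else 1 for _ in range(n_samples)]
    if empirical_monopoly_price(samples) == 2:   # consumer surplus drops to 0
        bad += 1
print(f"fraction of trials with price 2: {bad / n_trials:.3f}")
```

The printed fraction stays bounded away from 0 no matter how many samples are drawn, which is the unbounded-sample-complexity phenomenon.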

In the above example, we assumed that the seller sets the monopoly price of the empirical distribution on the samples. This is not necessarily an accurate assumption. If the seller follows the literature on the sample complexity of single-item auctions, he should consider a robust or a regularized version of the empirical distribution. The seller might also have additional sources of information that allow him to get an even more accurate estimate. We make a mild assumption on the seller’s behavior: that his beliefs are at least as accurate as the intermediary’s estimate from the samples. For a formal definition and a more detailed discussion, see Section 3.

Given the discussion so far, it may be tempting to assume that the type distributions are far from the boundaries of the s. This is too strong when you consider larger value ranges: two prices, say and , may have almost identical quantiles and, thus, revenues (which means the distribution is close to the boundary between and ), but this is not a problem if they both give similar consumer surpluses. We make a milder assumption, which is a discrete version of the monotone hazard rate (MHR) assumption.4 MHR distributions have a non-decreasing hazard rate , where and are the pdf and the cdf of the distribution, respectively. Here, the naive way to generalize MHR to discrete distributions is to use the functional form. This does not seem to work; we instead require the distribution to have some of the properties that continuous MHR distributions are known to satisfy. Once again, a detailed discussion of this, with formal statements, is in Section 3.
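For reference, the naive functional-form generalization mentioned in the footnote can be written down directly. The sketch below checks whether a discrete pmf has a non-decreasing hazard rate f(v) / Pr[V ≥ v]; this is the form that, per the discussion above, does not suffice for our purposes, and the paper instead works with properties of continuous MHR distributions.

```python
# Sketch: the naive discrete analogue of the MHR condition, i.e., whether
# the hazard rate f(v) / Pr[V >= v] is non-decreasing on the support.
# This is the functional form mentioned in the footnote, not the
# property-based definition the paper actually adopts.

def is_naive_mhr(pmf):
    """pmf: {value: probability}. True iff the discrete hazard rate
    f(v) / Pr[V >= v] is non-decreasing over the support."""
    tail = 1.0                       # Pr[V >= v], updated as we sweep up
    rates = []
    for v in sorted(pmf):
        rates.append(pmf[v] / tail)
        tail -= pmf[v]
    return all(a <= b + 1e-12 for a, b in zip(rates, rates[1:]))

print(is_naive_mhr({1: 0.5, 2: 0.3, 3: 0.2}))   # rates 0.5, 0.6, 1.0
print(is_naive_mhr({1: 0.5, 2: 0.1, 3: 0.4}))   # rates 0.5, 0.2, 1.0
```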

Appendix B Missing Proofs in Section 2

Proof of Lemma 2.1 From Segmentations to Distributions. Given any segmentation , consider the set consisting of the following points. For each segment , let there be a point such that:

The probability distribution over is defined as . It is easy to verify that this is indeed a probability distribution and that it satisfies the expectation constraint (1).

From Distributions to Segmentations. Given any that satisfies (1), consider the following segmentation . Let be such that there is some bijection between and , where is uniquely mapped to . For any type , let follow a distribution given by

That the pair satisfies (1) implies that the are valid probability distributions:
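As a numerical sanity check of the two directions of this correspondence, here is a small Python sketch on a made-up two-type instance: a distribution over simplex points whose weighted average is the prior induces routing probabilities for each type, and those routing probabilities indeed form valid distributions.

```python
# Sketch of the Lemma 2.1 correspondence on a hypothetical instance:
# a distribution lam over simplex points X with sum_x lam(x) * x = prior
# induces a segmentation that routes type t to segment x with
# probability lam(x) * x[t] / prior[t].

prior = (0.5, 0.5)                      # two types, uniform prior
X = [(0.8, 0.2), (0.2, 0.8)]            # two candidate segment points
lam = (0.5, 0.5)                        # weights over X

# Expectation constraint (1): the weighted points average to the prior.
mix = tuple(sum(l * x[t] for l, x in zip(lam, X)) for t in range(2))
assert all(abs(a - b) < 1e-9 for a, b in zip(mix, prior))

# Distribution -> segmentation: routing probabilities sigma[t][x].
sigma = [[lam[j] * X[j][t] / prior[t] for j in range(len(X))]
         for t in range(len(prior))]
for row in sigma:                        # each type's routing is a distribution
    assert abs(sum(row) - 1.0) < 1e-9
print(sigma)
```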

Proof of Lemma 2.2 Consider any optimal segmentation with an arbitrary, or even unbounded, number of segments. We show how to transform it into a segmentation with the same objective value, yet having at most one segment within each region , replacing the distribution conditioned on by its expectation.

Consider a specific area . Let be the probability of realizing a segment in . Let . By the convexi