1 Introduction
We study the problem of facilitating trade between buyers and sellers that arrive online. We consider one of the simplest settings in which each trader, buyer or seller, is interested in trading a single item, and all items are identical. Each trader has a value for the item; a seller will sell to any price higher than its value and a buyer will buy for any price lower than its value. Upon encountering a trader, the online algorithm makes an irrevocable price offer, the trader reveals its value and, if the value is at the correct side of the offered price the item is traded. After buying an item from a seller, the online algorithm can store it indefinitely to sell it to later buyers. Of course, the online algorithm can only sell to a buyer if it has at least one item at the time of the encounter.
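The interaction described above can be captured by a minimal Python sketch. The function name and the `price_fn` interface are our own, for illustration only; the paper's mechanisms are special cases where `price_fn` depends on the revealed history.

```python
def run_posted_prices(sequence, price_fn, stock=0):
    """Simulate the intermediary. `sequence` is a list of ('S', value) /
    ('B', value) pairs; `price_fn` maps the history of revealed values to
    the next offered price (a hypothetical interface for illustration)."""
    history = []
    sellers_bought, buyers_sold = [], []
    for role, value in sequence:
        offer = price_fn(history)
        if role == 'S' and value < offer:
            stock += 1                      # a seller sells at any price above its value
            sellers_bought.append(value)
        elif role == 'B' and value > offer and stock > 0:
            stock -= 1                      # a buyer buys at any price below its value
            buyers_sold.append(value)
        history.append((role, value))       # the value is revealed only after the offer
    return sellers_bought, buyers_sold, stock
```

For example, with a constant price of 3 on the sequence S(1), B(5), S(2), B(0), the intermediary buys the items valued 1 and 2, sells one to the buyer valued 5, and is left holding one item.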
We consider online algorithms that offer prices based on the sequence of past values, and we assume that the online algorithm knows only the number of buyers and sellers, but not their values. The values of the sellers and buyers are selected adversarially and are randomly permuted. In that respect, the problem is a generalization of the well-known secretary problem. The secretary problem corresponds to the special case in which there are only buyers, the algorithm starts with a single item, and the objective is to maximize the total welfare, that is, to give the item to a buyer with as high a value as possible.
Extending this to both sellers and buyers creates a substantially richer setting. One of the most important differences between the two settings is that besides the objective of maximizing the total welfare, we now have the objective of maximizing the gain from trade. For both objectives, the algorithm must buy from sellers with low values and sell to buyers with high values, so that at the end the items end up in the hands of the traders, sellers or buyers, with the highest values. The welfare of a solution is defined as the total value of the buyers and sellers that hold an item. The gain from trade of a solution is the difference between the welfare at the end of the process and the welfare at the beginning. At optimality the two objectives are interchangeable: an algorithm achieves the maximum welfare if and only if it achieves the maximum gain from trade. But for approximate solutions, the two objectives are entirely different, with the gain from trade being the more demanding one.
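A tiny numerical example (the values are ours) makes the gap between the two objectives concrete: an algorithm that trades nothing can be nearly optimal for welfare and yet achieve none of the gain from trade.

```python
# One seller valued 9, one buyer valued 10: the only useful trade moves
# the item from the seller to the buyer.
sellers, buyers = [9], [10]
initial_welfare = sum(sellers)            # the item starts with the seller
opt_welfare = max(sellers[0], buyers[0])  # after the trade, the buyer holds it
opt_gft = opt_welfare - initial_welfare   # = 1

# Doing nothing is already 90%-competitive for welfare ...
no_trade_welfare = initial_welfare
assert no_trade_welfare / opt_welfare == 0.9
# ... but achieves 0% of the optimal gain from trade.
assert (no_trade_welfare - initial_welfare) / opt_gft == 0.0
```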
The Bayesian version of the problem, in which the values of the buyers and sellers are drawn from known probability distributions, has been extensively considered in the literature. Optimal mechanisms for bilateral trading, that is, the offline case of a single seller and a single buyer, were first analysed by Myerson and Satterthwaite in [17] and played a pivotal role in the development of the area. The online Bayesian case was considered in [9], where the values are drawn from a known distribution but the sequence is adversarially ordered. A generalization of our model is when the items are not identical and each buyer has a different value for each one of them, i.e., each seller has a value for its item and each buyer has a vector of values, one for every buyer-seller pair. This is also a generalization of the well-studied online maximum-matching problem
[14, 12]. One can cast the online maximum-matching problem as the version in which the sellers arrive first and have zero value for their item. The optimal online algorithm for this problem has competitive ratio $1/e$ when the objective is the welfare (which in the absence of seller values is identical to the gain from trade). Our model is incomparable to the online maximum-matching problem: it is simpler in the sense that the items are identical (a single value for each buyer instead of a vector of buyer-item values), and at the same time more complicated in that the items are not present throughout the process, but are brought to the market by sellers that have their own utility. The fact that in our model the buyer-item values are related allows for a much better competitive ratio regarding the welfare, (almost) $1$ instead of $1/e$. More importantly, our algorithm is truthful, while in contrast, no good truthful algorithm is known for the online maximum-matching problem, which remains one of the main open problems of the area. On the other hand, the introduction of sellers poses new challenges, especially with respect to the objective of the gain from trade. There are also similarities between our model and the extension of the classical secretary problem to $k$ secretaries. From an influential result by Kleinberg [13] we know that this problem has competitive ratio $1-O(\sqrt{1/k})$, which is asymptotically tight, and can be transformed into a truthful algorithm. This result depends strongly on the knowledge of $k$. In our case the equivalent measure, the number of trades, is not known from the beginning and has to be learned, with a degree of precision that is crucial, especially for the gain-from-trade objective. The fact that the gain from trade is not monotone as a function of time highlights the qualitative difference between the two models; the gain from trade temporarily decreases when the algorithm buys an item, with the risk of having a negative gain at the end.
More generally, with the mix of buyers and sellers, wrong decisions are penalized more harshly and the monotone structure of the problem is disrupted.
1.1 Our results
We consider the case when both the number of buyers and the number of sellers is $n$. For welfare we show a competitive ratio of almost one, up to lower-order terms that hide logarithmic factors.
In fact, we compare our online algorithm with two offline benchmarks: the optimal benchmark, in which all trades between buyers and sellers are possible, independently of their order of appearance, and the expected sequential optimal, in which an item can be transferred from a seller to a buyer only if the seller precedes the buyer in the order.
Our online algorithm achieves its competitive ratio even against the stronger optimal benchmark. To achieve this, it has a small sampling phase to estimate the median of the values of all traders, and then uses it as a price for the remaining traders. But if the optimal number of trades is small, such a scheme will fail to achieve a competitive ratio of almost one, because with constant probability it will not have enough items to sell to buyers with high value. To deal with this risk, the algorithm not only samples values at the beginning but additionally buys sufficiently many items from the first sellers. (Buying from the first sellers cannot be done truthfully unless the algorithm knows an upper bound on their value. But this is not necessary, since there is an alternative that has minor effects on the competitive ratio: the algorithm offers each seller the maximum value of the sellers so far. This is a truthful scheme that buys from all but a logarithmic number of sellers, in expectation.) The number of bought items balances the potential loss of welfare that results from removing items from sellers against the expected loss from not having enough items for buyers with high values. The lower-order term in the competitive ratio seems to be optimal for a scheme that fixes the price after the sampling phase, and relates to the number of samples needed to approximate the median to a good degree. It may be possible to improve this term by a more adaptive scheme, as in the case of the secretary problem [13]. Finally, it may be possible to remove the logarithmic factors from the competitive ratio, but we have opted for simplicity and completeness.
For the objective of gain from trade, we give a truthful algorithm that has a constant competitive ratio, assuming that the algorithm starts with an item. The competitive ratio is a large constant, but it drops to a small constant when the optimal number of trades is sufficiently high. The additional assumption of starting with an item is necessary, because without it, no online algorithm can achieve a bounded competitive ratio.
The main difficulty of designing an online algorithm for the gain from trade is that even a single item that is left unsold at the end has dramatic effects on the gain from trade. The online algorithm must deal with the case of many traders and large welfare, but few optimal trades and small gain from trade.
To address this problem, our algorithm, unlike the case of welfare, has a large sampling phase. It uses this phase to estimate the number of optimal trades and two prices for trading with buyers and sellers. If the expected number of optimal trades is high, the algorithm uses the two prices for trading with the remaining traders. But if the number is small, it runs the secretary algorithm with the item that it starts with.
The analysis needs strong concentration bounds on the expected number of trades to minimize the risk of having items left unsold. Our algorithm is ordinal, in the sense that it uses only the order statistics of the values, not the actual values themselves. This leaves little space for errors, and it may be possible that cardinal algorithms, which use the actual values, can do substantially better.
1.2 Related Work
The bilateral trade literature was initiated by Myerson and Satterthwaite in their seminal paper [17]. They investigated the case of a single seller-buyer pair and proved their famous impossibility result: there exists no truthful, individually rational and budget balanced mechanism that also maximizes the welfare (and consequently, the gain from trade). Subsequent research studied how well these two objectives can be approximated by relaxing these conditions. Blumrosen and Mizrahi [2] devised a constant-factor approximate, Bayesian incentive compatible mechanism for the gain from trade, assuming the buyer's valuation has a monotone hazard rate. Brustle et al. expanded in this direction in [3] for arbitrary valuations and downwards closed feasibility constraints over the allocations. In the case where there are multiple, unit demand, buyers and sellers, McAfee provided a weakly budget balanced, $(1-1/k)$-approximate mechanism for the gain from trade in [15], where $k$ is the number of trades in the optimal allocation. This was later extended to be strongly budget balanced by Segal-Halevi et al. in [18]. McAfee also proved a simple $2$-approximation to the gain from trade if the buyer's median valuation is above the seller's [16]. This was significantly improved by Colini-Baldeschi et al. in [leonardiGFT], who gave approximations in terms of the probability $r$ that the buyer's valuation for the item is higher than the seller's. Recently, Giannakopoulos et al. [9] studied an online variant of this setting where buyers and sellers are revealed sequentially by an adversary and have known prior distributions on the values of the items.
The random order model we are using has its origins in the well-known secretary problem, where items arrive in an online fashion and our goal is to maximize the probability of selecting the most valuable one, without knowing their values in advance. The matroid secretary problem was introduced by Babaioff et al. [1]. In this setting, we are allowed to select more than one item, provided our final selection satisfies matroid constraints. A variety of different matroids have been studied, with many recent results presented by Dinitz in [6]. Of particular interest to our problem are secretary problems on bipartite graphs. Here, the left-hand side vertices of the graph are fixed and the right-hand side vertices (along with their incident edges) appear online. The selected edges must form a (possibly incomplete) matching and the goal is to maximize the sum of their weights. Babaioff et al. in [1] provided a competitive algorithm for the transversal matroid with bounded left degree, which is a special case of online bipartite matching where all edges connected to the same left-hand side vertex have equal value. This was later improved by Dimitrov and Plaxton [5]. The case where all edges have unrelated weights was first considered by Korula and Pal in [14], who designed a constant-competitive algorithm, which was later improved to the optimal $1/e$ by Kesselheim et al. [12]. Another secretary variant which is close to our work is when the online algorithm selects $k$ items instead of one; here Kleinberg [13] showed an asymptotically tight algorithm with competitive ratio $1-O(\sqrt{1/k})$.
The wide range of applications of secretary models (and the related prophet inequalities) has led to the design of posted price mechanisms that are simple to describe, robust, truthful and achieve surprisingly good approximation ratios. Hajiaghayi et al. introduced prophet inequality techniques in online auctions in [11]. The $k$-choice secretary problem described above was then studied in [10], which combined with [13] yielded an asymptotically optimal, truthful mechanism. For more general auction settings, posted-price mechanisms have been used by Chawla et al. in [4] for unit demand agents and expanded by Feldman et al. in [8] for combinatorial auctions and in [7] for online budgeted settings.
2 Model and Notation
The setting of the random intermediation problem consists of two sets containing the valuations of the buyers and sellers. For convenience, we assume that all valuations are distinct. The intermediary interacts with a uniformly random permutation of the agents, presented to him one agent at a time. The intermediary has no knowledge of an agent's value before the corresponding step. We use dedicated notation for the $i$-th highest and the $i$-th lowest valued seller, respectively.
We study posted price mechanisms that, upon seeing the identity of the current agent, offer a price. This price cannot depend on the entire valuation profile, only on the values revealed so far. We buy or sell one item when a seller or buyer, respectively, accepts our price. Of course, we can only sell items if we have stock available. Formally, we track the number of items held at each time step, starting with no items for welfare and one item for the gain from trade:
The set of sellers from whom we bought items during the algorithm's execution and the set of buyers to whom we sold are random variables, depending on the random permutation. The social welfare of an online algorithm is the sum of the valuations of all agents holding items after its execution. The gain from trade (or GFT) produced by the algorithm throughout the run is the difference between the final and the starting welfare.
We are interested in the competitive ratio of our online algorithm compared to an offline algorithm. In this setting there are two different offline algorithms to compare against: the optimal offline and the sequential offline. Both know all the valuations, but the first can always achieve the maximum welfare, whereas the second operates under the same constraints as we do, namely it can only perform trades permitted by the (unknown) arrival order. We say that an algorithm is $\alpha$-competitive for welfare (or gain from trade) if for any valuation profile we have:

$\mathbb{E}[\mathrm{OPT}] \le \alpha \cdot \mathbb{E}[\mathrm{ALG}]$   (1)

for some fixed $\alpha \ge 1$.
Often we will refer to the matching between a set of buyers and a set of sellers: the sets of sellers and buyers with whom we trade (or who are matched, in the sense that the items move from sellers to buyers) in a welfare-maximising allocation, together with the optimal gain from trade. Note that this does not contain pairs: only the set on each side of the matching. Similarly, we consider the matching generated by only trading with sellers valued below and buyers valued above a given pair of prices. In a slight abuse of notation, we will use the same symbol for the size of the matching and for set operations on its sides. For convenience, we extend this notation to sequences of agents.
3 Welfare
In order to approximate the welfare, the online algorithm uses a sampling phase to find the median price, in an attempt to transfer items from agents below the median to more valuable ones above it. The two main challenges, in terms of its performance, are estimating the median with a small sample and not missing too many trades due to the online nature of the input. Before we delve into the actual algorithm, it is useful to state two probability concentration results, similar to the familiar Azuma-Hoeffding inequality, but for the setting where sampling happens without replacement, as is the case here.
Lemma 1.
Let $x_1, \dots, x_N$ be values in $[0,1]$. Consider sampling $m$ of the values uniformly at random without replacement, and let $X_i$ be the value of the $i$-th draw. For $X = \sum_{i=1}^{m} X_i$, we have for any $t > 0$:

$\Pr\left[\,|X - \mathbb{E}[X]| \ge t\,\right] \le 2\exp\left(-t^2/2m\right)$   (2)

and

$\Pr\left[\,|X - \mathbb{E}[X]| \ge t\,\right] \le 2\exp\left(-t^2 N/2m^2\right)$   (3)
Proof.
Let $Z_i = \mathbb{E}[X \mid X_1, \dots, X_i]$ be the Doob martingale of $X$, exposing the choices of the first $i$ draws. Clearly we have $|Z_{i+1} - Z_i| \le 1$, since the knowledge of one draw cannot change the expectation by more than $1$. Applying Azuma's inequality over the $m$ steps, we obtain:

$\Pr\left[\,|X - \mathbb{E}[X]| \ge t\,\right] \le 2\exp\left(-t^2/2m\right)$   (4)

Let $Y_j \in \{0,1\}$ for $j = 1, \dots, N$ indicate whether $x_j$ was chosen. Since only the chosen values contribute to $X$, we have that $X = \sum_j Y_j x_j$, and exposing one $Y_j$ changes the conditional expectation by at most $m/N$. Repeating the previous martingale construction over the $N$ indicators, we get:

$\Pr\left[\,|X - \mathbb{E}[X]| \ge t\,\right] \le 2\exp\left(-t^2 N/2m^2\right)$   (5)
Note that this result is not superfluous: by immediately applying Hoeffding's inequality for sampling with replacement, we would obtain a bound of the form $2\exp(-2t^2/m)$, which is only tight if $N$ is large compared to $m$. The concentration should intuitively improve when $m$ is a large fraction of $N$ as well: imagine the extreme case $m = N$, where $X$ is deterministic.
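This effect is easy to see empirically. The following experiment (population, sample size and trial count are our own arbitrary choices) compares the spread of the sampled sum with and without replacement when the sample is a large fraction of the population:

```python
import random
import statistics

def sample_sum(values, m, replace):
    # Sum of m draws from `values`, with or without replacement.
    if replace:
        return sum(random.choices(values, k=m))
    return sum(random.sample(values, m))

random.seed(0)
N, m = 1000, 900                        # the sample is 90% of the population
values = [i % 2 for i in range(N)]      # 500 zeros and 500 ones
without = [sample_sum(values, m, False) for _ in range(2000)]
with_r = [sample_sum(values, m, True) for _ in range(2000)]
# Sampling without replacement concentrates far more sharply around m/2.
assert statistics.stdev(without) < statistics.stdev(with_r)
```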
Similarly, we often encounter a situation where we are interested in the number of trades between sellers and buyers arriving in a uniformly random permutation. Assuming we buy from all sellers, occasionally we would encounter a buyer without having any items at hand. The following result shows that, even so, few trades are lost.
Lemma 2.
The expected number of trades in a uniformly random sequence containing $n$ buyers and $n$ sellers, assuming all sellers are valued below all buyers, is at least:

$n - O(\sqrt{n})$   (7)
Proof of Lemma 2.
Since we buy from all sellers and attempt to sell to all buyers, let $X_t$ be $+1$ if at step $t$ a seller is encountered and $-1$ if it is a buyer, and consider the partial sums $V_t = \sum_{i \le t} X_i$. Basically, $V_t$ keeps track of the unsold items: sellers pull it away from $0$ and buyers towards $0$. Since the sequence is a random permutation, $V_t$ itself is not a martingale, but it can be coupled with a martingale with bounded differences, and inductively the two processes stay close. Therefore, the number of unsold items at time $t$ is at most $\max_{i \le t} V_i$. By a simple case analysis, the martingale differences are bounded by a constant. Thus, by Azuma's inequality we have:

$\Pr\left[\max_t V_t \ge \lambda \sqrt{n}\right] \le e^{-\Omega(\lambda^2)}$   (8)

Since $n$ items are bought, we have:

$\mathbb{E}[\text{trades}] \ge n - \mathbb{E}\left[\max_t V_t\right] \ge n - O(\sqrt{n})$   (9)
∎
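A quick simulation illustrates Lemma 2: buying from every seller and selling greedily whenever stock is available, in a random order of $n$ sellers and $n$ buyers, loses only on the order of $\sqrt{n}$ trades. The constants below are our own choices.

```python
import random

def trades_in_random_order(n, rng):
    """Buy from every seller; sell to a buyer whenever stock is available."""
    seq = ['S'] * n + ['B'] * n
    rng.shuffle(seq)
    stock = trades = 0
    for agent in seq:
        if agent == 'S':
            stock += 1
        elif stock > 0:
            stock -= 1
            trades += 1
    return trades

rng = random.Random(0)
n = 10000
avg = sum(trades_in_random_order(n, rng) for _ in range(50)) / 50
assert n - 10 * n ** 0.5 < avg <= n     # only O(sqrt(n)) trades are lost
```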
All the machinery is now in place to analyse sequential algorithms in this setting. We first show a key property of the offline algorithm.
Proposition 1.
The optimal offline algorithm sets a price equal to the median of all the agents' valuations and trades items from sellers valued below it to buyers valued above it.
Proof.
Since there are only $n$ items available, if we could freely redistribute them we would give them to the $n$ agents with the highest valuations. Let $v$ be the value of the $n$-th most valuable agent. If there are $k$ buyers valued above $v$, then there are $n-k$ sellers among the top $n$ agents, and hence $k$ sellers valued below $v$. Thus, buying from all sellers below $v$ and selling to all buyers above it is an optimal algorithm. ∎
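Proposition 1 can be checked on a small instance; `optimal_offline` is a hypothetical helper written only for this illustration.

```python
def optimal_offline(sellers, buyers):
    """Welfare when the n items end up with the n highest-valued agents."""
    n = len(sellers)
    return sum(sorted(sellers + buyers, reverse=True)[:n])

sellers, buyers = [1, 4, 8], [2, 6, 9]
# The top 3 of {1,2,4,6,8,9} are {9,8,6}: welfare 23. Trading at the median
# (between 4 and 6) moves the items of sellers 1 and 4 to the buyers valued
# 6 and 9, achieving exactly this.
assert optimal_offline(sellers, buyers) == 23
```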
However, the optimal sequential offline algorithm would not just trade at this price. For instance, if there is one buyer and many sellers above the median, and one seller and many buyers below it, trading at this price would transfer the item only with probability $1/2$, since only one transfer increases the welfare and the two agents have to appear in the right order. Therefore, if that buyer has a much larger valuation than anyone else, this algorithm would be only roughly $2$-competitive. However, we can modify this approach with a bias towards buying more items than needed, in order to maximise the probability of serving high valued buyers.
Lemma 3.
The optimal sequential online algorithm is competitive against the optimal offline for welfare.
Proof.
The optimal online algorithm adjusts the price according to the following two cases, depending on the number of optimal trades.

Case 1: the same median price is used. At the end, the online algorithm will still keep the highest valued sellers and, by Lemma 2, will match almost all of the top buyers in expectation. The offline optimum will of course keep the highest sellers and buyers, leading to a competitive ratio of at most:

Case 2: suppose two prices are used, one to buy from the lowest sellers and one to sell to the highest buyers. For the buyers, the online algorithm does at least as well as in the previous case. In particular, by Lemma 2 it obtains a uniformly random sample amongst the top buyers with high probability. Since the buyers matched by the optimal offline are contained within the highest buyers, the ratio from the buyers' side remains the same as before.
From the sellers' side, the online algorithm keeps the highest valued sellers, while the offline keeps at most as many, for a bounded ratio.
Combining both cases, the ratio is asymptotically at most:
(10) 
Note that the threshold chosen to separate the two cases is optimal. ∎
The next step is to design an online algorithm without knowing the valuations or the optimal number of trades beforehand. The algorithm is as follows:

Record the first agents and calculate their median value. Buy from all sellers during this sampling phase.

After the sampling, the trading phase starts:

Buy from a seller if its value is below the sample median.

Sell to a buyer if an item is available and its value is above the sample median.
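The algorithm above can be sketched in a few lines of Python. This is a simplification: `sample_len` stands in for the sampling-phase length parameter, and the truthfulness caveat about buying during the sampling phase is ignored.

```python
import statistics

def welfare_algorithm(sequence, sample_len):
    """Observe the first `sample_len` agents, buying from every seller
    seen; then trade at the sample median."""
    sample = [value for _, value in sequence[:sample_len]]
    stock = sum(1 for role, _ in sequence[:sample_len] if role == 'S')
    p = statistics.median(sample)
    sold = []
    for role, value in sequence[sample_len:]:
        if role == 'S' and value < p:
            stock += 1
        elif role == 'B' and value > p and stock > 0:
            stock -= 1
            sold.append(value)
    return sold

seq = [('S', 1), ('B', 2), ('S', 3), ('B', 10), ('S', 4), ('B', 9)]
assert welfare_algorithm(seq, 3) == [10, 9]   # sample median 2; both high buyers served
```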

For the analysis of this algorithm, we first need a concentration result on the sample median.
Lemma 4.
Select $m$ elements uniformly at random without replacement from a set of $N$ values. Then, with high probability, the rank of their sample median among all $N$ values deviates from $N/2$ only by a lower-order term:

(11)
Proof.
We have that the sample median overestimates the true median by more than a given rank only if more than half of the draws land above the corresponding value. Since we are sampling without replacement, this is equivalent to selecting $m$ elements uniformly at random from a population of 0's and 1's and having their sum be greater than $m/2$. Applying Lemma 1 to this indicator population yields the bound. By symmetry, the same holds for underestimating the median: just reverse the ordering of the agents. ∎
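An empirical sanity check of Lemma 4; the population size, sample size and trial count below are arbitrary choices of ours.

```python
import random
import statistics

random.seed(1)
N, m = 10001, 1001
values = list(range(N))                  # the true median is 5000
deviations = [abs(statistics.median(random.sample(values, m)) - 5000)
              for _ in range(300)]
# The sample median's rank stays within a lower-order band of the true one.
assert max(deviations) < N / 4
```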
This shows that our sample median might have at most a small number of extra agents on one side compared to the true median. However, this loss is asymptotically negligible, as these excess agents are a uniformly random subset. We now show that buying from sellers during the sampling phase, before considering any buyers, can only increase the number of trades in the next phase.
Lemma 5.
Let a sequence containing buyers and sellers be given, and move an arbitrary seller to the beginning of the sequence. Then the number of trades in the new sequence is at least the number of trades in the original.
Proof.
Let $b$ be the first buyer not to receive an item in the original sequence. Clearly, if no such buyer exists, then the number of items sold in both cases is the same. Assume we sell the item bought from the moved seller $s$ only if it is the last item left. Then it is sold to $b$: otherwise $b$ would not be the first buyer not to be sold an item in the original sequence. There are two cases:

If $s$ appears in the original sequence before $b$: both sequences continue identically, as we have no items in stock after $b$.

If $s$ appears after $b$ in the original sequence: there is one fewer seller after $b$, since $s$ was moved to the front. However, this can result in at most one lost sale, which is compensated by the extra sale to $b$.
∎
Actually, we have shown that moving sellers to the beginning can only increase trades, which is slightly more powerful. We are now ready to state one of the main results of this section.
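The claim of Lemma 5 is easy to stress-test by simulation; `count_trades` implements the greedy buy-all/sell-when-stocked policy used throughout this section.

```python
import random

def count_trades(seq):
    """Greedy policy: buy from every seller, sell whenever stock is available."""
    stock = trades = 0
    for role in seq:
        if role == 'S':
            stock += 1
        elif stock > 0:
            stock -= 1
            trades += 1
    return trades

rng = random.Random(2)
for _ in range(200):
    seq = ['S'] * 20 + ['B'] * 20
    rng.shuffle(seq)
    i = rng.choice([j for j, r in enumerate(seq) if r == 'S'])
    moved = [seq[i]] + seq[:i] + seq[i + 1:]   # move one seller to the front
    assert count_trades(moved) >= count_trades(seq)
```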
Theorem 1.
This algorithm is competitive for welfare.
Proof.
As before, consider the size of the optimal offline matching. The following analysis assumes that the bad event of Lemma 4 did not occur, so the sample median and the true median split the agents into two sets differing by at most a lower-order number of agents. Given this, we analyse the algorithm in three steps. First, we show that we never buy too many items from highly valued sellers, and therefore keep most of the sellers' contribution to the final welfare. Then we show that we always match a high proportion of the valuable buyers by considering two cases: if there are few such buyers, then they are matched to the items we obtained during the sampling phase; otherwise we have enough sellers below the median to match them to.
We introduce some notation useful for the analysis: consider the set containing the top highest valued agents, the number of sellers and buyers respectively within it, and how many of them appeared after the sampling phase. To show the competitiveness of our algorithm, it suffices to find the fraction of the value of this set that is achieved at the end of the sequence: being competitive against the top agents implies a corresponding ratio against all agents above the median, and therefore against the optimal offline.
We first show that we never lose too much welfare by buying from sellers, both in the sampling and trading phase.
Given the conditioning, the only occasion on which a highly valued seller's item is bought is if he is amongst the first sellers. This event is clearly independent of the condition on the median, meaning in expectation we keep
(12) 
highly valued sellers. Therefore, enough of the sellers' original value is kept. The rest of the analysis will focus only on the proportion of valuable buyers who get an item. For the number of items bought during the sampling phase, the following holds by Lemma 1:
(13) 
since half of the agents are sellers and we sample a fraction of them. Therefore, we enter the trading phase with an excess of at least
items with high probability.
To analyse the number of buyers in matched, we consider two cases.
Case 1:
In this case there are few valuable buyers, and all we need to show is that the excess of items bought during sampling is enough to trade with most of them. We first need to count how many of them appear after the sampling phase, which is slightly more complicated since we have conditioned on approximating the median. Given the conditioning, a fixed number of agents were above the median value during the sampling phase. Note that all of the top agents are above the median, so any of the agents in the upper half of the sampling phase could be replaced by a top buyer. In the worst case, as many top buyers as possible fall in the sampling phase, which means that in a random permutation we have:
We might also consider up to a few extra buyers, if the sample median underestimated the true one. However, given the excess of items bought with high probability, every such buyer will be matched with an item, giving the claimed competitive ratio for this case.
Case 2:
Consider the number of trades the optimal offline algorithm would perform. Since the median might be underestimated, the number of sellers we consider is slightly larger and the number of buyers slightly smaller than in the offline matching. We show that, with the help of the extra items we bought during sampling, we have more items than buyers in total, with high probability. Consider the number of sellers and buyers below and above the price after the sampling phase. By Lemma 1 we expect to find
(14)  
(15) 
by sampling from the population, where the important elements are the sellers below the price. Similarly, we have
(16) 
It is important to note that these quantities are almost equal, up to a factor which is asymptotically insignificant. Then, with high probability (well, relatively high):
(17)  
(18)  
(19) 
given the concentration above. Since we bought additional items during the sampling phase, the total number of items bought is higher than the total number of buyers considered, for large enough markets. Also, by Lemma 5, having these items ready before encountering buyers is beneficial.
Therefore, using Lemma 2, we get a lower bound on the number of valuable buyers that actually acquire an item. The number of items sold in expectation is at least:
(20) 
However, we are interested only in the fraction of valuable buyers who acquired an item. The algorithm does not differentiate between buyers above the price, the sequence is uniformly random, and all such buyers are contained within the top buyers. Lower bounding their number with Lemma 1:
(21) 
and using (20), the fraction of valuable buyers matched is at least:
(22) 
with probability that is asymptotically high as the market grows. ∎
4 Gain from Trade
Compared to the welfare, the gain from trade is a more challenging objective. The main reason is that even in large markets, the actual trades that maximise the GFT can be very few and quite well hidden. Moreover, buying from a single seller and being unable to sell could completely shatter the GFT, while having very little effect on the welfare.
First of all, the setting has to be slightly changed. We give the online algorithm one extra, free item at the beginning to ensure that at least one buyer can acquire an item, even when the initial sampling has been inconclusive. For fairness, the offline algorithm is also provided with this starting item. We show that this modification is absolutely necessary to study this setting under competitive analysis.
Theorem 2.
Starting with no items, there exist instances for which the competitive ratio for the GFT is arbitrarily high.
Proof.
Consider two instances. In the first, a single seller has a low value and a single buyer a high value, so that their trade generates positive gain. In the second, we tweak the value of the buyer so that the trade from instance one no longer increases welfare, but add one extra, lower valued seller to keep the optimal GFT positive.
Let $q$ be the probability of the online algorithm buying from the first instance's seller, conditioned on the buyer arriving later. This must be positive, otherwise its expected gain from trade will be 0, compared to the positive gain generated by the offline algorithm.
However, in the second instance the algorithm should buy from the extra seller instead. But if the original seller appears first, the algorithm buys from him with the same probability $q$, as the information received so far is the same in both instances. So the online algorithm has a positive chance of buying the item from the wrong seller. Even assuming maximum gain from trade is extracted in all other cases, since $q$ does not depend on the values, we can scale the instance so that the online algorithm's expected GFT becomes negligible, whereas the offline GFT stays positive.
In any case, no online algorithm can perform well in both instances. ∎
To avoid the previous pitfall, we assume the intermediary starts with one
item. Roughly, the algorithm starts by estimating the total volume of
trades in an optimal matching by observing the first segment of the
sequence. Using this information, two prices are
computed, to be offered to agents in the second part. This being an ordinal
mechanism, the goal is to maximise the number of trades and leave
no item unsold. During the trading phase we are also much more
conservative:
at most one item is kept in stock and we stop buying items well before the end
of the sequence, to make sure that there are enough buyers left to sell
everything.
The online algorithm
contains parameters
whose values will be specified later.
The idea is to use the first part of the sequence to estimate the optimal matching. If a large (in terms of pairs) GFT-maximising matching is observed, it is likely that a proportionate fraction of it will be contained in the second half. In that case, sellers and buyers are matched in non-overlapping pairs to avoid buying too many items. However, if the observed matching is too small, then the algorithm defaults to selling only the starting item, as it is very likely that the remainder of the sequence will not contain enough buyers for anything more.
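The two-phase scheme can be sketched as follows. This is a simplification: `sample_frac` and `min_trades` are hypothetical stand-ins for the algorithm's tuning parameters, and the rule of stopping purchases well before the end of the sequence is omitted.

```python
def gft_algorithm(sequence, sample_frac, min_trades):
    """Sketch: estimate the matching on a long sample, then either trade
    at two prices or fall back to a secretary-style single sale."""
    cut = int(len(sequence) * sample_frac)
    sample, rest = sequence[:cut], sequence[cut:]
    s_vals = sorted(v for r, v in sample if r == 'S')
    b_vals = sorted((v for r, v in sample if r == 'B'), reverse=True)
    # size of the GFT-maximising matching observed in the sample
    k = sum(1 for s, b in zip(s_vals, b_vals) if s < b)
    if k < min_trades:
        # default: secretary-style sale of the single starting item
        best = max((v for r, v in sample if r == 'B'), default=0)
        for role, value in rest:
            if role == 'B' and value > best:
                return [value]
        return []
    p_sell, p_buy = b_vals[k - 1], s_vals[k - 1]   # two threshold prices
    stock, sold = 1, []                            # one free starting item
    for role, value in rest:
        if role == 'S' and value < p_buy and stock < 1:
            stock += 1                             # keep at most one item in stock
        elif role == 'B' and value > p_sell and stock > 0:
            stock -= 1
            sold.append(value)
    return sold

seq = [('S', 1), ('B', 10), ('S', 2), ('B', 9),
       ('B', 12), ('S', 0), ('B', 11), ('S', 1.5)]
assert gft_algorithm(seq, 0.5, 1) == [12, 11]
```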
Before moving on to the analysis of the algorithm, we need a simple lemma on the structure of GFT maximising matchings, to explain the prices set.
Lemma 6.
For any set of buyers and sellers:

1. The GFT-maximising matching can be obtained by setting two threshold prices and trading with buyers above and sellers below them.

2. Choosing the thresholds at the marginal matched agents (the lowest matched buyer and the highest matched seller) yields the maximum number of trades.

3. Every seller in the matching is valued below every buyer in it.
Proof.
For Property 1, assume a matched pair in which the seller is valued above the buyer; removing it increases the total gain, thus any such pair of matched agents cannot be part of the optimal matching. Moreover, if a matched seller is valued above an unmatched one (or a matched buyer below an unmatched one), swapping them can only increase the gain. Setting the thresholds to separate matched from unmatched agents, the result follows. This is essentially the same observation as using the median price to trade, but with two different prices for robustness, as we will see later.
Property 2 follows because the matching contains the highest value pairs for any number of trades. Property 3 is straightforward. ∎
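Lemma 6 suggests a simple greedy computation of the GFT-maximising matching: sort sellers ascending and buyers descending and pair them while the seller is cheaper. The function name below is ours, for illustration.

```python
def gft_max_matching(sellers, buyers):
    """Greedy GFT-maximising matching per Lemma 6: the k-th lowest seller
    and k-th highest buyer act as the two threshold prices."""
    s = sorted(sellers)
    b = sorted(buyers, reverse=True)
    pairs = [(si, bi) for si, bi in zip(s, b) if si < bi]
    gft = sum(bi - si for si, bi in pairs)
    return pairs, gft

pairs, gft = gft_max_matching([1, 5, 8], [2, 6, 9])
# Pairs (1,9) and (5,6): gain 8 + 1 = 9; matching seller 8 with buyer 2
# would decrease welfare and is excluded.
assert gft == 9 and len(pairs) == 2
```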
Theorem 3.
The algorithm above is constant-competitive for the gain from trade.
Proof.
We bound the gain from trade for the case where both parts of the sequence contain their analogous proportion of the optimal matching, and show that the losses are insignificant otherwise. In particular, consider the well-mixed probability: the probability that an approximate chunk of the matching appears in each of the two parts. The two events are not independent. To bound this probability, it suffices to study the distribution of the sets of sellers and buyers comprising the optimal matching. By Lemma 6, we know that any seller in the matching can be matched to any buyer in it. Since we only care about the size of the matching in each part, not its actual value, we can rewrite the event in terms of the number of matched agents appearing in each part:

(24)
(25)

which is easier to handle.
It is useful to think of the input as being created in two steps: first the number of matched agents in each part is chosen, and afterwards their exact values are randomly assigned. As such, a lower bound on the ratio of the size of the online matching to the offline one provides the same bound on the gain from trade. We begin by bounding the well-mixed probability.
Lemma 7.
The probability that the matching is well-mixed is bounded below by a constant.
Proof.
Let the two prices achieving the optimal matching be as given by Lemma 6. We need to show that the prices computed from the sample achieve a constant approximation of the optimal matching. Since the matching is well mixed, by using Lemma 6 we have that:
(28) 
where the second inequality holds since the optimal matching maximises the gain from trade, and the third because at least a constant fraction of it appeared in the first part. In particular, the part of the matching observed in the sample is its highest value part, leading to:
(29) 
by Eq. 28. Therefore, the prices computed find a relatively large subset of the optimal matching. We now need to find just how many of these trades are achieved by our algorithm, which requires a high probability guarantee on the number of sellers and buyers in the second part that satisfy the two prices.
Lemma 8.
Assuming the matching is well mixed:
Proof.
In the well-mixed case, we have that
(30) 
which leads to
(31) 
To get a lower bound on the size, we have:
(32)  
(33)  
(34) 
where Eq. 32 follows from Eq. 30. Eq. 33 follows since, if the buying price is greater than the median seller, then in the worst case all sampled sellers are below it, which still leaves plenty of sellers in the second part. Eq. 34 follows since the draws are not actually independent, but this works in the inequality's favour: from Eq. 29 we know the price is greater than at least a constant fraction of the sellers, and since the 'bad case' is choosing all sellers below the median, this happens with higher probability if each draw is with rather than without replacement, leading to the result. ∎
Clearly, Lemma 8 holds for buyers as well. The
proof is almost identical, keeping in mind that buyers are ordered the
opposite way.
At this point we have a clear indication of how many sellers and buyers the prices cover in the second part of the sequence. Since this is an ordinal mechanism, we want to maximise the number of trades provided no item is left unsold: there are no a priori guarantees on the welfare increase of each trade, and even a single unsold item ruins our gain from trade guarantees in the worst case.
Lemma 9.
With suitable choices of the parameters, the probability that no item is left unsold is close to one. Moreover, the expected number of trades in this case is at least:
(35) 
Proof.
We begin by calculating the probability of having an unsold item, which is easy: it is at most the probability of not encountering a buyer within the last stretch of agents. Using an argument similar to Lemma 8, this probability is suitably small.
We now need to calculate the expected number of trades. Let an indicator variable mark whether an item was sold to the $i$-th agent. We have:

since the event that the previous transaction was buying from a seller occurs with probability proportional to the fraction of sellers in