1 Introduction
Online labor market platforms (e.g., Upwork, Freelancer, and Amazon Mechanical Turk for online work; and Thumbtack and TaskRabbit for offline work) provide the ability for clients to hire freelancers through the platform on a taskbytask basis. These platforms use data collected from past match outcomes to learn about freelancers, and improve matching in the future. These platforms are primarily onetoone matching platforms, i.e., clients hire one freelancer at a time. However, for many tasks, clients need more than one freelancer; in such cases, they are left to form a team on their own – platforms do not generally provide algorithmic team matching services. As organization structures such as “flash teams” and “flash organizations” begin to be enabled by online platforms [RRT14, VRT17], it will be increasingly incumbent on the platform to ensure it can optimally match teams of available freelancers to jobs at hand.
Existing research in the area of workforce utilization has focused on designing effective roles for workers and organization structures that streamline collaboration between individuals with different types of skills (“horizontal” differentiation) [TGW05, ED89]. The problem of matching teams of workers to available jobs in the face of quality uncertainty (“vertical” differentiation), however, has largely remained unaddressed. In this paper, we bring this problem into focus.
Without any assumptions on how the composite performance of a team depends on the quality of its constituent individuals, the problem of learning the optimal composition of teams is a combinatorial search problem over all possible team combinations, and is intractable in many situations [RMT17]. In practice, it is natural to assume that the performance of a team depends in some form on the individual qualities of workers that constitute the team [NWC99]. It is also natural to assume that the performance of a team can be evaluated based on the quantitative feedback received from the employer on completion of the assigned task, as is commonly seen in many online platforms.
Such quantitative evaluation of the performance of a team reveals some information about the inherent qualities of constituent workers, and thereby the performance of other potential teams that contain some of these individuals. These learning spillovers depend on how the characteristics of workers combine to determine the performance of the team, and they can be effectively utilized to reduce the number of attempts required to obtain a good partitioning of the workers. However, optimally leveraging these spillovers requires the platform to solve the challenging problem of designing an intelligent and adaptive sequence of matches that learns at the expense of minimal loss in performance.
In this paper, we develop a stylized model to investigate this challenge; in the model we consider, the platform only observes a single aggregate outcome of each team’s performance on a job. Thus identifying the quality of each worker requires observing their performance in distinct teams across multiple jobs. Our model, despite its simplicity, shows surprisingly intricate structure, with the emergence of novel explorationexploitation tradeoffs that shed light on the qualitative features of optimal matching policies in these settings.
The model we consider consists of a large number of workers. Each worker is of one of two types: “high” (labeled “1”) or “low” (labeled “0”); we assume each worker is independently 1 with probability
. We consider a model in discrete time, where at each time workers are matched into pairs (i.e., teams of size two) to complete jobs. We consider two models for payoffs: the Weakest Link model, where the payoff of a team is the minimum of their types; and the Strongest Link model, where the payoff of a team is the maximum of their types. The platform is able to observe the payoff from each pair, but not the types of individual workers; these must be inferred from the sequence of team outcomes obtained. The goal of the platform is to maximize payoffs, so it must use the team matching to learn about workers while minimizing loss in payoff, or regret.The Weakest Link model is most natural when every worker’s output is essential to completing the task successfully. As an example, suppose two workers are hired to complete a web development job: one may need to complete front end development, and another back end development. If either fails, the job itself is a failure. Our model assumes that assessment of the task is based on the final outcome (i.e., whether the site is functional), without attribution to the individual workers. Another example is house cleaning: if any cleaner performs her set of cleaning tasks poorly, the entire cleaning job may receive a poor rating. The Strongest Link model is most natural when a strong worker can cover for shortcomings of her partner. This is a more natural representation of tasks that have planning and execution components, where high expertise is needed for effective planning, although a single high quality worker suffices for this, and the execution is less sensitive to skill.
The goal of the platform is to adaptively match the workforce into pairs at each stage so as to maximize the expected longrun payoff. In contrast to many online learning problems in which it is not possible to stop incurring regret after any finite time, in this problem any good policy will, in finite time, gather enough information to be able to make optimal matches (for example, one way to do this is to make all the possible matches in an initial phase, then choose an optimal partition going forward). We therefore measure performance by computing cumulative regret against a policy that knows all the worker types to begin with, until such a time when no additional regret is incurred thereafter. Our main contributions involve an analysis of this regret in the Weakest Link and Strongest Link models. We now summarize our main contributions.
Weakest Link model. For the Weakest Link model, informally, the goal is to quickly identify all the 1 workers so that they can be matched to each other, while minimizing the number of matches in the process. The only way one can discover a 1 worker is if she gets matched to another 1 worker, resulting in a payoff of . On the other hand, the only way to discover a 0 worker is if she gets matched to a worker who is known to be of a 1, or is later discovered to be a 1. Thus matching a worker with unknown quality to a 1 worker, either known or unknown, is critical to learning. This brings us to the two central questions. First, suppose that the platform has discovered a certain number of 1 workers. Should these workers be matched amongst themselves to generate high feedback (“exploit”) or should these workers be matched to workers whose qualities are unknown so as to speedup learning (“explore”)? Second, even if we want to only “exploit” and not “explore" with known 1 workers, i.e., match the known 1 workers amongst themselves, what is the best way to adaptively match the unknown pool of workers amongst themselves?
It turns out that the answers to these questions depends on the expected proportion of high quality workers in the population . As , i.e., when one expects there to be an abundance of 1 workers in the population, we construct a policy for matching unknown workers that we call Exponential cliques, that is asymptotically optimal (in ), without requiring the discovered 1 workers to explore. That is, the 1 workers can be simply matched amongst themselves after identification. This is a bit counterintuitive because in this regime, the number of 1 workers that get identified in the first stage is high enough to be able to use these workers to learn the quality of all the remaining workers in the second stage. One incurs no regret thereafter. Although this type of an approach appears to be tempting from an operational perspective – it is certainly fastest in terms of learning – we nevertheless show that it is strictly suboptimal. In stark contrast, for low enough, we show that any optimal matching policy must match discovered 1 workers with workers of unknown quality.
Strongest Link model. For the Strongest Link model, informally, the goal is to match 1 workers with 0 workers to the extent possible. If the number of workers is large and , then some matches are inevitable, and thus one wants to minimize the number of matches. On the other hand, if , then some matches are inevitable, and thus one wants to minimize the number of matches. In either case, 0 workers get discovered when they are matched to other 0 workers, whereas 1 workers are identified by either being matched to a known 0 worker, or to an unknown 0 worker who later gets identified as being a 0. Thus, matching a worker with unknown quality to a 0 worker, either known or unknown, is critical to learning. But in contrast to the Weakest Link model, the question of what is to be done with the discovered 0 workers is less uncertain. Though it may seem natural at first glance to not match known 0 workers to anyone but each other, it turns out that it is strictly better to utilize these knownquality workers to explore the unknown worker set. The central question then becomes: how does one optimally explore using these known 0 workers? Restricting ourselves to a natural class of exploration policies, we uncover a sharp transition in the structure of the optimal exploration policy at .
Organization of the paper. The remainder of the paper is organized as follows. We discuss relevant literature in Section 2. We introduce the model, the problem formulation and certain reductions of the problem in Section 3. In Sections 4 and 5, we focus on the Weakest Link and the Strongest Link models, respectively. We conclude with a discussion of our results in 6. The proofs of all of our results are deferred to the appendix.
2 Related Work
In this paper, we focus on optimizing platform performance while learning to form optimal teams, under the presence of uncertainty about worker qualifications and ultimately each worker’s contribution to a job. There has been some work on the problem of optimal team formation, not emphasizing the aspects of learning or optimization, but the incentivization of workers by the platform to participate or exert effort [BFN06, CE10]. This line of work uses game theoretic models to analyze optimal aggregate performance when the platform can see only the project outcome and not the individual contributions. The platform can offer prices or contracts to strategic agents to motivate them to work in teams.
Kleinberg and Raghu [KR15] propose a method of learning which individuals, among a pool of candidates, will together form the best team. This method shows that even when a project depends on interactions between the members and the performance is a complex function of the particular subset chosen, in certain cases nearoptimal teams can be formed by looking at individual performance scores. This work assumes the ability to perform many tests upfront, while we focus on the case of online optimization.
Our work on simultaneous learning and optimization of labor platforms has clear ties to MultiArmed Bandit (MAB) problems [BCB12] and indeed Johari et al. [JKK16] use MABs to match single workers (with quality uncertainty) to known job types (without uncertainty), though this work does not deal with the complex interdependency in the worker population when teams of workers are matched. Our work has closer ties to combinatorial bandits [CBL12] and semibandits [KWAS15]. In combinatorial bandits, there are finite possible basic actions each giving a stochastic reward and at each time one chooses a subset of these basic actions and receives the sum of the rewards for the subset chosen. To apply this framework to our problem, we can model each team as a basic action and at each time step we choose a subset that corresponds to a partition of the workers into teams. Existing regret bounds either assume independence between basic actions [CSP15], which our setting violates, or allow for correlations [KWAS15] but do not fully leverage the additional knowledge gained from how these basic actions are related, e.g. how the performance of two teams are related if they share team members. In this way, the teamwork setting we consider induces additional structure that can be leveraged to minimize regret.
Another line of related work within the MAB literature is [SJR16], which considers the standard multiarmed bandit setting with arms, but here the decision at each step is to choose out of these arms, and only the highest reward out of these arms is revealed. The goal is to learn the best subset containing arms. Although there are similarities with our “Strongest Link” payoff structure, this setup models the problem of determining the best team for a single task at each time step and does not model a labor platform which must fill numerous jobs simultaneously.
3 The Model
We now describe our model formally.

Workers and Types: Suppose that we have a single job type and we have workers denoted by the set . We assume that is even. Each job requires two workers to complete. Each worker has type . We assume that the are i.i.d. random variables. For convenience, we denote . The type of the worker represents the skill of the worker at the given job. We sometimes refer to workers of type as high quality workers and those of type as low quality workers. Let be the number of 1 workers in the population, which is distributed as . We assume that the platform knows , but the specific type of each worker is unknown. We are interested in the regime where the number of workers is large, i.e., we allow to scale to infinity while the expected proportion of 1 workers in the worker pool, , remains constant.

Decisions and System Dynamics: Matching decisions are made at times . The workers enter the system at time and the platform has no information about them except for the prior . Workers stay in the system indefinitely. At each time, the platform creates a partition of the worker pool into pairs of size 2. Let denote the set of all possible pairings. Let denote the pairings of the workers at time . Each team is assigned to a distinct job (jobs are assumed to be abundant) and receives a payoff at the end of the period, which is perfectly observed by the platform.

Feedback model: We consider two different feedback models. In each of the two models, the score that a pair receives upon completion is a deterministic function of the worker types:

Weakest Link: The score received by a pair is ;

Strongest Link: The score received by a pair is .
Let denote the pairings and the payoffs observed for each pairing at time .


Policies: In any period, the platform has access to the history of all previous pairings made and scores observed. A policy for the platform is a sequence of mappings, indexed by time, from the history (where the initial history is defined to be ) to the next action . We denote the set of all policies for the setting with workers by . The total payoff generated for the platform at time step is .

Objective: Let be a type assignment across workers and let be the set of all such type assignments. The distribution of the type of each worker induces a distribution on the type assignments in . All the expectations in the remainder of the paper are defined with respect to this distribution. The performance of a policy under type uncertainty can be compared to the performance of the optimal matching policy when the identities of the high quality workers are known. For a fixed policy and a type assignment , we define the period perworker regret as,
Note that this is a deterministic quantity given the type assignments . Since the workers are expected to stay in the platform indefinitely, the platform will eventually learn every worker’s type and not incur any additional regret thereafter. For instance, this can be achieved in stages by matching every worker to every other worker. To see this, simply enumerate all pairs, observe that pairs can be covered in a single stage; thus everyone will be matched to everyone else in stages.
Define to be the (random) time taken till no additional regret is incurred under a given policy . The platform then seeks to minimize the expected regret until . Defining
where the expectation is over the randomness in , the platform seeks to solve the optimization problem:
In fact, we will show that there exist good policies such that time steps.
3.1 A sufficient statistic for the state: A collection of graphs
In general, a matching policy could depend on the history of matches and rewards, but this state space quickly becomes intractable. We need a sufficient statistic that only preserves information relevant to the decision problem. At first glance we might believe that it is sufficient to simply preserve the marginal posterior distribution of each worker’s type. Further consideration would reveal that this is not so and that we need the joint posterior distribution of the types of all workers, which is a complex object. We nevertheless make the following observations that considerably simplify the analysis: 1) Once a worker’s type has been identified, one does not need to preserve information about the worker’s previous matchings and 2) The joint posterior distribution does not depend on the order in which matches were made.
These two observations allow us to represent the state space as a collection of graphs. We describe this space in the Weakest Link model. Let us define the unknown worker graph as follows:


if and .
The vertex set represents the set of unknown workers at time and an edge exists between two vertices if the two workers have previously been matched and received a reward of 0. A similar description can be done in the Strongest Link model also, except that an edge exists between two vertices if the two workers have previously been matched and received a reward of 1.
In either setting, we can compress the history into the statistic :
(1) 
We can then show the following.
Lemma 1.
In the Weakest Link (and the Strongest Link) model, there is no loss in objective if we restrict ourselves to policies that depend only on in each period .
4 Weakest Link
In this section, we describe our policies and results for the Weakest Link model. Before we do so, a brief discussion on the aspects of regret and learning is in order.
Regret in the Weakest Link Model. In the Weakest Link model, recall that for any pair of workers with , the reward observed for this pair is and thus a positive reward is generated only when . Thus, if the worker qualities are known, then the optimal matching strategy is to match the 1 workers amongst themselves. In the case where the number of 1 workers is even, every time a 1 worker is matched to a 0 worker, the platform incurs a (perworker) regret of . Thus minimizing regret amounts to minimizing the number of
matches. When the number of 1 workers is odd, this is not precisely true, since one
match is inevitable in each stage. But nevertheless, this is a good enough approximation to the objective when is large since we will be considering policies that incur no additional regret after in an expected time steps. This is formalized in the following lemma.Lemma 2.
Consider a policy such that . Define to be times the total number of matches between high and low type workers under the policy until . Let . Then .
Learning in the Weakest Link Model. Under the Weakest Link model, a 1 worker gets identified when he is matched to other 1 worker, either known or unknown, and a feedback of is observed. While 0 workers are identified when they get matched to another worker who is known to be a 1 or is later identified as being 1. Thus matching of a worker with another 1 worker is critical to her quality identification.
Inevitability of Regret. Thus, informally, the goal of any optimal algorithm is to speedup the identification of all 1 workers, while minimizing the number of matches in the process. But the identification of unknown 1 workers requires pairing them with other 1 workers, and this in turn inevitably exposes unknown 0 workers to matches with 1 workers, thus incurring regret.
The two central questions in this setting are: 1) How should the workers with unknown quality be matched amongst themselves? And 2) Should the identified high quality workers be matched amongst themselves or should they be matched to unknown workers? In order to address the two questions separately, we define the following class of policies.
Definition 4.1 (A nonlearning policy).
A policy is called nonlearning if it always pairs workers known to have type with each other and not with workers whose types are still unknown to the system.
First note that feasible nonlearning policies exist, because when unknown workers are matched with each other, 1 workers are always identified in pairs and so we do not have a case when known 1 workers are forced to be matched to unknown workers. To define such a nonlearning policy, we need only to specify how workers whose qualities are unknown are matched amongst themselves. In other words, we can define the policy on the unknown graph, where vertices are unknown workers and edges indicate previous matchings with a reward of 0.
This class of nonlearning policies may seem attractive to the platform, as it guarantees that proven high quality workers will have positive future experiences and the platform may not wish to expose good workers to possible negative experiences. These policies are also myopic and maximize the number of certain highquality pairs at each time step. We will show, however, that these policies are not always optimal from the perspective of the platform in the long run.
We first show the following lower bound on the expected regret of any nonlearning policy.
Proposition 3 (A Lower bound for any nonlearning policy).
For any and and for any nonlearning policy such that , we have
Next, we define two nonlearning policies, a metapolicy Exponential Cliques and its practical implementation called stopped Exponential Cliques, and we will show the latter to be optimal among the class of nonlearning policies for all when is large. Moreover, we will later show that it is asymptotically optimal among all policies, learning and nonlearning, for our regret minimization problem as .
Exponential Cliques:
As this is a nonlearning policy, all known 1 workers are paired with each other and so we restrict our attention to the unknown graph. The algorithm is carried out in epochs and in each epoch we pair two cliques of unknown workers and test all pairwise matches between the two cliques.

At the first epoch (at time ), each group is an individual worker (vertex in ) and we pair workers at random and observe the feedback. Workers who are identified as 1 are removed from the unknown graph and the remaining vertices form a clique of size 2.

In the remaining epochs the unknown graph consists of cliques of size at the start of the epoch. We pair cliques together and use the next time steps to make all pairwise matches between the workers in the two cliques.

We stop when all worker types have been identified, or the graph consists of one clique.
If at the start of an epoch there is an unpaired clique , which has size , then the workers in this clique will be matched among themselves until the next epoch when there is an unpaired clique , which has size . In the th epoch, we create all pairwise matches between and , which will take time , and then repeat existing matches for the remainder of the epoch.
Note that in this algorithm, a worker’s type is identified if and only if two cliques , each with exactly one 1 worker, , respectively, are paired together and the pairing between and occurs. The fact that now tells us that all other workers in were 0. Similarly we can conclude that all workers in are also 0. If we do not observe a matching between such a and , then at the end of the epoch and are combined to form a clique of twice the size. This algorithm maintains the invariant that at the beginning of every epoch , all workers in the unknown worker graph are in a clique of size , except for possibly workers in the unpaired clique. Since the workers in this unpaired clique are repeating matches among themselves, the platform is not learning any information about this clique and so these workers may be incurring more regret. As the number of epochs increases, the number of workers in this unpaired clique grows exponentially. To curtail the effects that this clique might create in terms of regret, we define the following algorithm that stops Exponential Cliques after epochs.
kstopped Exponential Cliques: Run Exponential Cliques for epochs. After epochs, enumerate all matches remaining create remaining matches as fast as possible.
If there are unknown workers remaining when we stop Exponential Cliques, we can complete all remaining matches in at most time steps.
We have the following upper bound on the performance of this algorithm.
Proposition 4 (Upper bound on expected regret of kStopped Exponential Cliques).
Let be the stopped Exponential Cliques Algorithm. Then,
and moreover, under the Exponential Cliques Algorithm, .
And thus Stopped Exponential Cliques is an asymptotically optimal (when is large) nonlearning algorithm.
Intuitively, this upper bound can be explained as follows. Under exponential cliques, at each epoch we have two types of cliques: Type A consisting of all 0 workers and Type B consisting of a single 1 worker, the remaining workers being 0. Each 0 worker starts in a singleton clique of type A. There are two steps to each 0 worker being identified. It first becomes a part of a type B clique, and in the process gets matched to a 1 worker exactly once (suppose for a moment that there are no unpaired cliques at any epoch, so the 0 worker doesn’t get matched to the same 1 worker again). In the second step, this type B clique gets matched to another type B clique. In this step, there are two possibilities: either the 0 worker gets matched to the lone 1 worker of the other clique, or before that happens, the two lone 1 workers in the two cliques get matched to each other, thus identifying everyone in the two cliques. In the first case, the 0 worker gets matched to a 1 exactly once more, while in the latter case it doesn’t. The two possibilities are equiprobable. Thus in expectation, each 0 worker gets matched to
1 workers, leading to a regret of per 0 worker. Thus the total regret per worker is upper bounded by . Of course, this argument assumes that there are no unpaired cliques at any epoch. The proof shows that by the careful selection of the stopping time for exponential cliques, the contribution to regret from the unpaired cliques until the stopping time and the regret from the residual matches after the stopping time are .To summarize, we have characterized an algorithm that is optimal among the class of algorithms that does not pair known high quality workers with workers whose types are uncertain. We have not yet made any claims as to how well this class of algorithms compares to policies that pair known workers with unknown workers for the sake of learning.
4.1 Learning vs. nonlearning policies in the regime
Now we show a lower bound on the expected regret of any policy as , which will then allow us to conclude that the optimal nonlearning stopped Exponential Cliques policy is indeed asymptotically optimal (in ) among all algorithms, nonlearning and otherwise, in this regime.
4.1.1 A Lower Bound on Expected Regret
The following lower bound holds for all policies and comes from the fact that all algorithms, in expectation, must incur a substantial amount of regret in the first two time steps since worker types are initially unknown.
Proposition 5.
For any and policy , we have
Since this shows that the nonlearning stopped Exponential Cliques is asymptotically optimal in the large regime and that risking bad matches by pairing known high quality workers with unknown workers is not necessary for the platform.
Comment 1.
For large values of , with high probability, a large enough number of 1 workers will get identified in the first period so that we are able to learn all the remaining unknown workers in the second period. We can show that this policy is strictly suboptimal. Informally, the number of matches in the first period is . Moreover the number of unidentified 0 workers in this period is , all of which will be matched to a 1 worker in the second period and get identified. Since a regret of is incurred per worker in a match, the total per worker regret is . Now .
4.2 Suboptimality of nonlearning algorithms for smaller
We now show that nonlearning algorithms are suboptimal for . We do so by proposing a learning algorithm that achieves a lower regret than in this regime. This algorithm is a modification of the stopped Exponential Cliques algorithm and again ensures that the set of workers in the unknown graph are partitioned into cliques of equal size. This time, a known 1 worker is assigned to the clique to match with the unknown workers and learn their types faster.
Distributed Learning: This policy proceeds in epochs.
At time , we set epoch and match the workers at random. In the following epochs

Randomly pair cliques in the unknown worker graph. Form pairs of known 1 workers and assign each pair to one of the clique pairs until there are no more or pairs. We refer to the vertices in these pairs as the exploration set.

If there are an odd number of cliques, choose one clique at random and pair workers in the clique with each other for this epoch.

For any not assigned to an , run Exponential Cliques on this pair.

For clique pairs in the exploration set we use the known 1 workers to help learn . At each step, create pairwise matchings between the two cliques but replace one matching with the matchings and . It is possible to do this in such a way that no are matched to each other more than once.^{1}^{1}1To ensure that no two workers are matched more than once, at each step in epoch , match and , where denotes the th worker in . Then for all other , pair .

If the algorithm identifies a 1 worker , pair with any known 1 workers available and pair the other 0 workers in with 0 workers. Use the known 1 worker to learn the rest of worker types in .
This policy maintains the invariant that all vertices in the unknown graph are contained in cliques of size at the beginning of each epoch, except for possibly one unpaired clique. Since the algorithm identifies the worker types of all cliques in the exploration set by the end of an epoch, the only cliques remaining will be those which underwent the original Exponential Cliques policy without learning. Thus at the start of the next epoch all workers will be contained in a clique of size .
The fact that the inputs and outputs of Distributed Learning at each epoch are cliques of size and , respectively, allows us to alternate epochs of Exponential Cliques and Distributed Learning.
stopped Exponential Cliques with learning () Let denote the policy which matches at random at , conducts rounds of Distributed Learning, then runs epochs of Exponential Cliques, and then completes all remaining matches as fast as possible.
The following proposition shows that this learning policy can beat the performance of all nonlearning policies for small enough .
Proposition 6.
For ,
The Weakest Link model exhibits a tradeoff between myopic regret minimization and learning, in terms of who to pair known high type workers to. We showed that our nonlearning policy is optimal as and that all nonlearning policies are suboptimal for . Thus if the platform wishes to maximize the expected aggregate payoff across all workers, a greedy policy is not always optimal and the platform must take into account the proportion of high quality and low quality workers in its worker pool.
5 Strongest Link
We now consider the strongest link setting. In particular, for any pair of workers , the payoff that the team formed by this pair generates is .
Regret in the Strongest Link model. In this setting, recall that for any pair of workers with , the reward observed for this pair is , and a positive feedback is generated when or . Thus, if the worker qualities are known, the optimal matching is to make matches to the extent possible. When , and when is large, some matches are inevitable in the optimal matching. Thus regret can be measured by the number of matches. On the other hand if , some matches are inevitable and regret can be measured by the number of matches. We formalize this observation in the following lemma. The proof can be found in the appendix.
Lemma 7 (Characterization of Regret).
For any policy such that , define to be times the number of matches for () until . Let . Then .
Learning in the Strongest Link model. In this setting, 0 workers are identified by being matched other 0 workers, either known or unknown. While 1 workers are identified when they are matched to either a known 0 worker, or an unknown 0 worker that later gets identified. Thus, unlike the Weakest Link model, a pairing of an unknown worker to a 0 worker is necessary for identification.
Inevitability of Regret. In any algorithm, in the first period, workers are paired off arbitrarily. At the end of this period, there is the set of pairs of workers that have been identified as 0 (ones that resulted in an outcome of ) and the set of pairs of workers of unknown quality (ones that resulted in an outcome of ). Of the pairs that are of unknown quality, it is straightforward to compute that fraction are of the type and the remainder are of the type. Thus in the first period itself we have an unavoidable fraction of matches that incur regret irrespective of . Moreover, more regret is unavoidable in the subsequent periods.
1) When the pairs of known workers in incur regret if they are matched to each other and the goal is to find these workers high quality matches from the unknown population as quickly as possible. But matching known 0 workers to unknown workers to learn inevitably incurs regret due to potential matches to unknown 0 workers.
2) When the pairs of workers in that are of the type incur regret and they need to be matched to a 0 worker as soon as possible. Since unknown pairs already incur no regret, it is the known 0 workers at the end of the first period that must be matched to the unknown pairs in . But in this case, matching these known 0 workers to unknown workers does not incur any regret. Regret is nevertheless inevitable because of the unavoidable matches between unknown workers.
In either case, the central question is: how to utilize workers that have been identified as 0 at the end of the first period to explore the unknown workers?
5.1 Candidate algorithms
After observing outcomes of the pairings in the first period, consider the situation from the perspective of a conservative manager. Since all the pairs of workers in result in a reward of , she may be tempted to not jeopardize that guaranteed reward and hence match the learned 0 workers amongst themselves. But any manager can be convinced that this approach is catastrophic in terms of regret. The problem is that there could be type pairs in , which could be separated into two pairs, thus doubling the reward. Thus any algorithm that eventually attains regret must pair the discovered 0 workers that do not have a high type match, to unknown workers.
What, then, is the policy that matches these known 0 workers to unknown workers in the most conservative fashion, i.e., with the least negative impact on immediate payoffs? This policy is simply the one where every pair of known 0 workers are matched to an unknown pair of workers to the extent possible (remaining matches are preserved). By doing so, there is no adverse impact on immediate payoffs: either the unknown pair was in which case the payoff gets doubled, or it is in which case the immediate payoff (from that pair) remains the same, i.e., . Every known pair is successively paired to an unknown pair until it gets matched to a pair. This policy thus eventually attains no regret. We call this policy chain policy, described formally below, for reasons that will become clear.
At this point, we would like to understand what is the power of this most conservative asymptotically regret optimal algorithm. In particular, are there gains from adopting a more aggressive experimentation approach? To understand this, we propose a slightly different algorithm that we refer to as a chain algorithm, formally described below. We show a remarkable reversal of the relative performance of the chain and chain algorithms at : the former incurs lower regret for where as the situation is reversed for .
5.1.1 1chain and 2chain asynchronous algorithms
We now define the chain and chain algorithms that we referred to above. At each time, the algorithms maintain 1) a dynamic set of pairs of workers of unknown quality and 2) a dynamic set of the set of pairs of workers of known quality that are matched to each other, and will be matched to each other for posterity.
Now for each pair in (the set of pairs of workers that have been identified as being of low quality in period 1), the algorithms asynchronously operate in epochs to find high quality matches for each of the two low quality workers. The two algorithms differ in how the epochs are implemented.

[leftmargin=*]

chain. While matches have not been found for the pair, pick a pair from . Let it be . Then match and . If both matches result in an outcome of , then add and to P and stop. The pair has been matched. If not, then add to P and go to the next epoch. If is empty at the beginning of any epoch, add to and stop. Note that each epoch is of time .

chain. While matches have not been found for the pair, pick two pairs from . Let they be and . Then match , and . Now there are 4 cases:

All matches result in an outcome of ; in which case add , and to and stop.

One of or results in an outcome of , the other resulting in , and results in an outcome of . In this case, let be the match that resulted in outcome 0. Then in the next time period, match , , and . If leads to an outcome of , then add , , and to and stop. Else, add and to and go to the next epoch.

Both or result in an outcome of , and results in an outcome of . In this case, add and to and go to the next epoch.

Both or result in an outcome of , and results in an outcome of . In this case, add and to and go to the next epoch.
These are the only possibilities. If has one pair only at any epoch, implement the chain epoch. If is empty, add to and stop. Note that each epoch in this case is either of period (cases 1, 3 or 4) or periods (case 2).

At any time, any pairs in the set that are not demanded by any of the pairs in for experimentation, are matched to each other. The following proposition summarizes our main finding.
Proposition 8.

When , the chain algorithm incurs a lower regret compared to the chain algorithm.

When , the chain algorithm incurs a lower regret compared to the chain algorithm.
In the proof, we show that the chain algorithm accumulates fewer matches between two high type workers for all . However, the chain algorithm results in fewer matches between two low type workers than the chain algorithm for . Thus the change in relative performance between chain and chain algorithms is a result of the discontinuity in the characterization of regret and not a discontinuity in the performance of the algorithms themselves.
6 Conclusion
This work suggests that online labor platforms which facilitate flash teams and ondemand tasks can drastically improve the overall performance of these teams by altering the algorithms that design these teams. A common theme that we explored is what is to be done with workers whose qualities have been learned: should they be utilized to exploit or to explore? We demonstrated that the answer intricately depends on the payoff structure and and on the apriori knowledge about the distribution of skill levels in the population. We provided several insights into to these tradeoffs in the case of two natural payoff structures. In particular, through fundamental regret bounds, we showed that simple myopic strategies can be highly suboptimal in certain regimes, while can work well in certain others. We are optimistic that these insights can help guide effective managerial decisions on these platforms.
References

[BCB12]
Sébastien Bubeck, Nicolo CesaBianchi, et al.
Regret analysis of stochastic and nonstochastic multiarmed bandit
problems.
Foundations and Trends® in Machine Learning
, 5(1):1–122, 2012.  [BFN06] Moshe Babaioff, Michal Feldman, and Noam Nisan. Combinatorial agency. In Proceedings of the 7th ACM conference on Electronic commerce, pages 18–28. ACM, 2006.
 [CBL12] Nicolo CesaBianchi and Gábor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404–1422, 2012.
 [CE10] Guillaume Carlier and Ivar Ekeland. Matching for teams. Economic theory, 42(2):397–418, 2010.
 [CSP15] Richard Combes, Mohammad Sadegh Talebi Mazraeh Shahi, Alexandre Proutiere, et al. Combinatorial bandits revisited. In Advances in Neural Information Processing Systems, pages 2116–2124, 2015.
 [ED89] Thomas O Erb and Nancy M Doda. Team organization: Promise–practices and possibilities. 1989.
 [JKK16] Ramesh Johari, Vijay Kamble, and Yash Kanoria. Know your customer: Multiarmed bandits with capacity constraints. arXiv preprint arXiv, 1603, 2016.
 [KR15] Jon Kleinberg and Maithra Raghu. Team performance with test scores. In Proceedings of the Sixteenth ACM Conference on Economics and Computation, EC ’15, pages 511–528, New York, NY, USA, 2015. ACM.
 [KWAS15] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari. Tight regret bounds for stochastic combinatorial semibandits. In Artificial Intelligence and Statistics, pages 535–543, 2015.
 [NWC99] George A Neuman, Stephen H Wagner, and Neil D Christiansen. The relationship between workteam personality composition and the job performance of teams. Group & Organization Management, 24(1):28–45, 1999.
 [RMT17] Arun Rajkumar, Koyel Mukherjee, and Theja Tulabandhula. Learning to partition using score based compatibilities. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pages 574–582. International Foundation for Autonomous Agents and Multiagent Systems, 2017.
 [RRT14] Daniela Retelny, Sébastien Robaszkiewicz, Alexandra To, Walter S Lasecki, Jay Patel, Negar Rahmati, Tulsee Doshi, Melissa Valentine, and Michael S Bernstein. Expert crowdsourcing with flash teams. In Proceedings of the 27th annual ACM symposium on User interface software and technology, pages 75–85. ACM, 2014.
 [SJR16] Max Simchowitz, Kevin Jamieson, and Benjamin Recht. Bestofkbandits. In Conference on Learning Theory, pages 1440–1489, 2016.
 [TGW05] Till Talaulicar, Jens Grundei, and Axel v Werder. Strategic decision making in startups: The effect of top management team organization and processes on speed and comprehensiveness. Journal of Business Venturing, 20(4):519–541, 2005.
 [VRT17] Melissa A Valentine, Daniela Retelny, Alexandra To, Negar Rahmati, Tulsee Doshi, and Michael S Bernstein. Flash organizations: Crowdsourcing complex work by structuring crowds as organizations. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 3523–3537. ACM, 2017.
Appendix A Proofs of all results
Proof of Lemma 1.
We only show the result for the Weakest Link model; the Strongest Link setting is similar. Note that the entire history can be represented in an edgelabeled graph where and if and with labeling function such that . Note that the only pieces of information in the history not captured in are the (possible) repeated matchings between workers
and the order in which these matches occurred. Neither affects any posterior probabilities of the worker or subsets of the workers. Thus this graph captures all joint posterior probabilities
for all workers .Now suppose that we introduce a vertexlabeling function on some subset of the vertex set, to be defined, with .
For any edge with , it must be true that and for any vertex with or that . Then removing all edges adjacent to and , adding and all neighbors to the set , and labeling the vertices with the worker types does not change the posterior probabilities that can be calculated from this graph. For any vertex adjacent to a vertex with , the edge label regardless of and so removing this edge does not affect posterior probabilities. Thus (1) is sufficient to represent the history . The proof for the Strongest Link setting is similar and follows from the same observations that repeated matchings and the order in which matchings occur do not affect the posterior distribution. ∎
Proof of Lemma 2..
When is even then every time a match is made, some other match is being made as well, where as the two high types should have been matched to each other. Thus there is a a loss in payoff of for every two pairs, i.e., a regret of per pair, or a regret of per worker, per pair. Thus the result follows. When is odd, one match is inevitable at each time step and does not incur any loss in payoff. This attributing a regret of per
pair overestimates the regret by
. But since , the result follows. ∎Proof of Proposition 3.
The proof follows multiple steps. First, observe that the random ability assignment where first the number of high types is drawn from a Binomial(N,p) distribution, and then the realized number is assigned uniformly across all possible assignments to the workers, has the same distribution as the i.i.d. ability assignment with probability p. In order to prove the result, first observe that
(2)  
(3)  
(4)  
(5) 
We will thus prove a lower bound on . To do so, we prove a lower bound on , and then take conditional expectation on the event that to derive the overall lower bound. Fix any one of the low types. Call this type . We will first show that the probability that the type gets matched to at least one high type is at least .
To see this, consider an assignment of types in which the low type does not get matched to any high type. Now for each such assignment, there are at least other assignments, obtained by swapping with each one of the high types that get identified, such that in every swapped assignment, gets matched to a high type (if is even, each high type will eventually get identified; if is odd, at least high types get identified). Further, no two assignments in which doesn’t get matched to any high types lead to the same assignment after a swap. To see this, suppose that , and are three different workers. Suppose that and are two other high types. Suppose that assignment of these types is , and , and assignment is , and (the assignment of types to all other workers is the same in the two assignments). Then swapping the assignments of and in assignment leads to the same assignment as that obtained by swapping assignments of and in assignment . Suppose that in assignment , never gets matched to a high type. But since gets matched to a high type in assignment (since is even), this means that in assignment , will definitely get matched to a high type because the policy is unable to differentiate the two assignments until that happens (note that in assignment , will not be matched to a high type till the policy notices the difference). Thus no two assignments in which doesn’t get matched to any high types lead to the same assignment after a swap, because if that is the case, then in one of the assignments does get matched to a high type, which is a contradiction.
Thus if we denote the probability of being never matched to a high type as , then we have , which implies that .
Next, fix a type assignment in which gets matched to exactly one high type that eventually gets identified. Let denote this high type. Thus in the assignment under consideration, after and are matched, gets matched to another high type. For every such type assignment there is another assignment where the assignments of and are swapped, in which after and get matched, gets matched to another high type before gets matched to another high type. The policy cannot tell the difference between these two assignments until this happens since the sequence of outcomes remains the same. Thus for every assignment where gets matched to exactly one high type before being identified, there is an assignment where gets matched to at least 2 high types before getting identified. Moreover no two assignments lead to the same assignment after the swap. To see this, suppose that , and are three different workers. Suppose that and are two other high types. Suppose that assignment of these types is , and , and assignment is , and (the assignment of types to all other workers is the same in the two assignments). Then swapping the assignments of and in assignment leads to the same assignment as that obtained by swapping assignments of and in assignment . Suppose that in assignment , is the only high type that gets matched to. Then in assignment , and get matched before can get matched to and are removed in the nonlearning policy. Thus even if gets matched to exactly high type in assignment , that high type is not and hence the swap is not valid. Thus no two assignments in which gets matched to a single high type lead to the same assignment after the swap.
Now define the following quantities:

Let be the probability that the type is matched to exactly one high type that eventually gets identified under the nonlearning policy,

is the probability that the type is matched to two or more high types under the nonlearning policy,

is the probability that the type gets matched to exactly one high type, that never gets identified (this can happen only when is odd).
Then our argument above shows that . Further . Thus . This means that .
Thus the expected number of high types that gets matched to under the policy is at least .
Thus times the expected number of highlow matches across all types conditional on is,
Thus,
We finally show that , which completes the proof. This quantity is the probability that a type gets matched to exactly one high type and that high type is the one that never gets identified. Suppose that this probability is . Then the expected number of such matches till the time the algorithm finishes learning is , which contradicts the fact that the algorithm learns in time steps in expectation (since a high type that doesn’t get identified gets matched to a low type exactly once in each time step). ∎
Proof of Proposition 4.
We need the following two lemmas to establish the proof.
Lemma 9.
At the beginning of any epoch of the Exponential Cliques algorithm, the probability that a clique of size contains a high type worker is
Proof.
First note that the initial worker types are independent and distributed according to . Then a subset of workers has high type workers, where . If a clique is in the unknown graph, then there can be at most one high type worker in the clique. Then the probability that the clique contains exactly one high type worker, given the fact that it contains either one or zero, is . ∎
Lemma 10.
Given any and any sequence such that for all , define the random variables as follows:
Then for all ,
where .
Proof.
We will prove the above lemma by induction. Note that for , the proof is straightforward as has the distribution .
Also, for any , the fact that is fairly straightforward. For example, it follows from a coupling argument with a random variable of type .
Assume the lemma is true for . Then,
By the induction assumption, we have .
Also remember that .
Therefore, we get that
and thereby completing the induction argument. ∎
We now prove Proposition 5. First, note that under Exponential Cliques, at each epoch we have two types of cliques: Type A consisting of all 0 workers and Type B consisting of a single 1 worker, the remaining workers being 0. Each 0 worker starts in a singleton clique of type A. There are two steps to each 0 worker being identified. It first becomes a part of a type B clique, and in the process gets matched to a 1 worker. In the second step, this type B clique gets matched to another type B clique. In this step, there are two possibilities: either the 0 worker gets matched to the lone 1 worker of the other clique, or before that happens, the two lone 1 workers in the two cliques get matched to each other, thus identifying everyone in the two cliques. In the first case, the 0 worker gets matched to a 1 exactly once more, while in the latter case it doesn’t. The two possibilities are equiprobable. Thus in expectation, each 0 worker gets matched to 1 workers, leading to a regret of per 0 worker. Thus the total regret per worker is upper bounded by .
But this argument assumes that there are no unpaired cliques at any epoch. If a 0 worker is a part of an unpaired clique of type , then it gets matched to a worker in that clique more than once. The contribution of such a match to the overall regret can at most be
at every time step (since there is at most one 1 worker in an outlier clique at any time step). But over
epochs, the total number of time steps is . Thus the contribution to the regret from the outlier cliques throughout the run of Exponential cliques is .What remains is the contribution to the regret from the workers left unidentified after the run of Exponential cliques, i.e., when there are randomly matched to each other. the rest of the proof shows that the contribution to regret from this phase is as well.
Let denote the number of unknown workers at the start of epoch . If we stop Exponential Cliques at epoch and afterward complete all remaining matches as fast as possible, even if all these matches incur regret, the total regret is (which will take time steps). Thus, the expected perworker regret incurred after stopping Exponential Cliques is at most . We show that if we stop Exponential Cliques at epoch , . Note that this would also imply that and this the number of steps required to finish learning every workers type after EC stops is . Thus, overall .
Let denote the number of unknown cliques at the start of epoch . Then
(6) 
Let be the number of cliques of size exactly . When two unknown cliques are matched, then they give rise to an unknown clique in the next epoch if they are not both cliques of high type^{2}^{2}2A high type clique contains a worker of type 1.. Therefore, we know that
where .
Note that all the cliques are of size , except possibly just one of smaller size. The presence of an unpaired clique of size or a clique with fewer than workers (and there can be only one such clique), will remain at the end of the epoch and that all future unpaired cliques will be merged with this one. Hence,
All that remains to be done is to show that
Let . Invoking Lemma 10, we know that
Utilizing these inequalities in the equation above, we now have to show that
or equivalently,
(7) 
Note that
Comments
There are no comments yet.