Recent years have seen explosive growth in the domain of digital data-driven services. Search engines, restaurant recommendations, and social media are among the many products we use day-to-day that sit atop modern data analysis and machine learning (ML). In such markets, firms live and die by the quality of their models; thus success in the ‘race for data’, whether data is acquired directly from customers or indirectly by acquiring rival firms or purchasing data corpora, is crucial. In this work, we study two questions: whether such markets tend towards monopoly, and how competition affects consumer welfare. Importantly, we consider these questions in light of the modeling choices that firms must make.
In our model, two firms compete for market share (utility) by providing identical services that each rely on an ML model. The firms’ error rates depend on their choices of algorithms, models, and the volume of available training data. Each firm’s market share is proportional to the error of its model relative to the model built by its competitor. This is motivated by the observation that services built using ML are highly accurate, so users are more attentive to the mistakes a service makes than to its successes. A competition exponent measures the relative ferocity of competition and maps to a plausible Markov model of consumer choice. See Section 2.2 for more details.
The firms initially possess (possibly differing) quantities of data, and are given the opportunity to buy additional data at a fixed price to improve their models. Since data is costly and relative (rather than absolute) model quality determines market share, each firm’s best course of action may depend on the actions of its rival. Hence, each firm acts strategically and faces two decisions: whether to buy the additional data, and what type of model to build in order to produce the best product given the data it ends up with.
The decision of what model to build seems to complicate the firms’ action space greatly; there is a very large set of model classes to select from, and different classes have different efficiencies. For example, when restricting attention to neural networks, the choices of depth and number of nodes per layer produce different hypothesis classes with different optimal models. Thus, in principle, the decisions of what model class to select and whether to purchase additional data must be made jointly. However, learning theory allows us to greatly reduce this large action space. In Section 2.1, we show that the game in which firms jointly choose a model and whether to attempt to buy the additional data reduces to a strategically equivalent game in which firms first choose whether to buy the data and then choose optimal models.
In Section 3, we characterize the Nash equilibria of our game for different parameter regimes. For no combination of parameters does exactly one firm wish to buy the data; unsurprisingly, for very high prices, neither firm buys data, and for very low prices, both firms do. In the middling regime, the competitive aspect of the game imposes a ‘prisoners' dilemma’-like flavor: both firms would prefer that neither buy the data, but each does so in order to prevent the other from strengthening its position. Moreover, the unique mixed strategy Nash equilibrium in this regime involves firms increasing their probability of buying data as the price increases. This counterintuitive result follows from the logic of equilibrium: firms playing mixed strategies must be indifferent between buying and not buying the data, and as the price rises, the probability that a competitor acquires the data must rise in order to make investing in data acquisition a palatable option.
Finally, we study whether any of the dynamics of the game push the market towards a monopoly. Perhaps counter to a ‘rich-get-richer’ feedback loop that might be expected in data races, we observe that in all equilibria, the data gap (and thus, market share gap) always narrows (in expectation). As measured by consumer welfare, this is actually undesirable. Both the direction of the data gap as well as the welfare implication may be counterintuitive, particularly with respect to the well-known stylized fact that market concentration is bad for consumers. However, consumer data that improves a service can be viewed as exhibiting a form of network effects, in which case perfect competition can result in inefficiency and under-provisioning of a good [katz1985network]. In other words, a greater data gap would result in more consumers using a less error-prone service. As for the data race, anecdotal evidence, such as GM’s acquisition of the automated driving startup Cruise despite Waymo’s earlier market entry and research head-start, is suggestive (though not conclusive) that these predictions may be indicative of real-world dynamics [Primack16].
We view our work as a first step towards modeling and analyzing competition for data in markets driven by ML. Under our simplifying assumptions, we derive concrete results with relevance both for policymakers analyzing algorithmic actors as well as engineering or business decision-makers considering the tasks of data acquisition and model selection. Our results are qualitatively robust to other natural modeling choices, such as allowing both firms to purchase the data, as well as treating the data seller as a market participant; however, more significant departures may lead to different conclusions. See Sections 4 and 5 for more details.
1.1 Related Work
The theory of ML from a single learner’s perspective is well developed, but until recently, little work had studied competition between learning algorithms. Notable exceptions include [wu, bpt]. We differ from both works by exploring the comparative statics and welfare consequences of a single decision (data acquisition). Concurrent work [benporat2019] studies a game in which learners strategically choose their model to compete for users, but users only care about the accuracy of predictions on their particular data. In contrast, users in our model choose based on the overall model error.
Our work also intersects with several strains of economic literature, including industrial organization and network effects [katz1985network, david1987some, economides1996economics]. We differ from such models in two key ways. First, in contrast to assuming a static equilibrium [katz1985network] or fixing a dynamic but unchanging process at the outset [salfar], our work can be viewed as an analysis of a shock to a given potentially asymmetric equilibrium in the form of the availability of new data. Second, the consumers in our model do not behave strategically (see e.g. [vohra, wu] for more discussion).
Finally, our work is related to spectrum auctions, competition with congestion externalities [vohra], and the sale of information or patents [kamien1992optimal, kamien1986fees]. Our results primarily share qualitative similarities: the choice of one firm to buy data (spectrum) forces the other to do so to avoid losing market share, though it would not have been profitable absent the rival, and actual outcomes run counter to consumer preferences (see e.g. [vohra]).
We formally motivate and model the ML problem of the firms and demonstrate how this reduces to a game in which the firms can either buy or not buy the new data.
2.1 Choosing a Model Class
Consider a firm using ML to build a service, e.g., a recommendation system. The amount of data available to the firm is a crucial determinant of the effectiveness of its predictive service. Fixing the amount of data, the firm faces a fundamental tradeoff: it can use a more complex model that can fit the data better, but learning with a more complex model requires more training data to avoid overfitting.
We can formally represent this tradeoff as follows. Let $\mathcal{H}$ denote the hypothesis class from which the firm is selecting its model and assume the data is generated from a distribution $\mathcal{D}$. Then, given $m$ i.i.d. draws from $\mathcal{D}$, the error of the firm when learning a hypothesis from $\mathcal{H}$ can be written as [sssml]
$$\mathrm{err}(\mathcal{H}, m) = \varepsilon_{\mathrm{est}}(\mathcal{H}, m) + \varepsilon_{\mathrm{app}}(\mathcal{H}).$$
The first term, known as estimation error, determines how well in expectation a model learned with $m$ draws from $\mathcal{D}$ can predict compared to the best model in the class $\mathcal{H}$. The second term, known as approximation error, determines how well the best model in the class $\mathcal{H}$ can fit the data generated from $\mathcal{D}$.
The approximation error is independent of the amount of training data, while the estimation error decreases as the volume of training data increases. The choice of $\mathcal{H}$ affects both errors. In particular, fixing the amount of training data, increasing the complexity of $\mathcal{H}$ will increase the estimation error. On the other hand, the additional complexity will decrease the approximation error, as more complicated data generating processes can be fit with more complicated models.
Once the amount of data is fixed, the firm can optimize over its choice of model complexity to achieve the best error. We examine a few widely used ML models and their error forms.
As a first example, consider the case where the firm is building a neural network and has to decide how many nodes to use. The number of nodes $n$ is the measure of the complexity of the model class, and given $m$ data points, the error of the model can be written using the following simplification of a result from [Barron94].
Lemma 1 (Barron94).
Let $\mathcal{H}_n$ be the class of neural networks with $n$ nodes. Then for any distribution $\mathcal{D}$ over inputs of dimension $d$, with high probability, the error when using $m$ data points to learn a model from $\mathcal{H}_n$ is at most $\frac{c_1}{n} + c_2 \frac{n d \log m}{m}$ for constants $c_1$ and $c_2$.
Fixing $m$, the choice of $n$ that minimizes the error can be computed by minimizing the bound in Lemma 1 with respect to $n$. This corresponds to $n = \Theta\big(\sqrt{m / (d \log m)}\big)$, and we get that the error of the model built by the firm is $\tilde{O}(m^{-1/2})$.
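The optimization above can be sketched numerically. The snippet below minimizes the bound of Lemma 1 over the number of nodes for two sample sizes; the constants $c_1$, $c_2$ and the dimension $d$ are illustrative placeholders, not values from the paper.

```python
import math

# Illustrative constants for the bound c1/n + c2 * n * d * log(m) / m
# (these values are assumptions for the sketch, not taken from Lemma 1).
c1, c2, d = 1.0, 1.0, 10.0

def best_error(m):
    """Minimize the bound over the integer number of nodes n."""
    return min(c1 / n + c2 * n * d * math.log(m) / m for n in range(1, 5000))

# The optimized bound should decay roughly like m^(-1/2): quadrupling the
# sample size should roughly halve the error (up to log factors).
e1 = best_error(10**5)
e2 = best_error(4 * 10**5)
ratio = e1 / e2
```

In line with the $\tilde{O}(m^{-1/2})$ rate, the ratio comes out close to $2$ rather than $4$.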
As another example, consider the very simple setting of realizable PAC learning where the data points are generated by some hypothesis in a fixed hypothesis class.
Lemma 2 (KearnsV94).
Any algorithm for PAC learning a concept class of VC dimension $d$ must use $\Omega(d/\varepsilon)$ examples in the worst case.
Thus in this setting, in the worst case, firms need $\Omega(d/\varepsilon)$ training data points to achieve error $\varepsilon$. A similar upper bound gives that with high probability, the firms can guarantee error of $\tilde{O}(d/m)$ using $m$ data points (see [KearnsV94]).
In the examples above, the error of a firm with $m$ data points takes the form of either $\tilde{O}(m^{-1/2})$ or $\tilde{O}(m^{-1})$ after the firm optimizes over the choice of model complexity. Importantly, the error in both cases (and more generally) decays as the number of data points increases. The rate at which the error decays is commonly known as the learning rate.
There are other learning tasks with learning rates different than the examples above. Consider a stylized model of a search engine where the set of queries is drawn from a fixed and discrete distribution over a very large or even infinite set, and the search engine can only correctly answer queries that it has seen before. If, as is often assumed, the query distribution is heavy-tailed, then the search engine will require a large training set to return accurate answers.
In this framework, the probability that a search engine incorrectly answers a query drawn from the distribution is exactly the expectation of the unobserved mass of the distribution given the queries observed so far. This quantity is known as the missing mass of a distribution (see e.g. [BerendK11, Decrouez2018, OrlitskySZ03, Good53]). Lemma 3 shows how to bound the expected missing mass for the class of polynomially decaying query distributions.
Lemma 3 (Decrouez2018).
Let $p$ be a discrete distribution over $\mathbb{N}$ with polynomial decay, i.e., $p_i \propto i^{-\beta}$ for some $\beta > 1$. Then the expected missing mass given $m$ draws from $p$ is $\Theta(m^{1/\beta - 1})$.
By varying $\beta$ in the query distribution of Lemma 3, the learning rate in the search problem can take the form of $m^{-\alpha}$ for any $\alpha \in (0, 1)$. Thus, the learning rate for search may be much faster or slower compared to the previous examples, and the exact rate depends on the value of $\beta$.
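The missing-mass decay rate can be checked numerically. The sketch below computes the expected missing mass exactly, $\mathbb{E}[M_m] = \sum_i p_i (1-p_i)^m$, for a power-law distribution truncated at a large $N$ for computability (the truncation and the choice $\beta = 2$ are assumptions of the sketch).

```python
# Power-law query distribution p_i proportional to i^(-beta), truncated at N.
# With beta = 2, the predicted decay rate is m^(1/beta - 1) = m^(-1/2).
N = 10**6
beta = 2.0

weights = [i ** (-beta) for i in range(1, N + 1)]
total = sum(weights)
probs = [w / total for w in weights]

def expected_missing_mass(m):
    # E[M_m] = sum_i p_i * (1 - p_i)^m
    return sum(pi * (1.0 - pi) ** m for pi in probs)

# At rate m^(-1/2), quadrupling the sample size should halve the missing mass.
mm1 = expected_missing_mass(2500)
mm2 = expected_missing_mass(10000)
ratio = mm1 / mm2
```

The ratio comes out close to $2$, consistent with the $\Theta(m^{-1/2})$ rate of Lemma 3 for this $\beta$.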
We saw that given a fixed amount of data, a firm using ML can optimize over its learning decisions to get the best possible error guarantee. Furthermore, while error decays as more data becomes available, the rate of decay can vary widely depending on the task. We next see how various learning rates can be incorporated into the parameters of our game.
2.2 Error-Based Market Share
Consider two competing firms (denoted by Firm 1 and 2) that provide identical services, e.g., search engines. We assume the market shares of the firms depend on their ability to make accurate predictions, e.g., responding to search queries. As discussed above, the quality of their models is ultimately determined by the size of their training data with a task-dependent learning rate. Each firm trains a model on its data and uses its model to provide the service. Let $\varepsilon_1$ and $\varepsilon_2$ denote the excess error of the firms for the corresponding models. Intuitively, these errors measure the quality of the firms’ services, so a firm with smaller error should have higher market share. We assume each firm captures a market share proportional to the relative errors of the two models. Formally, we define Firm 1 and 2’s error-based market share as
$$s_1 = \frac{\varepsilon_2^{\lambda}}{\varepsilon_1^{\lambda} + \varepsilon_2^{\lambda}}, \qquad s_2 = \frac{\varepsilon_1^{\lambda}}{\varepsilon_1^{\lambda} + \varepsilon_2^{\lambda}}. \qquad (1)$$
The constant $\lambda$, which we call the competition exponent (inspired by the Tullock contest [Tullock01]), indicates the ferocity of the competition, or how strongly a relative difference in the errors of the firms’ models translates to a market advantage. As $\lambda$ gets closer to 0, the tendency is towards each firm capturing half of the market, and thus a large difference in the models’ errors is needed for one firm to gain a significant advantage in market share. Conversely, as $\lambda$ grows larger, even tiny differences in the models’ errors translate to massive differences in market share. (See Figure 1.)
An error-based model reflects markets for services which demand extremely low errors, such as vision systems for self-driving cars. Under the error-based model with $\lambda = 1$, if (say) Firm 1 has $99\%$ accuracy and Firm 2 has $98\%$ accuracy, Firm 1 will capture $2/3$ of the market share. By contrast, an accuracy-based model (i.e. when the market share of Firm 1 is defined as $\frac{a_1^{\lambda}}{a_1^{\lambda} + a_2^{\lambda}}$ for accuracies $a_1, a_2$) would suggest a much less realistic near-even split.
We provide another justification suggesting that an error-based market can arise even when the learned model is used to provide an everyday service in which high accuracy is not a strict requirement. Consider a customer who, each day, uses the service. She begins by choosing the service of one of the firms uniformly at random. As long as the answers she receives are correct, she has no reason to switch to the other firm’s service, and uses the same firm’s service tomorrow. However, once the firm makes an error, the customer switches to the other firm’s service. The transition probabilities are therefore given by the accuracy and error of each firm. See Figure 2 for the Markov process representing this example.
We can think of the market share captured by each firm as the proportion of the days on which each firm saw the customer. This is exactly the stationary distribution of the associated Markov process as stated in Lemma 4.
Lemma 4.
Let $\pi_1$ and $\pi_2$ denote the probability mass that the stationary distribution of the Markov process in Figure 2 assigns to Firms 1 and 2. Then $\pi_1 = \frac{\varepsilon_2}{\varepsilon_1 + \varepsilon_2}$, and $\pi_2 = \frac{\varepsilon_1}{\varepsilon_1 + \varepsilon_2}$.
Sketch of the Proof.
By the definition of a stationary distribution, $\pi_1$ and $\pi_2$ should satisfy the following conditions:
$$\pi_1 = (1 - \varepsilon_1)\,\pi_1 + \varepsilon_2\,\pi_2, \qquad \pi_2 = (1 - \varepsilon_2)\,\pi_2 + \varepsilon_1\,\pi_1.$$
Given that $\pi_1 + \pi_2 = 1$ by definition, we can solve the system of linear equations to compute the market shares. ∎
Lemma 4 states that the market share of each firm in the Markov process is exactly the error-based market share as defined in Equation 1 when setting $\lambda = 1$. A similar argument motivates an error-based market share for integer values of $\lambda > 1$, where the customer switches firms only after experiencing $\lambda$ mistakes in a row. The probability of making $\lambda$ mistakes in a row is just $\varepsilon_i^{\lambda}$ for Firm $i$, so the stationary distribution of the Markov process is exactly the two error-based market shares as defined in Equation 1.
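The Markov-chain motivation can be verified numerically. The sketch below power-iterates the two-state switching process and compares the result with the closed form of Lemma 4; the error rates are illustrative values, not parameters from the paper.

```python
# Illustrative error rates for the two firms.
e1, e2 = 0.02, 0.05

# States: (customer uses Firm 1, customer uses Firm 2). A customer stays
# with a firm after a correct answer and switches after a mistake.
P = [[1 - e1, e1],
     [e2, 1 - e2]]

# Power-iterate the chain from a uniform start until convergence.
pi = [0.5, 0.5]
for _ in range(5000):
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]

share1 = e2 / (e1 + e2)   # stationary mass of Firm 1 per Lemma 4
share2 = e1 / (e1 + e2)
```

The iterated distribution matches the error-based shares of Equation 1 with $\lambda = 1$.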
Using our observations from Section 2.1, we can write the error-based market share in the large-data regime as follows.
Theorem 1.
Let $m_1$ and $m_2$ denote the number of data points of Firm 1 and 2, respectively. Then for some $c > 0$, the market share of Firm 1 can be written (asymptotically) as $\frac{m_1^{c}}{m_1^{c} + m_2^{c}}$.
Sketch of the Proof.
Depending on the task at hand, we can write the excess error of a firm with $m$ data points as $\Theta(m^{-\alpha})$ for some $\alpha \in (0, 1]$, where the excess error is measured relative to the model with the smallest worst-case error. Substituting this into Equation 1 and ignoring lower order terms, which vanish asymptotically, we get
$$s_1 = \frac{m_2^{-\alpha\lambda}}{m_1^{-\alpha\lambda} + m_2^{-\alpha\lambda}} = \frac{m_1^{\alpha\lambda}}{m_1^{\alpha\lambda} + m_2^{\alpha\lambda}}.$$
Now let $c = \alpha\lambda$. Since $\lambda$ is a natural number and $\alpha$ is a real number in $(0, 1]$, the combined competition exponent $c$ is a real number strictly larger than 0. ∎
Because $\lambda$ can be any positive integer and there exists a corresponding learning problem for any learning rate $\alpha$ in $(0, 1]$, Theorem 1 implies that the combined competition exponent $c = \alpha\lambda$ in our game can be any positive real number, motivated by the initial choice of $\lambda$ and the learning rate of the firms’ ML algorithms.
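The reduction in Theorem 1 is a short algebraic identity, which the sketch below checks numerically; the learning rate, competition exponent, and data sizes are illustrative.

```python
# Illustrative parameters: learning rate alpha, competition exponent lam.
alpha, lam = 0.5, 3
m1, m2 = 4000, 1500

# Excess errors eps_i = m_i^(-alpha), plugged into Equation 1.
eps1, eps2 = m1 ** (-alpha), m2 ** (-alpha)
share_error_form = eps2 ** lam / (eps1 ** lam + eps2 ** lam)

# The m^c form of Theorem 1 with combined exponent c = alpha * lam.
c = alpha * lam
share_data_form = m1 ** c / (m1 ** c + m2 ** c)
```

The two expressions coincide, and the firm with more data holds the majority share.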
The reductions and derivations in Sections 2.1 and 2.2 allow us to simplify the acquisition games as follows. We first simplify the actions of each firm to only decide whether to buy the data or not, since model choice can be optimized once the number of available data points is known. Moreover, Theorem 1 not only allows us to simplify the form of market share, but also provides us with a meaningful interpretation for any positive (combined) competition exponent.
2.3 The Structure of the Game
Given the reductions so far, we model our game as a two-player, one-shot, simultaneous move game. Firms 1 and 2 begin the game endowed with an existing number of data points, denoted by $m_1$ and $m_2$, respectively. Without loss of generality, we assume $m_1 \geq m_2$. Each firm must decide whether or not to purchase an additional corpus of $m$ data points (for simplicity, we assume this data is independent of, and identically distributed to, the data in the firms’ possession) at a fixed price of $p$. The firm can either Buy (denoted by $B$) or Not Buy (denoted by $NB$) the new data. If both firms attempt to buy the data, the tie is broken uniformly at random and only the winner pays (Section 4 discusses relaxing the assumption that only one firm may buy the data). After the purchase, each firm uses its data to train an ML model for its service.
We assume the market share of Firm 1 takes the particular form $\frac{m_1^c}{m_1^c + m_2^c}$ given by the reduction in Theorem 1. The market share of Firm 2 is defined to be one minus the market share of Firm 1.
A strategy profile is a pair of strategies, one for each of the firms. Fixing a strategy profile $\sigma$, the utility of Firm $i$ (denoted by $u_i(\sigma)$) is its market share less any expenditure. The utility of Firm 1 in all of the strategy profiles of the game is summarized in Table 1 (rows and columns correspond to the actions of Firm 1 and 2). The utility of Firm 2 is defined symmetrically.
|Firm 1/Firm 2||Buy (B)||Not Buy (NB)|
|Buy (B)||$\frac{1}{2}\big(s_1(m_1+m,\, m_2) + s_1(m_1,\, m_2+m)\big) - \frac{p}{2}$||$s_1(m_1+m,\, m_2) - p$|
|Not Buy (NB)||$s_1(m_1,\, m_2+m)$||$s_1(m_1,\, m_2)$|
Here $s_1(a, b) = \frac{a^c}{a^c + b^c}$ denotes the market share of Firm 1 when the firms hold $a$ and $b$ data points, respectively.
A strategy profile is a pure strategy Nash equilibrium (pure equilibrium) if no firm can improve its utility by taking a different action, fixing the other firm’s action. A mixed strategy Nash equilibrium (mixed equilibrium) is a pair of distributions over the actions (one for each firm) where neither firm can improve its expected utility by using a different distribution over the actions, fixing the other firm’s distribution. We are interested in analyzing the Nash equilibria (equilibria).
3 Equilibria of the Game
We now turn to finding and analyzing the equilibria. First, we introduce some additional notation. Writing $s_i(\sigma)$ for the market share of Firm $i$ under strategy profile $\sigma$, let
$$Z = \mathbb{E}[s_1(B, B)] - s_1(NB, B), \quad X = s_1(B, NB) - s_1(NB, NB), \quad Y = s_2(NB, B) - s_2(NB, NB).$$
These parameters have intuitive interpretations. $Z$ is the expected change in Firm 1’s (or, symmetrically, Firm 2’s) market share when moving the outcome from $(NB, B)$ (or similarly $(B, NB)$) to $(B, B)$. $X$ is the change in market share that Firm 1 receives if it moves from $(NB, NB)$ to $(B, NB)$, and $Y$ is the symmetric quantity from the perspective of Firm 2.
We observe that $Z = \frac{X + Y}{2}$. Moreover, since $X$ and $Y$ are nonnegative, it is immediately clear that $Z \geq 0$.
Finally, when $m_1 > m_2$ (i.e. Firm 1 starts with strictly more data), Firm 2 experiences a larger absolute change in its market share when the outcome changes from $(NB, NB)$ to $(NB, B)$ than to $(B, NB)$. In other words, Firm 2 experiences a larger increase in market share when it buys the data compared to the decrease it experiences when Firm 1 receives the data. We defer all the omitted proofs to Appendix A.
Lemma 5.
If $m_1 \geq m_2$, then for all $m$ and $c > 0$ we have that $Y \geq X$.
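The relations above admit a quick numerical spot-check under the $m^c$ share form; the data sizes and exponent below are illustrative, not values from the paper.

```python
# Market share of Firm 1 with a and b data points and exponent c.
def s1(a, b, c):
    return a ** c / (a ** c + b ** c)

# Illustrative endowments, corpus size, and combined exponent.
m1, m2, m, c = 2000, 1000, 500, 1.5

X = s1(m1 + m, m2, c) - s1(m1, m2, c)   # Firm 1's gain: (NB,NB) -> (B,NB)
Y = s1(m1, m2, c) - s1(m1, m2 + m, c)   # Firm 2's gain: (NB,NB) -> (NB,B)
# Expected change for Firm 1 from (NB,B) to (B,B), ties broken uniformly.
Z = (s1(m1 + m, m2, c) + s1(m1, m2 + m, c)) / 2 - s1(m1, m2 + m, c)
```

At these values $Z = \frac{X+Y}{2}$ holds and, since $m_1 \geq m_2$, the smaller firm's gain $Y$ exceeds $X$, as in Lemma 5.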
3.1 Characterization of the Equilibria
The equilibria of the game clearly depend on the values of $p$, $X$, $Y$, and $Z$. For example, if the price is prohibitively high (essentially zero), then neither firm should ever want to (not) buy the data. We observe that, fixing the values of $X$, $Y$, and $Z$, there is a range of values for $p$ where the data is too expensive (too cheap) and $NB$ ($B$) is a dominant strategy for both firms. There is also an intermediate range of values for $p$ where more interesting behaviors emerge, as formally characterized in Theorem 2.
Theorem 2.
When $p < Y$, $(B, B)$ is the unique equilibrium.
When $p > X + Y$, $(NB, NB)$ is the unique equilibrium.
When $Y < p < X + Y$, $(B, B)$ and $(NB, NB)$ are both equilibria. Furthermore, there exists a (unique) mixed equilibrium such that
$$x = \frac{2(p - Y)}{p + X - Y}, \qquad y = \frac{2(p - X)}{p + Y - X},$$
where $x$ and $y$ denote the probabilities that Firms 1 and 2 select the action $B$, respectively.
We use flow diagrams to analyze the equilibria of the game (see e.g. [CandoganMOP11] for more details on this technique). As a tutorial of this flow diagram argument, we carefully analyze the diagram for the regime of our game in which $p < X$, as depicted in the top left panel of Figure 3.
In a flow diagram, each vertex corresponds to a strategy profile. An arrow indicates that one player changes its strategy while the other’s action is fixed. In particular, in Figure 3, vertical (horizontal) arrows demonstrate the change of strategy for Firm 1 (Firm 2). The numerical value above the arrow indicates how much a player gains by a deviation, and arrows are oriented so that they always point in the direction of nonnegative gain. The leftmost vertical arrow indicates that Firm 1 increases its utility by $X - p$ by changing its decision from $NB$ to $B$, fixing that Firm 2 is committed to playing $NB$. Similarly, the rightmost vertical arrow indicates that Firm 1 increases its utility by $Z - \frac{p}{2}$ when it makes this change, fixing that Firm 2 is committed to playing $B$. The horizontal arrows are the symmetric results for Firm 2, fixing the action of Firm 1: the topmost arrow indicates the increase $Y - p$ in utility when moving from $NB$ to $B$ when Firm 1 plays $NB$, and the bottommost corresponds to the increase $Z - \frac{p}{2}$ for the same change of action when Firm 1 plays $B$.
This particular flow diagram models the regime of the game where the price is sufficiently low such that $(B, B)$ is the unique pure equilibrium. Consider the profile $(B, B)$. Since arrows only point at, rather than originate from, $(B, B)$, unilateral deviations from $(B, B)$ are unprofitable for both players. Hence, $(B, B)$ is a pure strategy equilibrium in this regime. Furthermore, there is no other pure equilibrium because there are no other ‘sinks’ in the top left panel of Figure 3. Moreover, no mixed equilibrium exists. To see this, note that in a mixed equilibrium, a player mixing can only mix over best responses. But since the arrows representing Firm 2’s deviations both point towards $B$, the action $NB$ is dominated by $B$; hence $NB$ cannot be a best response, so Firm 2 cannot be mixing. But since Firm 1 is not indifferent between $B$ and $NB$ if Firm 2 chooses $B$, Firm 1 will not mix either. More generally, this logic means that mixed equilibria require arrows pointing in opposite directions.
Similar logic allows us to easily analyze the continuum of games induced as $p$ varies monotonically. Every value of $p$ induces exactly one of the flow diagrams in Figure 3. Thus, characterizing the equilibria in each flow diagram characterizes the equilibria of the different parameter regimes.
(1) $p < X$: The top left panel of Figure 3 represents the flow diagram in this regime, and we can see that the only equilibrium is the pure strategy profile $(B, B)$.
(3) $Y < p < X + Y$: The top right panel of Figure 3 represents the flow diagram in this regime. There are two pure equilibria: $(B, B)$ and $(NB, NB)$. There also exists a mixed equilibrium. In a mixed equilibrium, both players are randomizing, and thus must be indifferent between the pure strategies they are randomizing over; this condition allows us to solve for the mixed strategies.
Let $x$ denote the probability that Firm 1 is playing $B$. Then in a mixed equilibrium, Firm 2 is indifferent between the two actions. Therefore,
$$x\left(Z - \frac{p}{2}\right) + (1 - x)\left(Y - p\right) = 0.$$
By rearranging, and using $Z = \frac{X + Y}{2}$, we get that
$$x = \frac{2(p - Y)}{p + X - Y}.$$
Similarly, let $y$ denote the probability that Firm 2 is playing $B$. Then in a mixed equilibrium, Firm 1 is indifferent between the two actions. With a similar calculation we can show that
$$y = \frac{2(p - X)}{p + Y - X}.$$
(4) $p > X + Y$: The bottom panel of Figure 3 represents the flow diagram in this regime, and we can see that the only equilibrium is the pure strategy profile $(NB, NB)$. ∎
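The indifference conditions can be verified numerically. The sketch below uses the $c = 1$ share form and the convention that the tie-winner pays $p$; all parameter values are illustrative.

```python
# Market share of Firm 1 with the c = 1 share form (illustrative).
def s1(a, b):
    return a / (a + b)

m1, m2, m = 2000, 1000, 500
X = s1(m1 + m, m2) - s1(m1, m2)
Y = s1(m1, m2) - s1(m1, m2 + m)
Z = (X + Y) / 2
p = 0.12                       # chosen inside the middle regime (Y, X + Y)

x = 2 * (p - Y) / (p + X - Y)  # Firm 1's probability of B
y = 2 * (p - X) / (p + Y - X)  # Firm 2's probability of B

# Net gain from switching NB -> B for each firm, given the rival's mixture:
gain1 = y * (Z - p / 2) + (1 - y) * (X - p)
gain2 = x * (Z - p / 2) + (1 - x) * (Y - p)
```

Both net gains vanish, confirming that each firm is indifferent at the stated mixture; the smaller firm mixes toward $B$ more heavily.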
Theorem 2 allows us to make several key observations about the market structure of this game. First, since $X$ and $Y$ represent the maximum increase in the market share the firms could achieve by buying the data, the fact that $(B, B)$ remains an equilibrium even when $p > Y$ means that both firms may buy the data despite the fact that the best-case improvement in market share is less than what they pay. This ‘race for data’ thus has the character of a prisoner’s dilemma – if both firms could agree not to buy the data, they would be better off, but either would be tempted to buy the data and improve its market share.
Second, Theorem 2 illustrates how several features of equilibrium depend on the ferocity of competition, as determined by the exponent $\lambda$; as $\lambda$ varies, the frontiers of the regimes described in Theorem 2 shift too. For example, in the case that $\lambda = 0$, market share is split evenly between the two firms, regardless of error or accuracy; unsurprisingly, as $\lambda \to 0$ (which implies $c \to 0$), $X$, $Y$, and $Z$ also approach $0$, so the payoff difference between strategy profiles becomes negligible. As a consequence, the regimes (1), (2), and (3) collapse, and all but very small $p$ induce regime (4), where $(NB, NB)$ is the only equilibrium. Thus, for small $\lambda$, unless $p$ is very close to zero, $(NB, NB)$ is the only equilibrium. We observe similar behavior when $\lambda$ is large. Assuming that $m_1 > m_2 + m$, then $\lambda \to \infty$ implies that the market share of Firm 1 approaches $1$ in every outcome (and hence $X$, $Y$, and $Z$ approach $0$), again implying that regimes (1), (2), and (3) collapse. Thus again, unless $p$ is very close to $0$, $(NB, NB)$ is the unique equilibrium. This is for a different reason than the small $\lambda$ case, however: Firm 1 now has no incentive to buy, since it is guaranteed almost the whole market share using its current model. Moreover, in this scenario, Firm 2’s initial disadvantage is too great to be overcome by buying the data.
If $\lambda$ is in between these two extremes, many choices of $m_1$ and $m_2$ lead to a non-empty interval $(Y, X + Y)$, with endpoints bounded away from $0$ and $1$. When $p$ falls in this interval, the middle regime of Theorem 2 holds, so a mixed equilibrium exists; we discuss solving for this mixed equilibrium in Section 3.2. The complete characterization of the equilibria for all regimes of $p$ in Theorem 2 allows us to pin down the optimal fixed price from the perspective of maximizing the seller’s revenue. However, in full generality, the seller’s problem encompasses further possibilities like auction pricing; hence, we defer this calculation to future work. See Section 5 for a discussion.
3.2 Mixed Equilibrium and Monotonicity Analysis
Next, we carefully examine the mixed equilibrium and study the relationship between the weights each firm places on each action and the parameters of the game.
Recall that $x$ and $y$ in Theorem 2 denote the probabilities that Firms 1 and 2 purchase the data in the mixed equilibrium. When $m_1 > m_2$, then $Y > X$, which implies $x < y$: the smaller firm will succeed more often in purchasing the data in the mixed equilibrium. The relationship of $x$ and $y$ with the number of data points and the price is as follows.
Lemma 6.
Let $(x, y)$ denote the mixed equilibrium in the regime where $Y < p < X + Y$. Then $x$ and $y$ both increase when $p$ increases or $m$ decreases.
Lemma 6 may seem counterintuitive, as it implies that as the price rises through the range in which a mixed equilibrium exists, the probability that each firm wants to buy the data also increases. However, once the price crosses the threshold $X + Y$, the unique equilibrium is the pure strategy profile $(NB, NB)$. This gives rise to a discontinuity. See Figure 4.
Of course, this all says nothing about the equilibrium utilities for the firms; as long as the equilibrium utilities are not identical, players will naturally have ordinal preferences over the set of equilibria. We analyze these preferences in Lemma 7, which elucidates the discontinuity at $p = X + Y$.
Lemma 7.
When $Y < p < X + Y$, $u_1(NB, NB) \geq u_1(\sigma)$ and $u_2(NB, NB) \geq u_2(\sigma)$ for all strategy profiles $\sigma$. However, $u_1(B, NB) > u_1(NB, B)$, while $u_2(NB, B) > u_2(B, NB)$.
While both firms agree that $(NB, NB)$ is the most preferred outcome, their preferences over the remaining three outcomes are discordant. In particular, given that at least one firm will try to buy the data, each firm would prefer itself to be the buyer rather than the opponent. If either firm believes the other may try to buy the data, it will put positive weight on the action $B$ in the mixed equilibrium. Once the price crosses $X + Y$, both firms know that it would be irrational for the other to buy, so we see a unique pure equilibrium of $(NB, NB)$.
3.3 Change in the Market Share
We now analyze the change in the market shares.
Lemma 8.
When $m_1 > m_2$, the only strategy profile that strictly increases the market share of Firm 1 is $(B, NB)$.
So while only $(B, NB)$ leads to an increase in the market share of Firm 1, it is not a pure equilibrium in any regime. We show that even when firms play according to the mixed equilibrium, the expected market share of Firm 1 does not strictly increase.
Theorem 3.
When $m_1 > m_2$ and $Y < p < X + Y$, the expected market share of Firm 1 does not strictly increase if both firms play according to the mixed equilibrium.
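Theorem 3 can be illustrated numerically under the same $c = 1$ share form used earlier; all parameter values below are illustrative.

```python
# Market share of Firm 1 with the c = 1 share form (illustrative).
def s1(a, b):
    return a / (a + b)

m1, m2, m = 2000, 1000, 500
X = s1(m1 + m, m2) - s1(m1, m2)
Y = s1(m1, m2) - s1(m1, m2 + m)
p = 0.12                                  # inside the middle regime (Y, X + Y)
x = 2 * (p - Y) / (p + X - Y)             # Firm 1's probability of B
y = 2 * (p - X) / (p + Y - X)             # Firm 2's probability of B

# Expected change in Firm 1's share over the four outcomes
# (tie broken uniformly at (B, B); (NB, NB) contributes zero).
delta = (x * y * (X - Y) / 2              # (B, B)
         + x * (1 - y) * X                # (B, NB)
         - (1 - x) * y * Y)               # (NB, B)
```

The expected change is negative: in this example the data (and market share) gap narrows in expectation.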
Together, Lemma 8 and Theorem 3 demonstrate that the natural forces of the interaction on the market are, perhaps surprisingly, antimonopolistic. Since we assume that Firm 1 enters the game with a greater market share than Firm 2, but that no equilibrium allows Firm 1 to increase its market share, the game disfavors the concentration of market power. This raises the question of whether this antimonopolistic tendency is good for the users. We analyze this next.
3.4 Consumer Welfare in Equilibrium
We now consider the perspective of the users of the firms’ services. We show that consumers prefer the outcome $(B, NB)$, in which the initially stronger firm concentrates its market power. This is not supported by a pure equilibrium in any regime, nor is it the most likely outcome generated by the mixed equilibrium; hence, we will see that the interests of the firms do not align with the interests of the consumers. We define the consumer welfare as follows.
Let $d_1(\sigma)$ and $d_2(\sigma)$ denote the (expected) number of data points that Firms 1 and 2 possess when playing according to strategy profile $\sigma$. Then the consumer welfare is
$$W(\sigma) = s_1(\sigma)\big(1 - \varepsilon_1(\sigma)\big) + s_2(\sigma)\big(1 - \varepsilon_2(\sigma)\big),$$
where $\varepsilon_i(\sigma)$ denotes the error of Firm $i$’s model trained on $d_i(\sigma)$ data points.
The welfare definition arises from assuming consumers receive $1$ unit of utility for accurate predictions and $0$ for erroneous ones. Notice that maximizing this definition of consumer welfare is exactly equivalent to minimizing the market-share weighted error probability. This leads to Theorem 4.
Suppose $m_1 \geq m_2$ and $c > 1$. Then the consumers have the following preferences over the strategy profiles:
$$W(B, NB) \geq W(B, B) \geq W(NB, B) \geq W(NB, NB).$$
Note that the consumers’ preference for the outcome in which Firm 1 concentrates its market power is not the same as saying that the consumers prefer a monopoly. Rather, the consumers have a preference for higher quality services. When $m_1 > m_2$, Firm 1’s model before acquiring the data has a lower error rate than that of Firm 2, and so, of all the possible outcomes, the one which leads to a product with the lowest error rate is the one in which Firm 1 is able to improve on its already superior product. But if Firm 2 were not a player at all, then a monopolistic Firm 1 would have no incentive to buy the new data. Therefore, a monopoly without the threat of competition will not lead to the best outcome from the consumers’ perspective.
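The welfare ranking can be illustrated numerically; the learning rate, competition exponent (chosen so that the combined exponent exceeds $1$), and data sizes below are illustrative.

```python
# Illustrative parameters with combined exponent c = alpha * lam = 2 > 1.
alpha, lam = 1.0, 2
m1, m2, m = 2000, 1000, 500

def err(n):
    return n ** (-alpha)                       # excess error with n data points

def welfare(d1, d2):
    e1, e2 = err(d1), err(d2)
    s1 = e2 ** lam / (e1 ** lam + e2 ** lam)   # error-based share, Equation 1
    return s1 * (1 - e1) + (1 - s1) * (1 - e2)

W_B_NB = welfare(m1 + m, m2)                             # Firm 1 gets the data
W_B_B = (welfare(m1 + m, m2) + welfare(m1, m2 + m)) / 2  # uniform tie-break
W_NB_B = welfare(m1, m2 + m)                             # Firm 2 gets the data
W_NB_NB = welfare(m1, m2)                                # neither firm buys
```

Consumers are best off when the larger firm improves its already superior product, and worst off when nobody buys.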
4 Extensions and Robustness
Next, we consider robustness to three simple extensions.
We treat the data seller as a market participant with its own customers and market share. This allows us to model firm acquisition: buying the data translates to acquiring the firm and its customers, and neither firm buying the data corresponds to the third firm remaining in the market.
Rather than the data being exclusively sold to one firm in the case that both firms buy, we allow the seller to sell the data to both firms at the same fixed price.
We consider the richer concept of correlated equilibrium and search for additional equilibria.
In each of these extensions, we can again derive the quantities , , and ; while the precise quantities change, their rankings and relationships do not. Thus, in the first two extensions, the general phenomenon of three regimes, with mixing over the middle regime, remains unchanged. In the third extension, some new correlated equilibria exist, but none include the qualitatively different result of coordinating purchase of the data by a single firm. Moreover, in expectation, the market share becomes less asymmetric in all extensions.
5 Future Directions
We view our work as a first step towards modeling and analyzing competition for data in markets driven by ML. There are several directions for further investigation. First, we modeled the data to be acquired as having a fixed size and a fixed price, but real datasets can be divisible. One further direction is to expand the strategy space of the players to allow buying any number of data points, at either a fixed price per data point or a price given by a nonlinear function of the quantity purchased. More generally, treating the seller of the data as an additional player in the game raises further questions, such as: what is the optimal revenue-generating mechanism for selling the data? And does the optimal mechanism maximize social welfare?
Additionally, many firms that provide learning-based services acquire their data from the customers who use the service. In this way, capturing a larger market share induces a feedback loop that allows a firm to iteratively improve its product. What can be said about our game in a repeated setting with dynamic feedback effects? Furthermore, firms that provide digital services often operate in a secondary market in which other firms pay for advertising spots in their product. Improving one’s market share should in principle allow a firm to charge advertisers a higher price, but we do not know to what extent this affects the analysis of the equilibria of the game. Incorporating advertiser behavior would greatly complicate the model but could yield interesting results.
Appendix A Omitted Proofs
Proof of Lemma 5. Define and . Then and we also have that
Next let , , and . Notice that and for all . Algebraic manipulations show that
Fix a pair of with ; there are two cases to consider: (1) and (2) . (These two cases correspond to and , but this correspondence is irrelevant.)
In case (1) . Notice that would suffice to prove the claim in this case, because
which is the last condition in the chain of double implications. In fact, does hold, because
Now we turn to the second case. Suppose . We again must show that . Notice that the following implication holds.
Hence we show that the first inequality is true. Note that
which is trivially true by and .
Hence in both cases, we have that
concluding the proof. Finally, we note that symmetry yields the corresponding claim . ∎
Proof of Lemma 6. The left hand sides of the equations characterizing the mixed equilibrium in the statement of Theorem 2 have the form , which is increasing in when . If we call the left hand side of either of these equations , we can solve for , which is also monotonically increasing in over . Hence, to analyze the monotonicity of the , it suffices to analyze the monotonicity of . But since must also equal the right hand side, it suffices to analyze the monotonicity of the right hand sides of these equations.
Since does not appear in or , it is easy to see that both fractions and increase as increases. Hence, both and increase as increases in the regime . The parameter , on the other hand, appears in , and . All of these parameters increase as increases. It is then similarly easy to see that both fractions and decrease as increases. ∎
Proof of Lemma 7. We claim that the ordinal preferences of Firm 1 over the outcomes are as follows.
Since , it suffices to show that
We first show that .
We then show that .
Moreover, the ordinal preferences of Firm 2 are as follows.
Again note that . So it suffices to show that .
We first show that .
We wrap up by showing that .
Then in the mixed equilibrium, the four outcomes occur with the following probabilities:
with probability ,
with probability ,
with probability , and
with probability .
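Writing p and q (our notation, not the paper’s) for the equilibrium probabilities with which Firm 1 and Firm 2 buy, the four outcome probabilities above arise from the two firms mixing independently; a minimal sketch:

```python
def outcome_probabilities(p, q):
    # Joint outcome distribution when Firm 1 buys with probability p and
    # Firm 2 buys with probability q, the two firms mixing independently.
    # Outcome labels: (Firm 1's action, Firm 2's action).
    return {
        ("buy", "buy"): p * q,
        ("buy", "pass"): p * (1.0 - q),
        ("pass", "buy"): (1.0 - p) * q,
        ("pass", "pass"): (1.0 - p) * (1.0 - q),
    }
```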
The expected change in Firm 1’s market share can be calculated by summing the change in its market share in each outcome (see the proof of Lemma 8), weighted by the probability with which that outcome occurs in the mixed equilibrium. Thus the expected change in Firm 1’s market share is
Since we are only interested in whether the market share increases or decreases, we only care about the sign of the above term. The denominator is always positive, as both terms in the denominator are positive when . So it suffices to show that the numerator is non-negative.
When , the first term in the numerator is zero, so the expected market share is the same as the initial market share. On the other hand, when , Lemma 5 implies that the first term in the numerator is negative. We claim that the second term in the numerator is always positive. To see this, first observe that is a linear function of and it is strictly positive at both end points (by Lemma 5) and . Since a linear function that is positive at both endpoints of an interval is positive throughout it, the term is positive for all values of between and , which is exactly the regime we are interested in. ∎
Proof of Lemma 8. The change in the market share of Firm 1 in strategy profiles compared to the beginning of the game is the parameter , which is always positive. Similarly, the change in the market share of Firm 2 in strategy profile compared to the beginning of the game is the parameter . The change in the market share of Firm 1 in this strategy profile is since the sum of market shares is always one, and since is always positive, is negative. Moreover, the expected change in Firm 1’s market share for is because we decide which firm purchases the data by a fair coin toss. By Lemma 5, , so . Finally, there is no change in the market shares in the outcome . Thus, only for does Firm 1’s market share strictly increase. ∎
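The per-outcome accounting in this proof can be sketched numerically. Assuming, with illustrative names of our own, that delta1 and delta2 are the positive share gains when Firm 1, respectively Firm 2, alone obtains the data, and that p and q are the firms’ mixing probabilities, the expected change in Firm 1’s share is the probability-weighted sum used in the proof of Lemma 7:

```python
def expected_share_change(p, q, delta1, delta2):
    # Expected change in Firm 1's market share, summing the per-outcome
    # changes weighted by the mixed-strategy probabilities.
    return (
        p * q * (delta1 - delta2) / 2.0   # both bid: a fair coin picks the buyer
        + p * (1.0 - q) * delta1          # only Firm 1 buys: Firm 1 gains delta1
        + (1.0 - p) * q * (-delta2)       # only Firm 2 buys: Firm 1 loses delta2
        # neither buys: shares are unchanged, contributing zero
    )
```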
Proof of Theorem 4. We first simplify the consumer welfare for a strategy profile using the shorthands , , and .
So the strategy profile that maximizes the social welfare of the consumers is, equivalently, the one that maximizes the following expression.
We take the following three steps to prove the statement of the theorem: (1) , (2) , and (3) for all . For simplicity, in the rest of the proof we assume that the error scales inversely with the square root of the number of data points.
To prove part (1), first, observe that
when and also
since the function is increasing in when . Adding a positive term to the expression above, we get that