1 Introduction
The market for software vulnerabilities, also known as bugs, is a crowded one. For those holding a serious bug to sell, there are many kinds of interested customers: the software vendors themselves, who can produce official patches; the anonymous buyers in the black markets, which boast greater rewards [1]; and many others in between, such as the vulnerability brokers.
Following Böhme [3], by vulnerability brokers we refer to organizations other than software vendors that purchase vulnerabilities and produce corresponding defense services (such as intrusion detection systems [8]) for their subscribers. Bug bounty programs offered by vulnerability brokers provide greater financial incentives for vulnerability sellers, as their customers can include large corporations and government agencies with huge security budgets [8]. One common problem with these programs is that subscribers are usually charged a fixed annual subscription fee [3], while the brokers certainly do not produce a constant number of security updates every year, and each customer may not benefit equally from each update. For example, an update that helps prevent a bug in the Windows operating system is of little interest to customers who do not use Windows at all, though they still pay for it through the fixed subscription fee.
While this inequality could in principle be addressed by designing as many subscription levels as necessary, we instead introduce a game-theoretic model for nonprofit bug bounty programs that both solves this efficiency problem and promotes general software security.
Specifically, we study the mechanism design problem of selling one bug (information regarding it) to multiple agents. The goal is not to make a profit, but we need the mechanism to cover the cost of the bug. All agents receive the bug if enough payments can be collected to cover the cost. To incentivize payments, agents who do not pay receive the bug slightly delayed. Our goal is to maximize the social welfare by minimizing the maximum and the total delay. We end up with a mechanism that is 4-competitive against an undominated mechanism in terms of maximum delay. We conclude by discussing the expected delay under some assumptions on the distributions of the agents' valuations.
Although the problem we study is derived from bug bounty programs, it could certainly apply to other systems, so we define here the traits that characterize it. The service or good that is sold has unlimited supply once funded (zero marginal cost), and it cannot be retracted once given to a user; the most common examples are information and digital goods. The agents have a valuation function that is nonincreasing in time: the earlier an agent gets the information, the more utility she receives. As we are designing nonprofit systems, the mechanism should be budget balanced: we charge the agents exactly the amount needed to purchase the bug from which the defense information is derived. Finally, we want to incentivize enough payments with long premium time periods (periods exclusively enjoyed by the paying agents). But we also want the premium time periods to be as short as possible so that nonpaying agents receive the information sooner, as earlier delivery leads to higher social welfare.
2 Related Research
With more and more critical software vulnerabilities catching the public's attention, there is an increasing amount of literature on the market for vulnerabilities. However, we failed to identify any research that shares the same problem structure or the same goal as ours, so the following work mostly serves to understand the vulnerability market and to inspire future work, rather than being what our study is based on.
Regarding vendors' possible reluctance to accept and fix reported vulnerabilities responsibly, Canfield et al. [4] made quite a few recommendations on ways to incentivize vendors to fix their software's vulnerabilities responsibly, along with general improvement suggestions, including allowing negotiation of the severity level of discovered vulnerabilities. On the subject of how and when bugs should be disclosed to the general public, Arora et al. [2] produced numerical simulations suggesting that instant disclosure of vulnerabilities is suboptimal. Nizovtsev and Thursby [13], unlike others, used a game-theoretic approach to show that full disclosure can be an equilibrium strategy, and commented that the pressure instant disclosure puts on vendors may have a long-term effect that improves software quality. There have also been discussions on the feasibility of introducing markets for trading bugs openly [11], with some going as far as designing revenue-maximizing mechanisms for them [6, 5].
When introducing new bug bounty programs, it is also necessary to consider their effects beyond the expected producer and consumer population. Maillart et al. [10] showed that each newly launched program has a negative effect on submissions to existing bug bounty programs, and they also analyzed the researchers' (bounty hunters') expected gains and strategies when participating in bug bounty programs. Specifically for vulnerability brokers, Kannan et al. [9] emphasized a caveat that a vulnerability broker (called a market-based infomediary in their paper) always has an incentive to leak the actual vulnerability, as "…This leakage exposes nonsubscribers to attacks from the hacker. The leakage also serves to increase the users' incentives to subscribe to the infomediary's service."
Finally, we found very little literature on existing vulnerability brokers and the actual sellers of vulnerabilities. Although a few papers on these topics were located [8, 6, 5], we did not find any detailed models or discussions, perhaps due to the secretive nature of the cybersecurity business.
3 Model Description
We study the problem of selling one bug (with a fixed cost) to $n$ agents. Our goal is not to make a profit, but we need the mechanism to cover the cost of the bug. Without loss of generality, we assume the cost of the bug is $1$.
Our mechanism generally charges a total payment of $1$ from the agents (or charges $0$, in which case the bug is not sold). If the bug is sold, then we provide it to all agents, including those who pay very little or do not pay at all. There are a few reasons for this design decision:

The main goal of this nonprofit system is to promote general software security, so we would like to have as many people protected from the vulnerability as possible.

Since no cost is incurred in distributing the bug once funded, the system and the agents don’t lose anything by allowing the presence of free riders.

In practice, providing free security information encourages more agents to join the system. Under our cost sharing mechanism, including more agents generally leads to lower individual payments and higher utilities for everyone.
To incentivize payments, if an agent has a higher valuation (is willing to pay more), then our mechanism provides the bug information to this agent slightly earlier. Free riders receive the bug for free, just with a bit of delay. Our aim is to minimize this delay (we cannot get rid of it completely, as it is needed for collecting payments).
We assume the bug has a life cycle of $1$. Time $0$ is when the sale starts. Time $1$ is when the bug reaches the end of its life cycle (or when the bug becomes public knowledge).
We use $v_i$ to denote agent $i$'s type. If agent $i$ receives the bug at time $t \in [0, 1]$, then her valuation equals $v_i\,(1 - t)$. That is, if she receives the bug at time $0$, then her valuation is simply $v_i$. If she receives the bug at time $1$, then her valuation is $0$.
We use $t_i(v_i, v_{-i})$ and $p_i(v_i, v_{-i})$ to denote agent $i$'s allocation time and payment, under the mechanism under discussion, when agent $i$ reports $v_i$ and the other agents report $v_{-i}$ (for randomized mechanisms, the allocation times and payments are the expected values over the random bits). Agent $i$'s utility is $v_i\,(1 - t_i(v_i, v_{-i})) - p_i(v_i, v_{-i})$.
We enforce three mechanism constraints in this paper: strategyproofness, individual rationality, and ex post budget balance. They are formally defined as follows:

Strategyproofness: for any $v_i$, $\hat{v}_i$, and $v_{-i}$, $v_i\,(1 - t_i(v_i, v_{-i})) - p_i(v_i, v_{-i}) \ge v_i\,(1 - t_i(\hat{v}_i, v_{-i})) - p_i(\hat{v}_i, v_{-i})$.

Individual Rationality: for any $v_i$ and $v_{-i}$, $v_i\,(1 - t_i(v_i, v_{-i})) - p_i(v_i, v_{-i}) \ge 0$.

Ex post budget balance (for randomized mechanisms, we require that the constraint holds for all realizations of the random bits):

If the bug is sold, then we must have $\sum_i p_i(v_i, v_{-i}) = 1$.

If the bug is not sold, then we must have $p_i(v_i, v_{-i}) = 0$ for all $i$.

We study the minimization of two different mechanism design objectives, MaxDelay and SumDelay, defined as follows: $\text{MaxDelay} = \max_i t_i(v_i, v_{-i})$ and $\text{SumDelay} = \sum_i t_i(v_i, v_{-i})$.
Our setting is a single-parameter setting where Myerson's characterization applies.
Claim (Myerson’s Characterization [12])
Let $M$ be a strategyproof and individually rational mechanism. We must have that:

For any $i$, $v_i$, and $v_{-i}$, $t_i(v_i, v_{-i})$ is nonincreasing in $v_i$. That is, by reporting higher, an agent's allocation time never becomes later.

The agents' payments are completely characterized by the allocation times. That is, $p_i(v_i, v_{-i})$ is determined by the function $t_i(\cdot, v_{-i})$.
The above payment characterization also implies that both the payment and the utility are nondecreasing in $v_i$.
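To make the characterization concrete, here is the standard Myerson payment identity instantiated for our setting, where $1 - t_i$ plays the role of the allocation variable (a sketch under the usual normalization that a zero-type agent pays nothing):

```latex
p_i(v_i, v_{-i}) \;=\; v_i \bigl(1 - t_i(v_i, v_{-i})\bigr) \;-\; \int_0^{v_i} \bigl(1 - t_i(z, v_{-i})\bigr)\, dz
```

In particular, reporting higher weakly increases both $1 - t_i$ and the payment, matching the monotonicity statements above.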
4 PriorFree Settings
In this section, we focus on problem settings where we do not have prior distributions over the agents' types. For both MaxDelay and SumDelay, the notion of an optimal mechanism is not well-defined: given two mechanisms $M$ and $M'$, mechanism $M$ may outperform $M'$ under some type profiles, and vice versa under others.
We adopt the following dominance relationships for comparing mechanisms.
Definition 1
Mechanism $M$ MaxDelay-dominates mechanism $M'$ if:

for every type profile, the MaxDelay under mechanism $M$ is at most that under mechanism $M'$ (tie-breaking detail: given a type profile, if under $M$ the bug is not sold, so the max delay is $1$, while under $M'$ the bug is sold and the max delay happens to also be $1$, then we interpret the max delay under $M$ as not at most that under $M'$);

for some type profiles, the MaxDelay under mechanism $M$ is strictly less than that under mechanism $M'$.
A mechanism is MaxDelay-undominated if it is not MaxDelay-dominated by any strategyproof and individually rational mechanism.
Definition 2
Mechanism $M$ SumDelay-dominates mechanism $M'$ if:

for every type profile, the SumDelay under mechanism $M$ is at most that under mechanism $M'$;

for some type profiles, the SumDelay under mechanism $M$ is strictly less than that under mechanism $M'$.
A mechanism is SumDelay-undominated if it is not SumDelay-dominated by any strategyproof and individually rational mechanism.
For our model, one trivial mechanism works as follows:
Cost Sharing (CS). Strategyproofness: Yes. Individual rationality: Yes. Ex post budget balance: Yes. Consider the following set: $K = \{k \mid v_{(k)} \ge 1/k\}$, where $v_{(k)}$ denotes the $k$-th highest reported type. If $K$ is nonempty, let $k^* = \max K$: the $k^*$ highest-valuation agents each pay $1/k^*$ and receive the bug at time $0$; otherwise the bug is not sold.
The above mechanism is strategyproof, individually rational, and ex post budget balanced. Under the mechanism, $k^*$ agents join the cost sharing and their delays are $0$s, but the remaining agents all have the maximum delay $1$. Both the MaxDelay and the SumDelay are bad when $k^*$ is small. One natural improvement is as follows:
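To illustrate, here is a minimal Python sketch of our reading of CS, with the cost normalized to 1. The function name and the "largest feasible $k$" rule are our own choices based on the description above, not code from the paper:

```python
def cost_sharing(vals, cost=1.0):
    """Trivial cost sharing (CS): find the largest k such that the k
    highest-valuation agents are each willing to pay cost/k for
    receiving the bug at time 0. Everyone else waits until time 1.
    Returns (allocation_times, payments), or (None, None) if the
    sharing fails and the bug is not sold."""
    n = len(vals)
    order = sorted(range(n), key=lambda i: -vals[i])  # descending valuations
    for k in range(n, 0, -1):
        share = cost / k
        if vals[order[k - 1]] >= share:  # k-th highest valuation covers its share
            times = [1.0] * n            # free riders wait until time 1
            pays = [0.0] * n
            for i in order[:k]:
                times[i] = 0.0           # payers receive the bug immediately
                pays[i] = share
            return times, pays
    return None, None
```

For instance, with valuations (0.1, 0.6, 0.7), the two highest-valuation agents each pay 1/2 and receive the bug at time 0, while the third agent waits until time 1.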
Cost Sharing with Deadline (CSD). Strategyproofness: Yes. Individual rationality: Yes. Ex post budget balance: No. Set a constant deadline of $d \in (0, 1]$. Under the mechanism, an agent's allocation time is at most $d$. Consider the following set: $K_d = \{k \mid v_{(k)}\, d \ge 1/k\}$, and run the cost sharing with $k^* = \max K_d$ if $K_d$ is nonempty.
The idea essentially is that we run the trivial cost sharing (CS) mechanism on the time interval $[0, d)$, and every agent receives the time interval $[d, 1]$ for free. The mechanism remains strategyproof and individually rational. Unfortunately, the mechanism is not ex post budget balanced: even if the cost sharing fails (i.e., $K_d$ is empty), we still need to reveal the bug to the agents at time $d$ for free. In that case, we have to pay the seller without collecting back any payments.
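A corresponding sketch of CSD, under the same normalization and our assumed set condition $v_{(k)}\, d \ge 1/k$ (an agent's value for the premium interval $[0, d)$ is $v_i \cdot d$). Note how the mechanism can end up budget-unbalanced: when the sharing fails, the bug is still released at time $d$ with no payments collected:

```python
def cost_sharing_deadline(vals, d, cost=1.0):
    """Cost sharing with deadline (CSD) sketch: run CS over [0, d].
    Paying agents receive the bug at time 0; everyone else receives it
    at time d for free, so an agent's value for paying is v_i * d.
    Not ex post budget balanced: if the sharing fails, the bug must
    still be revealed at time d although nothing was collected."""
    n = len(vals)
    order = sorted(range(n), key=lambda i: -vals[i])
    times = [d] * n          # by default, everyone waits until the deadline
    pays = [0.0] * n
    for k in range(n, 0, -1):
        share = cost / k
        if vals[order[k - 1]] * d >= share:  # k agents can fund [0, d)
            for i in order[:k]:
                times[i] = 0.0
                pays[i] = share
            break
    return times, pays
```

With valuations (0.6, 0.7) and $d = 1$, both agents pay 1/2 and get the bug at time 0; with $d = 0.5$, neither is willing to pay, and both simply receive the bug at time 0.5 for free, leaving the cost uncovered.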
The reason we describe the CSD mechanism is that our final mechanism uses it as a subcomponent, in a way that fixes the budget balance issue.
Example 1
Let us consider a type profile $(v_1, v_2, v_3, v_4)$ with $v_1 \le v_2 \le v_3 \le v_4$, under which exactly agents $3$ and $4$ can share the cost. We run the cost sharing with deadline (CSD) mechanism using different $d$ values:

If we set $d = 1$, then agents $3$ and $4$ receive the bug at time $0$ and each pays $1/2$. Agents $1$ and $2$ pay nothing, but they have to wait until time $1$.

If we set a smaller $d$ (still with $v_3\, d \ge 1/2$), then agents $3$ and $4$ still receive the bug at time $0$ and each pays $1/2$. Agents $1$ and $2$ pay nothing, but they only need to wait until time $d$. This is obviously better.

If we set $d$ too small, then no agent is willing to pay her share: all agents pay $0$ and only wait until time $d$. However, we run into a budget issue in this scenario.
We need $d$ to be small in order to have shorter delays. However, if $d$ is too small, we have budget issues. The optimal value depends on the type profile. For the above type profile, the optimal deadline is $d^* = 1/(2v_3)$: when $d = d^*$, agent $3$ is still willing to pay $1/2$ for the time interval $[0, d^*)$, as $v_3\, d^* \ge 1/2$.
Definition 3
Given a type profile $(v_1, \dots, v_n)$, consider the following set for each deadline $d$: $K_d = \{k \mid v_{(k)}\, d \ge 1/k\}$, where $v_{(k)}$ denotes the $k$-th highest type.
$d$ ranges between $0$ and $1$. As $d$ becomes smaller, the set $K_d$ also becomes smaller. Let $d^*$ be the minimum value of $d$ for which $K_d$ is not empty. If such a $d$ does not exist (i.e., $K_1$ is empty), then we set $d^* = 1$.
$d^*$ is called the optimal deadline for this type profile.
Instead of using a constant deadline, we may pick the optimal deadline for every type profile.
Cost Sharing with Optimal Deadline (CSOD). Strategyproofness: No. Individual rationality: Yes. Ex post budget balance: Yes. For every type profile, we calculate its optimal deadline $d^*$. We then run CSD using the optimal deadline.
CSOD is ex post budget balanced. If we cannot find $k$ agents willing to pay $1/k$ each for any deadline $d \le 1$, then the optimal deadline is $1$ and the cost sharing fails. In that case, we simply do not reveal the bug (we choose not to buy it from the seller).
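Under our reading of Definition 3, the optimal deadline has a closed form: for each $k$, the smallest deadline at which $k$ agents can fund the bug is $1/(k\, v_{(k)})$, and $d^*$ is the minimum of these thresholds over $k$. A hypothetical helper (the function name and the None-on-failure convention are our own; the paper instead sets $d^* = 1$ and declares the sharing failed):

```python
def optimal_deadline(vals, cost=1.0):
    """Optimal deadline d* for CSOD: the smallest d for which the CSD
    cost sharing still succeeds. The k highest-valuation agents can
    fund the bug iff v_(k) * d >= cost/k, i.e. d >= cost / (k * v_(k)).
    Returns None when no d <= 1 works (the bug cannot be sold)."""
    svals = sorted(vals, reverse=True)   # v_(1) >= v_(2) >= ...
    best = None
    for k in range(1, len(svals) + 1):
        if svals[k - 1] <= 0:
            break                        # no further k can help
        d_k = cost / (k * svals[k - 1])  # threshold deadline for this k
        if d_k <= 1.0 and (best is None or d_k < best):
            best = d_k
    return best
```

For example, with valuations (0.8, 0.8), a single agent can never cover the cost in time ($1/0.8 > 1$), but two agents sharing succeed at $d^* = 1/(2 \cdot 0.8) = 0.625$.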
Unfortunately, we gained some and lost some: due to the type-dependent deadlines, the mechanism is not strategyproof.
Example 2
Let us reconsider the type profile from Example 1. By reporting truthfully, agent $3$ receives the bug at time $0$ and pays $1/2$, and the optimal deadline is $d^* = 1/(2v_3)$. However, agent $3$ can lower her reported type: the optimal deadline then becomes larger, which can allow more agents to join the cost sharing. Agent $3$ still receives the bug at time $0$ but pays a smaller share.
Although it is not strategyproof, under our prior-free settings, CSOD is optimal in the following sense:
Theorem 4.1
Cost sharing with optimal deadline (CSOD) is both MaxDelay-undominated and SumDelay-undominated.
Proof
We first focus on MaxDelay-undominance. Let $M$ be a strategyproof and individually rational mechanism that MaxDelay-dominates CSOD. We will prove by contradiction that such a mechanism does not exist.
Let $(v_1, \dots, v_n)$ be an arbitrary type profile. Without loss of generality, we assume $v_1 \ge v_2 \ge \dots \ge v_n$. We will show that $M$'s allocations and payments must be identical to those of CSOD for this type profile. That is, $M$ must be identical to CSOD, which results in a contradiction.
We first consider type profiles under which the bug is sold under CSOD. We still denote the type profile under discussion by $(v_1, \dots, v_n)$. Let $k$ be the number of agents who participate in the cost sharing under CSOD.
We construct the following type profile:
(1) 
For the above type profile, under CSOD, the first $k$ agents receive the bug at time $0$ and each pays $1/k$. By the dominance assumption (this holds for both MaxDelay-dominance and SumDelay-dominance), under $M$, the bug must also be sold. To collect the full cost, the first $k$ agents must each pay $1/k$, and they must receive the bug at time $0$ due to individual rationality.
Let us then construct a slightly modified type profile:
(2) 
Since agent $1$'s type is only raised, under $M$, agent $1$ must still receive the bug at time $0$ due to the monotonicity constraint. Agent $1$'s payment must also still be $1/k$: if the new payment were lower, then had agent $1$'s true type been her original value, she would benefit by reporting the raised value instead; if the new payment were higher, then agent $1$ would benefit by reporting her original value instead. Agents $2$ to $k$ still each pay $1/k$ and receive the bug at time $0$ due to individual rationality.
We repeat the above reasoning by constructing another slightly modified type profile:
(3) 
Due to the monotonicity constraint, agent $1$ still pays $1/k$ and receives the bug at time $0$. Had agent $2$ reported her original type, she would receive the bug at time $0$ and pay $1/k$, so due to the monotonicity constraint, agent $2$ still pays $1/k$ and receives the bug at time $0$ under type profile (3). The rest of the paying agents must be responsible for the remaining cost, so they still each pay $1/k$ and receive the bug at time $0$.
In the end, we can show that under $M$, for the following type profile, the first $k$ agents each pay $1/k$ and receive the bug at time $0$.
(4) 
For the above type profile (4), there are $n - k$ agents reporting $0$. For such agents, their payments must be $0$ due to individual rationality. Since $M$ MaxDelay-dominates CSOD (the claim remains true if we switch to SumDelay-dominance), these agents' allocation times must be at most $d^*$, which is their allocation time under CSOD (this value is the optimal deadline). We next show that they cannot receive the bug strictly earlier than $d^*$ under $M$.
Let us consider the following type profile:
(5) 
For type profile (5), agent $k+1$ must receive the bug at time $0$ and pay $1/k$, by the same reasoning as above. If, under type profile (4), the agents reporting $0$ received the bug strictly earlier than $d^*$ for free, then agent $k+1$'s utility for reporting truthfully would be strictly less than her utility for reporting $0$, contradicting strategyproofness. Therefore, for type profile (4), all agents who report $0$ must receive the bug at exactly $d^*$. That is, for type profile (4), $M$ and CSOD are equivalent.
Now let us construct yet another modified type profile:
(6) 
Here, agent $k+1$'s type must be low enough that she does not join the cost sharing; otherwise, under the original type profile, we would have more than $k$ agents joining the cost sharing. We assume that under $M$, agent $k+1$ receives the bug at time $t$ and pays $p$; $t$ is at most $d^*$ due to the monotonicity constraint. Strategyproofness requires that agent $k+1$'s utility for reporting truthfully is at least her utility for reporting $0$ (which yields the bug at time $d^*$ for free). That is,

(7)

Had agent $k+1$'s type been at the cost-sharing threshold, her utility for reporting her true type must be at least her utility when reporting the lower value. That is,

(8)

Combining Equation (7), Equation (8), $t \le d^*$, and $p \ge 0$, we have $t = d^*$ and $p = 0$. That is, under type profile (6), agent $k+1$'s allocation and payment remain the same as if she reported $0$.
Repeating the above steps, we can show that under the following arbitrary profile, agents $k+1$ to $n$'s allocations and payments also remain the same as when they report $0$.
(9) 
That is, for type profiles where the bug is sold under CSOD, $M$ and CSOD are equivalent.
We then consider an arbitrary type profile for which the bug is not sold under CSOD. Due to the monotonicity constraint, an agent's utility never decreases when her type increases. Suppose some agent received the bug at a time strictly before $1$ and paid a positive amount. By the individual rationality constraint, her type must be large enough to cover that payment, yet it must be strictly below the cost-sharing threshold, since otherwise the bug would be sold under CSOD. Had this agent's true type been higher but still below the threshold, her utility would be positive, because she could always submit her original report. But earlier we established what $M$ must do at the threshold type, and the resulting utility there is $0$. This means her utility would decrease as her true type increases, which is a contradiction. That is, all agents must receive the bug at time $1$ (and must pay $0$). Therefore, for an arbitrary type profile for which the bug is not sold under CSOD, $M$ behaves the same as CSOD.
In the above proof, every place where we reference MaxDelay-dominance can be changed to SumDelay-dominance. ∎
CSOD is both MaxDelay-undominated and SumDelay-undominated, but it is not strategyproof. We now propose our final mechanism in this section, which builds on CSOD. The new mechanism is strategyproof, and its delay is within a constant factor of CSOD's (that is, we fix the strategyproofness issue at the cost of longer delays, but only by a constant factor).
Group-Based Cost Sharing with Optimal Deadline (GCSOD). Strategyproofness: Yes. Individual rationality: Yes. Ex post budget balance: Yes. For each agent, we flip a fair coin to randomly assign her to either the left group or the right group. We calculate the optimal deadlines of both groups. We then run CSD on both groups: the left group uses the optimal deadline computed from the right group, and vice versa.
Claim
Group-based cost sharing with optimal deadline (GCSOD) is strategyproof, individually rational, and ex post budget balanced.
Proof
Every agent participates in a CSD whose deadline does not depend on her own report, so strategyproofness and individual rationality hold. Let $d_L$ and $d_R$ be the optimal deadlines of the left and right groups, respectively. If $d_L < d_R$, then the left group will definitely succeed in the cost sharing, because its optimal deadline is $d_L$ and it now faces an extended deadline $d_R$. The right group will definitely fail in the cost sharing, as it faces a deadline that is earlier than its optimal one. In the end, some agents in the left group pay and receive the bug at time $0$, and the remaining agents in the left group receive the bug at time $d_R$ for free. All agents in the right group receive the bug at time $d_L$ for free. If $d_L > d_R$, the reasoning is symmetric. If $d_L = d_R$, then we simply tie-break in favour of the left group. If $d_L = d_R = 1$, then potentially both groups fail in the cost sharing, in which case we simply do not reveal the bug (do not buy it from the seller). ∎
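Putting the pieces together, here is a self-contained Python sketch of GCSOD as we read it. The cost is normalized to 1, a seeded RNG makes the coin flips reproducible, and the function names and tie-breaking details are our assumptions:

```python
import random

def gcsod(vals, cost=1.0, seed=0):
    """Sketch of Group-Based Cost Sharing with Optimal Deadline (GCSOD),
    under our reading of the cost-sharing rule: k agents each pay
    cost/k when the k-th highest valuation times the deadline covers
    cost/k. Each group's deadline comes from the OTHER group, so no
    agent's report can move her own deadline."""
    rng = random.Random(seed)                  # fair coin flips for grouping
    n = len(vals)
    left = [i for i in range(n) if rng.random() < 0.5]
    right = [i for i in range(n) if i not in left]

    def opt_deadline(group):
        # smallest d <= 1 with k members whose value for [0, d) covers
        # cost/k; defaults to 1.0 when the group cannot fund the bug
        sv = sorted((vals[i] for i in group), reverse=True)
        ds = [cost / (k * sv[k - 1])
              for k in range(1, len(sv) + 1) if sv[k - 1] > 0]
        return min([d for d in ds if d <= 1.0], default=1.0)

    def csd(group, d, times, pays):
        # cost sharing with deadline d: payers get the bug at time 0,
        # the rest of the group at time d for free
        for i in group:
            times[i], pays[i] = d, 0.0
        order = sorted(group, key=lambda i: -vals[i])
        for k in range(len(order), 0, -1):
            if vals[order[k - 1]] * d >= cost / k:
                for i in order[:k]:
                    times[i], pays[i] = 0.0, cost / k
                return True
        return False

    d_left, d_right = opt_deadline(left), opt_deadline(right)
    times, pays = [0.0] * n, [0.0] * n
    sold_left = csd(left, d_right, times, pays)   # ties favour the left group
    sold_right = (not sold_left) and csd(right, d_left, times, pays)
    if not (sold_left or sold_right):
        return None                # both groups fail: the bug is not bought
    # the failing group receives the bug for free at the deadline it faces,
    # i.e., the funding group's optimal deadline
    for i in (right if sold_left else left):
        times[i], pays[i] = (d_left if sold_left else d_right), 0.0
    return times, pays
```

Only one group ever collects payments, which is how the budget balance issue of plain CSD is fixed: if the sharing fails in both groups, the bug is simply not bought.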
Definition 4
Mechanism $M$ is $\alpha$-MaxDelay-competitive against mechanism $M'$ if, for every type profile, the max delay under $M$ is at most $\alpha$ times the max delay under $M'$.
$\alpha$-SumDelay-competitive is defined similarly.
Theorem 4.2
GCSOD is $4$-MaxDelay-competitive against CSOD under two technical assumptions:

No agent's valuation for the bug exceeds the whole cost. That is, $v_i \le 1$ for all $i$.

At least one agent does not participate in the cost sharing under CSOD.
There is no way to ensure that the first assumption always holds, but it arguably holds in the scenarios we target: cost sharing serious bugs whose price is beyond any individual's purchasing power. The second assumption is needed only because, in the single case where everyone joins the cost sharing under CSOD, the max delay is $0$, while under GCSOD the max delay is always greater than $0$, so GCSOD would not be competitive in this case only. Moreover, as our system welcomes as many agents as possible, it is expected that there are always agents who do not value a new bug very much and would prefer to be free riders rather than participate in the cost sharing under CSOD.
Proof
Let us consider an arbitrary type profile that satisfies both technical assumptions. We denote it by $(v_1, \dots, v_n)$, and without loss of generality we assume $v_1 \ge v_2 \ge \dots \ge v_n$. Let $k$ be the number of agents who join the cost sharing under CSOD. The optimal deadline under CSOD is then $d^* = 1/(k\, v_k)$, which is exactly the max delay for this type profile.
Under a specific random grouping, among the agents $1$ to $k$ (those who join the cost sharing under CSOD), we assume $a$ of them are assigned to the left group and $k - a$ to the right group.
For the left group, if $a \ge 1$, the optimal deadline is at most $1/(a\, v_k)$, which is at most $(k/a)\, d^*$. When $a = 0$, the optimal deadline is at most $1$. Under CSOD, since all types are at most $1$, the optimal deadline is at least $1/k$. That is, if $a = 0$, the optimal deadline of the left group is at most $k\, d^*$.
In summary, the optimal deadline of the left group is at most $(k/a)\, d^*$ if $a \ge 1$, and at most $k\, d^*$ if $a = 0$. That is, the optimal deadline of the left group is at most
$\frac{k}{\max(a,\, 1)}\, d^*$.
Similarly, the optimal deadline of the right group is at most
$\frac{k}{\max(k - a,\, 1)}\, d^*$.
The max delay under GCSOD is at most the larger of these two deadlines. The ratio between the max delay under GCSOD and the max delay under CSOD is then at most $\frac{k}{\max(\min(a,\, k - a),\, 1)}$.
We use $R$ to denote the expected ratio (expectation with respect to the random groupings):

(10) $R = \mathbb{E}_a\!\left[\frac{k}{\max(\min(a,\, k - a),\, 1)}\right]$, where $a$ follows the binomial distribution $B(k, 1/2)$.
If $k$ is even and sufficiently large, the expectation can be bounded directly, and the resulting bound is at most $4$. We omit the similar proof for odd $k$. In summary, we have $R \le 4$ when $k$ is sufficiently large. For the remaining small values of $k$, we numerically calculated $R$; all values are below $4$. ∎

Corollary 1
GCSOD is $8$-SumDelay-competitive against CSOD under two technical assumptions:

No agent's valuation for the bug exceeds the whole cost. That is, $v_i \le 1$ for all $i$.

At least half of the agents do not participate in the cost sharing under CSOD.
Proof
Let $d^*$ and $k$ be the optimal deadline and the number of agents who join the cost sharing under CSOD, respectively. The SumDelay under CSOD is $(n - k)\, d^*$. Under GCSOD, the deadlines are, in expectation, at most $4 d^*$ according to Theorem 4.2. The SumDelay under GCSOD is then at most $4 n\, d^*$. Therefore, the competitive ratio is at most $\frac{4n}{n - k}$, which is at most $8$ when $n - k \ge n/2$. ∎
5 Settings with Prior Distributions
In this section, we assume that there is a publicly known prior distribution over the agents' types. Specifically, we assume that every agent's type is drawn independently from an identical distribution whose support is $[0, 1]$. We still enforce the same set of mechanism constraints as before, namely strategyproofness, individual rationality, and ex post budget balance. Our aim is to minimize the expected MaxDelay or the expected SumDelay. Our main results are two linear programs for computing lower bounds on the expected MaxDelay and the expected SumDelay. We then compare the performance of CS and GCSOD against these lower bounds. The key idea for obtaining the lower bounds is to relax the ex post budget balance constraint to the following:

With probability $q$, the bug is not sold under the optimal mechanism. The value $q$ depends on both the mechanism and the distribution.
Every agent's expected payment is then $(1 - q)/n$, as the agents' distributions are symmetric (it is without loss of generality to assume that the optimal mechanism does not treat the agents differently based on their identities: given a non-anonymous mechanism, we can trivially create an "average" version of it over all permutations of the identities [7]; the resulting mechanism is anonymous and has the same MaxDelay and SumDelay).

Every agent's expected allocation time is at least $q$, as the allocation time is $1$ with probability $q$.
We divide the support of the type distribution into $m$ equal segments. We use $h$ to denote $1/m$. The $j$th segment is then $((j-1)h,\, jh]$. Noting that the agents' distributions are symmetric, we do not need to differentiate the agents when we define the following notation. We use $t_j$ to denote an agent's expected allocation time when her type is $jh$. That is, $t_1$ is an agent's expected allocation time when her type is $h$, and $t_m$ is her expected allocation time when her type is $1$. Similarly, we use $p_j$ to denote an agent's expected payment when her type is $jh$. The $t_j$ and the $p_j$ are the variables in our linear programming models.
Due to Myerson's characterization, the $t_j$ must be nonincreasing. That is, $t_1 \ge t_2 \ge \dots \ge t_m$.
We recall that strategyproofness and individual rationality together imply that the agents' payments are completely characterized by the allocation times. Using notation from Section 3, we have $p_i(v_i, v_{-i}) = v_i\,(1 - t_i(v_i, v_{-i})) - \int_0^{v_i} (1 - t_i(z, v_{-i}))\, dz$.
Using notation from this section, that is (up to discretization) $p_j = jh\,(1 - t_j) - \sum_{i=1}^{j-1} h\,(1 - t_i)$.
The probability $q$ is another variable in our linear programming model. We use $\pi_j$ to denote the probability that an agent's type falls inside the $j$th interval $((j-1)h,\, jh]$. Since every agent's expected payment is $(1 - q)/n$, we have $\sum_{j=1}^{m} \pi_j\, p_j = (1 - q)/n$.
Since an agent's expected allocation time is at least $q$, we have $\sum_{j=1}^{m} \pi_j\, t_j \ge q$.
The expected SumDelay is at least $n \sum_{j=1}^{m} \pi_j\, t_j$. We minimize this quantity to compute a lower bound on the expected SumDelay.
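Collecting the constraints above, the SumDelay lower-bound program can be written as follows (the segment-by-segment payment identity is our assumed discretization of the Myerson formula):

```latex
\begin{aligned}
\min_{t,\, p,\, q} \quad & n \sum_{j=1}^{m} \pi_j\, t_j \\
\text{s.t.} \quad & t_1 \ge t_2 \ge \dots \ge t_m, \qquad 0 \le t_j \le 1, \\
& p_j = jh\,(1 - t_j) - \sum_{i=1}^{j-1} h\,(1 - t_i), \\
& \sum_{j=1}^{m} \pi_j\, p_j = \frac{1 - q}{n}, \qquad \sum_{j=1}^{m} \pi_j\, t_j \ge q, \qquad 0 \le q \le 1.
\end{aligned}
```

Any strategyproof, individually rational, and ex post budget balanced mechanism induces a feasible point of this relaxation, so its optimum is a valid lower bound.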
To compute a lower bound on the expected MaxDelay, we introduce a few more pieces of notation:

Let $A_j$ be the expected MaxDelay conditional on all agents reporting higher than $jh$.

Let $\alpha_j$ be the probability that all agents report higher than $jh$.

Let $B_j$ be the expected MaxDelay conditional on at least one agent reporting at most $jh$.

Let $\beta_j$ be the probability that at least one agent reports at most $jh$.

Let $\bar{t}_j$ be an agent's expected delay when she reports at most $jh$.
Since the MaxDelay is at least any single agent's delay, the expected MaxDelay is at least the following for any $j$:

(11) $\alpha_j A_j + \beta_j\, \bar{t}_j$

We minimize (11) to compute a lower bound on the expected MaxDelay.
We present the expected delays of CS and GCSOD under different distributions. $U(a, b)$ refers to the case where every agent's valuation is drawn from the uniform distribution over $[a, b]$. $N(\mu, \sigma)$ refers to the case where every agent's valuation is drawn from the normal distribution with mean $\mu$ and standard deviation $\sigma$, conditioned on the value lying between $0$ and $1$.

[Table: expected MaxDelay and SumDelay of GCSOD and CS against the derived lower bounds, for the $U(a, b)$ and $N(\mu, \sigma)$ settings above; numeric entries omitted.]
CS outperforms GCSOD in terms of both MaxDelay and SumDelay. This is not too surprising, because GCSOD is designed for its worst-case competitive ratio. Our derived lower bounds show that CS is fairly close to optimal in many cases.
6 Conclusions and Future Work
We have proposed a mechanism with competitive ratios of 4 for max delay and 8 for sum delay under certain assumptions. As the problem setting is rather new, there are plenty of options to explore when designing mechanisms with better performance. One promising direction, which we considered but did not pursue in depth, is scheduling fixed prices for different sections of the time period, regardless of the agents' submitted valuations. Such a mechanism would, however, require extensive simulations and analyses to evaluate its performance, and the lack of data for such simulations would also need to be addressed.
While most of our results are presented under prior-free settings, we made a number of assumptions, some of which easily hold in realistic applications (and are therefore rather mild) and some of which less so. For example, we assume that at least one agent does not participate in the cost sharing under the benchmark mechanism CSOD. This is necessary because no mechanism can be evaluated against a benchmark delay of 0 to produce a valid competitive ratio, and it is easily satisfied by the presence of free riders who are determined not to contribute at all. The assumption that no agent's valuation exceeds the total cost is introduced for similar reasons, but we cannot expect it to hold in every case. Therefore, either removing existing assumptions to generalize the solution, or adding assumptions to obtain sharper results, would be reasonable immediate future work.
References
 [1] Algarni, A., Malaiya, Y.: Software vulnerability markets: Discoverers and buyers. International Journal of Computer, Information Science and Engineering 8(3), 482–484 (2014)
 [2] Arora, A., Telang, R., Xu, H.: Optimal policy for software vulnerability disclosure. Management Science 54(4), 642–656 (2008)
 [3] Böhme, R.: A comparison of market approaches to software vulnerability disclosure. In: Proceedings of the 2006 International Conference on Emerging Trends in Information and Communication Security. pp. 298–311. ETRICS’06 (2006)

[4]
Canfield, C., Catota, F., Rajkarnikar, N.: A national cyber bug broker:
Retrofitting transparency. (2015),
https://www.andrew.cmu.edu/user/ccanfiel/NationalCyberBugBroker_final.pdf  [5] Guo, M., Hata, H., Babar, M.A.: Revenue maximizing markets for zeroday exploits. In: PRIMA 2016: Princiles and Practice of MultiAgent Systems  19th International Conference, Phuket, Thailand, August 2226, 2016, Proceedings. pp. 247–260 (2016)
 [6] Guo, M., Hata, H., Babar, M.A.: Optimizing affine maximizer auctions via linear programming: An application to revenue maximizing mechanism design for zero-day exploits markets. In: PRIMA 2017: Principles and Practice of Multi-Agent Systems - 20th International Conference, Nice, France, October 30 – November 3, 2017, Proceedings. pp. 280–292 (2017)
 [7] Guo, M., Markakis, E., Apt, K.R., Conitzer, V.: Undominated groves mechanisms. J. Artif. Intell. Res. 46, 129–163 (2013)
 [8] Howard, R.: Cyber Fraud: Tactics, Techniques and Procedures. CRC Press (2009)
 [9] Kannan, K., Telang, R.: Market for software vulnerabilities? think again. Management Science 51(5), 726–740 (2005)
 [10] Maillart, T., Zhao, M., Grossklags, J., Chuang, J.: Given enough eyeballs, all bugs are shallow? Revisiting Eric Raymond with bug bounty programs. Journal of Cybersecurity 3(2), 81–90 (2017)
 [11] Miller, C.: The legitimate vulnerability market: Inside the secretive world of 0day exploit sales. In: In Sixth Workshop on the Economics of Information Security (2007)
 [12] Myerson, R.B.: Optimal auction design. Math. Oper. Res. 6(1), 58–73 (Feb 1981)
 [13] Nizovtsev, D., Thursby, M.: To disclose or not? An analysis of software user behavior. Information Economics and Policy 19(1), 43–64 (2007)