1 Introduction
Since December 2019 the world has been facing the Covid-19 pandemic and its disastrous effects on human life and the economy. Responding to the pandemic, most countries have closed their borders and imposed unprecedented, universal lockdowns on their entire economies. The key reason for such drastic measures is uncertainty: we do not yet know the actual transmission rate, the lethality, or the prevalence of this new deadly disease. As governments and policy makers were caught by surprise, there is no doubt that these drastic measures were needed as a first line of defense; the data show that, otherwise, we would have to deal with a massive humanitarian disaster.
At the same time, as the economic pain mounts, especially for the most vulnerable and disadvantaged segments of the population, there is an urgent need to think of careful ways to safely reopen the economy. Estimating the true prevalence of Covid-19 has been identified as a key parameter in this effort (Alvarez et al., 2020). In the United States, the number of confirmed Covid-19 cases is 1,193,813 as of May 7, with 70,802 total deaths. This implies a (case) prevalence of 0.36% (assuming a US population of 328 million) and a 5.9% mortality rate for Covid-19, which is even higher than the mortality rate reported at times by the World Health Organization. (The official mortality rate was revised from 2% in late January to 4% in early March; see also an official WHO situation report from early March: https://www.who.int/docs/defaultsource/coronaviruse/situationreports/20200306sitrep46covid19.pdf?sfvrsn=96b04adf_2.) However, the true prevalence, that is, the fraction of the entire population that is currently infected or has been infected by Covid-19, is likely much higher, and so the mortality rate should be significantly lower than 5.9%. A growing literature is attempting to estimate these numbers through epidemiological models (Li et al., 2020; Flaxman et al., 2020) or structural assumptions (Hortaçsu et al., 2020).
A more robust alternative seems possible through randomized serological studies that detect marker antibodies indicating exposure to Covid-19. In the US, there is currently a massive coordinated effort to evaluate the widespread application of these tests, with results expected in late May 2020. (CDC page: https://www.cdc.gov/coronavirus/2019ncov/lab/serologytesting.html.) The hope is that these tests will determine the true prevalence of the virus, and thus its lethality, and also determine whether someone is immune enough to return to work (the extent of immunity is still uncertain, however). Furthermore, seroprevalence studies can give information on risk factors for the disease, such as a patient's age, location, or underlying health conditions. They may also reveal important medical information on immune responses to the virus, such as how long antibodies last in people's bodies following infection, and could identify those able to donate blood plasma, a possible treatment for seriously ill Covid-19 patients. (Food and Drug Administration (FDA) announcement on serology studies, 04/07/2020: https://www.fda.gov/newsevents/pressannouncements/coronaviruscovid19updateserologicaltests.) The development of serology tests is therefore essential to designing a careful strategy towards both effective medical treatments and a gradual reopening of the economy.
Until widespread serology testing is possible, however, we have to rely on the limited number of serology studies that have started to emerge in various areas of the globe, including the US. Table 1 presents a non-exhaustive summary of such studies around the world. For example, in Germany, serology tests in early April showed a 14% prevalence in a sample of 500 people. (Report in German: https://www.land.nrw/sites/default/files/asset/document/zwischenergebnis_covid19_case_study_gangelt_0.pdf.) In the Netherlands, a study in mid-April showed a lower prevalence of 3.5% in a small sample of blood donors. (Presentation slides in Dutch: https://www.tweedekamer.nl/sites/default/files/atoms/files/tb_jaap_van_dissel_1604_1.pdf; see also a summary of these projects in the journal Science: https://www.sciencemag.org/news/2020/04/antibodysurveyssuggestingvastundercountcoronavirusinfectionsmaybeunreliable.) In the US, in a recent and relatively large study in Santa Clara, California, Bendavid et al. (2020)
estimated an in-sample prevalence of 1.5% from 50 positive test results in a sample of 3,330 patients. Using a reweighting technique, the authors extrapolated this estimate to a 2–4% prevalence in the general population. A follow-up study in LA County found 35 positives out of 846 tests. What is unique about these last two studies is that data from a prior validation study are also available, where, say, 401 "true negatives" were tested with 2 positive results, implying a false positive rate of 0.5%. Upon publication, these studies received intense criticism because the false positive rate appears to be large relative to the underlying disease prevalence. For example, the Agresti–Coull and Clopper–Pearson 95% confidence intervals for the false positive rate both extend above the 1.5% in-sample prevalence. These intervals are therefore not incompatible even with a 0% prevalence, since a 1.5% false positive rate alone would produce about 50 false positives on average in 3,330 tests, the same as the observed number of positives in the sample. (A sketch of such an interval calculation is given below.)
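As an aside, these exact binomial intervals are straightforward to reproduce. The following is a minimal sketch (not code from the study) of the Clopper–Pearson interval for 2 positive results among 401 known negatives; the helper name and the printed output are illustrative.

```python
# Minimal sketch: exact (Clopper-Pearson) 95% interval for the false positive rate,
# based on 2 positive results among 401 known-negative samples.
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(x=2, n=401)
print(f"95% CI for the false positive rate: [{100*lo:.2f}%, {100*hi:.2f}%]")
```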
Such standard methods, however, are justified by approximations, asymptotic arguments, prior specifications (for Bayesian methods), or normality assumptions, which are always suspect in small samples. In this paper, we develop a method that can assess finite-sample statistical significance in a robust way. The key idea is to treat all unknown quantities as parameters, and to explore the entire parameter space to assess agreement with the observed data. Our method adopts the partial identification framework, where the goal is not to produce point estimates, but to identify sets of plausible parameter values (Wooldridge and Imbens, 2007; Tamer, 2010; Chernozhukov et al., 2007; Manski, 2003, 2010, 2007; Romano and Shaikh, 2008, 2010; Honoré and Tamer, 2006; Imbens and Manski, 2004; Beresteanu et al., 2012; Stoye, 2009; Kaido et al., 2019). Within that literature, our proposed method appears to be unique in that it constructs a procedure that is valid in finite samples given the correct distribution of the test statistic. Importantly, the choice of the test statistic can affect only the power of our method, not its validity. Such flexibility may be especially valuable in choosing a test statistic that is both powerful and easy to compute. Thus, the main benefit of our approach is that it is valid with enough computation, whereas more standard methods are only valid with enough samples.
The rest of this paper is structured as follows. In Section 2 we describe the problem formally. In Section 3.1 we describe the proposed method at a high level. A more detailed analysis, along with a modicum of theory, is given in Section 3.2. In Section 4 we apply the proposed method to data from the Santa Clara study, the LA County study, and a recent study from New York state.
Prevalence  Location  Time  Method  Notes 
6.14%  China  01/21  PCR  Sample of 342 pneumonia patients (Liang et al., 2020). 
2.6%  Italy  02/21  PCR  Used 80% of entire population in Vó, Italy (Lavezzo et al., 2020). 
5.3%  USA  03/12  PCR  Used 131 patients with ILI symptoms (Spellberg et al., 2020). 
13.7%  USA  03/22  PCR  Sample of 215 pregnant women in NYC (Sutton et al., 2020). 
0.34%  USA  03/17  model  (Yadlowsky et al., 2020). 
9.4%  Spain  03/28  serology  Sample of 578 healthcare workers (GarciaBasteiro and others, 2020). 
3%  Japan  03/31  serology  Random set of 1000 blood samples in Kobe Hospital (Doi et al., 2020). 
36%  USA  04/02  PCR  Study in large homeless shelter in Boston (Baggett et al., 2020). 
1.5%  USA  04/03  serology  Recruited 3330 people via Facebook (Bendavid et al., 2020). 
2.5%  USA  04/04  model  Uses ILINet data; implies 96% unreported cases (Lu et al., 2020). 
9.1%  Switzerland  04/06  serology  Sample of 1335 individuals in Geneva (Stringhini et al., 2020). 
14%  Germany  04/09  serology  Self-reported sample of 400 households in Gangelt (Streeck et al., 2020). 
3.1%  Netherlands  04/16  antigen  Used 3% of all blood donors (Jaap van Dissel, 2020). 
0.4%–40%  USA  04/24  model  Partial identification using #cases/deaths (Manski and Molinari, 2020). 
Unreported infections  Location  Time  Method  Notes 
90%  USA  03/16  model  Used air-flight data to identify transmission rates (Hortaçsu et al., 2020). 
85%  India  04/02  model  Extrapolated from respiratory-related cases (Venkatesan, 2020).
2 Problem Setup
Here, we formalize the statistical problem of estimating disease prevalence through imperfect medical tests. Every individual $i$ is associated with a binary status $H_i \in \{0, 1\}$: $H_i = 1$ if the individual has developed antibodies from exposure to the disease, and $H_i = 0$ if not. We will also refer to these cases as "positive" and "negative", respectively. Patient status is not directly observed, but can be estimated with a serology (antibody) test.
The medical antibody test can be represented by a binary outcome $T_i \in \{0, 1\}$ for each tested individual $i$, which determines whether that individual is classified as positive or negative. As usual, the categorization of the test results can be described through the following table:
test result  true status = 0 (negative)  true status = 1 (positive) 
0  true negative  false negative 
1  false positive  true positive
We will assume that each test result is an independent random outcome, and that the true positive rate and false positive rate, denoted respectively by $p$ and $q$, are constant across individuals. (The terms "sensitivity" and "specificity" are frequently used in the practice of medical testing. In our setting, sensitivity maps to the true positive rate $p$, and specificity maps to one minus the false positive rate, $1 - q$. In this paper, we will only use the terms "true/false positive rate" as they are more precise and self-explanatory.) Formally:

(A1)  $\Pr(T_i = 1 \mid H_i = 1) = p$; 
(A2)  $\Pr(T_i = 1 \mid H_i = 0) = q$.
This assumption may be untenable in practice. In general, patient characteristics, or test target and delivery conditions can affect the test results. For example, Bendavid et al. (2020) report slightly different test performance characteristics depending on which antibody (either IgM or IgG) was being detected. We note, however, that this assumption is not strictly necessary for the validity of our proposed inference procedure. It is only useful in order to obtain a precise calculation for the distribution of the test statistic (see Theorem 1 and remarks).
To determine test performance characteristics, and to gain information about the true/false positive rates of the antibody test, there is usually a validation study in which the underlying status of participating individuals is known. In the Covid-19 case, for example, such a validation study could include pre-Covid-19 blood samples that have been preserved, and are thus "true negatives". To simplify, we assume that the validation study consists of a set $\mathcal{V}_0$ of participating individuals who are all known to be true negatives, and a set $\mathcal{V}_1$ of participants who are all known to be positive.
There is also the main study, with a set $\mathcal{M}$ of participating individuals whose true status is not known. We assume no overlap between the sets $\mathcal{V}_0$, $\mathcal{V}_1$, and $\mathcal{M}$, which is a realistic assumption. We define $n_0 = |\mathcal{V}_0|$ and $n_1 = |\mathcal{V}_1|$ as the respective numbers of participants in the validation study, and $n = |\mathcal{M}|$ as the number of participants in the main study. These numbers are observed, but the full patient sets, or the patient characteristics, may not be observed.
We also observe the number of positive test results in both studies:

$S_0 = \sum_{i \in \mathcal{V}_0} T_i, \quad S_1 = \sum_{i \in \mathcal{V}_1} T_i$   [Calibration study] 
$S = \sum_{i \in \mathcal{M}} T_i$   [Main study]   (1)

Thus, $S_0$ is the number of false positives in the validation study, since we know that all individuals in $\mathcal{V}_0$ are true negatives. Similarly, $S_1$ is the number of true positives in the validation study, since all individuals in $\mathcal{V}_1$ are known to be positive. These counts offer simple estimates of the false positive rate and true positive rate of the medical test, respectively: $\hat{q} = S_0 / n_0$ and $\hat{p} = S_1 / n_1$. We use $(s_0, s_1, s)$ to denote the observed values of the test positives $(S_0, S_1, S)$, which are integer-valued random variables.
The statistical task is therefore to use the observed data and do inference on the quantity

$\pi = N_1 / n$,   (2)

i.e., the unknown disease prevalence in the main study, where $N_1$ denotes the (unknown) number of infected individuals in the main study. We emphasize that $\pi$ is a finite-population estimand — we discuss (briefly) the issue of extrapolation to the general population in Section 5. The challenge here is that the main-study count $S$ generally includes both false positives and true positives, in proportions that depend on the unknown test parameters, namely the true/false positive rates $p$ and $q$. Using Assumptions (A1) and (A2), we can write down this decomposition formally:

$S = \mathrm{Bin}(N_1, p) + \mathrm{Bin}(n - N_1, q)$,   (3)

where $\mathrm{Bin}(m, \rho)$ denotes a binomial random variable with $m$ trials and success probability $\rho$, and the two terms are independent. For brevity, we define $\mathbf{S} = (S_0, S_1, S)$ as our joint data statistic, and $\theta = (q, p, \pi)$ as the joint parameter value. The independence of the tests implies that the density of $\mathbf{S}$ can be computed exactly as follows:

$f(\mathbf{s}; \theta) = \mathrm{Bin}(s_0; n_0, q)\, \mathrm{Bin}(s_1; n_1, p) \sum_{k=0}^{s} \mathrm{Bin}(k; N_1, p)\, \mathrm{Bin}(s - k; n - N_1, q)$,   (4)

where $\mathrm{Bin}(k; m, \rho) = \binom{m}{k} \rho^k (1 - \rho)^{m - k}$ denotes the probability of $k$ successes in a binomial experiment with $m$ trials and probability of success $\rho$, and $N_1 = n \pi$. There are several ways to implement Equation (4) efficiently — we defer discussion of computational issues to Section 5; a minimal computational sketch is given below.
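The following is a minimal sketch, not the paper's own implementation, of how Equation (4) can be evaluated for a single parameter value; the function name and the example calibration-positive counts (80 out of 100) are illustrative assumptions.

```python
# Minimal sketch (assumed helper, not the paper's code): evaluate the joint density
# f(s; theta) of Equation (4) at one parameter value theta = (q, p, pi).
from scipy.stats import binom

def joint_density(s0, s1, s, n0, n1, n, q, p, pi):
    """Probability of observing (s0, s1, s) positives under theta = (q, p, pi)."""
    N1 = int(round(pi * n))                  # implied number of infected in the main study
    term_neg = binom.pmf(s0, n0, q)          # calibration negatives: false positives
    term_pos = binom.pmf(s1, n1, p)          # calibration positives: true positives
    # Main study: convolution of Bin(N1, p) true positives and Bin(n - N1, q) false positives.
    conv = sum(binom.pmf(k, N1, p) * binom.pmf(s - k, n - N1, q) for k in range(s + 1))
    return term_neg * term_pos * conv

# Example with Santa Clara-style counts (2 of 401 calibration negatives, 50 of 3330 in the
# main study); the calibration-positive arm (80 of 100) is a placeholder for illustration.
print(joint_density(s0=2, s1=80, s=50, n0=401, n1=100, n=3330, q=0.015, p=0.9, pi=0.0))
```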
3 Method
We begin with an illustrative example that describes the proposed method at a high level. We give more details, along with some theoretical guarantees, in the section that follows.
3.1 Illustrative example
Let us consider the Santa Clara study (Bendavid et al., 2020), with the observed data introduced in Section 1: for instance, 50 positive results out of 3,330 tests in the main study, and 2 positive results out of 401 known negatives in the validation data.
The unknown quantities in our analysis are $p$, $q$, and $\pi$: the true positive rate of the test, the false positive rate, and the unknown prevalence in the main study, respectively. Assume zero prevalence ($\pi = 0$), a 90% true positive rate ($p = 0.9$), and a 1.5% false positive rate ($q = 0.015$). We ask the question: "Is the combination $\theta_0 = (q, p, \pi) = (0.015, 0.9, 0)$ compatible with the data?". Naturally, this can be framed in statistical terms as a null hypothesis:

$H_0: \theta = \theta_0 = (0.015, 0.9, 0)$.   (5)
To test $H_0$ we have to compare the observed positive test results with the values that could have been observed if the true model parameter values were indeed $\theta_0$. Our model is simple enough that we can execute this hypothetical analysis exactly, based on the density of $\mathbf{S}$ in Equation (4), where $\mathbf{S} = (S_0, S_1, S)$ is the vector of all positive test results and $\theta_0$ is specified as in $H_0$; see Figure 1. To simplify visualization, in Figure 1 we fix the $S_1$ component of $\mathbf{S}$ at its observed value $s_1$, and only plot the density with respect to the other two components, $(S_0, S)$. One can visualize the full joint distribution as a collection of such conditional densities over all possible values of $S_1$.

The next step is to decide whether the observed value of $\mathbf{S}$, namely $\mathbf{s}^{\mathrm{obs}} = (s_0, s_1, s)$, is compatible with the distribution of Figure 1. The mode of that distribution is at roughly six calibration false positives (since $401 \times 0.015 \approx 6$), whereas the observed point, with only two calibration false positives, lies at the lower edge of the distribution. If the observed values were even further out in the tail, we could confidently reject $H_0$, since the density there is essentially zero. Here, we have to be careful because the actual observed values are still somewhat likely under $H_0$. Our method essentially accepts $H_0$ when the density of this distribution at the observed value of the statistic is above some threshold $c(\theta_0)$; that is, we decide based on the following rule:

accept $H_0$ if and only if $f(\mathbf{s}^{\mathrm{obs}}; \theta_0) \ge c(\theta_0)$.   (6)
The test in Equation (6) is reminiscent of the likelihood ratio test, the key difference being that our test does not require maximization of the likelihood function over the parameter space, which is computationally intensive and frequently numerically unstable — we make a concrete comparison in the application of Section 4.3. Our test essentially uses the density of $\mathbf{S}$ as the test statistic for $H_0$, while the threshold $c(\theta_0)$ generally depends on the particular null values being tested. Assuming that the test of Equation (6) has been defined, we can then test all possible combinations of our parameter values $\theta_0$ in some large enough parameter space $\Theta$, and invert this procedure to construct a confidence set. As usual, we would like this confidence set to cover the true parameters with some minimum probability (e.g., 95%). In the following section, we show that this is possible through an appropriate construction of the test in Equation (6), which takes into account the level sets of the density function depicted in Figure 1. The overall procedure is computationally intensive, but it is valid in finite samples without the need for asymptotic or normality assumptions. The details of this construction, including the appropriate selection of the test threshold and the proof of validity, are presented in the following section; a computational sketch of testing a single null value is given below.
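The sketch below tests the single null value $\theta_0$ from this example. As a simplified stand-in for the formal constructions of Section 3.2, it uses the decision rule discussed in Section 4.3.1: accept $\theta_0$ when more than 5% of the probability mass of $f(\cdot\,; \theta_0)$ lies at or below the observed density level. The calibration-positive counts (90 out of 100) are again placeholders.

```python
# Minimal sketch (assumed decision rule standing in for the constructions of Section 3.2):
# accept theta_0 when more than alpha of the mass of f(.; theta_0) lies at or below f(s_obs; theta_0).
import numpy as np
from scipy.stats import binom

def accept(obs, n0, n1, n, theta, alpha=0.05, tol=1e-12):
    (s0, s1, s), (q, p, pi) = obs, theta
    N1 = int(round(pi * n))
    pmf0 = binom.pmf(np.arange(n0 + 1), n0, q)              # calibration negatives
    pmf1 = binom.pmf(np.arange(n1 + 1), n1, p)              # calibration positives
    pmf  = np.convolve(binom.pmf(np.arange(N1 + 1), N1, p), # main study: true + false positives
                       binom.pmf(np.arange(n - N1 + 1), n - N1, q))
    f_obs = pmf0[s0] * pmf1[s1] * pmf[s]                    # observed density level
    # Truncate negligible support points before forming the joint grid (keeps memory small).
    grid = (pmf0[pmf0 > tol][:, None, None]
            * pmf1[pmf1 > tol][None, :, None]
            * pmf[pmf > tol][None, None, :])
    return grid[grid <= f_obs].sum() > alpha

# Santa Clara-style counts; the calibration-positive arm (90 of 100) is a placeholder.
print(accept(obs=(2, 90, 50), n0=401, n1=100, n=3330, theta=(0.015, 0.9, 0.0)))
```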
3.2 Theoretical Details
Let $\mathbf{S} = (S_0, S_1, S)$ denote the joint statistic, taking values in the sample space $\mathcal{S}$, and let $\theta = (q, p, \pi) \in \Theta$ denote the model parameters. We take $\Theta$ to be finite and discrete; e.g., for probabilities we take a grid of values between 0 and 1. Let $\mathbf{s}^{\mathrm{obs}}$ denote the observed value of $\mathbf{S}$ in the sample, and let $f(\cdot\,; \theta)$ denote the density of the joint statistic conditional on the model parameter value $\theta$, as defined in Equation (4). Suppose that $\theta^*$ is the true unknown parameter value, and assume that

(A4)  $\Pr(\theta^* \in \Theta) = 1$.

Assumption (A4) basically posits that our discretization is fine enough to include the true parameter value with probability one. In our application, this assumption is rather mild, as we are dealing with parameters that are either probabilities or integers, and so are bounded within well-defined ranges. Moreover, this assumption is implicit in essentially all empirical work, since computers operate with finite precision. Our goal is to construct a confidence set $C_\alpha \subseteq \Theta$ such that $\Pr(\theta^* \in C_\alpha) \ge 1 - \alpha$, where $\alpha$ is some desired level (e.g., $\alpha = 0.05$). Trivially, $C_\alpha = \Theta$ satisfies this criterion, so we will aim to make $C_\alpha$ as narrow as possible. We will also need the following definition:

$g(\mathbf{s}; \theta) = \#\{\mathbf{s}' \in \mathcal{S} : f(\mathbf{s}'; \theta) < f(\mathbf{s}; \theta)\}$.   (7)

The function $g$ depends on the level sets of $f(\cdot\,; \theta)$, and counts the number of sample data points (over the sample space $\mathcal{S}$) whose likelihood at $\theta$ is smaller than the likelihood of $\mathbf{s}$ at $\theta$.
We can now prove the following theorem.
Theorem 1.
Suppose that Assumption (A4) holds. Consider the following construction for the confidence set:
(8) 
Then, $\Pr(\theta^* \in C_\alpha) \ge 1 - \alpha$.
Proof.
For any fixed consider the function , . Note that, for fixed , function is monotone increasing and generally not continuous with respect to . Let , and define as the unique fixed point for which , if that point exists; if not, define the point as }. It follows that , for any , and so the event is the same as the event . Now we can bound the coverage probability as follows.
(9)  
In the first line we used the test definition and Assumption (A4); in the second line, we used the monotonicity of , and the fact that is the true parameter value; in the last line, we used the definition of and the uniform bound on .
When the relevant cutoff is not a discontinuity point of the distribution of $g(\mathbf{S}; \theta)$, for all $\theta \in \Theta$, our test is exact in the sense that the coverage probability equals $1 - \alpha$. In general, however, this condition will not hold for all $\theta$, and so the confidence set of Equation (8) may be conservative and lose power. We could potentially achieve more power if instead we define the confidence set, denoted $\tilde{C}_\alpha$, as follows:

(10)
Theorem 2.
Suppose that Assumption (A4) holds, and consider the confidence set $\tilde{C}_\alpha$ as defined in Equation (10). Then, $\Pr(\theta^* \in \tilde{C}_\alpha) \ge 1 - \alpha$.
Proof.
The proof is almost identical to that of Theorem 1, if we replace the count in the definition of $g$ with the corresponding probability mass. With this definition the resulting function is smoother, which explains intuitively why this construction will generally lead to more power.
It is straightforward to see that $\tilde{C}_\alpha \subseteq C_\alpha$ almost surely.
Since both constructions are valid in finite samples, the choice between $C_\alpha$ and $\tilde{C}_\alpha$ should be based mainly on computational feasibility. The basic construction $C_\alpha$ may be easier to compute in practice, as it depends on a summary of the distribution through the level-set function $g$, while the alternative construction $\tilde{C}_\alpha$ requires full knowledge of the entire distribution. If it is computationally feasible, however, $\tilde{C}_\alpha$ should be preferred because it is contained in $C_\alpha$ with probability one, as argued above; this leads to sharper inference. See also the applications on serology studies in Section 4 for more details, where the construction of $\tilde{C}_\alpha$ is feasible.
3.3 Concrete Procedure and Remarks
Theorems 1 and 2 imply the following simple procedure to construct a 95% confidence set:
Procedure 1
1. Observe the value $\mathbf{s}^{\mathrm{obs}} = (s_0, s_1, s)$ of test positives in the validation study and the main study.
2. Create a grid $\Theta$ for the unknown parameters $\theta = (q, p, \pi)$.
3. Reject all values $\theta \in \Theta$ for which the criterion of the basic construction in Equation (8) fails; alternatively, reject based on the criterion of the alternative construction in Equation (10).
4. The remaining values in $\Theta$ form the 95% confidence set $C_{0.05}$ or $\tilde{C}_{0.05}$, respectively; a computational sketch is given below.
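The sketch below assembles Procedure 1 as a grid search. As before, the decision rule is the assumed mass-below-the-observed-level criterion standing in for Equation (10), and the grid resolution and calibration placeholders are illustrative choices.

```python
# Minimal sketch of Procedure 1 over a coarse parameter grid; the accept() rule is the
# same assumed stand-in for the Equation (10) criterion used in the Section 3.1 sketch.
import numpy as np
from itertools import product
from scipy.stats import binom

def accept(obs, n0, n1, n, theta, alpha=0.05, tol=1e-12):
    (s0, s1, s), (q, p, pi) = obs, theta
    N1 = int(round(pi * n))
    pmf0 = binom.pmf(np.arange(n0 + 1), n0, q)
    pmf1 = binom.pmf(np.arange(n1 + 1), n1, p)
    pmf  = np.convolve(binom.pmf(np.arange(N1 + 1), N1, p),
                       binom.pmf(np.arange(n - N1 + 1), n - N1, q))
    f_obs = pmf0[s0] * pmf1[s1] * pmf[s]
    grid = (pmf0[pmf0 > tol][:, None, None] * pmf1[pmf1 > tol][None, :, None]
            * pmf[pmf > tol][None, None, :])
    return grid[grid <= f_obs].sum() > alpha

def confidence_set(obs, n0, n1, n, alpha=0.05):
    """Steps 2-4 of Procedure 1: keep every grid value theta that is not rejected."""
    q_grid  = np.arange(0.0, 0.031, 0.005)   # false positive rate
    p_grid  = np.arange(0.70, 1.001, 0.05)   # true positive rate
    pi_grid = np.arange(0.0, 0.051, 0.005)   # prevalence
    return [(q, p, pi) for q, p, pi in product(q_grid, p_grid, pi_grid)
            if accept(obs, n0, n1, n, (q, p, pi), alpha)]

# Santa Clara-style counts; the calibration-positive arm (90 of 100) is a placeholder.
C = confidence_set(obs=(2, 90, 50), n0=401, n1=100, n=3330)
print(len(C), "parameter values accepted")
```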
Remark 3.1 (Computation).
Procedure 1 is fully parallelizable over $\theta \in \Theta$, and so the main computational difficulty is the need to sum over the sample space $\mathcal{S}$. Note, however, that Procedure 1 can work for any choice of statistic $\mathbf{S}$ given its density $f(\cdot\,; \theta)$. Thus, our method offers valuable flexibility for inference; for instance, the density could be approximated by simulation, or $\mathbf{S}$ could be a simple statistic (e.g., sample averages) rather than a formal estimator. See Section 5 for more discussion on computation.
Remark 3.2 (Identification).
Procedure 1 is not a typical partial identification method, in the sense that there are settings under which the model of Equation (4) is point identified (i.e., when the validation studies grow arbitrarily large). However, we choose to describe Procedure 1 as a partial identification method for two main reasons. First, it is more plausible, in practice, that the calibration studies are small and finite, since a calibration study needs to have high-quality, ground-truth data. Second, it can happen that we do not have both kinds of calibration studies available (i.e., it could be that either $n_0 = 0$ or $n_1 = 0$). In both of these settings, the underlying model is no longer point identified, and so Procedure 1 is technically a partial identification method.
Remark 3.3 (Conservativeness).
Procedure 1 generally produces conservative confidence intervals. However, we can show that the coverage probability of the alternative construction exceeds $1 - \alpha$ by at most the largest point mass of $f(\cdot\,; \theta^*)$, a quantity that is very small in general (as it is in the Santa Clara study). In this case, the alternative construction is "approximately exact" in the sense that the coverage probability of $\tilde{C}_\alpha$ is almost equal to $1 - \alpha$.
Remark 3.4 (Marginal inference).
The parameter of interest in our application may only be the disease prevalence $\pi$, whereas the true/false positive rates of the antibody test may be considered "nuisance" parameters. In this paper, we directly project $C_\alpha$ (or $\tilde{C}_\alpha$) onto a single dimension to perform marginal inference (see Section 4), but this is generally conservative, especially at the boundary of the parameter space (Stoye, 2009; Kaido et al., 2019; Chen et al., 2018). A sharper way to do marginal inference with our procedures is an interesting direction for future work. The projection step itself is sketched below.
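For completeness, the projection step is trivial once the joint confidence set is available: the marginal interval for prevalence is the range of $\pi$ values among the accepted triples. The set of triples below is hypothetical.

```python
# Minimal sketch: project a joint confidence set onto the prevalence coordinate.
def prevalence_interval(accepted):
    """accepted: list of (q, p, pi) triples, e.g., the output of Procedure 1."""
    pis = [pi for (_, _, pi) in accepted]
    return (min(pis), max(pis)) if pis else None

# Hypothetical accepted set, for illustration only.
accepted = [(0.010, 0.90, 0.005), (0.005, 0.85, 0.010), (0.005, 0.90, 0.015)]
print(prevalence_interval(accepted))   # -> (0.005, 0.015)
```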
3.4 Comparison to other methods
How does our method compare to a more standard frequentist or Bayesian approach? Here, we discuss two key differences. First, as we have repeatedly emphasized in this paper, our method is valid in finite samples under only the independence of test results, which is a mild assumption. In contrast, a standard frequentist approach, say one based on the bootstrap, is inherently approximate and relies on asymptotics, while a Bayesian method requires the specification of priors and posterior sampling. Of course, our procedure requires more computation, mainly compared to the bootstrap, and can be conservative for marginal inference (see Remark 3.4), but this is arguably a small price to pay in a critical application such as the estimation of Covid-19 prevalence.
A second, more subtle, difference is the way our method performs inference. Specifically, we decide whether any $\theta$ is in the confidence set based on the entire density $f(\cdot\,; \theta)$ over all points of the sample space, whereas both frequentist and Bayesian methods typically perform inference "around the mode" of the likelihood function with the data held fixed at $\mathbf{s}^{\mathrm{obs}}$ (we ignore how the prior specification affects Bayesian inference to simplify exposition). This can explain, on an intuitive level, how the inferences of the respective methods may differ. Figure 2 illustrates the difference. On the left panel, we plot the likelihood, $f(\mathbf{s}^{\mathrm{obs}}; \theta)$, as a function of $\theta$. Typically, in frequentist or Bayesian methods, the confidence set is formed around the mode, say $\hat{\theta}$. We see that a parameter value, say $\theta_1$, with a likelihood value $f(\mathbf{s}^{\mathrm{obs}}; \theta_1)$ that is low in absolute terms will generally not be included in the confidence set. However, in our approach, this value is not important in absolute terms for doing inference; it is only important relative to all other values of the test statistic distribution $f(\cdot\,; \theta_1)$. Such inference will typically include the mode $\hat{\theta}$, but will also include parameter values at the tails of the likelihood function, such as $\theta_1$. As such, our method is expected to give more accurate inference in small-sample problems, or in settings with poor identifiability where the likelihood is non-smooth and multimodal. We argue that we actually see these effects in the application to Covid-19 serology studies analyzed in the following section — see also Section 4.3 and Appendix D for concrete numerical examples.

4 Application
In this section, we apply the inference procedure of Section 3.3 to several serology test datasets in the US. Moreover, we present results for combinations of these datasets, assuming that the tests have identical specifications. This is likely an untenable assumption, but it helps to illustrate how we can use our approach to flexibly combine all evidence. Before we present the analysis, we first discuss some data on serology test performance to inform our inference.
4.1 Serology test performance
An important aspect of serology studies is the test performance characteristics. As of May 2020, there are perhaps more than a hundred commercial serology tests in the US, but they can differ substantially across manufacturers and technologies. In our application, we use data from Bendavid et al. (2020), who applied a serology testing kit distributed by Premier Biotech. Bendavid et al. (2020) used validation test results provided by the test manufacturer, and also performed a local validation study in their lab. The combined validation study estimated a true positive rate of 80.3% (95% CI: 72.1%–87%), and a false positive rate of 0.5% (95% CI: 0.1%–1.7%).
To get an idea about how these performance characteristics relate to other available serology tests we use a dataset published by the FDA based on benchmarking 12 other testing kits to grant emergency use authorization (EUA) status. The dataset is summarized in Table 2. We see that the characteristics of the testing kit used by Bendavid et al. (2020)
are compatible with the FDA data shown in the table. For example, a true positive rate of 80% is below the mean and median of the point estimates in the FDA dataset. A false positive rate of 0.5% falls between the median and mean of the respective FDA point estimates. A reason for this skewness is likely the existence of one outlier testing kit that performs notably worse than the others (Chembio Diagnostic Systems). Removing this datapoint brings the mean false positive rate down to 0.6%, very close to the estimate provided by
Bendavid et al. (2020).

Test characteristic    min  median  mean  max 
true positive rate ($p$)  point estimate  77.4%  91.1%  90.6%  100% 
  95% CI, low endpoint  60.2%  81%  80.1%  95.8% 
  95% CI, high endpoint  88.5%  96.7%  95.3%  100% 
false positive rate ($q$)  point estimate  0%  0.35%  1.07%  5.6% 
  95% CI, low endpoint  0%  0.1%  0.52%  2.7% 
  95% CI, high endpoint  0.3%  1.6%  3.18%  11.1%
4.2 Santa Clara study
In the Santa Clara study, Bendavid et al. (2020) report a validation study and a main study; the main study comprises 3,330 participants with 50 positive test results, and the validation data include, among others, 401 known-negative samples with 2 positive results (see Section 1). Given these data, we produce the 95% confidence sets for $\theta = (q, p, \pi)$ following both constructions (8) and (10) described in Section 3.3. In Figure 11 of Appendix C, we jointly plot all accepted triples in the three-dimensional parameter space for the construction of Equation (8), with additional coloring based on prevalence values. We see that the confidence set is a convex region tilting towards higher prevalence values as the false positive rate of the test decreases. The true positive rate does not materially affect prevalence, as long as it stays in the range 80%–95%.
To better visualize the pairwise relationships between the model parameters, we also provide Figure 3, which breaks down Figure 11 into two subplots, one visualizing the (false positive rate, prevalence) pairs and another visualizing the (true positive rate, prevalence) pairs. The figure shows both the basic construction (8) and the alternative construction (10) to illustrate the differences between the two. From Figure 3, we see that the Santa Clara study is not conclusive about Covid-19 prevalence. A prevalence of 0% is plausible, given a high enough false positive rate. However, if the true false positive rate is near its empirical value of 0.5%, as estimated by Bendavid et al. (2020), then the identified prevalence rate is estimated in the range 0.4%–1.8% under the basic construction. Under this assumption, we see that the alternative construction offers sharper inference, as expected, with an estimated prevalence in the range 0.7%–1.5%. Even though, strictly speaking, the statistical evidence is not sufficient here for definite inference on prevalence, we tend to favor the latter interval because (i) common sense precludes 0% prevalence in Santa Clara county (total population of about 2 million); (ii) the interval generally agrees with the test performance data presented earlier; and (iii) it is still at the low end compared to prevalence estimates from other serology studies (see Table 1). Regardless, pinning down the false positive rate is important for estimating prevalence, especially when prevalence is as low as it appears to be in the Santa Clara study. Roughly speaking, a decrease of 1% in the false positive rate implies an increase of 1.3% in prevalence.
4.3 Santa Clara study: Comparison to other methods
In this section, we discuss how our method compares in practice to more standard methods, using data from the Santa Clara study. In our comparison we include Bayesian methods, a classical likelihood ratio-based test, and the Monte Carlo-based approach to partial identification proposed by Chen et al. (2018). (All of these methods are fully implemented in the accompanying code at https://github.com/ptoulis/covid19.)
4.3.1 Comparison to standard Bayesian and bootstrap-based methods
In response to the initial criticism, the authors of the original Santa Clara study published a revision of their work, in which they use a bootstrap procedure to calculate confidence intervals for prevalence in the range 0.7%–1.8%. (Link: https://www.medrxiv.org/content/10.1101/2020.04.14.20062463v2.full.pdf.) Some recent Bayesian analyses report wider prevalence intervals, in the range 0.3%–2.1% (Gelman and Carpenter, 2020). In another Bayesian multilevel analysis, Levesque and Maybury (2020) report similar findings, but mention that posterior summarization here may be subtle, since the posterior density of prevalence in their specification includes 0%. These results are in agreement with our analysis in the previous section only if we assume that the true false positive rate of the serology test was near its empirical estimate (0.5%). We discussed the reason for such discrepancy intuitively in Section 3.4, where we argued that standard methods typically do inference "around the mode" of the likelihood, and may thus miscalculate the amount of statistical information hidden in the tails.
For a numerical illustration, consider two parameter values, $\theta_1$ and $\theta_2$, where in each case the components denote the false positive rate, true positive rate, and prevalence, respectively, and $\theta_1$ implies 0% prevalence. In the Santa Clara study, $\theta_1$ maps to a likelihood value, $f(\mathbf{s}^{\mathrm{obs}}; \theta_1)$, that is many orders of magnitude smaller than $f(\mathbf{s}^{\mathrm{obs}}; \theta_2)$. In fact, $\theta_2$ is close to the mode of the likelihood, and so frequentist or Bayesian inference is mostly based around that mode, ignoring the tails of the likelihood function such as $\theta_1$. For our method, however, the small value of $f(\mathbf{s}^{\mathrm{obs}}; \theta_1)$ is more or less irrelevant — what matters is how this value compares to the entire distribution $f(\cdot\,; \theta_1)$. It turns out that 13.7% of the mass of $f(\cdot\,; \theta_1)$ is below the observed level $f(\mathbf{s}^{\mathrm{obs}}; \theta_1)$. As such, $\theta_1$ cannot be rejected at the 5% level (see also Appendix D). This highlights the key difference of our procedure compared to frequentist or Bayesian procedures. More generally, we expect to see such important differences between the inference from our method and the inference from other, more standard methods in settings with small samples or poor identification (e.g., a non-separable, multimodal likelihood).
4.3.2 Comparison to likelihood ratio test
As briefly described in Section 3.1, our test is related to the likelihood ratio test (Lehmann and Romano, 2006, Chapter 3). Here, we study the similarities and differences between the two tests, both theoretically and empirically, through the Santa Clara study. Specifically, consider testing the null hypothesis that the true parameter is equal to some value $\theta_0$ using the likelihood ratio statistic

$\lambda(\mathbf{s}; \theta_0) = \dfrac{f(\mathbf{s}; \theta_0)}{\max_{\theta \in \Theta} f(\mathbf{s}; \theta)}$.   (11)

Since $f(\cdot\,; \theta_0)$ is known analytically from Equation (4), the null distribution of $\lambda(\mathbf{S}; \theta_0)$ can be fully simulated. An exact p-value can then be obtained by comparing this null distribution with the observed value $\lambda(\mathbf{s}^{\mathrm{obs}}; \theta_0)$. This method is similar to ours in the sense that both methods use the full density in the test, and both are exact. The main difference, however, is that our method uses a summary of the density values that lie below the observed value, which avoids the expensive (and sometimes numerically unstable) maximization in the denominator of the likelihood ratio statistic in (11). Our proposed method turns out to be orders of magnitude faster than the likelihood ratio approach, as we get roughly 200-fold speedups in our setup — see Section 5.1 for a more detailed comparison of computational efficiency. A minimal sketch of the simulated likelihood ratio test is given below.
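The sketch below implements this simulated likelihood ratio test. It reuses the same assumed setup and placeholder calibration counts as the earlier sketches, and replaces the BFGS maximization described in Appendix A with a coarse grid maximization, so the numbers it prints are purely illustrative.

```python
# Minimal sketch of the simulated likelihood ratio test for a single null value theta_0.
# The denominator maximization is over a coarse grid here (Appendix A uses BFGS instead).
import numpy as np
from itertools import product
from scipy.stats import binom

rng = np.random.default_rng(0)

def loglik(s0, s1, s, n0, n1, n, theta):
    q, p, pi = theta
    N1 = int(round(pi * n))
    conv = np.convolve(binom.pmf(np.arange(N1 + 1), N1, p),
                       binom.pmf(np.arange(n - N1 + 1), n - N1, q))
    return binom.logpmf(s0, n0, q) + binom.logpmf(s1, n1, p) + np.log(conv[s])

def simulate(n0, n1, n, theta, size):
    q, p, pi = theta
    N1 = int(round(pi * n))
    return np.column_stack([rng.binomial(n0, q, size), rng.binomial(n1, p, size),
                            rng.binomial(N1, p, size) + rng.binomial(n - N1, q, size)])

def lrt_pvalue(obs, n0, n1, n, theta0, grid, n_sims=500):
    def log_lr(s0, s1, s):   # log likelihood-ratio statistic for one data point
        return loglik(s0, s1, s, n0, n1, n, theta0) - max(
            loglik(s0, s1, s, n0, n1, n, th) for th in grid)
    lr_obs = log_lr(*obs)
    lr_null = [log_lr(*draw) for draw in simulate(n0, n1, n, theta0, n_sims)]
    return float(np.mean([v <= lr_obs for v in lr_null]))   # one-sided p-value

grid = list(product([0.002, 0.01, 0.02], [0.8, 0.9, 0.99], [0.0, 0.01, 0.02]))
print(lrt_pvalue(obs=(2, 90, 50), n0=401, n1=100, n=3330,
                 theta0=(0.015, 0.9, 0.0), grid=grid))
```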
To compare the inferences from the two tests efficiently, we sampled 5,000 different parameter values from inside the 95% confidence set obtained with the basic construction of Equation (8), and 5,000 parameter values from its complement, and then calculated the overlap between the test decisions. The likelihood ratio test rejected 3% of the values from the first set, and 98% of the values from the second set, indicating a good amount of overlap between the two tests. The correlation between the p-value from the likelihood ratio test and the values that our basic test uses to make a decision in Equation (8) was equal to 0.94. The corresponding correlation for the alternative construction of Equation (10), using its decision values instead, was 0.90. Since the likelihood ratio test is exact, these results suggest that our test procedures are generally high-powered.
In Figures 7 and 8 of Appendix A, we plot the 95% confidence sets from the likelihood ratio test described above for the Santa Clara study and the LA county study (of the following section). The estimated prevalence is 0%–1.9% for Santa Clara, which is shorter than the projected interval from the basic construction (8) but wider than that from the alternative construction (10), as reported earlier; the same holds for LA county. As with our method, prevalence here is estimated through direct projection of the confidence set, which may be conservative. It is also possible that with more samples the likelihood ratio test could achieve the same interval as the alternative construction (we used only 100 samples), but this would come at an increased computational cost. Overall, the likelihood ratio test produces very similar results to our method, but it is not as efficient computationally.
4.3.3 Comparison to Monte Carlo confidence set method of Chen et al. (2018)
In recent work, Chen et al. (2018) proposed a Monte Carlo-based method of inference in partially identified models. The idea is to sample from a quasi-posterior distribution and then compute a cutoff from the 95% percentile of a criterion evaluated at the posterior samples $\theta^{(b)}$, $b = 1, 2, \ldots$ The 95% confidence set is then defined as:

(12)
We implemented this procedure with an MCMC chain that appears to mix well — see Appendix B and Figure 9 for details. The resulting 95% confidence set is shown in Figure 10 of Appendix B. Simple projection yields a prevalence in the range 0.9%–1.43%, which excludes 0% prevalence. This suggests that the MCMC "spends more time" around the mode of the likelihood, which we back up with numerical evidence in Appendix B. Finally, we also tried Procedure 3 of Chen et al. (2018), which does not require MCMC simulations but is generally more conservative. Prevalence was then estimated in the range 0.12%–1.65%, which is comparable to our method and the likelihood ratio test.
4.4 LA County study
Next, we analyze the results from a recent serology study in Los Angeles county, which estimated a prevalence of 4.1% over the entire county population. (See http://publichealth.lacounty.gov/phcommon/public/media/mediapubhpdetail.cfm?prid=2328.) We use the same validation study as before, since this study was executed by the same team as the Santa Clara one. Here, the main study had 846 participants with 35 positive test results (see Section 1). (This number was not reported in the official study announcement mentioned above; it was reported in a Science article referencing one of the authors of the study: https://www.sciencemag.org/news/2020/04/antibodysurveyssuggestingvastundercountcoronavirusinfectionsmaybeunreliable.) For inference, we only use the alternative construction of Equation (10), to simplify exposition. The results are shown in Figure 4.
In contrast to the Santa Clara study, we see that the results from this study are conclusive. The prevalence rate is estimated in the range 1.7%–5.2%. If the false positive rate is, for example, closer to its empirical estimate (0.5%), then the identified prevalence is relatively high, somewhere in the range 3%–5.2%. We also see that the true positive rate is estimated in the range 85%–95%, which is higher than the empirical point estimate of 80% provided by Bendavid et al. (2020). In fact, the empirical point estimate is not even in the 95% confidence set. Finally, as an illustration, we combine the data from the Santa Clara and LA county studies, under the assumption that the characteristics of the tests used in both studies were identical. The results are shown in Figure 12 of Appendix C. We see that 0% prevalence is consistent with the combined study as well. Furthermore, prevalence values higher than 2.5% do not seem plausible in the combined data.
4.5 New York study
Recently, a quasi-randomized study was conducted in New York state, including NYC, which sampled individuals shopping in grocery stores. Details about this study have not been made available. Here, we assume that the medical testing technology used was the same as in the Santa Clara and LA county studies, or at least similar enough that the comparison remains informative.
Under this assumption, we can use the same validation study as before. The number of participants and positive test results in the New York main study are taken from press reports. (See https://www.nytimes.com/2020/04/23/nyregion/coronavirusantibodiestestny.html.) The confidence set for this dataset is shown in Figure 5. We see that the evidence in this study is much stronger than in the Santa Clara/LA county studies, with an estimated prevalence in the range 12.9%–16.6%. The true positive rate is now an important identifying parameter, in the sense that knowing its true value could narrow down the confidence set even further.
Finally, in Figure 13 of Appendix C we present prevalence estimates for a combination of all datasets presented so far, using both constructions, (8) and (10), to illustrate their differences. As mentioned earlier, this requires the assumption that the antibody testing kits used in all three studies had identical specifications, or at least were similar enough that the comparison remains informative. This assumption is most likely untenable given the available knowledge; we present the results for illustration and completeness. The general picture in the combined study is a synthesis of the earlier findings. For example, both the false positive and true positive rates are now important for identification. The identified prevalence is in the range 5.2%–8.2% under the alternative construction (and 3.2%–8.9% under the basic construction). These numbers are larger than in the Santa Clara/LA county studies, but smaller than in the New York study.
5 Discussion
5.1 Computation
The procedure described in Section 3.3 is computationally intensive for two main reasons. First, we need to consider all values of $\theta \in \Theta$, which form a three-dimensional grid. Second, given some $\theta$, we need to calculate $f(\mathbf{s}; \theta)$ for each $\mathbf{s}$ in the sample space, which is also effectively a three-dimensional grid.
To deal with the first problem we can use parallelization, since the test decisions in step 3 of our procedure are independent of each other. For instance, the results in Section 4 were obtained on a computing cluster (managed by Slurm) comprising 500 nodes, each with x86 architecture, 64-bit processors, and 16GB of memory. The total wall-clock time to produce all results of the previous section was about 1 hour. The results for, say, the Santa Clara study can be obtained in much shorter time (a few minutes) because they contain few positive test results. To address the second computational bottleneck, we can exploit the independence between $S_0$, $S_1$, and $S$, as shown in the product form of Equation (4). Since any zero term in this product implies a zero value for the density, we can ignore all individual term values that are very small. Through numerical experiments, we estimate that this computational trick prunes on average 97% of the sample space, leading to a significant computational speedup. For example, testing one single value of $\theta$ takes about 0.25 seconds on a typical high-end laptop, which is a 200-fold speedup compared to the 50 seconds required by the likelihood ratio test of Section 4.3.2 — see Appendix A for more details. The pruning idea is illustrated below.
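As a simple illustration of the pruning idea (with an assumed tolerance), the following sketch counts how many support points of the main-study component survive the negligibility threshold for Santa Clara-sized inputs.

```python
# Minimal sketch of the pruning idea: drop support points whose binomial probability
# is negligible before summing over the sample space (the tolerance is an assumed choice).
import numpy as np
from scipy.stats import binom

def effective_support_size(n, prob, tol=1e-12):
    pmf = binom.pmf(np.arange(n + 1), n, prob)
    return int((pmf > tol).sum()), n + 1

# Main-study component under a 1.5% false positive rate, 0% prevalence, n = 3330 tests.
kept, total = effective_support_size(3330, 0.015)
print(f"kept {kept} of {total} support points ({100 * (1 - kept / total):.1f}% pruned)")
```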
5.2 Extrapolation to general population
As mentioned earlier, prevalence in Equation (2) is a finite-population estimand, that is, a number that refers to the particular population in the study. Theorem 1 shows that our procedure is valid for $\pi$ only under Assumption (A4). However, to extrapolate to the general population we generally also need to assume that

(A3)  the participants of the main study are a representative (random) sample of the general population.

This is currently an untenable assumption. For example, in the Santa Clara study the population of middle-aged white women was overrepresented, while Asian and Latino communities were underrepresented. The impact of such selection bias on the inferential task is very hard to ascertain in the available studies. Techniques such as post-stratification or reweighting can help, but at this early stage any extrapolation using distributional assumptions would be too speculative. Selection bias is, however, a well-known issue among researchers, and it can be addressed as widespread and carefully designed antibody testing catches on. We leave this for future work.
6 Concluding Remarks
In this paper, we presented a partial identification method for estimating the prevalence of Covid-19 from randomized serology studies. The benefit of our method is that it is valid in finite samples, as it does not rely on asymptotics, approximations, or normality assumptions. We show that some recent serology studies in the US are not conclusive (0% prevalence is in the 95% confidence set). However, the New York study gives strong evidence for high prevalence, in the range 12.9%–16.6%. A combination of all datasets shifts this range down to 5.2%–8.2%, under a test uniformity assumption. Looking ahead, we hope that the method developed here can contribute to a more robust analysis of future Covid-19 serology tests.
7 Acknowledgments
I would like to thank Guanglei Hong, Ali Hortaçsu, Chuck Manski, Casey Mulligan, Joerg Stoye, and Harald Uhlig for useful suggestions and feedback. Special thanks to Connor Dowd for suggesting the alternative construction (10), and to Elie Tamer for various important suggestions. Finally, I gratefully acknowledge support from the John E. Jeuck Fellowship at the Booth School of Business.
References
A simple planning problem for Covid-19 lockdown. Technical report, National Bureau of Economic Research.
Prevalence of SARS-CoV-2 infection in residents of a large homeless shelter in Boston. JAMA.
COVID-19 antibody seroprevalence in Santa Clara County, California. medRxiv.
Partial identification using random set theory. Journal of Econometrics 166 (1), pp. 17–32.
Monte Carlo confidence sets for identified sets. Econometrica 86 (6), pp. 1965–2018.
Estimation and confidence regions for parameter sets in econometric models. Econometrica 75 (5), pp. 1243–1284.
Estimation of seroprevalence of novel coronavirus disease (COVID-19) using preserved serum at an outpatient setting in Kobe, Japan: a cross-sectional study. medRxiv.
Report 13: Estimating the number of infections and the impact of non-pharmaceutical interventions on COVID-19 in 11 European countries.
Seroprevalence of antibodies against SARS-CoV-2 among health care workers in a large Spanish reference hospital. medRxiv.
Bayesian analysis of tests with unknown specificity and sensitivity. medRxiv.
Bounds on parameters in panel dynamic discrete choice models. Econometrica 74 (3), pp. 611–629.
Estimating the fraction of unreported infections in epidemics with a known epicenter: an application to COVID-19. University of Chicago, Becker Friedman Institute for Economics Working Paper (2020-37).
Confidence intervals for partially identified parameters. Econometrica 72 (6), pp. 1845–1857.
Confidence intervals for projections of partially identified parameters. Econometrica 87 (4), pp. 1397–1432.
Suppression of COVID-19 outbreak in the municipality of Vo, Italy. medRxiv.
Testing Statistical Hypotheses. Springer Science & Business Media.
A note on COVID-19 seroprevalence studies: a meta-analysis using hierarchical modelling. medRxiv.
Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV-2). Science.
Prevalence and clinical features of 2019 novel coronavirus disease (COVID-19) in the fever clinic of a teaching hospital in Beijing: a single-center, retrospective study. medRxiv.
Estimating the prevalence of COVID-19 in the United States: three complementary approaches.
Estimating the COVID-19 infection rate: anatomy of an inference problem. Technical report, National Bureau of Economic Research.
Partial identification of probability distributions. Springer Science & Business Media.
Partial identification of counterfactual choice probabilities. International Economic Review 48 (4), pp. 1393–1410.
Partial identification in econometrics. In Microeconometrics, pp. 178–188.
Inference for identifiable parameters in partially identified econometric models. Journal of Statistical Planning and Inference 138 (9), pp. 2786–2807.
Inference for the identified set in partially identified econometric models. Econometrica 78 (1), pp. 169–211.
Community prevalence of SARS-CoV-2 among patients with influenza-like illnesses presenting to a Los Angeles medical center in March 2020. JAMA.
More on confidence intervals for partially identified parameters. Econometrica 77 (4), pp. 1299–1315.
Repeated seroprevalence of anti-SARS-CoV-2 IgG antibodies in a population-based sample from Geneva, Switzerland. medRxiv.
Universal screening for SARS-CoV-2 in women admitted for delivery. New England Journal of Medicine.
Partial identification in econometrics. Annual Review of Economics 2 (1), pp. 167–195.
Estimate of COVID-19 case prevalence in India based on surveillance data of patients with severe acute respiratory illness. medRxiv.
What's new in econometrics? Lecture 9: Partial identification. NBER Summer Institute.
Estimation of SARS-CoV-2 infection prevalence in Santa Clara County. medRxiv.
Appendix
Appendix A More details on the likelihood ratio test
The concrete testing procedure for the likelihood ratio test of Section 4.3.2 is as follows.
1. Define the test statistic $\lambda(\mathbf{s}; \theta_0) = f(\mathbf{s}; \theta_0) / \max_{\theta \in \Theta} f(\mathbf{s}; \theta)$.
2. Calculate the observed value $\lambda(\mathbf{s}^{\mathrm{obs}}; \theta_0)$.
3. Sample data points from $f(\cdot\,; \theta_0)$ (we used 1,000 samples).
4. Calculate the one-sided p-value as the fraction of sampled data points whose statistic value is less than or equal to the observed value.
For the maximization in the denominator of $\lambda$ we use the standard BFGS algorithm on a natural reparameterization. Specifically, we define the natural parameters as $\eta_j = \mathrm{logit}(\theta_j) = \log\{\theta_j / (1 - \theta_j)\}$, $j = 1, 2, 3$,
where $(\theta_1, \theta_2, \theta_3)$ are the original model parameters, i.e., the false positive rate, true positive rate, and prevalence, respectively. To avoid numerical instabilities we clip each component to the interval $[\epsilon, 1 - \epsilon]$ before the transformation, where $\epsilon$ is a small constant, e.g., 1e-8. Since the natural parameter $\eta$ is unconstrained, the optimization routine becomes faster and easier; mapping back from $\eta$ to $\theta$ is also straightforward. A sketch of this maximization is given below.
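The sketch below carries out this reparameterized maximization with SciPy's BFGS. For smoothness of the objective, the main-study term uses the marginal positive rate $\pi p + (1 - \pi) q$ as an approximation to the fixed-$N_1$ convolution of Equation (4), and the calibration-positive counts are the usual placeholders.

```python
# Minimal sketch: maximize an (approximate) log-likelihood over eta = logit(theta) with BFGS,
# then map back to theta = (q, p, pi). The main-study term uses the smooth marginal rate
# pi*p + (1 - pi)*q instead of the exact convolution in Equation (4).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit
from scipy.stats import binom

OBS = (2, 90, 50)            # (s0, s1, s); the calibration-positive arm is a placeholder
N0, N1V, N = 401, 100, 3330  # n0, n1, n
EPS = 1e-8

def negloglik(eta):
    q, p, pi = expit(eta)                          # map back to the unit interval
    s0, s1, s = OBS
    rate = pi * p + (1 - pi) * q                   # marginal positive rate in the main study
    return -(binom.logpmf(s0, N0, q) + binom.logpmf(s1, N1V, p) + binom.logpmf(s, N, rate))

eta0 = logit(np.clip([0.01, 0.8, 0.01], EPS, 1 - EPS))   # illustrative starting values
fit = minimize(negloglik, eta0, method="BFGS")
print("theta_hat =", np.round(expit(fit.x), 4))
```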
The maximization takes about 0.05 seconds of wall-clock time on a typical high-end laptop. (CPU: Intel(R) Core(TM) i7-8559U at 2.70GHz; memory: 16 GB 2133 MHz LPDDR3.) It therefore takes a total of about 50 seconds to test one single hypothesis based on 1,000 samples of the likelihood ratio. In contrast, our partial identification method takes 0.25 seconds of wall-clock time to test the same single null hypothesis, a 200-fold speedup. As explained in Section 5.1, this is because the computation of $f(\mathbf{s}; \theta)$ can be done very efficiently thanks to the decomposition of the density into three independent terms in Equation (4).
Since inverting the likelihood ratio test over the full parameter grid is not computationally feasible, we chose to sample randomly 5,000 parameter values from the basic confidence set of Equation (8), and 5,000 values from its complement, and then to test each value using the likelihood ratio test. The idea is to explore the agreement between the two tests. The overlap between the likelihood ratio test decisions and the basic construction is 97.3% for the values from the confidence set, and 97.7% for the values from its complement. There was even more agreement with the alternative construction, specifically 97.3% and 99.6%, respectively. Figure 6 also shows some more detailed results. The x-axis represents the p-value generated in step 4 of the likelihood ratio test. The y-axis represents the "strength of evidence" calculated by our procedure (one version for the basic construction of Equation (8) and one for the alternative construction of Equation (10)); larger values correspond to stronger rejection. We see high correlation between the different tests (0.94 and 0.90, respectively).
Appendix B More details on Monte Carlo confidence sets
To implement the method of Chen et al. (2018) we used the following procedure.
1. We defined the natural parameter $\eta = \mathrm{logit}(\theta)$, as in the previous section, and imposed a uniform prior on $\theta$.
2. We implemented the Metropolis–Hastings algorithm through a symmetric proposal distribution constructed around the maximum-likelihood estimate.
3. We ran Metropolis–Hastings for 200,000 iterations to obtain samples from the posterior distribution of $\eta$, and discarded the first 20% of the posterior samples as burn-in. Figure 9 shows that the MCMC chain appears to be mixing well.
4. We transformed the samples back to the $\theta$ scale, and calculated the 95% percentile of the criterion evaluated at the posterior samples $\theta^{(b)}$. The 95% confidence set is then defined as in Equation (12).
The 95% posterior credible interval (shown in the bottom panel of Figure 9) was 0.27%–1.56%, which is similar to, but slightly narrower than, the intervals from the Bayesian analyses described in Section 4.3. The 95% confidence set defined above is visualized in Figure 10. Simple projection yields a prevalence in the range 0.9%–1.43%. We note that 0.9% corresponds to 32 true positives out of the total 50 positives in the Santa Clara study. We can see from Figure 9 that this number corresponds to the mode of the posterior marginal distribution for prevalence. Given the symmetry of this posterior distribution around the mode, it is surprising that the low end of the confidence set corresponds to 0.9% prevalence (i.e., 32 true positives out of 3,330 tests). We can explain this discrepancy numerically by checking that values higher than 32 (prevalence higher than 0.9%), that is, values on the right end of the marginal posterior distribution, generally map to (much) higher likelihood values than values on the left end. So, even though the marginal posterior looks symmetric, the likelihood values are not. Since the confidence set defined above is based on the likelihood values, it will be "skewed" towards higher values of prevalence.

Appendix C Additional results from serology studies in the US
Here, we present additional results from our analysis of serology studies from the US. In particular, we combine datasets from various studies and analyze the results. This requires the assumption that the antibody testing kits used in all three studies had identical specifications, or at least were similar enough that the comparison remains informative.
Appendix D Numerical example illustrating differences with likelihood-based methods
Here, we illustrate the differences between our proposed inferential method and likelihood-based methods (both Bayesian and frequentist) through a simple numerical example. Suppose that the sample space is $\mathcal{S} = \{s^*, s^{**}, s_3, \ldots, s_K\}$, with $K$ extremely large, and fix some parameter value $\theta_0$ to test. Suppose also that the conditional density $f(\cdot\,; \theta_0)$ is defined as:

$f(s^*; \theta_0) = 0.95, \qquad f(s^{**}; \theta_0) = \epsilon_1, \qquad f(s_j; \theta_0) = \epsilon_2 \;\; \text{for } j = 3, \ldots, K,$

with $0.95 + \epsilon_1 + (K - 2)\epsilon_2 = 1$. Set both $\epsilon_1$ and $\epsilon_2$ to be infinitesimal values, with $\epsilon_2 \le \epsilon_1$. As such, under $\theta_0$ we observe $s^*$ with probability 0.95, or $s^{**}$ with some very small probability $\epsilon_1$, or observe any other remaining value from $\mathcal{S}$ uniformly at random.

Suppose we observe $s^{**}$ in the data. Should we reject or accept $\theta_0$?

Since we can make $\epsilon_1$ arbitrarily small, an inferential method that focuses only on the likelihood function would conclude that any $\theta$ is more plausible than $\theta_0$, as long as $f(s^{**}; \theta) > \epsilon_1$. Both frequentist and Bayesian methods would agree with such a conclusion, and typically would perform inference around the mode of the likelihood with respect to $\theta$. However, our procedure reaches a different conclusion, and actually accepts $\theta_0$ (at the 5% level)! The reason is that

$\sum_{\mathbf{s}' \in \mathcal{S}} f(\mathbf{s}'; \theta_0)\, \mathbb{1}\{f(\mathbf{s}'; \theta_0) \le f(s^{**}; \theta_0)\} = \epsilon_1 + (K - 2)\epsilon_2 = 1 - 0.95 = 0.05.$

That is, even though $f(s^{**}; \theta_0)$ is equal to a tiny value, there is still 5% of the mass of $f(\cdot\,; \theta_0)$ at or below that value. A small numeric check follows.
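A minimal numeric check of this calculation, with assumed illustrative values for $K$ and $\epsilon_1$:

```python
# Minimal numeric check of the toy example; K and eps1 are assumed illustrative values.
K, eps1 = 10**8, 1e-9
eps2 = (0.05 - eps1) / (K - 2)        # remaining mass spread uniformly; eps2 < eps1 here
f_obs = eps1                          # density of the observed point s**
mass_at_or_below = eps1 + (K - 2) * eps2 * (eps2 <= f_obs) + 0.95 * (0.95 <= f_obs)
print(mass_at_or_below)               # ~0.05, so theta_0 is not rejected at the 5% level
```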