Novel Impossibility Results for Group-Testing

01/08/2018 ∙ by Abhishek Agarwal, et al. ∙ University of Massachusetts Amherst ∙ University of Minnesota ∙ The Chinese University of Hong Kong

In this work we prove non-trivial impossibility results for perhaps the simplest non-linear estimation problem, that of Group Testing (GT), via the recently developed Madiman-Tetali inequalities. Group Testing concerns itself with identifying a hidden set of d defective items from a set of n items via t disjunctive/pooled measurements ("group tests"). We consider the linear sparsity regime, i.e. d = δ n for any constant δ >0, a hitherto little-explored (though natural) regime. In a standard information-theoretic setting, where the tests are required to be non-adaptive and a small probability of reconstruction error is allowed, our lower bounds on t are the first that improve over the classical counting lower bound, t/n ≥ H(δ), where H(·) is the binary entropy function. As corollaries of our result, we show that (i) for δ≳ 0.347, individual testing is essentially optimal, i.e., t ≥ n(1-o(1)); and (ii) there is an adaptivity gap, since for δ∈ (0.3471,0.3819) known adaptive GT algorithms require fewer than n tests to reconstruct D, whereas our bounds imply that the best nonadaptive algorithm must essentially be individual testing of each element. Perhaps most importantly, our work provides a framework for combining combinatorial and information-theoretic methods for deriving non-trivial lower bounds for a variety of non-linear estimation problems.


1 Introduction

Figure 1: Bounds on t/n vs. δ for ε → 0. The lower bound implied by theorem 1 corresponds to the horizontal part of the magenta curve, and the result implied by theorem 2 corresponds to the remainder of the magenta curve (the “Quantization bound”). Both of these are superseded by the more sophisticated (and harder to prove) lower bound in theorem 7, plotted as the red curve. The shaded region (above the blue curve and below the red curve) denotes where there is an “adaptivity gap” – the lower bound for (vanishing-error) NAGT exceeds the rate achievable by (zero-error) AGT [25].

Estimation/inverse problems are the bread and butter of engineering – given a system with a known input-output relationship, an observed output, and statistics on the input, the goal is to infer the input. While much is known about linear estimation problems and their fundamental limits [16, 22], characterizing the fundamental limits of non-linear estimation problems is, understandably, considerably more challenging.

Arguably one of the “simplest” non-linear estimation problems is that of Group Testing (GT). It is assumed that hidden among a set of n items is a special set D of d defective items.111It is typically assumed that the value of d, or a good upper bound on it, is known a priori. This is because it can be shown that PAC-learning the value of d is “cheap” in terms of the number of group tests required [6]. The classical problem as posed by Dorfman [8] requires one to exactly estimate D via t disjunctive measurements (“group tests”) on “pools” of items. That is, the output of each test is positive if the pool contains at least one item from D, and negative otherwise. Besides its intrinsic appeal as a fundamental estimation problem, group testing and its generalizations have a variety of diverse applications, such as bioinformatics [21], wireless communications [28, 3], and pattern finding [17].
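The disjunctive measurement model described above can be sketched in a few lines. This is an illustrative toy instance; the function name and example pools are ours, not from the paper:

```python
import numpy as np

def run_tests(M, x):
    """Disjunctive (OR) measurements: test i is positive iff its pool
    contains at least one defective item. M is the t-by-n test matrix,
    x the 0/1 defectivity vector. (Illustrative sketch, not from the paper.)"""
    M = np.asarray(M, dtype=bool)
    x = np.asarray(x, dtype=bool)
    return (M & x).any(axis=1).astype(int)

# Toy instance: n = 4 items, item 2 defective, t = 3 pools.
M = [[1, 1, 0, 0],   # pool {0, 1}
     [0, 0, 1, 1],   # pool {2, 3}
     [0, 1, 1, 0]]   # pool {1, 2}
x = [0, 0, 1, 0]
print(run_tests(M, x))  # -> [0 1 1]
```

Any pool touching item 2 reports positive; the others report negative.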

Group testing problems come in a variety of flavours. In particular:

  1. (Non)-Adaptivity: The testing algorithm can be adaptive (tests may be designed depending on previous test outcomes) or non-adaptive (tests must be designed non-adaptively, allowing for parallel testing/standardized hardware).

  2. Reconstruction error: The reconstruction algorithm might need to be zero-error (always output the correct answer), or vanishing error (the probability of error goes to zero asymptotically in n), or an ε probability of error (ε-error) may be allowed. 222Note that the error here is in the decoder, not in the test outcomes. There is considerable other literature (e.g. [4]) for the scenario when the test outcomes themselves may be noisy, for instance due to faulty hardware.

  3. Statistics of D: Different works consider different statistical models for the defective set D. In Combinatorial Group Testing (CGT), it is assumed that any set of d items may be defective, whereas in Probabilistic Group Testing (PGT), items are assumed to be i.i.d. defective with probability δ.

  4. Sparsity regime: Finally, it turns out that the specific sparsity regime matters – the regime where d scales sub-linearly in n has seen much work, whereas the linear sparsity regime (d = δn for some constant δ > 0) is relatively little explored.

In this work we focus on non-adaptive group testing with ε-error in the linear sparsity regime – indeed, this is perhaps the most “natural” version of the problem, especially when viewed through an information-theoretic lens (for instance, the most investigated/used versions of channel codes are: non-adaptive, since the encoder does not get to see the decoder’s input; allow for reconstruction error; and typically have constant rate and hence are in the linear regime). Nonetheless, to put our own results in context we first briefly reprise the literature for other flavors of the problem in table 1. Note that, with a slight abuse of notation, we denote by H(·) both the entropy of a random variable/vector and the binary entropy function H(x) = −x log₂(x) − (1−x) log₂(1−x). Which is meant should be clear from the argument of the function.

In particular, let us briefly discuss the existing results on the ε-error non-adaptive group testing problem, the focus of this paper. It is quite straightforward to come up with a converse result based on counting/Fano’s inequality (for example, see [4]). In [1], it has been shown that this bound is also tight for sufficiently small d, via randomized achievability schemes. Probabilistic existence of achievability schemes in this regime has also been derived, including for more general settings, in [31] (see Theorem 5.5 therein). If we are allowed to sacrifice a constant factor in the number of tests, then explicit deterministic constructions of such achievability schemes exist [19]. It is to be noted that there is a surprising lack of study in the regime where the number of defectives varies linearly with the number of elements, i.e., d = δn. The counting converse bound simply boils down to t/n ≥ H(δ), which implies that individual testing of items is optimal only when δ = 1/2. No other non-trivial converse bound exists for the linear regime. In this paper we aim to close this gap. On the other hand, a recent work by Wadayama [26] provides an achievability scheme in this regime based on sparse-graph codes (and density-evolution analysis). For certain values of δ, this achievability scheme is in direct contradiction with our impossibility result in theorem 7.

It is also worth pointing out that the linear-sparsity regime is well-studied for adaptive group testing, starting as early as the sixties.333The authors would like to thank Matthew Aldridge for drawing our attention to this part of the literature. It has been shown that under a zero probability of error metric, for δ ≥ (3−√5)/2 ≈ 0.382 individual testing is the optimal strategy [25].444In the literature there is also a conjecture [13] that if one demands that the worst-case number of tests (rather than average number of tests) be less than n, then under a zero probability of error metric no δ above 1/3 can be tolerated. On the other hand, a rather simple adaptive algorithm identifies all defectives with an expected number of tests strictly less than n for all δ below this cutoff [25]555 [25] actually ascribes this algorithm to folklore – we have been unable to find an earlier reference to this result. – we reprise this algorithm for completeness in section A.2. This is interesting when contrasted with our converse result: there is a regime of values of δ (roughly in the range (0.3471, 0.3819)) where zero-error adaptive algorithms on average require fewer than n tests to reconstruct D, whereas our bounds imply that the best non-adaptive algorithm (even with vanishing error) turns out to essentially be individual testing of each element.

                Adaptive (zero-error)*1                  Non-adaptive (zero-error)           Non-adaptive (vanishing / ε-error)
                sub-linear       linear                  sub-linear      linear              sub-linear      linear
Achievability   [14]             [25]                    [15]            [9, Thm 7.2.9]      [1]             see discussion of [26]
Converse        counting bound   (max. # tests) [23]*3;  [10]            [9, Thm 7.2.9]      [4]*2           this work (see theorem 7)
                (folklore) [4]   (avg. # tests) [25]

  • *1 Adaptive algorithms with reconstruction error have not really been considered much in the literature. Most proposed algorithms naturally result in zero-error, and the only known converses that are tighter than the counting bound intrinsically rely on the zero-error nature of the problem.

  • *2 This bound holds even for ε-error.

  • *3 It is known [25] that (3−√5)/2 ≈ 0.382 is the correct cutoff point for adaptive PGT, whereas it is conjectured that the cutoff point for adaptive CGT is 1/3 [13].


Table 1: A comparison of known inner and outer bounds on the number of tests required in a variety of group-testing settings. See also fig. 1.

1.1 Our Contributions and Techniques

The canonical method (variously called the information-theoretic bound, or the counting bound) for proving impossibility results for group-testing problems via information-theoretic methods is quite robust to model perturbations: it works for adaptive and non-adaptive algorithms, zero-error and vanishing error reconstruction error criteria, PGT and CGT, and sub-linear and linear regimes. This method (see the Appendix in [4] for an example) generally proceeds as follows:

  1. Entropy bound on input: One first bounds the entropy of the n-length binary vector X describing the status of the n items (the entry corresponding to an element is 1 if and only if that element is defective): this quantity equals log₂(n choose d) in the CGT case, and nH(δ) in the PGT case666One can see directly via Stirling’s approximation that for large n these two quantities are equal, up to lower-order terms.; then

  2. Information (in)equalities/Fano’s inequality:

    One uses standard information equalities, the data-processing inequality, the chain rule, and Fano’s inequality to argue that any group-testing scheme must satisfy the inequality

    H(Y_1, …, Y_t) ≥ H(X_1, …, X_n) − εn − 1

    (here Y = (Y_1, …, Y_t) is a binary vector describing the set of test outcomes – an entry in Y is 1 if and only if the corresponding test result is positive – and ε is an upper bound on the probability of error of the group-testing scheme); and then

  3. Independence bound. Since Y is a t-length binary vector, one uses the independence bound to argue that H(Y_1, …, Y_t) ≤ Σ_{i=1}^{t} H(Y_i) ≤ t, and thereby obtains a lower bound on the required number of tests t as a function of n, d (or δ), and ε.

Perhaps surprisingly, even for such a non-linear problem as group testing, for a variety of group-testing flavors (such as non-adaptive GT with vanishing error in the sub-linear regime [1]) this straightforward approach results in an essentially tight lower bound on the number of tests required. The key contribution of our work is to provide a tightening of the method above in the regimes where it is not known to be tight.

While we believe our generalization technique is also fairly robust to various perturbations of the group-testing model, we focus in this work on the problem of ε-error non-adaptive PGT777As noted in the Remark at the end of section 2.6, almost all the techniques in this paper go through even for CGT – we highlight the current technical bottleneck there as well. in the linear sparsity regime. Possibly our key insight is that, for this problem variant, step (iii) of the counting bound may be quite loose.

Specifically, we present three novel converse bounds in theorems 1, 2 and 7 for the general non-adaptive PGT problem in the linear regime. The result in theorem 1 follows from the observation that, for δ ≥ 1/2, the individual test entropies are maximized when each test contains exactly one object. Another simple result, for δ < 1/2, in theorem 2 follows from the observation that the individual test entropy is strictly bounded away from 1 for most of the region, because of the constraint that each test must contain an integer number of objects.

Our main result (tighter than either theorem 1 or theorem 2, but also significantly more challenging to prove), theorem 7, exploits the observation that the tests in the Non-Adaptive Group Testing (NAGT) problem must have elements in common. In the linear regime, this observation leads to significant mutual information between the tests when the number of objects in each test does not scale with n. Hence, we can exploit this mutual information to tighten the upper bound on the joint entropy in step (iii) above. Figure 1 plots our results in the linear regime along with existing results in the literature.

To bound the joint entropy in step (iii), we must look for information inequalities that upper bound the joint entropies of correlated random variables. The fascinating polymatroidal properties of such joint entropies (Shannon-type inequalities) explored by Zhang and Yeung [30], as well as the non-Shannon-type inequalities that were subsequently found [29] and are not consequences of such polymatroidal properties, point in this direction, but they are perhaps too general to offer much guidance as to which specific information inequalities might prove useful for providing non-trivial lower bounds for NAGT. A more structured characterization in this direction is Han’s inequality [12] (implied by Shannon-type inequalities), which says

H(Y_1, …, Y_t) ≤ (1/(t−1)) Σ_{i=1}^{t} H(Y_{−i}),

where Y_{−i} contains all test results except for the i-th test.
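Han’s inequality can be checked numerically on a toy instance by brute-force enumeration of defectivity patterns. The helper below is ours (an illustrative sketch, not from the paper):

```python
import itertools
import math
from collections import Counter

def joint_entropy(pools, idx, delta, n):
    """Joint entropy (bits) of the test outcomes indexed by idx, computed by
    brute-force enumeration of all 2**n defectivity patterns.
    Hypothetical helper for illustration."""
    dist = Counter()
    for x in itertools.product([0, 1], repeat=n):
        p = math.prod(delta if xi else 1 - delta for xi in x)
        outcome = tuple(int(any(x[j] for j in pools[i])) for i in idx)
        dist[outcome] += p
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Three overlapping tests on n = 4 items, each defective w.p. delta.
n, delta = 4, 0.3
pools = [(0, 1), (1, 2), (2, 3)]
t = len(pools)
lhs = joint_entropy(pools, range(t), delta, n)
rhs = sum(joint_entropy(pools, [j for j in range(t) if j != i], delta, n)
          for i in range(t)) / (t - 1)
assert lhs <= rhs + 1e-12  # Han: H(Y) <= (1/(t-1)) * sum_i H(Y_{-i})
```

The assertion holds for any choice of pools and delta, since Han’s inequality is a theorem; overlapping pools make the gap strict.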

In this paper we use a significant generalization of Han’s inequality to an asymmetric setting, due to Madiman and Tetali [18], that seems well-suited to analyzing the combinatorial structures naturally arising in NAGT. Consider the NAGT matrix M, whose (i, j)-th element is 1 if and only if the i-th test includes the j-th element. Let Y_1, …, Y_t denote the binary random variables corresponding to the test outcomes, and let X_1, …, X_n denote the indicator random variables corresponding to the objects. To exploit the non-trivial correlation that must exist between at least some sets of tests in our setting, we use the Madiman-Tetali inequalities [18],

(1)

where the coefficients {α_S} and the class of subsets 𝒮 of [t] form a fractional cover of [t] (more detail on this will be given in section 2). In theorem 7 we use the weak form, eq. 24, of the inequality above – see section 4 for a discussion of the strong form and its potential use.

We use a two-step procedure to bound the joint entropy. In the first step, we assume that all the rows of the matrix have the same weight (i.e., all tests contain the same number of elements, section 2.5). The results are then extended to general group testing matrices by considering them as a union of tests of (differing) constant weights. The final result is summarized in theorem 7.

In the rest of the paper, we first describe our converse results in section 2, followed by a comparison with earlier bounds in section 3, and future directions of this project in section 4.

2 Impossibility Results for Nonadaptive Group Testing

2.1 Notation and Model

For an integer i, let [i] denote the set {1, …, i}. Let log denote the logarithm to base 2, unless otherwise stated.

Consider the PGT problem with n objects. Assume that we can tolerate an ε error probability in the decoding. Denote the indicator random variable corresponding to object j being defective by X_j. Then X_1, …, X_n are i.i.d. Bernoulli(δ). With a slight abuse of notation, we use X_j to refer to the random variable and to the object itself interchangeably, when there is no scope for confusion.

Let M denote the fixed t × n GT matrix. Denote the random variable corresponding to the outcome of the test in row i by Y_i, and let Y = (Y_1, …, Y_t). For an object set B, let Y_B denote the random variable corresponding to the test with object set B. For a class of object sets 𝒜, let Y_𝒜 denote the random vector corresponding to the tests with object sets in 𝒜.

Let S_i denote the set of objects included in test i, and let T_j denote the set of tests containing object j. Let 𝒮 denote the class of subsets of [n] corresponding to the object sets of the tests, i.e., 𝒮 = {S_1, …, S_t}. For a class of sets 𝒜 and an object j, define 𝒜 ∖ j as the class with object j removed from all subsets in 𝒜.

2.2 Simple Converse Bounds

Recall that in the linear sparsity regime each element is defective independently with probability δ. The canonical counting bound for the Group Testing problem gives the following lower bound on the number of tests for the ε-error case:

t ≥ n H(δ) − εn − 1.   (2)

This method uses the independence bound to get an upper bound on the joint entropy of the tests,

H(Y_1, …, Y_t) ≤ Σ_{i=1}^{t} H(Y_i) ≤ t,   (3)

and then uses Fano’s inequality,

H(Y_1, …, Y_t) ≥ n H(δ) − εn − 1,   (4)

to get a lower bound on t.

We tighten eq. 2 by improving the bound in eq. 3 for the non-adaptive PGT problem in the linear regime. We do this by exploiting the fact that in the NAGT problem there would be a significant fraction of tests that have elements in common. Intuitively, to maximize the entropy of the individual tests, we would want each test to contain ν objects, so that the probability of a negative outcome is (1−δ)^ν = 1/2, where

ν = 1 / log₂(1/(1−δ)).   (5)

This implies that all tests should contain a constant (with respect to n) number of objects. When any set of such tests has an object in common, we can bound its joint entropy strictly below the independence bound. We exploit this fact to bound the joint entropy of all the tests away from t. But first, we exploit the nature of the group tests to improve eq. 2.

Theorem 1.

For the PGT problem with δ ≥ 1/2, identifying the defective set with error probability ε requires t ≥ n − (εn + 1)/H(δ) tests; in particular, t ≥ n(1 − o(1)) for vanishing ε.

Proof.

Using the entropy chain rule and the independence bound, for δ ≥ 1/2, we have

H(Y_1, …, Y_t) ≤ Σ_{i=1}^{t} H(Y_i) ≤ t H(δ),   (6)

where the inequality follows since, for δ ≥ 1/2 and any integer test weight w ≥ 1, H((1−δ)^w) ≤ H(1−δ) = H(δ). Now, combining eq. 4 and eq. 6, we get t ≥ n − (εn + 1)/H(δ). ∎

Thus, for δ ≥ 1/2 we cannot do any better than individual testing. In the rest of the section, we focus on the GT bound for δ < 1/2. Even in this regime, we can use the fact that ν in eq. 5 is not an integer for most values of δ to improve eq. 2 without much effort.

Theorem 2.

For the PGT problem with δ < 1/2, identifying the defective set with error probability ε requires t ≥ (n H(δ) − εn − 1) / h*(δ), where h*(δ) := max_{w ∈ ℕ} H((1−δ)^w).
Proof.

Due to the fact that each test can contain only an integer number of objects, we have

H(Y_i) ≤ max_{w ∈ ℕ} H((1−δ)^w) = h*(δ).   (7)

Hence theorem 2 follows from eq. 4 and eq. 7. ∎

Note that, for all δ such that ν in eq. 5 is not an integer, h*(δ) < 1. Therefore, the result in theorem 2 improves over the classical counting bound.
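The quantization effect can be checked numerically. The helper h_star below is our (hypothetical) name for the best per-test entropy over integer weights:

```python
import math

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h_star(delta, w_max=10_000):
    """Best achievable per-test entropy when each test must contain an
    integer number w >= 1 of items (test negative w.p. (1-delta)**w)."""
    return max(H((1 - delta) ** w) for w in range(1, w_max + 1))

delta = 0.3
rate_counting = H(delta)                    # classical counting bound on t/n
rate_quantized = H(delta) / h_star(delta)   # 'quantization' improvement
assert h_star(delta) < 1.0
assert rate_quantized > rate_counting
```

For δ = 0.3 the best integer weight is w = 2, giving h*(δ) ≈ 0.9997 < 1, so the quantized rate strictly exceeds the counting rate H(δ).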

2.3 Upper Bound via the Madiman-Tetali Inequality

To improve eq. 3 further for all values of δ < 1/2, we use the Madiman-Tetali inequalities [18] to exploit the correlation between tests,

H(Y_1, …, Y_t) ≤ Σ_{S ∈ 𝒮} α_S H(Y_S),   (8)

where 𝒮 is a class of subsets of [t] that covers [t], and the coefficients {α_S ≥ 0} denote a fractional cover of the hypergraph on vertex set [t] with edge set 𝒮. This means that for each i ∈ [t], the numbers α_S satisfy the relation Σ_{S ∈ 𝒮 : i ∈ S} α_S ≥ 1.

Note that using the independence bound for H(Y_S) in eq. 8, we have

H(Y_1, …, Y_t) ≤ Σ_{S ∈ 𝒮} α_S Σ_{i ∈ S} H(Y_i),   (9)

which is no better than eq. 3, since Σ_{S ∈ 𝒮 : i ∈ S} α_S ≥ 1 for every i. Therefore, to improve eq. 3 we have to utilize the fact that overlapping tests have joint entropy strictly less than the sum of their individual entropies. Heeding this intuition, first, for a fixed set S, we derive a non-trivial upper bound on H(Y_S) in section 2.4, for tests such that all of them have at least one object in common. Next, we use this bound to derive a closed-form expression for the joint entropy in eq. 8 for a constant-row-weight NAGT matrix in section 2.5. Finally, we generalize the upper bound to derive a closed-form expression for arbitrary-row-weight matrices in section 2.6. Using this expression and eq. 4, we get an improvement over the counting lower bound in theorem 7.

2.4 Upper bound on H(Y_S)

Consider a set S ⊆ [t] of tests such that there exists an object that is common to all the tests in S, and assume that every test in S has the same weight w. In this case, we upper bound the joint entropy of these tests in theorem 3.

Theorem 3.

Consider S ⊆ [t] such that every test in S has weight w, and all the tests in S have at least one object in common. Then,

(10)

where

(11)

and

(12)

In the rest of this section, we give the proof of theorem 3. Assume that the tests in S have an object j in common. Let S ∖ j denote the class of tests containing the same objects as those in S but with object j removed from all tests. We have,

(13)
(14)
(15)
(16)

Therefore, combining eq. 13, eq. 14, eq. 15, and eq. 16, we have

(17)

Note that,

(18)

Thus, the expression for in eq. 16 is minimized at the minimum possible value of . We lower bound the probability using lemma 4 to get an upper bound on eq. 17.

Lemma 4.

For any , we have,

(19)
Proof.

We show that the minimization in eq. 22 occurs when all object sets are disjoint; since in that case the tests in question are independent, we must have,

(20)

Without loss of generality, suppose that the tests are such that there exists an object that is common among several of the tests. Then we show that we can decrease the probability by modifying one of these tests to include a fresh object (one appearing in no other test) in place of the shared object. Denote the modified tests accordingly. Then, it suffices to prove that,

(21)

since using eq. 21 recursively for objects contained in more than one test we can prove eq. 20. We prove eq. 21 in section A.1. ∎

Thus, from lemma 4 we have,

(22)

where is defined in eq. 12. Hence, from eq. 16, eq. 18, and eq. 22, we have,

(23)

Now, combining eqs. 23 and 17 we have,

where is as defined in eq. 11.

2.5 Constant Row Weight Testing Matrix

In this section we assume that the matrix M has constant row weight w, i.e., every test contains exactly w objects. Intuitively, this is a very natural assumption, since it makes the tests symmetric. It also allows us to easily upper bound the joint entropy of the tests using eq. 8, as we see below.

To apply eq. 8, we consider the hypergraph with vertex set [t] whose edges are the supports T_j of the columns of M, i.e., T_j is the set of tests containing object j. Note that, in this case, the class {T_j} with coefficients α_{T_j} = 1/w forms a fractional cover of this hypergraph, since each test of weight w belongs to exactly w column supports. Therefore, we have,

H(Y_1, …, Y_t) ≤ (1/w) Σ_{j=1}^{n} H(Y_{T_j}).   (24)

We upper bound the expression on the RHS in eq. 24 to get an asymptotic closed form expression for the joint entropy of the form,

(25)

where is shown to be an increasing function of . Thus, using eq. 4, we have,

Theorem 5.

Consider the non-adaptive PGT problem with tolerable probability of error ε. Assume that each object is defective independently with probability δ. Then, for a constant-row-weight group testing matrix, we have asymptotically in n,

(26)

where such that and

(27)

where is defined in eq. 11.

The proof of theorem 5 follows from eq. 25 and eq. 4. The form of in eq. 25 is derived below as

(28)
(29)
(30)

where eq. 28 follows from theorem 3 and eq. 24, and eq. 29 follows from the convexity of from lemma 6. Note that since is a convex decreasing function of , must be a concave increasing function of . Thus, eq. 25 and hence theorem 5 follows.

Lemma 6.

is a convex decreasing function of

Proof.

We have,

(31)

and

(32)

where . Since is always positive for , is a decreasing convex function of from eqs. 31 and 32. ∎

2.6 General Testing Matrix

In this section, we remove the assumption that the matrix M has a fixed row weight and derive an upper bound on the joint entropy – better than eq. 3 – for the most general case. We use this upper bound to improve eq. 2 in theorem 7.

We separate the matrix into submatrices based on the number of objects in the tests: submatrix M_w consists of the t_w tests of weight w, so that Σ_w t_w = t.

Now, we show that the analysis in section 2.5 goes through for each matrix M_w. Assume w.l.o.g. that the tests corresponding to M_w are the first t_w tests. Denote the support set of column j in M_w by T_j^(w). Note that some of the columns in the matrix M_w may be empty, i.e., T_j^(w) = ∅. Thus, let 𝒯_w denote the class of support sets corresponding to the non-empty columns, and let the augmented class denote the support sets where each empty column is considered a distinct set. Therefore we have,

(33)

When , we have and . Note that for the lower bound in eq. 23 also gives . Therefore,

(34)

Hence, combining eqs. 34 and 2.6 we have

(35)
Remark.

Note that the manipulation in eq. 34, although seemingly unnecessary, is required because empty columns are not possible in the constant-row-weight setting of section 2.5. Since they are possible in this section, with non-constant-weight GT matrices, the lower bound in eq. 23 may not hold; this algebraic manipulation resolves that problem.

Using the expressions in eq. 14, eq. 15 and eq. 23 in eq. 35, we have,

(36)

where for , we define,

(37)

Thus, we have from section 2.6,

(38)
(39)
(40)
(41)

where eq. 41 follows since the maximization in eq. 40 is over a convex polytope and the objective is a concave increasing function, from eq. 32, eq. 37, and the following equations,

(42a)
(42b)

Then, from eq. 41 and eq. 4, we have our main result.

Theorem 7.

Consider the non-adaptive PGT problem with probability of error at most ε. Assume that each object is defective independently with probability δ. Then, we have asymptotically in n,

(43)

where

(44)

The bound in theorem 7 intersects with the individual-testing line t/n = 1 at δ ≈ 0.3471.

Remark: Note that although we have stated the results in this paper for the PGT problem, most arguments in the paper go through for the corresponding CGT problem as well. The only problem arises in the proof of lemma 4, which uses the independence of the defectivity indicators – a property that fails under CGT. However, we believe that with some effort and appropriate approximations our techniques should also go through for CGT.

3 Discussion and Comparison

In this section we compare the results in theorem 7 with other achievability and impossibility results in the literature. First, to show an adaptivity gap, we consider a simple adaptive algorithm for the GT problem presented in [11] and analyze the expected number of tests required. The algorithm is defined in section A.2. The expected number of tests performed is

E[T] = (n/2)(1 + 3δ − δ²).   (45)

The graph in fig. 1 plots the lower bound in theorem 7, the expected number of tests in eq. 45, the quantization bound in theorem 2, and the entropy counting bound eq. 2, for vanishing error, i.e., ε → 0. The solid circle markers in the plot represent the bound in eq. 43 at selected values of δ. From fig. 1, there exists a non-vanishing gap between the lower bound in theorem 7 and the counting bound. The quantization bound in theorem 2 also improves over the counting bound for a significant region of δ. As claimed earlier, we can also see an adaptivity gap in fig. 1, represented by the shaded region.

Even though the results in theorem 7 are plotted for ε → 0, we can see from eq. 43 and fig. 1 that a non-vanishing gap between eq. 43 and the counting bound would exist for small positive values of ε as well. For large ε, it would be possible to ignore certain objects altogether during tests, and hence a smaller number of tests could suffice.

The number of objects in each test in the GT matrix is constrained to be an integer. This gives a discrete nature to the bound in eq. 43, which is evident from the piecewise nature of the plot of the lower bound in fig. 1.

4 Future Work / Implications

In this work we used the weak form of the Madiman-Tetali inequalities [18] to upper bound the joint entropy of the tests. Since the weak form ignores the gains the conditional form of the entropy function provides, we suspect that there is a lot more to be gained by exploiting the strong form in eq. 1. Motivated by the results in this work, we conjecture that for any constant δ > 0, n(1 − o(1)) non-adaptive tests are necessary to ensure vanishing error.

From the plots in fig. 1 and theorem 7, we see that the joint entropy of the tests is minimized at small row weights. As the row weight decreases (and the number of tests increases), the improvement in the first term in eq. 13 reduces. For eq. 1, the strong form of the Madiman-Tetali inequalities, this term becomes

(46)

Recall the definition of the fractional cover from eq. 1. Intuitively, as the overlap between tests increases, the average mutual information between tests increases. Thus, for the conditional Madiman-Tetali form, the term in eq. 46 may be a lot smaller for small row weights. Hence, we believe that the bound in theorem 7 could potentially be improved significantly by using the conditional form of the Madiman-Tetali inequalities. However, the analytical approximations involved in using these techniques are also non-trivial.

Another way to see that the bound in theorem 7 is loose is by changing the hypergraph in eq. 24. Instead of taking the support of a single column of M as a hyperedge in eq. 24, we could use the union of the supports of several columns. For large unions, a large number of tests corresponding to such hyperedges will have more than one object in common. Therefore, we believe there is still room for improvement even just employing the weak degree form of the Madiman-Tetali inequalities.

One more potentially promising direction worth exploring is to consider the rows or columns of the NAGT matrix as codewords of a binary code, and to use the combinatorial Delsarte inequalities [7], which provide non-trivial bounds on the distance spectrum of codes, to appropriately “tighten” the information-theoretic Shannon-type inequalities (specifically the Madiman-Tetali inequalities) in this work. We are motivated by the fact that such an optimization approach has had significant success in providing the essentially tightest known upper bounds on the sizes of binary error-correcting codes [20]. While we freely admit that it is unclear to us what such a fusion of combinatorial and information-theoretic techniques might look like concretely, the prospect is nonetheless intriguing.

Finally we believe that our technique of lower bounding the number of tests via the Madiman-Tetali inequalities may have wide applicability in similar sparse recovery problems and other variants of group testing, such as threshold group testing [5], the pooled-data problem [27], and potentially even long-standing open problems pertaining to threshold secret-sharing schemes [2].

5 Acknowledgements

The authors would also like to thank Oliver Johnson, Matthew Aldridge, and Jonathan Scarlett for enlightening discussions that significantly improved this work. In particular we would like to acknowledge Oliver Johnson for drawing our attention to [25, 23], to Matthew Aldridge for the observation that our lower bounds imply an adaptivity gap, and to all three for helpful discussions regarding [26].

This work was partially funded by a grant from the University Grants Committee of the Hong Kong Special Administrative Region (Project No. AoE/E-02/08), RGC GRF grants 14208315 and 14313116 and NSF awards CCF 1642550 and CCF 1618512.

Appendix A Appendix

A.1 Proof of the remainder of Lemma 4

Let and let denote the event when the test is negative, and denote the event . Let denote the complement of the event . Let .

Step 1. Using inclusion-exclusion [24, Section 2.1], we have,

(47)

Step 2. With the events defined above, we have, for any i,

(48)
Proof.

Using step 1, we have,

(49)

Again using step 1 we have,

(50)
(51)

where (50) follows from (A.1). Now (48) directly follows from (51) ∎

Step 3. Using step 1 and step 2, we have,

(52)

where (52) follows since

A.2 Ungar’s [25] Adaptive Algorithm

Below we analyze the expected number of tests required by Algorithm 1.

Data: n objects such that each object is defective independently with probability δ
Result: the defective set D
1  D ← ∅
2  if δ ≥ (3 − √5)/2 then
3        Test each object individually; add the positive ones to D
4  else
5        Partition the items into disjoint pairs.
6        while there exist untested pairs {a, b} do
7              Test the pool {a, b};
8              if the pool tests negative then
9                    neither a nor b is defective;
10             else
11                   Test a individually;
12                   if a tests negative then
13                         D ← D ∪ {b};
14                   else
15                         D ← D ∪ {a};
16                         Test b individually; if b tests positive then D ← D ∪ {b};
17       end while
Algorithm 1 Adaptive Algorithm for Group Testing

Algorithm 1 conducts three kinds of tests:

  1. The pooled test on a pair is always performed,

  2. The individual test on the pair’s first item is performed iff the pooled test is positive.

  3. The individual test on the pair’s second item is performed iff the first item tests positive.

Thus the expected number of tests performed is

E[T] = (n/2)(1 + (1 − (1−δ)²) + δ) = (n/2)(1 + 3δ − δ²).   (53)

References

  • [1] M. Aldridge, O. Johnson, and J. Scarlett, “Improved Group Testing Rates with Constant Column Weight Designs,” in IEEE International Symposium On Information Theory (ISIT), 2016, pp. 1381–1385.
  • [2] A. Beimel, “Secret-Sharing Schemes: A Survey,” IWCC, vol. 6639, pp. 11–46, 2011.
  • [3] T. Berger, N. Mehravari, D. Towsley, and J. Wolf, “Random Multiple-access Communication and Group Testing,” IEEE Transactions on Communications, vol. 32, no. 7, pp. 769–779, 1984.
  • [4] C. L. Chan, S. Jaggi, V. Saligrama, and S. Agnihotri, “Non-adaptive Group Testing: Explicit Bounds and Novel Algorithms,” IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 3019–3035, 2014.
  • [5] P. Damaschke, “Threshold Group Testing,” in General theory of information transfer and combinatorics.   Springer, 2006, pp. 707–718.
  • [6] P. Damaschke and A. S. Muhammad, “Bounds for Nonadaptive Group Tests to Estimate the Amount of Defectives,” in International Conference on Combinatorial Optimization and Applications.   Springer, 2010, pp. 117–130.
  • [7] P. Delsarte, “An Algebraic Approach to the Association Schemes of Coding Theory,” Philips Res. Reports Suppls., vol. 10, 1973.
  • [8] R. Dorfman, “The Detection of Defective Members of Large Populations,” The Annals of Mathematical Statistics, vol. 14, no. 4, pp. 436–440, 1943.
  • [9] D.-Z. Du and F. K. Hwang, Combinatorial Group Testing and its Applications.   World Scientific, 2000.
  • [10] A. G. D’yachkov and V. V. Rykov, “Bounds on the Length of Disjunctive Codes,” Problemy Peredachi Informatsii, vol. 18, no. 3, pp. 7–13, 1982.
  • [11] P. Fischer, N. Klasner, and I. Wegener, “On the Cut-off Point for Combinatorial Group Testing,” Discrete Applied Mathematics, vol. 91, no. 1-3, pp. 83–92, 1999.
  • [12] T. S. Han, “Nonnegative Entropy Measures of Multivariate Symmetric Correlations,” Information and Control, vol. 36, no. 2, pp. 133–156, 1978.
  • [13] M. Hu, F. Hwang, and J. K. Wang, “A Boundary Problem for Group Testing,” SIAM Journal on Algebraic Discrete Methods, vol. 2, no. 2, pp. 81–87, 1981.
  • [14] F. Hwang, “A Method for Detecting all Defective Members in a Population by Group Testing,” Journal of the American Statistical Association, vol. 67, no. 339, pp. 605–608, 1972.
  • [15] F. Hwang and V. Sós, “Non-adaptive Hypergeometric Group Testing,” Studia Sci. Math. Hungar, vol. 22, pp. 257–263, 1987.
  • [16] T. Kailath, A. H. Sayed, and B. Hassibi, Linear estimation.   Prentice Hall Upper Saddle River, NJ, 2000, vol. 1.
  • [17] A. J. Macula and L. J. Popyack, “A Group Testing Method for Finding Patterns in Data,” Discrete Applied Mathematics, vol. 144, no. 1, pp. 149–157, 2004.
  • [18] M. Madiman and P. Tetali, “Information Inequalities for Joint Distributions, With Interpretations and Applications,” IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2699–2713, 2010.
  • [19] A. Mazumdar, “Nonadaptive Group Testing with Random Set of Defectives,” IEEE Transactions on Information Theory, vol. 62, no. 12, pp. 7522–7531, 2016.
  • [20] R. McEliece, E. Rodemich, H. Rumsey, and L. Welch, “New Upper Bounds on the Rate of a Code via the Delsarte-MacWilliams Inequalities,” IEEE Transactions on Information Theory, vol. 23, no. 2, pp. 157–166, 1977.
  • [21] H. Q. Ngo and D.-Z. Du, “A Survey on Combinatorial Group Testing Algorithms with Applications to DNA Library Screening,” Discrete Mathematical Problems with Medical Applications, vol. 55, pp. 171–182, 2000.
  • [22] G. Reeves and M. C. Gastpar, “Approximate Sparsity Pattern Recovery: Information-theoretic Lower Bounds,” IEEE Transactions on Information Theory, vol. 59, no. 6, pp. 3451–3465, 2013.
  • [23] L. Riccio and C. J. Colbourn, “Sharper Bounds in Adaptive Group Testing,” Taiwanese Journal of Mathematics, pp. 669–673, 2000.
  • [24] R. P. Stanley, Enumerative Combinatorics, Vol. 1, Cambridge Studies in Advanced Mathematics.   Cambridge University Press, 2012.
  • [25] P. Ungar, “The Cutoff Point for Group Testing,” Communications on Pure and Applied Mathematics, vol. 13, no. 1, pp. 49–54, 1960.
  • [26] T. Wadayama, “Nonadaptive Group Testing Based On Sparse Pooling Graphs,” IEEE Transactions on Information Theory, vol. 63, no. 3, pp. 1525–1534, 2017.
  • [27] I.-H. Wang, S.-L. Huang, K.-Y. Lee, and K.-C. Chen, “Data Extraction via Histogram and Arithmetic Mean Queries: Fundamental Limits and Algorithms,” in IEEE International Symposium on Information Theory (ISIT), 2016, pp. 1386–1390.
  • [28] J. Wolf, “Born Again Group Testing: Multiaccess Communications,” IEEE Transactions on Information Theory, vol. 31, no. 2, pp. 185–191, 1985.
  • [29] Z. Zhang and R. W. Yeung, “A non-Shannon-type Conditional Inequality of Information Quantities,” IEEE Transactions on Information Theory, vol. 43, no. 6, pp. 1982–1986, 1997.
  • [30] ——, “On Characterization of Entropy Function via Information Inequalities,” IEEE Transactions on Information Theory, vol. 44, no. 4, pp. 1440–1452, 1998.
  • [31] A. Zhigljavsky, “Probabilistic Existence Theorems in Group Testing,” Journal of Statistical Planning and Inference, vol. 115, no. 1, pp. 1–43, 2003.