The Stochastic Boolean Function Evaluation Problem for Symmetric Boolean Functions

11/16/2021
by   Dimitrios Gkenosis, et al.

We give two approximation algorithms solving the Stochastic Boolean Function Evaluation (SBFE) problem for symmetric Boolean functions. The first is an O(log n)-approximation algorithm, based on the submodular goal-value approach of Deshpande, Hellerstein and Kletenik. Our second algorithm, which is simple, is based on the algorithm solving the SBFE problem for k-of-n functions, due to Salloum, Breuer, and Ben-Dov. It achieves a (B-1) approximation factor, where B is the number of blocks of 0's and 1's in the standard vector representation of the symmetric Boolean function. As part of the design of the first algorithm, we prove that the goal value of any symmetric Boolean function is less than n(n+1)/2. Finally, we give an example showing that for symmetric Boolean functions, minimum expected verification cost and minimum expected evaluation cost are not necessarily equal. This contrasts with a previous result, given by Das, Jafarpour, Orlitsky, Pan and Suresh, which showed that equality holds in the unit-cost case.



1 Introduction

In the Stochastic Boolean Function Evaluation (SBFE) problem, we are given (the representation of) a Boolean function f that must be evaluated on an initially unknown assignment to the variables x_1, ..., x_n. The value of x_i in this assignment can only be obtained by performing a test, which has an associated cost c_i. The probability that x_i = 1 is p_i, where 0 < p_i < 1, and the tests are independent. Testing must continue until the outcomes of the performed tests are sufficient to determine the value of f. The problem is to determine the (adaptive) order in which to perform the tests, so as to minimize expected testing cost. (See Section 2 for a formal definition.)

There is an elegant polynomial-time algorithm that solves the SBFE problem when f is a k-of-n function, that is, a Boolean function whose output is 1 iff at least k of its n inputs are equal to 1. The original version of the algorithm, with its analysis, is due to Salloum, Breuer, and (independently) Ben-Dov Salloum79; BenDov81; SalloumBreuer84; Chang; Salloum97.

In this paper we consider the SBFE problem for a superclass of the k-of-n functions, namely the class of symmetric Boolean functions. A Boolean function is symmetric if its output depends only on the number of its inputs that are equal to 1. A symmetric Boolean function f on n variables is represented by a vector of length n+1 indexed from 0 to n, which we call its value vector. Position j of the value vector contains the value of f on input assignments containing exactly j 1's. For example, if f is the majority function on 3 variables, then its value vector, indexed from 0 to 3, is (0, 0, 1, 1).
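To make the representation concrete, here is a minimal Python sketch (ours, not part of the paper's development; the function and variable names are illustrative only) of evaluating a symmetric function directly from its value vector:

```python
# Minimal sketch (not from the paper): a symmetric Boolean function is fully
# described by its value vector v of length n+1, and f(x) = v[wt(x)].

def evaluate_symmetric(value_vector, x):
    """Evaluate a symmetric Boolean function, given its value vector, on assignment x."""
    assert len(value_vector) == len(x) + 1
    return value_vector[sum(x)]            # wt(x) = number of 1's in x

# Majority on 3 variables: output 1 iff at least two inputs equal 1.
majority3 = [0, 0, 1, 1]                   # indexed by the number of 1's, from 0 to 3
print(evaluate_symmetric(majority3, (1, 0, 1)))   # -> 1
print(evaluate_symmetric(majority3, (0, 0, 1)))   # -> 0
```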

We note that Cicalese et al. previously presented a simple algorithm for symmetric Boolean function evaluation in a deterministic, on-line setting, where the goal is to minimize a worst-case competitive ratio cicalese2011competitive. Their results do not apply to the SBFE problem.

Approximation algorithms and open questions: We present two approximation algorithms solving the SBFE problem for symmetric Boolean functions. The first uses the goal value approach of Deshpande et al. DeshpandeGoalValue (Section 4). It achieves an O(log n)-approximation. The second is a simple algorithm whose approximation factor is B-1, where B is the number of "blocks" of consecutive 0's and consecutive 1's in the value vector for f. For example, a value vector consisting of a block of 0's, followed by a block of 1's, followed by another block of 0's, has two blocks of 0's and one block of 1's, so B = 3. The (B-1)-approximation algorithm uses the k-of-n evaluation algorithm of Salloum, Breuer, and Ben-Dov as a subroutine. Which approximation factor is smaller, O(log n) or B-1, depends on the relationship between B and n for the function in question.

To achieve the approximation bound for the first algorithm, we prove a new structural result on symmetric Boolean functions: we show that the (submodular) goal value of a symmetric Boolean function is upper bounded by n(n+1)/2. This bound is almost tight, because for even n, the goal value of the k-of-n function with k = n/2 is exactly (n/2)(n/2 + 1) bachetalgoalvalue.

It remains an open question whether the SBFE problem for symmetric Boolean functions is NP-hard. It is also open whether there is a polynomial-time constant-factor approximation algorithm.

We note that the SBFE problem is known to be NP-hard for certain classes of Boolean formulas, including linear threshold formulas heuristicLeastCostCox and monotone DNF (or CNF) formulas allen2017evaluation. The SBFE problem for linear threshold formulas has a polynomial-time constant-factor approximation algorithm, but if P ≠ NP, the SBFE problem for monotone DNF (CNF) formulas has no such approximation algorithm DeshpandeGoalValue; allen2017evaluation.

Any symmetric Boolean function with B = 2 must be either a k-of-n function or the negation of a k-of-n function. Hence the SBFE problem for symmetric Boolean functions with B = 2 can be solved exactly in polynomial time. As we discuss below, there are polynomial-time exact algorithms solving the SBFE problem for some specific symmetric Boolean functions with B > 2. However, even in the unit-cost case, it is an open question whether there is a polynomial-time algorithm solving the SBFE problem for arbitrary symmetric Boolean functions with B > 2.

Evaluation vs. verification: The correctness of the algorithm solving the SBFE problem for k-of-n functions, due to Salloum, Breuer, and Ben-Dov, is based on a relationship between the evaluation problem and a related verification problem. Intuitively, in the verification problem, you are given the same inputs as in the evaluation problem, and you are also given the value of f on the unknown assignment. You need only perform enough tests to verify that the given value is correct. The correctness of the Salloum-Breuer-Ben-Dov algorithm for k-of-n functions is based on the fact that for k-of-n functions, optimal expected evaluation cost is equal to optimal expected verification cost (cf. unluyurtBorosDoubleRegular).

Subsequently, Das et al. showed that, in the special case of unit costs (i.e., c_i = 1 for all i), equality of optimal evaluation and verification costs holds for all symmetric Boolean functions Dasetal12. (Their work was inspired by work of Kowshik and Kumar, who rediscovered the unit-cost version of the Salloum-Breuer-Ben-Dov algorithm kowshikkumar13.)

The work of Das et al. did not address the question of whether equality of optimal expected evaluation and verification costs holds for all symmetric Boolean functions when costs are arbitrary. We give a counterexample showing that it does not hold.


Preliminary versions of the results in this paper appeared previously in a conference paper gkenosisetal18. That paper also contained results for the unit-cost version of the SBFE problem, including a polynomial-time 4-approximation algorithm solving the SBFE problem for arbitrary symmetric Boolean functions in the unit-cost case. Subsequent analysis has shown that the algorithm actually achieves a 2-approximation  GHKL20.

2 Preliminaries

An adaptive evaluation strategy for a Boolean function f(x_1, ..., x_n) is a sequential order in which to "test" the variables of f, so as to determine the value of f on an initially unknown assignment x ∈ {0,1}^n. Testing x_i reveals its value. The choice of the next test can depend on the outcomes of the previous tests.

An adaptive evaluation strategy for f corresponds to a Boolean decision tree computing f. Each internal node of such a tree is labeled with a variable x_i of f, and has two children, one corresponding to x_i = 0 and the other to x_i = 1. Each leaf of the tree is labeled either 0 or 1. An assignment x induces a root-leaf path in the tree, determined by the values of the x_i in x. The leaf at the end of that path is labeled with the value of f(x).

We do not require an SBFE algorithm to output the entire decision tree corresponding to the computed adaptive evaluation strategy, as that tree could be of exponential size. It is sufficient for the algorithm to implement the strategy by sequentially computing the next test to perform, in an on-line fashion, and finally outputting the value of f.

Consider fixed values for the costs c_i (for 1 ≤ i ≤ n) and probabilities p_i (where 0 < p_i < 1) for the variables x_1, ..., x_n. We formally define the expected costs of adaptive evaluation and verification strategies for f as follows. Given an adaptive evaluation strategy A for f, and an assignment x ∈ {0,1}^n, we use C(A, x) to denote the sum of the costs of the tests performed when using A on x. The expected cost of A is the sum, over all x ∈ {0,1}^n, of C(A, x) p(x), where p(x) is the probability of x under the product distribution defined by the p_i. We say that A is an optimal adaptive evaluation strategy for f if it has minimum possible expected cost.
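The expected-cost definition can be checked with a small sketch. The following Python code (ours; the strategy interface is a hypothetical modeling choice, not the paper's) computes the expected cost of an adaptive strategy by recursing over the two outcomes of each chosen test, which is equivalent to summing C(A, x) p(x) over all assignments x.

```python
# Sketch (not from the paper): the expected cost of an adaptive evaluation
# strategy.  A strategy is modeled here as a callable that maps a partial
# assignment b (a tuple over {0, 1, None}, None meaning "untested") to the
# index of the next variable to test, or to None once f's value is determined.

def expected_cost(strategy, costs, probs):
    """E[C(A, x)] = sum over x of C(A, x) * p(x), computed by recursion on test outcomes."""
    n = len(costs)

    def go(b):
        i = strategy(b)
        if i is None:                      # strategy stops: f is determined
            return 0.0
        b0 = b[:i] + (0,) + b[i + 1:]      # outcome x_i = 0, probability 1 - p_i
        b1 = b[:i] + (1,) + b[i + 1:]      # outcome x_i = 1, probability p_i
        return costs[i] + (1 - probs[i]) * go(b0) + probs[i] * go(b1)

    return go((None,) * n)

# Example: evaluate OR of two bits by always testing x_1 first, then x_2 if needed.
def or_strategy(b):
    if b[0] is None:
        return 0
    if b[0] == 1:
        return None                        # OR is already determined to be 1
    return 1 if b[1] is None else None

print(expected_cost(or_strategy, costs=[1.0, 1.0], probs=[0.5, 0.5]))   # -> 1.5
```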

A partial assignment is a vector b ∈ {0, 1, *}^n. For ℓ ∈ {0, 1, *}, we use #_ℓ(b) to denote |{i : b_i = ℓ}|, the number of entries of b that are set to ℓ. A partial assignment represents the information that is known while performing tests. Specifically, for a partial assignment b, b_i = * indicates that the value of x_i is still unknown; otherwise, b_i equals the outcome of the test on x_i.

We use f_b to denote the restriction of f to the bits x_i with b_i = *, produced by fixing the remaining bits x_i according to their values b_i. We call f_b the function induced from f by partial assignment b.

An assignment x ∈ {0,1}^n is an extension of a partial assignment b if x_i = b_i for all i such that b_i ≠ *. More generally, a partial assignment b' is an extension of b if b'_i = b_i for all i such that b_i ≠ *.

A partial assignment b is a certificate for a Boolean function f if f has the same value for all x ∈ {0,1}^n such that x is an extension of b.

For σ ∈ {0, 1}, let N_σ(f) = {x ∈ {0,1}^n : f(x) = σ}. An adaptive verification strategy for f consists of two adaptive evaluation strategies A_0 and A_1 for f, one for each σ ∈ {0, 1}. The expected cost of the verification strategy is the sum, over σ ∈ {0, 1} and x ∈ N_σ(f), of C(A_σ, x) p(x), and it is optimal if it minimizes this expected cost.

If A is an evaluation strategy for f, we call the sum, over x ∈ N_σ(f), of C(A, x) p(x) the σ-cost of A. For σ ∈ {0, 1}, we say that A is σ-optimal if it has minimum possible σ-cost. In an optimal verification strategy for f, each component evaluation strategy A_σ must be σ-optimal.

A Boolean function f is symmetric if its output on x depends only on wt(x), the number of 1's in x. The value vector for such a function is the (n+1)-dimensional vector v = (v_0, ..., v_n), indexed from 0 to n, whose jth entry v_j is the value of f on inputs x with wt(x) = j. We partition the value vector into blocks. A block is a maximal subvector of v whose entries all have the same value. Using B to denote the number of blocks of the value vector, we define k_1 < k_2 < ... < k_B to be the minimum indices of each of the B blocks, so k_1 = 0, and we define k_{B+1} = n+1. Block i is the subvector of v containing the entries indexed by the elements in {k_i, ..., k_{i+1} - 1}.
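The block structure is easy to compute from the value vector. The following sketch (ours, not code from the paper; the toy value vector at the end is ours as well) finds the block starting indices k_1, ..., k_B, together with the sentinel k_{B+1} = n + 1, and the block that an assignment belongs to.

```python
# Sketch (not from the paper): the block structure of a value vector.

def block_starts(value_vector):
    """Return the minimum indices k_1 < ... < k_B of the blocks, plus k_{B+1} = n + 1."""
    starts = [0]
    for j in range(1, len(value_vector)):
        if value_vector[j] != value_vector[j - 1]:
            starts.append(j)               # a new maximal run of equal entries begins
    starts.append(len(value_vector))       # sentinel k_{B+1} = n + 1
    return starts

def block_of(value_vector, x):
    """1-based index i of the block that assignment x belongs to."""
    starts = block_starts(value_vector)
    w = sum(x)                             # wt(x)
    for i in range(1, len(starts)):
        if starts[i - 1] <= w < starts[i]:
            return i

v = [0, 0, 1, 1, 0]                        # B = 3 blocks: 00 | 11 | 0
print(block_starts(v))                     # [0, 2, 4, 5]
print(block_of(v, (1, 1, 0, 0)))           # wt = 2 lies in block 2, so f(x) = v[2] = 1
```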

We say that an assignment x belongs to block i if k_i ≤ wt(x) < k_{i+1}. If x belongs to block i, then f(x) is equal to v_{k_i}.

A function g: {0, 1, *}^n → R_{≥0} is monotone if g(b) ≤ g(b') whenever b' is an extension of b. It is submodular if for b, b' ∈ {0, 1, *}^n such that b' is an extension of b, and for i and ℓ ∈ {0, 1} such that b_i = b'_i = *, we have g(b(i←ℓ)) - g(b) ≥ g(b'(i←ℓ)) - g(b'). Here b(i←ℓ) denotes the partial assignment produced from b by setting b_i to ℓ, and similarly for b'(i←ℓ).

3 Exact algorithms for special classes of symmetric functions

Before presenting our approximation algorithms, we describe exact algorithms solving the SBFE problem for some special classes of symmetric functions.

It is well-known that if f is the Boolean OR function, then it is optimal to test the variables in nondecreasing order of the ratio c_i / p_i (cf. unluyurtReview). Dually, if f is Boolean AND, it is optimal to test in nondecreasing order of c_i / (1 - p_i).
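As a sketch (ours, not from the paper), the two ratio rules amount to a single sort of the variable indices:

```python
# Sketch (not from the paper): the classical ratio rules for OR and AND.

def or_test_order(costs, probs):
    """Indices in nondecreasing order of c_i / p_i (optimal test order for Boolean OR)."""
    return sorted(range(len(costs)), key=lambda i: costs[i] / probs[i])

def and_test_order(costs, probs):
    """Indices in nondecreasing order of c_i / (1 - p_i) (optimal test order for Boolean AND)."""
    return sorted(range(len(costs)), key=lambda i: costs[i] / (1 - probs[i]))

print(or_test_order(costs=[3, 1, 2], probs=[0.9, 0.1, 0.5]))   # -> [0, 2, 1]
```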

The Salloum-Breuer-Ben-Dov algorithm solving the SBFE problem for k-of-n functions is recursive and works as follows. Suppose 1 ≤ k ≤ n. Create two permutations of the x_i's, one in nondecreasing order of the ratio c_i / p_i, and one in nondecreasing order of the ratio c_i / (1 - p_i). By the pigeonhole principle, there must exist a variable x_i that appears within the first k variables of the first permutation, and within the first n - k + 1 variables of the second permutation. Find such a variable and test it. If x_i = 1, this reduces the problem to a (k-1)-of-(n-1) evaluation problem, and if x_i = 0, it reduces the problem to a k-of-(n-1) evaluation problem. Solve the reduced problem recursively. Assuming 1 ≤ k ≤ n at the start, the base cases are where k = 0, implying that the value of f must be 1, and where k > n, implying that the value of f must be 0.
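The following Python sketch (our illustration, not code from the paper; the tie-breaking choice within the prefix intersection is arbitrary) implements the recursion just described. For demonstration, test outcomes are read from a hidden assignment x; in the SBFE setting they would come from performing the actual tests.

```python
# Sketch (not from the paper's code): the recursive k-of-n evaluation strategy
# described above, with test outcomes supplied by a hidden assignment x.

def evaluate_k_of_n(k, costs, probs, x):
    remaining = list(range(len(x)))        # indices not yet tested
    ones_needed = k
    while True:
        m = len(remaining)
        if ones_needed <= 0:
            return 1                       # enough 1's have been found
        if ones_needed > m:
            return 0                       # too few variables left to reach k 1's
        by_cp = sorted(remaining, key=lambda i: costs[i] / probs[i])
        by_cq = sorted(remaining, key=lambda i: costs[i] / (1 - probs[i]))
        prefix_1 = set(by_cp[:ones_needed])             # first k of the c_i/p_i order
        prefix_0 = set(by_cq[:m - ones_needed + 1])     # first m-k+1 of the c_i/(1-p_i) order
        i = next(iter(prefix_1 & prefix_0))             # nonempty by the pigeonhole argument
        remaining.remove(i)
        if x[i] == 1:                      # "perform the test"
            ones_needed -= 1

# 2-of-3 function, unit costs, uniform probabilities:
print(evaluate_k_of_n(2, [1, 1, 1], [0.5, 0.5, 0.5], x=(1, 0, 1)))   # -> 1
```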

The correctness of this algorithm relies on the relationship between the verification and evaluation problems for k-of-n functions (cf. unluyurtBorosDoubleRegular), as discussed in Section 7.

We note here that an almost identical algorithm solves the SBFE problem for exactly-k functions, which output 1 iff exactly k of their inputs are 1. The only real difference in the algorithm is that instead of a base case for k = 0, there is a base case for k = -1 (that is, for when more than k of the tested inputs have been found to equal 1), implying that the value of f is 0. The correctness proof is nearly identical to that for the k-of-n algorithm. The unit-cost version of the algorithm for exactly-k functions was previously introduced by Acharya et al. Acharyaetal11. (They used the name "delta functions" to refer to the exactly-k functions. A long, but more descriptive, name for them would be "exactly-k-of-n functions.")

The value vector for an exactly-k function contains exactly one 1, so B = 3 for any exactly-k function with 0 < k < n.

Another interesting example of a symmetric function with B = 3 is the consensus function. The output of the consensus function is 1 iff all of its inputs are equal, so its value vector has 1's only in its first and last positions. There is a polynomial-time exact algorithm that solves the SBFE problem for the consensus function, and for its complement, the not-all-equal function. We presented the unit-cost version of the algorithm in GHKL20; the extension to arbitrary costs (which we presented in our conference paper gkenosisetal18) is straightforward.

4 Goal value and Adaptive Greedy

The first algorithm we present for evaluating arbitrary symmetric functions uses the goal value approach of Deshpande et al. DeshpandeGoalValue. (They called it the Q-value approach.) The idea behind the approach is to solve the SBFE problem by reducing it to a (binary-state) Stochastic Submodular Cover problem. The latter problem is similar to the SBFE problem, with the following differences. Instead of being given a Boolean function to evaluate, you are given (an oracle for) a monotone, submodular utility function g: {0, 1, *}^n → R_{≥0}, where g(*, ..., *) = 0. For simplicity, we will assume in what follows that g is integer-valued, so in fact g: {0, 1, *}^n → Z_{≥0}. The function g has the property that, for all x ∈ {0,1}^n, the value of g(x) is equal to some common value Q. We call Q the goal value of g. Instead of performing tests until the value of a Boolean function can be determined, tests must be performed until the partial assignment b representing the test outcomes so far satisfies g(b) = Q. The problem is then to compute an adaptive strategy that minimizes expected testing cost.

To reduce the SBFE problem to the Stochastic Submodular Cover problem, we take the Boolean function f that is to be evaluated in the SBFE problem and use it to construct a utility function g. The function g must be a goal function for f, meaning that it satisfies the following properties:

  • g is submodular

  • g is monotone

  • there exists a value Q such that, for all b ∈ {0, 1, *}^n, g(b) = Q iff b is a certificate for f.

For probabilities p_i and costs c_i, finding an adaptive strategy of minimum expected cost for achieving goal utility Q (as measured by g) is then equivalent to finding an optimal adaptive evaluation strategy for f.

The Adaptive Greedy algorithm of Golovin and Krause golovinKrause is an approximation algorithm for the Stochastic Submodular Cover problem. To choose the variable to test next, it uses the following greedy rule: choose the x_i whose test maximizes the expected increase in utility, per unit cost (with respect to g, the p_i, and the c_i). There are a number of different proofs showing that Adaptive Greedy achieves an O(log Q) approximation bound for the Stochastic Submodular Cover problem hellersteinKletenikParthasarathy21; DeshpandeGoalValue; hellerstein2018revisiting; imetal. The tightest of these bounds is the bound due to Hellerstein et al. hellersteinKletenikParthasarathy21. An earlier claimed bound was found to have an error golovinKrause; nanSaligrama; golovinKrauseArxivv5. A recent bound, due to Esfandiari et al. esfandiarietal19, also applies to a generalization of the Stochastic Submodular Cover problem.
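A sketch of the greedy rule itself (ours, not the paper's code; the toy utility at the end is purely illustrative): given oracle access to g, the expected per-unit-cost utility gain of each untested variable is computed directly from the two possible outcomes.

```python
# Sketch (not from the paper's code): one step of the Adaptive Greedy rule.
# g maps a partial assignment (tuple over {0, 1, None}) to a nonnegative number.

def greedy_next_test(g, b, costs, probs):
    """Return the untested index maximizing expected utility gain per unit cost at b."""
    best_i, best_ratio = None, -1.0
    for i, bi in enumerate(b):
        if bi is not None:
            continue                                   # already tested
        b0 = b[:i] + (0,) + b[i + 1:]
        b1 = b[:i] + (1,) + b[i + 1:]
        gain = probs[i] * (g(b1) - g(b)) + (1 - probs[i]) * (g(b0) - g(b))
        ratio = gain / costs[i]
        if ratio > best_ratio:
            best_i, best_ratio = i, ratio
    return best_i

def g_count(b):
    """Toy (modular) utility: the number of tested bits."""
    return sum(1 for bi in b if bi is not None)

# With equal gains, the cheapest untested variable is chosen.
print(greedy_next_test(g_count, (None, None, None), costs=[2.0, 1.0, 4.0], probs=[0.5, 0.5, 0.5]))  # -> 1
```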

Thus once a submodular goal function g is constructed for Boolean function f, running Adaptive Greedy on g results in an O(log Q)-approximation to the optimal adaptive strategy for evaluating f, where Q is the goal value of g. The challenge in the goal value approach is to construct g so that its goal value is small, resulting in a small approximation factor. This is not possible for all classes of Boolean functions f. The goal value of a Boolean function f is the minimum goal value of any submodular goal function for f. The goal value of every Boolean function is at most exponential in n bachetalgoalvalue. While we show that symmetric functions have goal value polynomial in n, some classes of Boolean functions have goal value exponential in n DeshpandeGoalValue; bachetalgoalvalue.

5 An O(log n)-approximation algorithm based on goal value

We present the O(log n)-approximation algorithm for the SBFE problem for symmetric functions, using the goal value approach. To implement this approach, we construct a submodular goal function for f. The construction is in the proof of the following theorem, which gives an upper bound on the goal value of any symmetric function.

Theorem 1.

The goal value of any symmetric Boolean function is strictly less than n(n+1)/2.

Proof.

Let f be a symmetric Boolean function. We construct a submodular goal function g for f using its value vector v. The construction is based on a graph G, defined as follows. The graph has n+1 vertices, one for each position 0, ..., n of v. Let B be the number of blocks in v. Partition the vertices into B subsets, where each subset contains the vertices corresponding to the positions contained in a single block. The graph G is the complete B-partite graph induced by this partition, so there is an edge between the vertices for positions j and j' iff j and j' are in different blocks of v.

We construct a goal function g for f that assigns a value to each partial assignment b ∈ {0, 1, *}^n. For b ∈ {0, 1, *}^n, let V_b be the set of vertices of G corresponding to positions j such that either j < #_1(b) or j > n - #_0(b). Say that V_b covers an edge of G if it contains at least one of its endpoints. Let E_b be the set of edges of G that are covered by V_b. We define g(b) = |E_b|.

We now argue that g is a goal function for f. Clearly g(*, ..., *) = 0. If b' is an extension of b then V_b ⊆ V_{b'} and E_b ⊆ E_{b'}. Thus g(b) ≤ g(b'), so g is monotone.

Every partial assignment b induces a function f_b, which is a symmetric function of the variables x_i for which b_i = *. The value vector of f_b is produced from v by deleting its first #_1(b) entries and its last #_0(b) entries (these two sets of entries are disjoint, since #_1(b) + #_0(b) ≤ n). The function f_b is constant iff all entries in its value vector are equal. That is, b is a certificate for f iff removing the first #_1(b) and last #_0(b) entries from v results in the remaining entries all being members of a single block of v. This latter property holds iff E_b is the set of all edges of G. Thus g(b) is equal to the total number of edges of G iff b is a certificate of f.

Finally, we show that g is submodular. Consider some b ∈ {0, 1, *}^n and i such that b_i = *. Let ℓ = 1; an analogous argument holds for ℓ = 0. Consider g(b(i←1)) - g(b), which is equal to |E_{b(i←1)} \ E_b|. Let r = #_1(b). The vertex for position r is the only vertex in V_{b(i←1)} that is not in V_b. It follows that the edges in E_{b(i←1)} \ E_b are precisely the pairs {r, j} where j is in the set {r+1, ..., n - #_0(b)}, and r and j are in different blocks of v.

Now consider b' ∈ {0, 1, *}^n such that b' is an extension of b and b'_i = *. Let r' = #_1(b').

Consider an edge {r', j} in E_{b'(i←1)} \ E_{b'}. Clearly r' ≥ r. Since j > r' and r' ≥ r, we have j > r as well. Since r' and j are in different blocks of v, and r ≤ r' < j, it follows that r and j are also in different blocks of v. Further, j ≤ n - #_0(b') ≤ n - #_0(b). Thus for each edge {r', j} in E_{b'(i←1)} \ E_{b'} there is a corresponding edge {r, j} in E_{b(i←1)} \ E_b. It follows that |E_{b'(i←1)} \ E_{b'}| ≤ |E_{b(i←1)} \ E_b| and therefore g(b'(i←1)) - g(b') ≤ g(b(i←1)) - g(b). Since an analogous argument holds if ℓ = 0, g is submodular.

The number of edges in G is maximized when each position of v is in its own block. In this case, G is the complete graph on n+1 vertices and the goal value of the constructed g is n(n+1)/2. This implies that the goal value of any symmetric function is at most n(n+1)/2.

To see that it is strictly less than n(n+1)/2, note that there are only two symmetric functions for which each position of v is in its own block: the parity function and its complement. For each of these functions, the above construction does not achieve minimum possible goal value. The goal value of these functions is n, and it is achieved by the utility function g(b) = n - #_*(b) bachetalgoalvalue. Thus every symmetric Boolean function has goal value strictly less than n(n+1)/2. ∎

The construction in the above proof generalizes a previous construction for k-of-n functions. In that case, the graph G is bipartite and the constructed function achieves minimum possible goal value for the k-of-n function bachetalgoalvalue. It is an open problem to give a construction that achieves minimum possible goal value for every symmetric function.
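The goal function from the proof of Theorem 1 is straightforward to compute. The sketch below (ours, not the paper's code; the toy value vector at the end is ours as well) counts the edges of the complete B-partite graph on positions 0, ..., n that are covered by the positions eliminated by a partial assignment b.

```python
# Sketch (not from the paper's code): the goal function g from the proof of
# Theorem 1.  Positions j < #_1(b) or j > n - #_0(b) are eliminated by b;
# g(b) counts the edges of the complete B-partite graph (one part per block
# of v) that have at least one eliminated endpoint.  Untested bits are None.

def position_blocks(value_vector):
    """Block index of each position 0, ..., n of the value vector."""
    idx, blocks = 0, []
    for j, vj in enumerate(value_vector):
        if j > 0 and vj != value_vector[j - 1]:
            idx += 1
        blocks.append(idx)
    return blocks

def goal_function(value_vector, b):
    n = len(value_vector) - 1
    ones = sum(1 for bi in b if bi == 1)       # #_1(b)
    zeros = sum(1 for bi in b if bi == 0)      # #_0(b)
    block = position_blocks(value_vector)

    def eliminated(j):
        return j < ones or j > n - zeros

    covered = 0
    for j in range(n + 1):
        for jp in range(j + 1, n + 1):
            if block[j] != block[jp] and (eliminated(j) or eliminated(jp)):
                covered += 1
    return covered

v = [0, 0, 1, 1, 0]                            # toy value vector with B = 3
print(goal_function(v, (None,) * 4))           # 0 on the all-* assignment
print(goal_function(v, (1, 1, 0, None)))       # 8 = all edges: (1,1,0,*) is a certificate
```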

Having described the construction, we prove the following theorem.

Theorem 2.

There is a polynomial-time O(log n)-approximation algorithm for the SBFE problem for symmetric Boolean functions.

Proof.

The input to the SBFE problem for symmetric Boolean functions is the value vector v for the symmetric function f that is to be evaluated.

The algorithm uses v to construct the graph G defined in the proof of Theorem 1. Once G is constructed, the value of the associated utility function g can be easily computed on any given partial assignment. The algorithm runs the Adaptive Greedy algorithm of Golovin and Krause on g to determine the order in which to perform the tests. Let b be the partial assignment representing the outcomes of all the tests performed by Adaptive Greedy. By the proof of Theorem 1, b is a certificate for f, so the entries of the value vector of f_b are all equal to the desired value f(x). The algorithm outputs one of them, e.g., entry #_1(b) of v.

Let Q denote the goal value of g. As shown in the proof of Theorem 1, it is at most n(n+1)/2. Because Adaptive Greedy is an O(log Q)-approximation algorithm for Stochastic Submodular Cover, the above algorithm achieves an approximation factor of O(log(n(n+1)/2)) = O(log n). ∎

6 The (B-1)-approximation algorithm

The (B-1)-approximation algorithm for the SBFE problem is simple. It runs the algorithm for evaluating k-of-n functions, due to Salloum, Breuer, and Ben-Dov, once for each of the values k_2, ..., k_B associated with the value vector of f. The run for k_i sets k = k_i, and determines whether the initially unknown assignment x satisfies wt(x) ≥ k_i, i.e., whether x belongs to a block numbered i or higher. Once this is done for the values k_2, ..., k_B, it is straightforward to determine which block of the value vector contains position wt(x), and hence to determine the value of f(x).

We present the pseudocode for this algorithm as Algorithm 1 and show it achieves a (B-1)-approximation. In the pseudocode, we denote by h_i the k-of-n function with k = k_i. We note that in different iterations of the for loop, the strategy that is executed in the body may choose a test that was already performed in a previous iteration. The test does not actually have to be repeated, as the outcome can be stored after the first time the test is performed, and accessed whenever the test is chosen again.

  for i = 2 to B do
     Using the k-of-n evaluation algorithm of Salloum, Breuer, and Ben-Dov, perform tests to find the value of h_i(x)
  end for
  if h_i(x) = 0 for all i ∈ {2, ..., B}, set j = 1   //  recall that k_1 = 0
  else set j = max{i ∈ {2, ..., B} : h_i(x) = 1}
  return  v_{k_j}
Algorithm 1 The (B-1)-approximation algorithm

The correctness of the algorithm follows easily from the facts that h_i(x) = 1 iff wt(x) ≥ k_i, and that wt(x) ≥ k_1 = 0, and so x belongs to block j and f(x) = v_{k_j}.
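A sketch of Algorithm 1 (ours, not the paper's code): the k-of-n subroutine is passed in as a callable, with a naive stand-in that simply tests bits in index order used here in place of the Salloum-Breuer-Ben-Dov strategy so that the sketch runs; test outcomes are memoized so that a test chosen in more than one iteration is paid for only once, as noted above.

```python
# Sketch (not from the paper's code): Algorithm 1.  `k_of_n_eval` stands in for
# the Salloum-Breuer-Ben-Dov strategy; any correct k-of-n evaluation routine
# that queries bits through the supplied `test` callable will do.

def algorithm1(value_vector, k_of_n_eval, test):
    n = len(value_vector) - 1
    starts = [0] + [j for j in range(1, n + 1) if value_vector[j] != value_vector[j - 1]]
    B = len(starts)                                   # starts = [k_1, k_2, ..., k_B]

    outcomes = {}                                     # memoized test results
    def memo_test(i):
        if i not in outcomes:
            outcomes[i] = test(i)                     # pay for each distinct test only once
        return outcomes[i]

    j = 1
    for i in range(2, B + 1):                         # runs for k_2, ..., k_B
        if k_of_n_eval(starts[i - 1], n, memo_test):  # is wt(x) >= k_i ?
            j = i
    return value_vector[starts[j - 1]]                # f(x) = value of block j

# Naive stand-in for the k-of-n subroutine: test bits in index order until decided.
def naive_k_of_n(k, n, test):
    ones = 0
    for i in range(n):
        ones += test(i)
        if ones >= k:
            return 1
        if ones + (n - 1 - i) < k:
            return 0
    return 0                                          # defensive; unreachable for k >= 1

x = (1, 0, 1, 1)                                      # hidden assignment, wt(x) = 3
v = [0, 0, 1, 1, 0]
print(algorithm1(v, naive_k_of_n, test=lambda i: x[i]))   # wt = 3 lies in block 2 -> 1
```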

We now examine the expected cost of the strategy computed in Algorithm 1. Let OPT_i denote the expected cost of evaluating h_i using the optimal k_i-of-n strategy. Let OPT be the expected cost of the optimal adaptive strategy for f.

Lemma 1.

OPT_i ≤ OPT for i = 2, ..., B.

Proof.

Let T be the decision tree corresponding to an optimal adaptive strategy for evaluating f. Consider using T to evaluate f on an initially unknown input x. When a leaf of T is reached, we have discovered the values of some of the bits of x. Let b be the partial assignment representing that knowledge. Recall that f_b is the function induced from f by b. The value vector of f_b is a subvector of the value vector v of f. More particularly, it is the subvector stretching from index #_1(b) of v to index n - #_0(b). Since T is an evaluation strategy for f, reaching a leaf of T means that we have enough information to determine f(x). Thus all entries of the subvector must be equal, implying that it is contained within a single block of v. We call this the block associated with the leaf.

For each block i ∈ {2, ..., B}, we can create a new tree T_i from T which evaluates the function h_i. We do this by relabeling the leaves of T: if the leaf is associated with block i', then we label the leaf with output value 1 if i' ≥ i, and with 0 otherwise. T_i is an adaptive strategy for evaluating h_i.

The expected cost of evaluating h_i using T_i is equal to OPT, since the structure of the tree is unchanged from T (we've only changed the labels on the leaves). Since T_i cannot do better than the optimal k_i-of-n strategy, OPT_i ≤ OPT. ∎

Lemma 1 yields the following theorem:

Theorem 3.

There is a polynomial-time (B-1)-approximation algorithm for the SBFE problem for symmetric Boolean functions.

Proof.

For x ∈ {0,1}^n, let C_i(x) denote the cost of evaluating h_i(x) using the optimal k_i-of-n evaluation algorithm. Then the expected cost of Algorithm 1 is at most Σ_{x ∈ {0,1}^n} p(x) Σ_{i=2}^{B} C_i(x). Reversing the order of summation, this is equal to Σ_{i=2}^{B} Σ_{x ∈ {0,1}^n} p(x) C_i(x) = Σ_{i=2}^{B} OPT_i. Thus by Lemma 1, the total expected cost of Algorithm 1 is at most (B-1) OPT. ∎

We note that we could easily modify this algorithm to use binary search (to find the index of the block containing wt(x)), rather than sequential search. However, while this seemingly would lead to an approximation bound of O(log B), we do not know how to prove that the modified algorithm actually achieves such a bound. The difficulty in simply adapting the previous analysis is as follows. Consider the expression for the expected cost of the previous algorithm, Σ_{x ∈ {0,1}^n} p(x) Σ_{i=2}^{B} C_i(x). If the algorithm is modified to use binary search rather than sequential search, then for each x, the inner sum, Σ_{i=2}^{B} C_i(x), can be replaced by a sum of O(log B) terms of the form C_i(x). However, these terms would not be the same for all x, because they depend on the execution of the binary search associated with x. This prevents us from reversing the order of summation, as we did in the previous argument, which prevents us from completing the proof and attaining the desired bound.

7 Verification vs. Evaluation

Recall the definitions associated with verification strategies and their costs, from Section 2. Let f be a Boolean function with associated costs c_i and probabilities p_i. Let VER(f) and EVAL(f) denote the minimum possible expected verification cost and minimum possible expected evaluation cost, respectively, of f.

The correctness of the algorithm of Salloum, Breuer, and Ben-Dov solving the SBFE problem for k-of-n functions is based on the following lemma.

Lemma 2.

BenDov81 Consider an instance of the SBFE problem. If f is a k-of-n function, then the evaluation strategy that tests the bits in nondecreasing order of the ratio c_i / p_i, until the value of f can be determined, is 1-optimal.

A dual lemma states that testing in nondecreasing order of c_i / (1 - p_i) is 0-optimal when f is a k-of-n function.

It is obvious that VER(f) ≤ EVAL(f). Before discussing this relationship further, we describe the proof of correctness for the Salloum-Breuer-Ben-Dov algorithm, presented in Section 3, for k-of-n functions. Since any 1-optimal strategy must perform at least k tests before terminating, the strategy that tests in nondecreasing order of c_i / p_i is still 1-optimal if you permute the first k tests in the ordering arbitrarily. Similarly, the strategy that tests in nondecreasing order of c_i / (1 - p_i) is still 0-optimal if you permute the first n - k + 1 tests arbitrarily. The algorithm first tests a variable x_i that appears within the first k tests of the first ordering, and also within the first n - k + 1 tests of the second ordering. Thus testing such an x_i first is consistent with both a 1-optimal and a 0-optimal strategy. This also holds for the recursive calls of the algorithm. Thus the strategy produced by the algorithm is both 0-optimal and 1-optimal, and because VER(f) ≤ EVAL(f), it is an optimal evaluation strategy.

The above correctness proof also implies that VER(f) = EVAL(f) for k-of-n functions f.

Das et al. Dasetal12 showed that in the unit-cost case, VER(f) = EVAL(f) holds when f is an arbitrary symmetric Boolean function. Their proof also relied on showing that there exists a variable x_i such that testing x_i first is consistent with both a 1-optimal and a 0-optimal strategy. However, the proof involves additional insights and arguments that were not needed for k-of-n functions, and it does not imply a polynomial-time procedure to find such an x_i.

We show that the result of Das et al. does not hold for arbitrary costs. We describe a symmetric Boolean function f, and associated costs c_i and probabilities p_i, for which the minimum expected cost of evaluation exceeds the minimum expected cost of verification. The description of the function is in the proof of the following theorem.

Theorem 4.

There exists a symmetric Boolean function f, costs c_i, and probabilities p_i, such that EVAL(f) > VER(f).

Proof.

We give a function f on 4 bits with B = 3. The value vector of f is (0, 1, 1, 0, 0). The costs and probabilities for the bits are given in Table 1.

bit   p_i   c_i
x_1   0.1   5000
x_2   0.3   6000
x_3   0.9   3000
x_4   0.8   5000
Table 1: Costs and probabilities of the bits

Consider the evaluation tree for f given in Figure 1; we denote it by T*. We assume left edges correspond to a test outcome of 0 and right edges to a test outcome of 1, so an assignment x follows the root-leaf path determined by its test outcomes, and the cost incurred by T* on x is the sum of the costs of the tests on that path. The expected cost of T* is 14,618.

In fact, T* is optimal, meaning that it has minimum possible expected cost over all adaptive strategies for evaluating f. Thus EVAL(f) = 14,618. The optimality of T* can be shown by computing the expected costs of only 12 candidate trees, as follows. For any evaluation strategy, if the outcome of the first test is 1, then the induced problem is to evaluate the function with value vector (1, 1, 0, 0); this new function is the negation of a 2-of-3 function. If the outcome of the first test is 0 and the outcome of the second test is 1, then the induced problem is to evaluate the function with value vector (1, 1, 0). This function is the negation of a 2-of-2 function. If the outcome of the first test is 0 and the outcome of the second test is also 0, then the induced problem is to evaluate the function with value vector (0, 1, 1). This is a 1-of-2 function. Since we know the optimal evaluation strategies for k-of-n functions (and their negations), to determine the optimal evaluation tree for f, we only need to determine which variables appear in the root of the tree and in its left child. We do this by trying all 12 choices for these two variables, and computing the expected costs of the associated trees. The results are shown in Table 2; the optimal expected cost is 14,618.
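The claimed gap can be checked by brute force. The following sketch (ours, not the paper's code) runs two dynamic programs over partial assignments: one for the optimal expected evaluation cost and one for the optimal σ-costs, whose sum is the optimal verification cost. The value vector and the (p_i, c_i) pairs are the reconstruction used above and should be treated as assumptions; the sketch is only meant to confirm that the computed evaluation optimum exceeds the computed verification optimum.

```python
from functools import lru_cache
from itertools import product

# Sketch (not from the paper's code): brute-force dynamic programs over partial
# assignments for the 4-bit example.  The value vector v and the (p_i, c_i)
# pairs are the reconstruction used in the text above (an assumption).

v = (0, 1, 1, 0, 0)                 # f(x) = 1 iff wt(x) is 1 or 2
p = (0.1, 0.3, 0.9, 0.8)
c = (5000, 6000, 3000, 5000)
n = 4
STAR = None                         # marker for an untested bit

def induced_value(b):
    """Return 0 or 1 if the induced function f_b is constant, else None."""
    ones = sum(1 for bi in b if bi == 1)
    zeros = sum(1 for bi in b if bi == 0)
    vals = set(v[ones:n - zeros + 1])
    return vals.pop() if len(vals) == 1 else None

def weight(b, sigma):
    """Probability that x extends b and f(x) = sigma."""
    stars = [i for i, bi in enumerate(b) if bi is STAR]
    total = 0.0
    for bits in product((0, 1), repeat=len(stars)):
        x = list(b)
        for i, bit in zip(stars, bits):
            x[i] = bit
        if v[sum(x)] != sigma:
            continue
        prob = 1.0
        for i in range(n):
            prob *= p[i] if x[i] == 1 else 1 - p[i]
        total += prob
    return total

@lru_cache(maxsize=None)
def opt_eval(b):
    """Minimum expected evaluation cost over the extensions of b."""
    if induced_value(b) is not None:
        return 0.0
    best = float("inf")
    for i, bi in enumerate(b):
        if bi is not STAR:
            continue
        b0, b1 = b[:i] + (0,) + b[i + 1:], b[:i] + (1,) + b[i + 1:]
        best = min(best, c[i] + (1 - p[i]) * opt_eval(b0) + p[i] * opt_eval(b1))
    return best

@lru_cache(maxsize=None)
def opt_sigma_cost(b, sigma):
    """Minimum possible sigma-cost of an evaluation strategy, over extensions of b."""
    if induced_value(b) is not None:
        return 0.0
    w = weight(b, sigma)
    best = float("inf")
    for i, bi in enumerate(b):
        if bi is not STAR:
            continue
        b0, b1 = b[:i] + (0,) + b[i + 1:], b[:i] + (1,) + b[i + 1:]
        best = min(best, c[i] * w + opt_sigma_cost(b0, sigma) + opt_sigma_cost(b1, sigma))
    return best

root = (STAR,) * n
eval_opt = opt_eval(root)
ver_opt = opt_sigma_cost(root, 0) + opt_sigma_cost(root, 1)
print(eval_opt, ver_opt, eval_opt > ver_opt)   # Theorem 4 predicts eval_opt > ver_opt
```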

Figure 1: Optimal evaluation tree for f
root left child expected cost of tree
15,529
15,259
16,042
14,881
14,643
15,616
14,618
14,670
14,623
15,394
15,616
15,406
Table 2: Possible evaluation trees for f and their expected costs

Now consider the problem of verifying f. Recall that a verification strategy for f consists of two evaluation strategies, one for assignments in N_1(f) and one for assignments in N_0(f). If VER(f) were equal to EVAL(f), then since T* is an optimal adaptive strategy for f, it would have to be σ-optimal for each σ ∈ {0, 1}. Otherwise, if T* were not σ-optimal for some σ, one could achieve an expected verification cost lower than EVAL(f) by using a σ-optimal tree for the assignments in N_σ(f) and the tree T* for the assignments in N_{1-σ}(f).

In Figure 2 we show a truncated version of an evaluation tree for f whose 1-cost is smaller than that of T*. (In fact, its 1-cost is the optimal 1-cost.) In the figure, X designates a leaf which is not reachable on any x for which f(x) = 1, and thus that node and its descendants in the original tree do not affect the 1-cost.

Because the 1-cost of the tree in Figure 2 is less than the 1-cost of the optimal evaluation tree T* in Figure 1, T* is not 1-optimal, and therefore VER(f) < EVAL(f). ∎

Figure 2: Tree with optimal 1-cost

Acknowledgments

Partial support for this work came from NSF Award IIS-1217968 (for all authors), from NSF Award IIS-1909335 (for L. Hellerstein), and from a PSC-CUNY Award, jointly funded by The Professional Staff Congress and The City University of New York (for D. Kletenik). We thank Zach Pomerantz for experiments that gave us useful insights into the goal value of symmetric functions, and the anonymous referees for their comments.

References