 # All Classical Adversary Methods are Equivalent for Total Functions

We show that all known classical adversary lower bounds on randomized query complexity are equivalent for total functions, and are equal to the fractional block sensitivity fbs(f). That includes the Kolmogorov complexity bound of Laplante and Magniez and the earlier relational adversary bound of Aaronson. For partial functions, we show unbounded separations between fbs(f) and other adversary bounds, as well as between the relational and Kolmogorov complexity bounds. We also show that, for partial functions, fractional block sensitivity cannot give lower bounds larger than √(n ·bs(f)), where n is the number of variables and bs(f) is the block sensitivity. Then we exhibit a partial function f that matches this upper bound, fbs(f) = Ω(√(n ·bs(f))).


## 1 Introduction

Query complexity of functions is one of the simplest and most useful models of computation. It is used to show lower bounds on the amount of time required to solve a computational task, and to compare the capabilities of the quantum, randomized and deterministic models of computation. Thus providing lower bounds in the query model is essential in understanding the complexity of computational problems.

In the query model, an algorithm has to compute a function f(x), given a string x ∈ Σⁿ, where Σ and the output alphabet Σ′ are finite alphabets. With a single query, it can provide the oracle with an index i ∈ [n] and receive back the value x_i. After a number of queries (possibly adaptive), the algorithm must output f(x). The cost of the computation is the number of queries made by the algorithm.
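As an informal illustration of this model (a sketch of ours, not from the paper), the oracle can be mimicked in code by hiding the input behind a query counter; all names below are our own:

```python
# A minimal sketch of the query model: the algorithm sees the input only
# through an oracle returning x[i] for a queried index i, and its cost is
# the number of queries made.

def make_oracle(x):
    """Wrap the input string x in a query-counting oracle."""
    count = [0]
    def query(i):
        count[0] += 1
        return x[i]
    return query, count

def compute_or(query, n):
    """A deterministic algorithm for Or_n: query until a 1 is found."""
    for i in range(n):
        if query(i) == 1:
            return 1
    return 0

x = (0, 0, 1, 0)
query, count = make_oracle(x)
assert compute_or(query, len(x)) == 1
assert count[0] == 3  # stopped right after finding the first 1
```

In the worst case (the all-zeros input) this algorithm queries all n positions, which is indeed the deterministic query complexity of Or_n.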

The query complexity of a function f in the deterministic setting is denoted by D(f) and is also called the decision tree complexity. The two-sided bounded-error randomized and quantum query complexities are denoted by R(f) and Q(f), respectively (which means that given any input, the algorithm must produce a correct answer with probability at least 2/3). For a comprehensive survey on the power of these models, see [BdW02], and for the state-of-the-art relationships between them, see [ABDK16].

In this work, we investigate the relation among a certain set of lower bound techniques on R(f), called the classical adversary methods, and how they connect to other well-known lower bounds on the randomized query complexity.

### 1.1 Known Lower Bounds

One of the first general lower bound methods on randomized query complexity is Yao’s minimax principle, which states that it is sufficient to exhibit a hard distribution on the inputs and lower bound the complexity of any deterministic algorithm under such distribution [Yao77]. Yao’s minimax principle is known to be optimal for any function but involves a hard-to-describe and hard-to-compute quantity (the complexity of the best deterministic algorithm under some distribution).

More concrete randomized lower bounds are block sensitivity bs(f) [Nis89] and the approximate degree of the polynomial representing the function, introduced by Nisan and Szegedy [NS94]. Afterwards, Aaronson extended the notion of certificate complexity C(f) (a deterministic lower bound) to the randomized setting by introducing the randomized certificate complexity RC(f) [Aar08]. Following this result, both Tal and, independently, Gilmer, Saks and Srinivasan discovered the fractional block sensitivity lower bound fbs(f) [Tal13, GSS16], which is equal to the fractional certificate complexity measure FC(f), as the two are dual linear programs. Since these measures are relaxations of block sensitivity and certificate complexity written as integer programs, they satisfy the following hierarchy:

 bs(f) ≤ fbs(f) = FC(f) ≤ C(f).

Perhaps surprisingly, fractional block sensitivity turned out to be equivalent to randomized certificate complexity, RC(f) = Θ(fbs(f)). Approximate degree and fractional block sensitivity are incomparable in general, but it has been shown that fbs(f) = O(deg̃(f)²) [KT16] and deg̃(f) = O(bs(f)³) [Nis89, BBC01].

Currently one of the strongest lower bounds is the partition bound prt(f) of Jain and Klauck [JK10], which is larger than all of the above-mentioned randomized lower bounds (even the approximate degree), as well as the classical adversary methods listed below. Its power is illustrated by the Tribes function (an And of Ors on n variables), where it gives a tight Ω(n) lower bound, while all of the other lower bounds give only O(√n). The quantum query complexity Q(f) is also a powerful lower bound on R(f), as it is incomparable with prt(f) [AKK16]. Recently, Ben-David and Kothari introduced the randomized sabotage complexity lower bound RS(f), which can be even larger than prt(f) and Q(f) for some functions [BDK16], and so far no examples are known where it is smaller.

In a separate line of research, Ambainis gave a versatile quantum adversary lower bound method with a wide range of applications [Amb00]. Since then, many generalizations of the quantum adversary method have been introduced (see [ŠS06] for a list of known quantum adversary bounds). Several of these formulations have been lifted back to the randomized setting. Aaronson proved a classical analogue of Ambainis' relational adversary bound and used it to provide a lower bound for the local search problem [Aar06]. Laplante and Magniez introduced the Kolmogorov complexity adversary bound for both the quantum and classical settings and showed that it subsumes many other adversary techniques [LM04]. They also gave a classical variation of Ambainis' adversary bound in a different way than Aaronson. Some of the other adversary methods, like the spectral adversary, have not been generalized back to the randomized setting.

While some relations between the adversary bounds had been known before, Špalek and Szegedy proved that practically all known quantum adversary methods are in fact equivalent [ŠS06] (this excludes the general quantum adversary bound, which gives an exact estimate on quantum query complexity for all Boolean functions [HLS07, Rei09]). This result cannot be immediately generalized to the classical setting, as the equivalence follows through the spectral adversary, which has no classical analogue. They also showed that the quantum adversary cannot give lower bounds better than a certain “certificate complexity barrier”. Recently, Kulkarni and Tal strengthened the barrier using fractional certificate complexity. Specifically, for any Boolean function f, the quantum adversary is at most √(FC⁰(f) · FC¹(f)) if f is total, and at most O(√(n · min{FC⁰(f), FC¹(f)})) if f is partial [KT16] (here, FC⁰(f) and FC¹(f) stand for the maximum fractional certificate complexity over negative and positive inputs, respectively).

With the advances on the quantum adversary front, one could hope for a similar equivalence result to also hold for the classical adversary bounds. Some relations are known: Laplante and Magniez have shown that the Kolmogorov complexity lower bound is at least as strong as Aaronson's relational and Ambainis' weighted adversary bounds [LM04]. Jain and Klauck have noted that the minimax over probability distributions adversary bound is at most O(fbs(f)) for total functions [JK10]. In general, the relationships among the classical adversary bounds have until this point remained unclear.

### 1.2 Our Results

Our main result shows that the known classical adversary bounds are all equivalent for total functions. That includes Aaronson's relational adversary bound CRA(f), Ambainis' weighted adversary bound CWA(f), the Kolmogorov complexity adversary bound CKA(f) and the minimax over probability distributions adversary bound CMM(f). Surprisingly, they are all equal (up to constant factors) to the fractional block sensitivity fbs(f).

We also add to this list a certain restricted version of the relational adversary bound. More specifically, we require that the relation matrix between the inputs has rank 1, and denote this (seemingly weaker) lower bound by CRA1(f). Thus for total functions CRA1(f) = Θ(fbs(f)), where the latter is much easier to calculate for Boolean functions.

All this shows that fbs(f) is a fundamental lower bound measure for total functions with many different formulations, including the previously known RC(f) and FC(f). Another interesting corollary is that, since the quantum certificate complexity QC(f) = Θ(√(RC(f))) is a lower bound on the quantum query complexity [Aar08], by taking the square root of any of the adversary bounds above we obtain a quantum lower bound for total functions.

Along the way, for partial functions we show the equivalence between CRA(f) and CWA(f), and also between CKA(f) and CMM(f). In the case of partial functions, fbs(f) becomes weaker than all of these adversary methods. In particular, we show an example of a function where each of these adversary methods gives an Ω(n) lower bound, while fractional block sensitivity is O(1). We also show that CWA(f) and CKA(f) are not equivalent for partial functions, as there exists an example where CWA(f) is constant, but CKA(f) = Ω(log n).

We also show a “block sensitivity” barrier for fractional block sensitivity. Namely, for any partial function f, the fractional block sensitivity is at most √(n · bs(f)). Note that the adversary bounds do not bear this limitation, as witnessed by the aforementioned example. This result is tight, as we exhibit a partial function that matches this upper bound.

Even though our results are similar to the quantum case in [ŠS06] in spirit, the proof methods are different.

## 2 Preliminaries

In this section we define the complexity measures we work with in the paper. In the following definitions and the rest of the paper, consider f to be a partial function f : S → Σ′ with domain S ⊆ Σⁿ, where Σ, Σ′ are finite alphabets and n is the length of the input string. Throughout the paper we assume that f is not constant.

#### Block Sensitivity.

For x ∈ S, a subset of indices B ⊆ [n] is a sensitive block of x if there exists a y ∈ S such that f(x) ≠ f(y) and the set of positions where x and y differ is exactly B. The block sensitivity bs(f, x) of f on x is the maximum number of disjoint subsets B₁, …, B_k such that each B_j is a sensitive block of x. The block sensitivity of f is defined as bs(f) = max_{x∈S} bs(f, x).
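For small binary-input functions, this definition can be checked directly by exhaustive search. The following brute-force sketch (our illustration, not from the paper; it assumes Σ = {0, 1} so that a block is specified by the set of flipped bits) enumerates sensitive blocks and then finds a largest disjoint family:

```python
from itertools import combinations, product

def block_sensitivity_at(f, x, domain):
    """Brute-force bs(f, x): the largest number of pairwise disjoint
    sensitive blocks of x, for a (partial) function on binary strings."""
    n = len(x)
    blocks = []
    for r in range(1, n + 1):
        for B in combinations(range(n), r):
            # B is sensitive if flipping exactly the bits in B yields an
            # input y in the domain with f(y) != f(x)
            y = tuple(b ^ 1 if i in B else b for i, b in enumerate(x))
            if y in domain and f(y) != f(x):
                blocks.append(set(B))
    best = 0
    def search(chosen, rest):
        nonlocal best
        best = max(best, len(chosen))
        for k, B in enumerate(rest):
            if all(B.isdisjoint(C) for C in chosen):
                search(chosen + [B], rest[k + 1:])
    search([], blocks)
    return best

# demo: Or on 3 bits at 000 has three disjoint sensitive singleton blocks
domain = set(product((0, 1), repeat=3))
f_or = lambda z: int(any(z))
assert block_sensitivity_at(f_or, (0, 0, 0), domain) == 3
```

Both loops are exponential in n, so this is only a sanity check for tiny examples, not an efficient procedure.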

Let 𝓑 be the set of sensitive blocks of x. The fractional block sensitivity fbs(f, x) of f on x is defined as the optimal value of the following linear program:

 maximize ∑_{B∈𝓑} w_x(B) subject to ∀i ∈ [n] : ∑_{B∈𝓑: i∈B} w_x(B) ≤ 1.

Here w_x(B) ≥ 0 for each B ∈ 𝓑. The fractional block sensitivity of f is defined as fbs(f) = max_{x∈S} fbs(f, x).

When the weights are taken as either 0 or 1, the optimal solution to the corresponding integer program is equal to bs(f, x). Hence fbs(f, x) is a relaxation of bs(f, x), and we have bs(f) ≤ fbs(f).

#### Certificate complexity.

An assignment is a map A : [n] → Σ ∪ {∗}. Informally, the elements of Σ are the values fixed by the assignment and ∗ is a wildcard symbol that can be any letter of Σ. A string x ∈ Σⁿ is said to be consistent with A if for all i ∈ [n] such that A(i) ≠ ∗, we have x_i = A(i). The length of A is the number of positions that A fixes to a letter of Σ.

For an x ∈ S, an f(x)-certificate for x is an assignment A such that for all strings y ∈ S consistent with A we have f(y) = f(x). The certificate complexity C(f, x) of f on x is the length of the shortest f(x)-certificate that x is consistent with. The certificate complexity of f is defined as C(f) = max_{x∈S} C(f, x).

The fractional certificate complexity FC(f, x) of f on x is defined as the optimal value of the following linear program:

 minimize ∑_{i∈[n]} v_x(i) subject to ∀y ∈ S s.t. f(x) ≠ f(y) : ∑_{i: x_i≠y_i} v_x(i) ≥ 1.

Here v_x(i) ≥ 0 for each i ∈ [n]. The fractional certificate complexity of f is defined as FC(f) = max_{x∈S} FC(f, x).

When the weights are taken as either 0 or 1, the optimal solution to the corresponding integer program is equal to C(f, x). Hence FC(f, x) is a relaxation of C(f, x), and we have FC(f) ≤ C(f).

It has been shown that fbs(f, x) and FC(f, x) are dual linear programs, hence their optimal values are equal, fbs(f, x) = FC(f, x). As an immediate corollary, fbs(f) = FC(f).
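For a concrete illustration (our example, not from the paper), consider the total function Or_n at the all-zeros input x = 0ⁿ. Every singleton {i} is a sensitive block, and the constraint for each index i allows total weight at most 1 on the blocks containing i, so setting w_x({i}) = 1 for all i is feasible; conversely, summing the n constraints shows that any feasible solution has ∑_{B} w_x(B) ≤ n. Hence

 fbs(Or_n, 0ⁿ) = n.

Dually, each input e_i with a single 1 at position i imposes the constraint v_x(i) ≥ 1 in the FC program, so every feasible solution has value at least n, and v_x ≡ 1 achieves it:

 FC(Or_n, 0ⁿ) = n = fbs(Or_n, 0ⁿ),

in agreement with the duality statement.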

#### One-sided measures.

For Boolean functions with Σ′ = {0, 1}, for each measure M from bs, fbs, C and FC and a Boolean value b ∈ {0, 1}, define the corresponding one-sided measure as

 M^b(f) = max_{x∈f⁻¹(b)} M(f, x).

According to the earlier definitions, we then have M(f) = max{M⁰(f), M¹(f)}. These one-sided measures are useful when, for example, working with compositions of Or with some Boolean function.

#### Kolmogorov complexity.

A set of strings S is called prefix-free if there are no two strings in S such that one is a proper prefix of the other. Equivalently, we can think of the strings as programs for a Turing machine. Let M be a universal Turing machine and fix a prefix-free set S. The prefix-free Kolmogorov complexity of x given y, denoted K(x|y), is defined as the length of the shortest program from S that prints x when given y:

 K(x|y) = min{|P| ∣ P ∈ S, M(P, y) = x}.

For a detailed introduction to Kolmogorov complexity, we refer the reader to [LV08].

Let f : S → Σ′ be a partial function, where S ⊆ Σⁿ. The following measures are all known to be lower bounds on the bounded-error randomized query complexity R(f).

#### Relational adversary [Aar06].

Let R : S × S → ℝ≥0 be a real-valued function such that R(x, y) = R(y, x) for all x, y ∈ S and R(x, y) = 0 whenever f(x) = f(y). Then for x ∈ S and an index i ∈ [n], let (compared to Aaronson's definition, we take the reciprocals of the expressions)

 θ(x, i) = ∑_{y∈S} R(x, y) / ∑_{y∈S: x_i≠y_i} R(x, y),

where θ(x, i) is undefined if the denominator is 0. Denote (one can show that there exist optimal solutions R for CRA(f), thus we can maximize over R instead of taking the supremum)

 CRA(f) = max_R min_{x,y∈S, i∈[n]: R(x,y)>0, x_i≠y_i} max{θ(x, i), θ(y, i)}.

We introduce the following restriction of the relational adversary bound. Let R : S × S → ℝ≥0 be any matrix of rank 1, such that:

• There exist u, v : S → ℝ≥0 such that R(x, y) = u(x) · v(y) for all x, y ∈ S.

• R(x, y) = 0 whenever f(x) = f(y).

Let X = {x ∈ S | u(x) > 0} and Y = {y ∈ S | v(y) > 0}. Note that for every x ∈ S, either u(x) or v(x) must be 0, as R(x, x) must be 0; therefore X ∩ Y = ∅. Then denote

 CRA1(f) = max_{u,v} min_{x∈X, y∈Y, i∈[n]: u(x)v(y)>0, x_i≠y_i} max{θ(x, i), θ(y, i)},

where θ(x, i) can be simplified to

 θ(x, i) = ∑_{y∈Y} v(y) / ∑_{y∈Y: x_i≠y_i} v(y),

since the factor u(x) cancels (and symmetrically for θ(y, i) with u).

Naturally, CRA1(f) ≤ CRA(f).

As R(x, y) = 0 whenever f(x) = f(y), we have that for every output value g ∈ Σ′, either u or v must be 0 everywhere on f⁻¹(g). Therefore, CRA1(f) effectively bounds the complexity of differentiating between two non-overlapping sets of outputs. This leads to the following equivalent definition for CRA1(f):

###### Proposition 1.

Let G₁, G₂ be a partition of the output alphabet, i.e., G₁ ∪ G₂ = Σ′ and G₁ ∩ G₂ = ∅. Let p and q be probability distributions over A = f⁻¹(G₁) and B = f⁻¹(G₂), respectively. Then

 CRA1(f) = max_{A,B,p,q} min 1 / min{Pr_{x∼p}[x_i ≠ g₁], Pr_{y∼q}[y_i ≠ g₂]},

where the minimum is taken over all i ∈ [n] and letters g₁ ≠ g₂ for which there exist x ∈ A, y ∈ B with p(x)q(y) > 0.

For the proof of this proposition see Appendix A.

#### Weighted adversary bound [Amb03, LM04].

Let w, w′ be weight schemes as follows.

• Every pair (x, y) ∈ S² is assigned a non-negative weight w(x, y) = w(y, x) such that w(x, y) = 0 whenever f(x) = f(y).

• Every triple (x, y, i) ∈ S² × [n] is assigned a non-negative weight w′(x, y, i) such that w′(x, y, i) = 0 whenever x_i = y_i or f(x) = f(y), and w′(x, y, i) · w′(y, x, i) ≥ w(x, y)² for all x, y, i such that x_i ≠ y_i.

For all x, i, let wt(x) = ∑_{y∈S} w(x, y) and v(x, i) = ∑_{y∈S} w′(x, y, i). Denote

 CWA(f) = max_{w,w′} min_{x,y∈S, i∈[n]: w(x,y)≠0, x_i≠y_i} max{wt(x)/v(x, i), wt(y)/v(y, i)}.

#### Kolmogorov complexity [LM04].

Let σ be any finite string (by the argument of [ŠS06], we take the minimum over the strings σ instead of the algorithms computing f). Denote

 CKA(f) = min_σ max_{x,y∈S: f(x)≠f(y)} 1 / ∑_{i: x_i≠y_i} min{2^{−K(i|x,σ)}, 2^{−K(i|y,σ)}}.

#### Minimax over probability distributions [LM04].

Let p = {p_x | x ∈ S} be a set of probability distributions over [n]. Denote

 CMM(f) = min_p max_{x,y∈S: f(x)≠f(y)} 1 / ∑_{i: x_i≠y_i} min{p_x(i), p_y(i)}.

## 4 Equivalence of the Adversary Bounds

In this section we prove the main theorem:

###### Theorem 2.

Let f : S → {0, 1} be a partial Boolean function, where S ⊆ {0, 1}ⁿ. Then

• fbs(f) ≤ CRA1(f) ≤ CRA(f) = CWA(f) ≤ CKA(f) = Θ(CMM(f)).

Moreover, for total functions f, we have

 fbs(f) = CMM(f).

The CWA(f) ≤ CKA(f) part has already been proven in [LM04].

### 4.1 Fractional Block Sensitivity and the Weighted Adversary Method

First, we prove that fractional block sensitivity lower bounds the relational adversary bound for any partial function.

###### Proposition 3.

Let f : S → {0, 1} be a partial Boolean function, where S ⊆ {0, 1}ⁿ. Then

 fbs(f)≤CRA1(f).
###### Proof.

Let x ∈ S be such that fbs(f, x) = fbs(f), and denote S′ = {y ∈ S | f(y) ≠ f(x)}. Let X = {x} and Y = S′.

Let 𝓑 be the set of sensitive blocks of x. Let {w(B)}_{B∈𝓑} be an optimal solution to the fbs(f, x) linear program, that is, ∑_{B∈𝓑} w(B) = fbs(f, x). For each B ∈ 𝓑, pick a single y_B ∈ S′ such that x and y_B differ exactly on B. Then define R(x, y_B) = w(B) for all B ∈ 𝓑, and set R to 0 everywhere else. It is clear that R has a corresponding rank 1 matrix, as it has only one row (corresponding to x) that is not all zeros.

Let y = y_B be any input such that R(x, y) > 0. Then for any i ∈ [n] such that x_i ≠ y_i,

 θ(x, i) = ∑_{B∈𝓑} w(B) / ∑_{B∈𝓑: i∈B} w(B) = fbs(f, x) / ∑_{B∈𝓑: i∈B} w(B) ≥ fbs(f),

as ∑_{B∈𝓑: i∈B} w(B) ≤ 1 by the constraints of the linear program. On the other hand, note that

 θ(y, i) = w(B) / w(B) = 1,

where B is the block on which x and y_B differ. Therefore, for this R,

 min_{x,y∈S, i∈[n]: R(x,y)>0, x_i≠y_i} max{θ(x, i), θ(y, i)} ≥ min_{y∈S′, i∈[n]: R(x,y)>0, x_i≠y_i} max{fbs(f), 1} = fbs(f),

and the claim follows. ∎
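To make the construction concrete, here is a tiny numeric instance (our illustration, not part of the paper): for And on 2 bits at x = 11, the sensitive blocks are the two singletons, the optimal weights are w = 1 each, so fbs(f, x) = 2, and the rank-1 relation is supported on the single row x:

```python
# f = And on 2 bits, x = 11; sensitive blocks {1} and {2} with weight 1,
# chosen inputs y_B obtained by flipping the bits of B.

x = (1, 1)
y_for_block = {(0,): (0, 1), (1,): (1, 0)}   # block -> y_B
w = {(0,): 1.0, (1,): 1.0}
fbs_x = sum(w.values())                       # = fbs(f, x) = 2

def theta(i):
    """theta(x, i) for the relation R(x, y_B) = w(B) built above."""
    num = sum(w[B] for B in w)
    den = sum(w[B] for B in w if x[i] != y_for_block[B][i])
    return num / den

# each index lies in exactly one chosen block, so theta(x, i) = fbs(f, x)
assert theta(0) == fbs_x == 2.0
assert theta(1) == 2.0
```

This matches the proof: θ(x, i) ≥ fbs(f) on every differing index, while θ(y_B, i) = 1.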

As mentioned in [LM04], CRA(f) is a weaker version of CWA(f). We show that in fact they are exactly equal to each other:

###### Proposition 4.

Let f : S → {0, 1} be a partial Boolean function, where S ⊆ {0, 1}ⁿ. Then

 CRA(f)=CWA(f).
###### Proof.
• First we show that CRA(f) ≤ CWA(f).

Suppose that R is the function for which the relational adversary bound achieves its maximum value. Let w(x, y) = w′(x, y, i) = R(x, y) for any x, y, i such that f(x) ≠ f(y) and x_i ≠ y_i. This pair of weight schemes satisfies the conditions of the weighted adversary bound. The value of the latter with w, w′ is equal to CRA(f), as then wt(x)/v(x, i) = θ(x, i). As the weighted adversary bound is a maximization measure, CWA(f) ≥ CRA(f).

• Now we show that CWA(f) ≤ CRA(f).

Let w, w′ be optimal weight schemes for the weighted adversary bound. Let R(x, y) = w(x, y) for any x, y such that f(x) ≠ f(y). Denote S′ = {y ∈ S | f(y) ≠ f(x)}. Then

 θ(x, i) = ∑_{y∈S′} R(x, y) / ∑_{y∈S′: x_i≠y_i} R(x, y) = ∑_{y∈S′} w(x, y) / ∑_{y∈S′: x_i≠y_i} w(x, y) ≥ ∑_{y∈S′} w(x, y) / ∑_{y∈S′: x_i≠y_i} w′(x, y, i) = wt(x)/v(x, i),

as w′(x, y, i) ≥ w(x, y) by the properties of w′. Similarly, θ(y, i) ≥ wt(y)/v(y, i). Therefore, for any x, y ∈ S and i ∈ [n] such that R(x, y) > 0 and x_i ≠ y_i, we have

 max{θ(x, i), θ(y, i)} ≥ max{wt(x)/v(x, i), wt(y)/v(y, i)}.

As the relational adversary bound is also a maximization measure, CRA(f) ≥ CWA(f). ∎

The proof of this proposition also shows why CRA(f) and CWA(f) are equivalent — the weight function w′ is redundant in the classical case (in contrast to the quantum setting).

### 4.2 Kolmogorov Complexity and Minimax over Distributions

In this section we prove the equivalence between the minimax over probability distributions and the Kolmogorov complexity adversary bound. It has been shown in the proof of the main theorem of [LM04] that CKA(f) = O(CMM(f)). Here we show the other direction using a well-known result from coding theory.

###### Proposition 5 (Kraft’s inequality).

Let S be any prefix-free set of finite strings. Then

 ∑_{x∈S} 2^{−|x|} ≤ 1.
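As a quick concrete check of Kraft's inequality (our illustrative example; any prefix-free code obeys the bound):

```python
# A complete prefix-free binary code; its Kraft sum is exactly 1.
codes = ["0", "10", "110", "111"]

# verify the set is prefix-free: no codeword is a proper prefix of another
for a in codes:
    for b in codes:
        assert a == b or not b.startswith(a)

kraft_sum = sum(2.0 ** -len(c) for c in codes)
assert kraft_sum <= 1.0   # here 1/2 + 1/4 + 1/8 + 1/8 = 1
```

Dropping a codeword makes the sum strictly smaller than 1, while adding any new binary string would necessarily violate prefix-freeness for this code.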
###### Proposition 6.

Let f : S → {0, 1} be a partial Boolean function, where S ⊆ {0, 1}ⁿ. Then

 CKA(f)≥CMM(f).
###### Proof.

Let σ be the binary string for which CKA(f) achieves its smallest value. Define a set of probability distributions {p_x} on [n] as follows. Let s_x = ∑_{i∈[n]} 2^{−K(i|x,σ)} and p_x(i) = 2^{−K(i|x,σ)} / s_x. The set of programs that print out i, given x and σ, is prefix-free (by the definition of K), as the information given to all programs is the same. Thus by Kraft's inequality, we have s_x ≤ 1.

Examine the value of the minimax bound with this set of probability distributions. For any x, y ∈ S with f(x) ≠ f(y) and any i such that x_i ≠ y_i, we have

 min{p_x(i), p_y(i)} = min{2^{−K(i|x,σ)}/s_x, 2^{−K(i|y,σ)}/s_y} ≥ min{2^{−K(i|x,σ)}, 2^{−K(i|y,σ)}}.

Therefore, CMM(f) ≤ CKA(f). ∎

### 4.3 Fractional Block Sensitivity and Minimax over Distributions

Now we proceed to prove that for total functions, fractional block sensitivity is equal to the minimax over probability distributions. The latter has an equivalent form of the following program.

###### Lemma 7.

For any partial Boolean function f : S → {0, 1}, where S ⊆ {0, 1}ⁿ, CMM(f) equals the optimal value of the program

 minimize max_{x∈S} ∑_{i∈[n]} v_x(i) subject to ∀x, y ∈ S s.t. f(x) ≠ f(y) : ∑_{i: x_i≠y_i} min{v_x(i), v_y(i)} ≥ 1,

where v = {v_x | x ∈ S} is any set of weight functions v_x : [n] → ℝ≥0.

###### Proof.

Denote by μ the optimal value of the given program.

• First we prove that μ ≤ CMM(f).

Construct a set of weight functions by v_x(i) = CMM(f) · p_x(i), where {p_x} is an optimal set of probability distributions for the minimax bound. Then for any x, y ∈ S such that f(x) ≠ f(y),

 ∑_{i: x_i≠y_i} min{v_x(i), v_y(i)} = CMM(f) · ∑_{i: x_i≠y_i} min{p_x(i), p_y(i)} ≥ CMM(f) · 1/CMM(f) = 1.

On the other hand, the value of this solution is given by

 max_{x∈S} ∑_{i∈[n]} v_x(i) = max_{x∈S} CMM(f) · ∑_{i∈[n]} p_x(i) = CMM(f).

• Now we prove that CMM(f) ≤ μ.

Let {v_x} be an optimal solution for the given program. Set s_x = ∑_{i∈[n]} v_x(i) ≤ μ. Construct a set of probability distributions by p_x(i) = v_x(i)/s_x. Then for any x, y ∈ S such that f(x) ≠ f(y), we have

 ∑_{i: x_i≠y_i} min{p_x(i), p_y(i)} = ∑_{i: x_i≠y_i} min{v_x(i)/s_x, v_y(i)/s_y} ≥ (1/μ) · ∑_{i: x_i≠y_i} min{v_x(i), v_y(i)} ≥ 1/μ.

Therefore, CMM(f) ≤ μ. ∎

It remains to prove that for total functions the minimax over probability distributions is equal to the fractional certificate complexity FC(f). The result then follows since fbs(f) = FC(f). The proof of this claim is almost immediate in light of the following “fractional certificate intersection” lemma by Kulkarni and Tal:

###### Proposition 8 ([KT16], Lemma 6.2).

Let f be a total function (Kulkarni and Tal prove the lemma for Boolean functions, but it is straightforward to check that their proof also works for functions with arbitrary input and output alphabets) and let {v_x | x ∈ S} be a feasible solution for the FC(f) linear program. Then for any two inputs x, y ∈ S such that f(x) ≠ f(y), we have

 ∑_{i: x_i≠y_i} min{v_x(i), v_y(i)} ≥ 1.

Let f be a total function. Suppose that {v_x} is a feasible solution for the program of Lemma 7. Then for any x, y ∈ S such that f(x) ≠ f(y),

 ∑_{i: x_i≠y_i} v_x(i) ≥ ∑_{i: x_i≠y_i} min{v_x(i), v_y(i)} ≥ 1.

Hence it is also a feasible solution for the FC(f) linear program. On the other hand, if {v_x} is a feasible solution for the FC(f) linear program, then it is also a feasible solution for the program of Lemma 7 by Proposition 8. Therefore, CMM(f) = FC(f).

## 5 Separations for Partial Functions

### 5.1 Fractional Block Sensitivity vs. Adversary Bounds

Here we show an example of a partial function that provides an unbounded separation between the adversary measures and fractional block sensitivity.

###### Theorem 9.

There exists a partial Boolean function f : S → {0, 1}, where S ⊆ {0, 1}ⁿ, such that fbs(f) = O(1) and CRA1(f) = Ω(n).

###### Proof.

Let n be an even number and S ⊆ {0, 1}ⁿ be the set of bit strings of Hamming weight 1. Define the “greater than half” function GTH_n(x) to be 1 iff i > n/2 for the unique position i with x_i = 1.

For the first part, the certificate complexity is constant, C(f) = 1: to certify the value of GTH_n, it is enough to certify the position of the unique i such that x_i = 1. The claim follows, as fbs(f, x) ≤ C(f, x) for any x ∈ S.

For the second part, by Theorem 2, it suffices to show that CRA1(f) ≥ n/2. Let X = GTH_n⁻¹(0) and Y = GTH_n⁻¹(1). Let R(x, y) = 1 for all x ∈ X, y ∈ Y. Suppose that x ∈ X, y ∈ Y and i ∈ [n] are such that x_i ≠ y_i and x_i = 1 (and thus y_i = 0). Then

 θ(x, i) = ∑_{y*∈Y} R(x, y*) / ∑_{y*∈Y: x_i≠y*_i} R(x, y*) = (n/2)/(n/2) = 1,  θ(y, i) = ∑_{x*∈X} R(x*, y) / ∑_{x*∈X: x*_i≠y_i} R(x*, y) = (n/2)/1 = n/2.

Therefore, max{θ(x, i), θ(y, i)} = n/2. Similarly, if i is such an index that x_i ≠ y_i and y_i = 1, we also have max{θ(x, i), θ(y, i)} ≥ n/2. Also note that R has a corresponding rank 1 matrix, hence CRA1(f) ≥ n/2. ∎
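The θ values in this proof are easy to verify numerically. The sketch below is ours (not from the paper), under the reading that the input e_i is positive iff its single 1 lies in the second half, with R ≡ 1 on X × Y:

```python
# Numeric sanity check of the theta values for the "greater than half"
# function on weight-1 inputs, with R(x, y) = 1 on all of X x Y.

n = 8

def e(i):
    """The weight-1 string with its single 1 at (0-based) position i."""
    return tuple(1 if j == i else 0 for j in range(n))

X = [e(i) for i in range(n // 2)]        # negative inputs
Y = [e(i) for i in range(n // 2, n)]     # positive inputs

def theta_x(x, i):
    return sum(1 for y in Y) / sum(1 for y in Y if x[i] != y[i])

def theta_y(y, i):
    return sum(1 for x in X) / sum(1 for x in X if x[i] != y[i])

# on every differing index, one of the two ratios equals n/2
for x in X:
    for y in Y:
        for i in range(n):
            if x[i] != y[i]:
                assert max(theta_x(x, i), theta_y(y, i)) == n / 2
```

Each pair (x, y) differs on exactly two indices (the 1-positions of x and of y), and on each of them one ratio is 1 while the other is n/2, as in the proof.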

We note that a similar function was used to prove lower bounds on the problem of inverting a permutation [Amb00, Aar06]. More specifically, given a permutation σ on [n], the function is 0 if σ⁻¹(1) ≤ n/2 and 1 otherwise. With a single query, one can find the value of σ(i) for any i. By construction, a lower bound on GTH_n also gives a lower bound on computing this function.

### 5.2 Weighted Adversary vs. Kolmogorov Complexity Bound

Here we show that, for a variant of the ordered search problem, the Kolmogorov complexity bound gives a tight logarithmic lower bound, while the weighted adversary gives only a constant lower bound.

###### Theorem 10.

There exists a partial Boolean function f : S → {0, 1}, where S ⊆ {0, 1}ⁿ, such that CWA(f) = O(1) and CKA(f) = Ω(log n).

###### Proof.

Let S = {0^k 1^{n−k} | 0 ≤ k ≤ n}. In other words, S consists of the strings starting with some number of 0s, followed by all 1s. Define the “ordered search parity” function OSP_n(x) to be Ind(x) mod 2, where Ind(x) is the last index i such that x_i = 0 (in the special case x = 1ⁿ, assume that Ind(x) = 0).

For simplicity, further assume that n is even. First, we prove that CKA(OSP_n) = Ω(log n). We use the argument of Laplante and Magniez and the distance scheme method they have adapted from [HNS01]:

###### Proposition 11 ([LM04], Theorem 5).

Let f : S → {0, 1} be a Boolean function, where S ⊆ {0, 1}ⁿ. Let D be a non-negative integer function on S × S such that D(x, y) = 0 whenever f(x) = f(y). Let W = ∑ 1/D(x, y), where the sum runs over all pairs x, y with D(x, y) ≠ 0. Define the right load RL(x, i) to be the maximum, over all values d > 0, of the number of y ∈ S such that D(x, y) = d and x_i ≠ y_i. The left load LL(y, i) is defined similarly, inverting x and y. Then

 CKA(f) = Ω( W/|S| · min_{x,y,i: D(x,y)≠0, x_i≠y_i} max{1/RL(x, i), 1/LL(y, i)} ).

For each pair x, y ∈ S such that f(x) ≠ f(y), let D(x, y) = |Ind(x) − Ind(y)|. Then we have

 W = ∑_{k=1}^{n/2} ((n+1) − (2k−1)) · 1/(2k−1) = (n+1) ∑_{k=1}^{n/2} 1/(2k−1) − n/2.

Since ∑_{k=1}^{n/2} 1/(2k−1) = Θ(log n) as a partial sum of the harmonic series, we have that W = Θ(n log n).

On the other hand, since for every x, every index i and every positive integer d there is at most one y such that D(x, y) = d and x_i ≠ y_i, we have that RL(x, i) ≤ 1 and LL(y, i) ≤ 1 for any x, y, i such that D(x, y) ≠ 0 and x_i ≠ y_i. Since |S| = n + 1, by Proposition 11,

 CKA(OSP_n) = Ω(n log n / n) = Ω(log n).

Now we prove that CWA(OSP_n) = O(1). Let N = n/2; we start by fixing an enumeration of S. By x(i), i ∈ [N+1], we denote the unique element of S satisfying Ind(x(i)) = 2(i−1) (it is a negative input for OSP_n); by y(j), j ∈ [N], we denote the unique element of S satisfying Ind(y(j)) = 2j−1 (it is a positive input for OSP_n).

We claim that for every (N+1) × N matrix R = (r_ij) with non-negative entries we have

 min_{(i,j)∈[N+1]×[N]: r_ij>0} min_{t∈[n]: x(i)_t≠y(j)_t} max{θ(x(i), t), θ(y(j), t)} ≤ 2,

unless r_ij = 0 for all i, j. Since CRA(f) is defined only for relations R which are not identically zero, we conclude that CRA(OSP_n) ≤ 2, and by Proposition 4 also CWA(OSP_n) ≤ 2.

For all i ∈ [N+1], j ∈ [N], we set

 t_ij = min{t : x(i)_t ≠ y(j)_t} = 1 + min{Ind(x(i)), Ind(y(j))} = 2i−1 if i ≤ j, and 2j if i > j.
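This closed form is easy to verify on a small n. The sketch below is ours, assuming the enumeration Ind(x(i)) = 2(i−1), Ind(y(j)) = 2j−1 with 1-based indices t:

```python
# Check the closed form for t_ij: the first index where x(i) and y(j)
# differ is 2i-1 when i <= j, and 2j when i > j.

n = 10
N = n // 2

def s(ind):
    """The string of S with Ind = ind: 'ind' zeros followed by all ones."""
    return tuple(0 if t < ind else 1 for t in range(n))

for i in range(1, N + 2):
    for j in range(1, N + 1):
        x, y = s(2 * (i - 1)), s(2 * j - 1)
        t_ij = 1 + min(t for t in range(n) if x[t] != y[t])  # 1-based
        assert t_ij == (2 * i - 1 if i <= j else 2 * j)
```

The check passes for every pair (i, j), matching the case distinction in the displayed formula.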

We shall show that, unless R = 0, there is a pair (i, j) satisfying

 r_ij > 0 and max{θ(x(i), t_ij), θ(y(j), t_ij)} ≤ 2. (1)

Consider i > j. Then we have t_ij = 2j, x(i)_{t_ij} = 0 ≠ 1 = y(j)_{t_ij}, and

 θ(x(i), t_ij) = ∑_{k=1}^{N} r_ik / ∑_{k=1}^{j} r_ik,  θ(y(j), t_ij) = ∑_{l=1}^{N+1} r_lj / ∑_{l=j+1}^{N+1} r_lj. (2)

Now consider i ≤ j. Then we have t_ij = 2i−1, x(i)_{t_ij} = 1 ≠ 0 = y(j)_{t_ij}, and

 θ(x(i), t_ij) = ∑_{k=1}^{N} r_ik / ∑_{k=i}^{N} r_ik,  θ(y(j), t_ij) = ∑_{l=1}^{N+1} r_lj / ∑_{l=1}^{i} r_lj.