A New Minimax Theorem for Randomized Algorithms
The celebrated minimax principle of Yao (1977) says that for any Boolean-valued function f with finite domain, there is a distribution μ over the domain of f such that computing f to error ϵ against inputs from μ is just as hard as computing f to error ϵ on worst-case inputs. Notably, however, the distribution μ depends on the target error level ϵ: the hard distribution which is tight for bounded error might be trivial to solve to small bias, and the hard distribution which is tight for a small bias level might be far from tight for bounded error levels.

In this work, we introduce a new type of minimax theorem which can provide a hard distribution μ that works for all bias levels at once. We show that this works for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate log-rank. We also prove an improved version of Impagliazzo's hard-core lemma.

Our proofs rely on two innovations over the classical approach of using von Neumann's minimax theorem or linear programming duality. First, we use Sion's minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms. Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as "forecasting algorithms" evaluated by a proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties: for example, they can be amplified linearly instead of quadratically. We anticipate forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.
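The "forecasting algorithm" viewpoint can be illustrated with a minimal sketch (hypothetical, not from the paper): an algorithm is treated as a forecaster that reports a probability q that f(x) = 1, and is evaluated by a proper scoring rule such as the quadratic (Brier) score. The defining property of a proper scoring rule is that reporting the true probability maximizes the expected score, which the snippet checks numerically on a grid.

```python
# Minimal illustrative sketch (assumed names, not the paper's construction):
# a randomized algorithm viewed as a forecaster reporting a probability q
# that f(x) = 1, scored by the quadratic (Brier) proper scoring rule.

def brier_score(q: float, outcome: int) -> float:
    """Quadratic (Brier) score: higher is better, maximum 0."""
    return -(outcome - q) ** 2

def expected_score(q: float, p: float) -> float:
    """Expected score of report q when the true outcome is 1 with prob. p."""
    return p * brier_score(q, 1) + (1 - p) * brier_score(q, 0)

# Propriety: for each true probability p, the truthful report q = p
# maximizes the expected score (verified on a grid of hundredths).
grid = [i / 100 for i in range(101)]
for p in [0.1, 0.3, 0.5, 0.8]:
    best_q = max(grid, key=lambda q: expected_score(q, p))
    assert abs(best_q - p) < 1e-9
```

Under this rule the truthful expected score is -p(1-p), so a forecaster is rewarded for calibrated confidence rather than just for a majority-vote answer; this is the sense in which expected scores give a more fine-grained handle on bias than raw success probability.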