The Query Complexity of Mastermind with ℓ_p Distances

09/24/2019
by Manuel Fernandez, et al.

Consider a variant of the Mastermind game in which queries return ℓ_p distances rather than the usual Hamming distance. That is, a codemaker chooses a hidden vector y ∈ {-k, -k+1, ..., k-1, k}^n and answers queries of the form ‖y - x‖_p, where x ∈ {-k, -k+1, ..., k-1, k}^n. The goal is to minimize the number of queries needed to correctly guess y. Motivated by this question, in this work we develop a nonadaptive polynomial-time algorithm that works for a natural class of separable distance measures, i.e., coordinate-wise sums of functions of the absolute value. This class includes distances such as the smooth max (LogSumExp) as well as many widely studied M-estimator losses, such as the ℓ_p norms, the ℓ_1-ℓ_2 loss, the Huber loss, and the Fair estimator loss. Applying this result to ℓ_p queries yields an upper bound of O(min{n, n log k / log n}) queries for any real 1 ≤ p < ∞. We also show matching lower bounds, up to constant factors, for the ℓ_p problem; these hold even for adaptive algorithms and for the approximation version of the problem, in which the goal is to output y' such that ‖y' - y‖_p ≤ R, for any R ≤ k^{1-ε} n^{1/p} with constant ε > 0. Thus, essentially any approximation of this problem is as hard as finding the hidden vector exactly, up to constant factors. Finally, we show that for the noisy version of the problem, i.e., the setting in which the codemaker may answer a query with any q = (1 ± ε)‖y - x‖_p, no query-efficient algorithm exists.
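To make the query model concrete, here is a minimal sketch (not the paper's algorithm, which achieves O(n log k / log n) queries) of a naive nonadaptive baseline for the ℓ_1 case: for each coordinate i, query the vectors +k·e_i and -k·e_i. Since every |y_i| ≤ k, the two answers are (k - y_i) + Σ_{j≠i}|y_j| and (k + y_i) + Σ_{j≠i}|y_j|, so their difference is exactly 2y_i, and 2n queries recover y. The function names here are illustrative, not from the paper.

```python
import random

def l1_query(y, x):
    """Codemaker's answer: the l1 distance between the hidden y and the query x."""
    return sum(abs(yi - xi) for yi, xi in zip(y, x))

def recover_l1(n, k, query):
    """Recover y with 2n nonadaptive l1 queries.

    For each coordinate i, query +k and -k in that coordinate (zeros
    elsewhere). Because |y_i| <= k, the difference of the two answers
    is exactly 2*y_i, so integer division recovers y_i exactly.
    """
    y_hat = []
    for i in range(n):
        e_plus = [k if j == i else 0 for j in range(n)]
        e_minus = [-k if j == i else 0 for j in range(n)]
        q_plus = query(e_plus)
        q_minus = query(e_minus)
        y_hat.append((q_minus - q_plus) // 2)
    return y_hat

# Sanity check on a random hidden vector.
n, k = 8, 5
y = [random.randint(-k, k) for _ in range(n)]
assert recover_l1(n, k, lambda x: l1_query(y, x)) == y
```

This baseline makes Θ(n) queries regardless of k; the point of the paper's result is that for ℓ_p queries one can do better, matching the O(min{n, n log k / log n}) bound stated above.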
