Fast and Regret Optimal Best Arm Identification: Fundamental Limits and Low-Complexity Algorithms
This paper considers a stochastic multi-armed bandit (MAB) problem with dual objectives: (i) quick identification of and commitment to the optimal arm, and (ii) reward maximization throughout a sequence of T consecutive rounds. Though each objective has been individually well-studied, i.e., best arm identification for (i) and regret minimization for (ii), the simultaneous realization of both objectives remains an open problem, despite its practical importance. This paper introduces Regret Optimal Best Arm Identification (ROBAI), which aims to achieve these dual objectives. To solve ROBAI with both pre-determined stopping time and adaptive stopping time requirements, we present the EOCP algorithm and its variants, respectively, which not only achieve asymptotically optimal regret in both Gaussian and general bandits, but also commit to the optimal arm in 𝒪(log T) rounds with pre-determined stopping time and 𝒪(log^2 T) rounds with adaptive stopping time. We further characterize lower bounds on the commitment time (equivalent to sample complexity) of ROBAI, showing that EOCP and its variants are sample optimal with pre-determined stopping time, and almost sample optimal with adaptive stopping time. Numerical results confirm our theoretical analysis and reveal an interesting "over-exploration" phenomenon exhibited by classic UCB algorithms: EOCP achieves smaller regret even though it stops exploration much earlier than UCB (𝒪(log T) versus 𝒪(T)), which suggests that over-exploration is unnecessary and potentially harmful to system performance.
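To make the pre-determined-stopping-time setting concrete, below is a minimal Python sketch of a generic explore-then-commit loop: explore each arm for an 𝒪(log T) budget, then commit to the empirically best arm for all remaining rounds. This is an illustration of the commitment structure the abstract describes, not the paper's exact EOCP rule; the function name `explore_then_commit`, the constant `c`, and the uniform round-robin exploration schedule are all assumptions for the sketch.

```python
import numpy as np

def explore_then_commit(arms, T, c=4.0):
    """Generic explore-then-commit sketch (not the paper's exact EOCP rule).

    arms: list of callables, each returning one stochastic reward when called.
    T:    horizon (number of rounds).
    c:    hypothetical constant scaling the O(log T) exploration budget.
    Returns the committed arm index and the total collected reward.
    """
    K = len(arms)
    # Pre-determined stopping time: each arm gets an O(log T) exploration budget.
    budget = max(1, int(c * np.log(T)))
    counts = np.zeros(K)
    sums = np.zeros(K)
    total = 0.0
    t = 0

    # Exploration phase: round-robin until every arm has `budget` pulls.
    while t < T and counts.min() < budget:
        a = int(counts.argmin())
        r = arms[a]()
        counts[a] += 1
        sums[a] += r
        total += r
        t += 1

    # Commitment phase: play the empirically best arm for all remaining rounds.
    best = int((sums / np.maximum(counts, 1)).argmax())
    while t < T:
        total += arms[best]()
        t += 1
    return best, total

# Usage on a toy Gaussian bandit instance (unit variance, means 0.2/0.5/0.8).
means = [0.2, 0.5, 0.8]
arms = [lambda m=m: np.random.normal(m, 1.0) for m in means]
best, total = explore_then_commit(arms, T=10_000)
```

In contrast, a UCB-style algorithm never stops sampling suboptimal arms (each is pulled Θ(log T) times spread across all T rounds), which is the "over-exploration" the numerical results compare against.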