Efficient Online-Bandit Strategies for Minimax Learning Problems

05/28/2021
by   Christophe Roux, et al.

Several learning problems involve solving min-max problems, e.g., empirical distributionally robust learning or learning with non-standard aggregated losses. More specifically, these problems are convex-linear: the minimization is carried out over the model parameters w∈𝒲 and the maximization over the empirical distribution p∈𝒦 of the training-set indexes, where 𝒦 is the simplex or a subset of it. To design efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm. We argue that the efficiency of such approaches critically depends on the structure of 𝒦 and propose two properties of 𝒦 that facilitate designing efficient algorithms. We focus on a specific family of sets 𝒮_n,k encompassing various learning applications and provide high-probability convergence guarantees to the minimax value.
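To illustrate the game described above, here is a minimal sketch of a convex-linear minimax problem min_w max_{p∈Δ} Σ_i p_i ℓ_i(w): the min player runs online gradient descent on w, while the max player updates p with exponential weights (full-information Hedge, a simplification of the paper's combinatorial bandit player). The toy losses ℓ_i(w) = (w − x_i)², the data, and all step sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy data: three "training samples" with per-sample loss l_i(w) = (w - x_i)^2.
# The saddle point puts weight on the extreme samples x=0 and x=4, so the
# averaged iterate of w drifts toward their midpoint, w* = 2.
x = np.array([0.0, 1.0, 4.0])
n = len(x)
T = 2000
eta_w, eta_p = 0.05, 0.05   # illustrative step sizes

w = 0.0
logits = np.zeros(n)        # log-weights of the Hedge (max) player
w_avg = 0.0
for t in range(T):
    # Max player's current distribution over sample indexes (softmax of logits).
    p = np.exp(logits - logits.max())
    p /= p.sum()

    losses = (w - x) ** 2                   # l_i(w) for all i
    # Min player: gradient of sum_i p_i * l_i(w) with respect to w.
    grad_w = np.sum(p * 2.0 * (w - x))
    w -= eta_w * grad_w

    # Max player: exponential-weights ascent (losses are its gains).
    logits += eta_p * losses

    w_avg += w / T                          # averaged iterate

print(w_avg)
```

Averaging the iterates is what yields convergence to the saddle point here; the last iterate of such constant-step dynamics can oscillate. Replacing the full-information Hedge update with a bandit estimate of `losses` (observing only the sampled index, reweighted by 1/p_i) recovers the bandit flavor of the approach.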
