Online Non-Convex Learning: Following the Perturbed Leader is Optimal

03/19/2019
by   Arun Sai Suggala, et al.

We study the problem of online learning with non-convex losses, where the learner has access to an offline optimization oracle. We show that the classical Follow the Perturbed Leader (FTPL) algorithm achieves the optimal regret rate of O(T^{-1/2}) in this setting, improving upon the previous best-known regret rate of O(T^{-1/3}) for FTPL. We further show that an optimistic variant of FTPL achieves better regret bounds when the sequence of losses encountered by the learner is "predictable".
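For concreteness, here is a minimal Python sketch of the FTPL update the abstract refers to: at each round, draw a fresh random linear perturbation and ask the offline oracle to minimize the perturbed cumulative loss. The oracle interface `oracle(loss_fn)`, the exponential perturbation scale `eta`, and the toy grid-search oracle in the usage example are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def ftpl_round(past_losses, oracle, dim, eta, rng):
    """One round of Follow the Perturbed Leader (FTPL).

    Draws a fresh linear perturbation with i.i.d. Exponential(eta)
    coordinates, then calls the offline optimization oracle on the
    perturbed cumulative loss. (Hypothetical oracle signature:
    oracle(g) returns an approximate minimizer of g.)
    """
    sigma = rng.exponential(scale=eta, size=dim)  # fresh noise each round

    def perturbed_loss(x):
        # Cumulative loss so far, minus the linear perturbation term.
        return sum(f(x) for f in past_losses) - sigma @ x

    return oracle(perturbed_loss)


# Usage sketch: a toy 1-D non-convex problem where the "offline oracle"
# is a grid search over the decision set [-1, 1].
if __name__ == "__main__":
    grid = np.linspace(-1.0, 1.0, 1001)

    def grid_oracle(g):
        values = [g(np.array([x])) for x in grid]
        return np.array([grid[int(np.argmin(values))]])

    rng = np.random.default_rng(0)
    losses = []
    for t in range(10):
        x_t = ftpl_round(losses, grid_oracle, dim=1, eta=0.1, rng=rng)
        # The adversary reveals a non-convex loss only after the prediction.
        losses.append(lambda x, t=t: np.sin(5 * x[0] + t) + 0.5 * x[0] ** 2)
        print(t, x_t)
```

The key design point this sketch illustrates is that FTPL never needs a projection or gradient step over a non-convex set; all the hard work is delegated to the offline oracle, which is exactly the access model the abstract describes.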
