Cooperative Online Learning: Keeping your Neighbors Updated

01/23/2019
by Nicolò Cesa-Bianchi, et al.

We study an asynchronous online learning setting with a network of agents. At each time step, some of the agents are activated, requested to make a prediction, and pay the corresponding loss. The loss function is then revealed to these agents and also to their neighbors in the network. When activations are stochastic, we show that the regret achieved by N agents running the standard online Mirror Descent is O(√(α T)), where T is the horizon and α ≤ N is the independence number of the network. This is in contrast to the regret Ω(√(N T)) which N agents incur in the same setting when feedback is not shared. We also show a matching lower bound of order √(α T) that holds for any given network. When the pattern of agent activations is arbitrary, the problem changes significantly: we prove an Ω(T) lower bound on the regret that holds for any online algorithm oblivious to the feedback source.
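To make the protocol concrete, here is a minimal sketch of the stochastic-activation setting under illustrative assumptions: linear losses over the unit ball, online gradient descent as the Mirror Descent instance, an Erdős–Rényi-style random network, and i.i.d. Bernoulli activations. Every name and parameter below (adjacency, activation probability, learning-rate tuning) is hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, d = 10, 1000, 5                       # agents, horizon, dimension
adjacency = rng.random((N, N)) < 0.3        # hypothetical random network
adjacency = np.triu(adjacency, 1)
adjacency = adjacency | adjacency.T         # symmetric, no self-loops
eta = 1.0 / np.sqrt(T)                      # learning rate (illustrative tuning)

x = np.zeros((N, d))                        # each agent's current prediction
total_loss = 0.0

for t in range(T):
    loss_vec = rng.standard_normal(d)       # stand-in for the adversary's loss
    active = rng.random(N) < 0.2            # stochastic activation of agents

    # Active agents predict and pay the (linear) loss.
    for i in np.flatnonzero(active):
        total_loss += loss_vec @ x[i]

    # The loss is revealed to the active agents AND their neighbors;
    # each informed agent takes a Mirror Descent (here: OGD) step and
    # projects back onto the unit ball.
    informed = active | (adjacency.astype(int) @ active.astype(int) > 0)
    for i in np.flatnonzero(informed):
        x[i] -= eta * loss_vec
        norm = np.linalg.norm(x[i])
        if norm > 1.0:
            x[i] /= norm
```

Intuitively, sharing feedback with neighbors is what drives the improvement from √(N T) to √(α T): agents whose neighborhoods overlap see nearly the same losses and behave like a single learner, so the effective number of independent learners is the independence number α rather than N.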

