Cooperative Online Learning: Keeping your Neighbors Updated

by Tommaso R. Cesari, et al.

In this preliminary (and unpolished) version of the paper, we study an asynchronous online learning setting with a network of agents. At each time step, some of the agents are activated, requested to make a prediction, and incur the corresponding loss. Feedback is then revealed to these agents and is later propagated through the network. We consider the cases of full, bandit, and semi-bandit feedback. In particular, we construct a reduction to delayed single-agent learning that applies to both the full and the bandit feedback cases and allows us to obtain regret guarantees for both settings. We complement these results with a near-matching lower bound.
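To make the reduction to delayed single-agent learning concrete, the following is a minimal illustrative sketch (not the paper's algorithm): a standard exponential-weights (Hedge) learner whose loss vectors arrive only after a per-round delay, as happens when feedback must propagate through a network before reaching an agent. The function name, the delay encoding (`delays[t] >= 1` means the loss of round `t` is applied at the start of round `t + delays[t]`), and the learning rate `eta` are all assumptions made for this example.

```python
import math

def delayed_hedge(loss_matrix, delays, eta):
    """Hedge (exponential weights) with delayed full-information feedback.

    Illustrative sketch only, not the paper's exact algorithm.
    loss_matrix[t][i] is the loss of expert i at round t (in [0, 1]);
    delays[t] >= 1 means the round-t loss vector is revealed at the
    start of round t + delays[t].  Returns the learner's total
    expected loss over all rounds.
    """
    T = len(loss_matrix)
    K = len(loss_matrix[0])
    weights = [1.0] * K
    pending = {}  # arrival round -> list of delayed loss vectors
    total_loss = 0.0
    for t in range(T):
        # apply every loss vector whose delay expires at round t
        for loss in pending.pop(t, []):
            weights = [w * math.exp(-eta * l) for w, l in zip(weights, loss)]
        # predict with the current exponential-weights distribution
        z = sum(weights)
        probs = [w / z for w in weights]
        total_loss += sum(p * l for p, l in zip(probs, loss_matrix[t]))
        # schedule round t's feedback to arrive after its delay
        pending.setdefault(t + delays[t], []).append(loss_matrix[t])
    return total_loss
```

As expected from the regret bounds for delayed feedback, larger delays slow the learner's convergence to the best expert: on an instance where expert 0 always has loss 0, the cumulative loss grows with the delay, since the learner keeps playing near-uniformly until the first feedback arrives.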
