Multi-armed Bandit Learning on a Graph

09/20/2022
by Tianpeng Zhang, et al.

The multi-armed bandit (MAB) problem is a simple yet powerful framework that has been extensively studied in the context of decision-making under uncertainty. In many real-world applications, such as robotics, selecting an arm corresponds to a physical action that constrains the choices of the next available arms (actions). Motivated by this, we study an extension of MAB called the graph bandit, where an agent travels over a graph trying to maximize the reward collected from different nodes. At each step, the graph determines which nodes the agent may move to next. We assume the graph structure is fully available, but the reward distributions are unknown. Building on an offline graph-based planning algorithm and the principle of optimism, we design an online learning algorithm that balances long-term exploration and exploitation. We show that our proposed algorithm achieves O(|S|√T log T + D|S| log T) learning regret, where |S| is the number of nodes and D is the diameter of the graph, which is superior to the best-known reinforcement learning algorithms under similar settings. Numerical experiments confirm that our algorithm outperforms several benchmarks. Finally, we present a synthetic robotic application modeled by the graph bandit framework, in which a robot, guided by our proposed algorithm, moves over a network of rural/suburban locations to provide high-speed internet access.
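To make the setting concrete, here is a minimal sketch (not the authors' algorithm) of an optimism-based agent on a known graph with unknown node rewards: it keeps UCB-style optimistic estimates of each node's mean reward and, at every step, walks one edge along a shortest path toward the node with the highest optimistic index. The class name GraphBanditUCB, the exploration constant c, and the Bernoulli reward model are illustrative assumptions.

```python
import math
import random
import networkx as nx

class GraphBanditUCB:
    """Sketch of an optimism-based graph bandit agent (illustrative, not the paper's method)."""

    def __init__(self, graph, start, c=2.0):
        self.G = graph                  # known graph structure
        self.node = start               # current position of the agent
        self.c = c                      # exploration constant
        self.counts = {v: 0 for v in graph.nodes}
        self.means = {v: 0.0 for v in graph.nodes}
        self.t = 0

    def ucb(self, v):
        # Optimistic index: unvisited nodes get +inf so they are explored first.
        if self.counts[v] == 0:
            return float("inf")
        bonus = math.sqrt(self.c * math.log(self.t + 1) / self.counts[v])
        return self.means[v] + bonus

    def step(self, reward_fn):
        self.t += 1
        # Target the node with the largest optimistic estimate.
        target = max(self.G.nodes, key=self.ucb)
        if target != self.node:
            # Move one edge along a shortest path toward the target.
            path = nx.shortest_path(self.G, self.node, target)
            self.node = path[1]
        # Collect a noisy reward at the new node and update its statistics.
        r = reward_fn(self.node)
        self.counts[self.node] += 1
        n = self.counts[self.node]
        self.means[self.node] += (r - self.means[self.node]) / n
        return self.node, r

# Example usage on a small connected graph with Bernoulli node rewards.
if __name__ == "__main__":
    G = nx.cycle_graph(10)
    true_means = {v: random.random() for v in G.nodes}
    agent = GraphBanditUCB(G, start=0)
    total = 0.0
    for _ in range(1000):
        _, r = agent.step(lambda v: float(random.random() < true_means[v]))
        total += r
    print("collected reward:", total)
```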


