Combinatorial Multi-Objective Multi-Armed Bandit Problem

03/11/2018
by Doruk Öner, et al.

In this paper, we introduce the COmbinatorial Multi-Objective Multi-Armed Bandit (COMO-MAB) problem, which captures the challenges of combinatorial and multi-objective online learning simultaneously. In this setting, the learner chooses an action at each time step, whose reward vector is a linear combination of the reward vectors of the arms in that action, and its goal is to learn the set of super Pareto optimal actions, which includes the Pareto optimal actions and the actions that become Pareto optimal after an arbitrarily small positive number is added to their expected reward vectors. We define the Pareto regret performance metric and propose a fair learning algorithm whose Pareto regret is O(N L^3 log T), where T is the time horizon, N is the number of arms, and L is the maximum number of arms in an action. We show that COMO-MAB has a wide range of applications, including recommending bundles of items to users and network routing, and focus on a resource-allocation application for multi-user communication in the presence of multidimensional performance metrics, where our algorithm outperforms existing MAB algorithms.
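
The Pareto notions in the abstract can be made concrete with a short sketch. The Python snippet below is only a minimal illustration under stated assumptions, not the paper's algorithm: it assumes expected reward vectors are given as NumPy arrays with higher values preferred in every objective, and it adopts the common multi-objective bandit convention that the Pareto suboptimality gap of an action is the smallest amount that must be added to every objective of its expected reward vector to escape domination by the Pareto front. The function names (dominates, pareto_front, pareto_gap, super_pareto_set) are illustrative, not taken from the paper.

```python
import numpy as np

def dominates(u, v):
    """True if u Pareto-dominates v: u >= v in every objective, u > v in at least one."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return bool(np.all(u >= v) and np.any(u > v))

def pareto_front(mean_vectors):
    """Indices of actions whose expected reward vector is not dominated by any other."""
    return [i for i, u in enumerate(mean_vectors)
            if not any(dominates(v, u) for j, v in enumerate(mean_vectors) if j != i)]

def pareto_gap(mu, front_vectors):
    """Infimum of eps >= 0 such that mu + eps (added to every objective) is not
    dominated by any Pareto optimal vector; the per-round Pareto regret of playing
    an action with expected reward vector mu (assumed convention, see lead-in)."""
    mu = np.asarray(mu, float)
    worst = max((float(np.min(np.asarray(v, float) - mu)) for v in front_vectors),
                default=0.0)
    return max(worst, 0.0)

def super_pareto_set(mean_vectors):
    """Actions with zero Pareto gap: the Pareto optimal actions plus those that
    become Pareto optimal after adding an arbitrarily small positive eps."""
    front = [mean_vectors[i] for i in pareto_front(mean_vectors)]
    return [i for i, mu in enumerate(mean_vectors) if pareto_gap(mu, front) == 0.0]

# Toy example with two objectives: action 2 is weakly dominated by action 0
# (equal in the first objective), so it is not Pareto optimal, but its gap is
# zero and it is therefore super Pareto optimal.
means = [np.array([0.8, 0.3]), np.array([0.4, 0.7]), np.array([0.8, 0.2])]
print(pareto_front(means))      # [0, 1]
print(super_pareto_set(means))  # [0, 1, 2]
```

Under this convention, only actions with a strictly positive gap accumulate Pareto regret, which is why the super Pareto optimal set, rather than the Pareto optimal set alone, is the natural target of learning.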


Related research

Pareto Regret Analyses in Multi-objective Multi-armed Bandit (12/01/2022)
We study Pareto optimality in multi-objective multi-armed bandit by prov...

Piecewise-Stationary Multi-Objective Multi-Armed Bandit with Application to Joint Communications and Sensing (02/10/2023)
We study a multi-objective multi-armed bandit problem in a dynamic envir...

On Regret with Multiple Best Arms (06/26/2020)
We study regret minimization problem with the existence of multiple best...

Adaptive Algorithms for Relaxed Pareto Set Identification (07/01/2023)
In this paper we revisit the fixed-confidence identification of the Pare...

Multi-objective Contextual Bandit Problem with Similarity Information (03/11/2018)
In this paper we propose the multi-objective contextual bandit problem w...

Networked Restless Bandits with Positive Externalities (12/09/2022)
Restless multi-armed bandits are often used to model budget-constrained ...

Robust Pareto Set Identification with Contaminated Bandit Feedback (06/06/2022)
We consider the Pareto set identification (PSI) problem in multi-objecti...
