Learning Optimal Admission Control in Partially Observable Queueing Networks

08/04/2023
by   Jonatha Anselmi, et al.

We present an efficient reinforcement learning algorithm that learns the optimal admission control policy in a partially observable queueing network. Specifically, only the arrival and departure times from the network are observable, and optimality refers to the infinite-horizon average holding/rejection cost. While reinforcement learning in Partially Observable Markov Decision Processes (POMDPs) is prohibitively expensive in general, we show that our algorithm has a regret that depends only sub-linearly on the maximal number of jobs in the network, S. In particular, in contrast with existing regret analyses, our regret bound does not depend on the diameter of the underlying Markov Decision Process (MDP), which in most queueing systems is at least exponential in S. The novelty of our approach is to leverage Norton's equivalent theorem for closed product-form queueing networks together with an efficient reinforcement learning algorithm for MDPs with the structure of birth-and-death processes.
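To make the birth-and-death structure concrete, here is a minimal sketch (not the paper's algorithm) of admission control on a single station with state-dependent service rates, the kind of aggregate that Norton's equivalent theorem produces for a product-form network. All parameters (`S`, `lam`, `mu`, `h`, `c`) are hypothetical. The planning step uses uniformization and relative value iteration for the average-cost criterion; the learning question studied in the paper is how to find this policy when only arrivals and departures are observed.

```python
import numpy as np

# Hypothetical parameters: a single station with state-dependent service
# rates stands in for Norton's equivalent of the full network.
S = 10                             # maximal number of jobs in the network
lam = 1.0                          # arrival rate
mu = np.array([0.0] + [1.2] * S)   # service rate in state s (mu[0] unused)
h = 1.0                            # holding cost per job per unit time
c = 5.0                            # one-off rejection cost

# Uniformization constant and relative value iteration on the
# birth-and-death admission-control MDP (average-cost criterion).
Lam = lam + mu.max()
V = np.zeros(S + 1)
for _ in range(5000):
    Vn = np.empty_like(V)
    for s in range(S + 1):
        # On an arrival: admit (if there is room) or reject and pay c.
        admit = V[s + 1] if s < S else np.inf
        arrival = min(admit, c + V[s])
        depart = V[max(s - 1, 0)]
        p_mu = mu[s] / Lam if s > 0 else 0.0
        Vn[s] = (h * s / Lam                      # holding cost per step
                 + (lam / Lam) * arrival          # arrival event
                 + p_mu * depart                  # departure event
                 + (1.0 - lam / Lam - p_mu) * V[s])  # fictitious self-loop
    Vn -= Vn[0]   # keep relative values bounded
    V = Vn

# The optimal policy is a threshold: admit while V[s+1] - V[s] <= c.
policy = [s for s in range(S) if V[s + 1] <= c + V[s]]
```

Because the holding cost is convex in the queue length, the relative value function is convex and the resulting admission policy is of threshold type, which is what makes the birth-and-death structure exploitable for efficient learning.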


Related research

07/08/2021 · Sublinear Regret for Learning POMDPs
We study the model-based undiscounted reinforcement learning for partial...

03/14/2023 · Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring
We study Markov decision processes (MDPs), where agents have direct cont...

01/31/2021 · Fast Rates for the Regret of Offline Reinforcement Learning
We study the regret of reinforcement learning from offline data generate...

11/15/2022 · Reinforcement Learning Methods for Wordle: A POMDP/Adaptive Control Approach
In this paper we address the solution of the popular Wordle puzzle, usin...

10/02/2020 · Reinforcement Learning of Simple Indirect Mechanisms
We introduce the use of reinforcement learning for indirect mechanisms, ...

12/07/2017 · Remarks on Bayesian Control Charts
There is a considerable amount of ongoing research on the use of Bayesia...

02/25/2021 · Online Learning for Unknown Partially Observable MDPs
Solving Partially Observable Markov Decision Processes (POMDPs) is hard....
