Near-optimal Regret Bounds for Stochastic Shortest Path
Stochastic shortest path (SSP) is a well-known problem in planning and control, in which an agent has to reach a goal state at minimum total expected cost. In the learning formulation of the problem, the agent is unaware of the environment dynamics (i.e., the transition function) and has to play repeatedly for a given number of episodes while reasoning about the problem's optimal solution. Unlike other well-studied models in reinforcement learning (RL), the length of an episode is not predetermined (or bounded) and is influenced by the agent's actions. Recently, Tarbouriech et al. (2019) studied this problem in the context of regret minimization and provided an algorithm whose regret bound is inversely proportional to the square root of the minimum instantaneous cost. In this work we remove this dependence on the minimum cost: we give an algorithm that guarantees a regret bound of O(B_⋆ |S| √(|A| K)), where B_⋆ is an upper bound on the expected cost of the optimal policy, S is the set of states, A is the set of actions, and K is the number of episodes. We additionally show that any learning algorithm must have at least Ω(B_⋆ √(|S| |A| K)) regret in the worst case.