A Kernel-Based Approach to Non-Stationary Reinforcement Learning in Metric Spaces
In this work, we propose KeRNS: an algorithm for episodic reinforcement learning in non-stationary Markov Decision Processes (MDPs) whose state-action set is endowed with a metric. Using a non-parametric model of the MDP built with time-dependent kernels, we prove a regret bound that scales with the covering dimension of the state-action space and the total variation of the MDP over time, which quantifies its level of non-stationarity. Our method generalizes previous approaches based on sliding windows and exponential discounting used to handle changing environments. We further propose a practical implementation of KeRNS, analyze its regret, and validate it experimentally.
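To give a rough feel for the kind of estimator such a kernel-based, time-dependent model relies on (this is a minimal illustrative sketch, not the authors' KeRNS algorithm), the snippet below computes a non-parametric reward estimate at a query state-action pair by weighting past samples with the product of a spatial kernel and a time-discounting factor. The Gaussian kernel, the exponential temporal weight, and all function names and parameter values are assumptions made for illustration.

```python
import numpy as np

def spatial_kernel(x, x_i, bandwidth=0.5):
    # Gaussian kernel on the state-action metric (illustrative choice).
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(x_i)) ** 2
                  / (2 * bandwidth ** 2))

def temporal_weight(t_now, t_i, discount=0.99):
    # Exponential time discounting: older samples receive smaller weight.
    return discount ** (t_now - t_i)

def kernel_reward_estimate(query, history, t_now,
                           bandwidth=0.5, discount=0.99, reg=1e-6):
    # history: list of (state_action, reward, timestep) tuples.
    # The estimate is a weighted average where each sample's weight is
    # spatial similarity times a time-dependent discount factor.
    num, den = 0.0, reg
    for x_i, r_i, t_i in history:
        w = spatial_kernel(query, x_i, bandwidth) * temporal_weight(t_now, t_i, discount)
        num += w * r_i
        den += w
    return num / den

# Usage: in a non-stationary environment, recent nearby samples dominate.
history = [((0.1, 0.2), 1.0, 0), ((0.1, 0.2), 0.0, 90), ((0.9, 0.9), 0.5, 50)]
print(kernel_reward_estimate((0.1, 0.2), history, t_now=100))
```

In this toy form, setting the temporal weight to an exponential factor recovers exponential discounting, while replacing it with a hard cutoff on sample age would recover a sliding-window estimator, which is consistent with the abstract's claim that the kernel-based model generalizes both.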