
Continuous-time Value Function Approximation in Reproducing Kernel Hilbert Spaces

06/08/2018
by Motoya Ohnishi, et al.
Keio University
KTH Royal Institute of Technology
RIKEN

The success of reinforcement learning (RL) in discrete-time tasks, exemplified by AlphaGo and Atari game playing, has motivated a recent surge of interest in using RL for continuous-time control of physical systems (cf. the many challenging tasks in OpenAI Gym and the DeepMind Control Suite). Since discretization of time is susceptible to error, it is methodologically more desirable to handle the system dynamics directly in continuous time. However, very few techniques exist for continuous-time RL, and those that do lack flexibility in value function approximation. In this paper, we propose a novel framework for continuous-time value function approximation based on reproducing kernel Hilbert spaces. The resulting framework is flexible enough to accommodate any kernel-based approach, such as Gaussian processes and the adaptive projected subgradient method, and it allows us to handle uncertainties and nonstationarity without prior knowledge of the environment or of which basis functions to employ. We demonstrate the validity of the presented framework through experiments.
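To make the abstract's claim concrete, here is a loose illustration of generic kernel-based value function approximation; it is a minimal sketch, not the paper's algorithm. It fits a value function in an RKHS by plain kernel ridge regression on Monte Carlo returns of a toy one-dimensional stochastic system simulated with a fine Euler step. The dynamics, the reward, and every name and parameter (e.g. `gaussian_kernel`, `bandwidth`) are illustrative assumptions; a Gaussian process or the adaptive projected subgradient method could be substituted for the regression step.

```python
# A minimal sketch, NOT the paper's algorithm: generic kernel-based value
# function approximation.  We fit V(x) in an RKHS by kernel ridge regression
# on Monte Carlo returns of a toy 1-D continuous-time system.  The dynamics
# dx = -x dt + 0.1 dW, the reward r(x) = -x^2, and all parameters below are
# illustrative assumptions.
import numpy as np

def gaussian_kernel(X, Y, bandwidth=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * bandwidth**2))

def simulate_return(x0, rng, dt=1e-3, horizon=5.0, discount_rate=1.0):
    """Discounted Monte Carlo return from x0 under a fine Euler simulation."""
    x, t, ret = x0, 0.0, 0.0
    while t < horizon:
        ret += np.exp(-discount_rate * t) * (-x**2) * dt   # running reward
        x += -x * dt + 0.1 * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ret

rng = np.random.default_rng(42)
X_train = rng.uniform(-2.0, 2.0, size=(50, 1))             # sampled states
y_train = np.array([simulate_return(float(x[0]), rng) for x in X_train])

# Kernel ridge regression in the RKHS: V(x) = sum_i alpha_i k(x, x_i).
K = gaussian_kernel(X_train, X_train)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X_train)), y_train)

X_test = np.linspace(-2.0, 2.0, 5)[:, None]
print(np.round(gaussian_kernel(X_test, X_train) @ alpha, 3))
```

The closed-form regularized solve stands in for whichever kernel machinery one prefers; the RKHS representer form of the estimate is what the substitution hinges on.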



Related research

02/09/2021 · Continuous-Time Model-Based Reinforcement Learning
Model-based reinforcement learning (MBRL) approaches rely on discrete-ti...

06/20/2021 · Optimal Strategies for Decision Theoretic Online Learning
We extend the drifting games analysis to continuous time and show that t...

06/15/2020 · The Reflectron: Exploiting geometry for learning generalized linear models
Generalized linear models (GLMs) extend linear regression by generating ...

02/15/2023 · CERiL: Continuous Event-based Reinforcement Learning
This paper explores the potential of event cameras to enable continuous ...

08/23/2021 · A generalized stacked reinforcement learning method for sampled systems
A common setting of reinforcement learning (RL) is a Markov decision pro...

04/15/2021 · Predictor-Corrector (PC) Temporal Difference (TD) Learning (PCTD)
Using insight from numerical approximation of ODEs and the problem formu...