Temporal-Differential Learning in Continuous Environments

06/01/2020
by Tao Bian, et al.

In this paper, a new reinforcement learning (RL) method, the method of temporal differential, is introduced. Unlike the traditional temporal-difference learning method, it is formulated directly for continuous environments and plays a crucial role in developing novel RL techniques for them. In particular, the continuous-time least squares policy evaluation (CT-LSPE) and continuous-time temporal-differential (CT-TD) learning methods are developed. Both theoretical and empirical evidence is provided to demonstrate the effectiveness of the proposed temporal-differential learning methodology.
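To fix ideas, the sketch below illustrates the general flavor of continuous-time TD policy evaluation, not the paper's actual CT-TD or CT-LSPE algorithms. The dynamics, reward, feature basis, step sizes, and uniform state resampling are all assumptions chosen for the illustration.

```python
import numpy as np

# Minimal sketch of continuous-time TD policy evaluation on a scalar
# linear system. All modeling choices (plant, reward, features, rates)
# are assumptions for illustration, not the paper's CT-TD method.

rho, dt, alpha = 0.1, 0.01, 0.01   # discount rate, Euler step, learning rate
rng = np.random.default_rng(0)

def features(x):
    return np.array([x * x, x, 1.0])   # assumed quadratic basis: V(x) ~ w . phi(x)

def dynamics(x):
    return -x                          # assumed stable closed-loop plant: dx/dt = -x

def reward(x):
    return -x * x                      # assumed quadratic running reward

w = np.zeros(3)                        # value-function weights
for _ in range(50_000):
    x = rng.uniform(-2.0, 2.0)         # resample a state (assumed exploration scheme)
    phi = features(x)
    x_next = x + dynamics(x) * dt      # one Euler step along the trajectory
    dV = (w @ features(x_next) - w @ phi) / dt   # finite-difference estimate of dV/dt
    delta = reward(x) + dV - rho * (w @ phi)     # continuous-time TD error
    w += alpha * delta * phi           # semi-gradient TD update

print("learned value weights:", w)
```

For this example the continuous-time Bellman equation rho*V(x) = r(x) + V'(x)*f(x) has the closed-form solution V(x) = -x^2/(2 + rho), so the learned weights should approach roughly [-0.48, 0, 0].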


Related research

05/30/2023  Policy Optimization for Continuous Reinforcement Learning
We study reinforcement learning (RL) in the setting of continuous time a...

07/18/2023  Continuous-Time Reinforcement Learning: New Design Algorithms with Theoretical Insights and Performance Guarantees
Continuous-time nonlinear optimal control problems hold great promise in...

02/15/2023  CERiL: Continuous Event-based Reinforcement Learning
This paper explores the potential of event cameras to enable continuous ...

02/16/2022  On a Variance Reduction Correction of the Temporal Difference for Policy Evaluation in the Stochastic Continuous Setting
This paper deals with solving continuous time, state and action optimiza...

02/24/2023  Neural Laplace Control for Continuous-time Delayed Systems
Many real-world offline reinforcement learning (RL) problems involve con...

04/15/2021  Predictor-Corrector (PC) Temporal Difference (TD) Learning (PCTD)
Using insight from numerical approximation of ODEs and the problem formu...

03/28/2021  A Temporal Kernel Approach for Deep Learning with Continuous-time Information
Sequential deep learning models such as RNN, causal CNN and attention me...
