
Variable-Decision Frequency Option Critic

by   Amirmohammad Karimi, et al.

In classic reinforcement learning algorithms, agents make decisions at discrete, fixed time intervals, so the physical duration between one decision and the next becomes a critical hyperparameter. When this duration is too short, the agent must make many decisions to achieve its goal, aggravating the problem's difficulty; when it is too long, the agent becomes incapable of controlling the system. Physical systems, however, do not require a constant control frequency: for learning agents, it is desirable to operate at low frequency when possible and high frequency when necessary. We propose a framework called Continuous-Time Continuous-Options (CTCO), in which the agent chooses options as sub-policies of variable duration. Such options are time-continuous and can interact with the system at any desired frequency, providing smooth changes of action. Our empirical analysis shows that the algorithm is competitive with other time-abstraction techniques, such as classic option learning and action repetition, and in practice overcomes the difficult choice of decision frequency.
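The core idea of a variable decision frequency can be illustrated with a minimal sketch. This is not the authors' CTCO implementation; all names, the toy dynamics, and the duration range are illustrative assumptions. The agent commits to an option (here, a sub-policy parameter plus a duration it samples itself), and the option then drives the low-level control loop at a fine physical time step until its duration elapses, at which point a new decision is made:

```python
import random

DT = 0.01  # fine-grained physical control step (seconds), an assumed value

def choose_option(state):
    """Hypothetical policy: pick a sub-policy parameter and a variable
    commitment duration, rather than deciding at every control step."""
    target = random.uniform(-1.0, 1.0)    # parameter of the sub-policy
    duration = random.uniform(0.05, 0.5)  # variable decision interval
    return target, duration

def run_episode(horizon=1.0):
    """Run toy first-order dynamics; count high-level decisions made."""
    state, t, decisions = 0.0, 0.0, 0
    while t < horizon:
        target, duration = choose_option(state)
        decisions += 1
        elapsed = 0.0
        # The option interacts with the system at every fine step DT,
        # moving the state smoothly toward the sub-policy's target.
        while elapsed < duration and t < horizon:
            state += DT * (target - state)  # toy linear dynamics
            elapsed += DT
            t += DT
    return state, decisions

state, decisions = run_episode()
print(f"final state {state:.3f} after {decisions} high-level decisions")
```

Because each decision covers a sampled interval between 0.05 s and 0.5 s, the number of high-level decisions per episode varies from run to run, in contrast to a fixed-frequency agent that would always make `horizon / DT` of them.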



