TempoRL: Learning When to Act

by André Biedenkapp, et al.

Reinforcement learning is a powerful approach to learning behaviour through interactions with an environment. However, behaviours are usually learned in a purely reactive fashion: an appropriate action is selected based on each observation. In this form, it is challenging to learn when it is necessary to make a new decision, which makes learning inefficient, especially in environments that require varying degrees of fine and coarse control. To address this, we propose a proactive setting in which the agent not only selects an action in a state but also decides for how long to commit to that action. Our TempoRL approach introduces skip connections between states and learns a skip-policy for repeating the same action along these skips. We demonstrate the effectiveness of TempoRL on a variety of traditional and deep RL environments, showing that our approach learns successful policies up to an order of magnitude faster than vanilla Q-learning.
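To make the idea concrete, the following is a minimal tabular sketch of a TempoRL-style agent, not the authors' released implementation: alongside ordinary action values Q(s, a), the agent learns skip values S(s, j) that score committing to the chosen action for j consecutive steps, updated with the j-step discounted return. The 1-D corridor environment, hyperparameters, and variable names are illustrative assumptions.

```python
import random

# Toy setting (all values are illustrative assumptions, not from the paper):
# an 11-state corridor where only reaching the goal state yields reward 1.
GOAL, N_STATES, MAX_SKIP = 10, 11, 4
GAMMA, ALPHA, EPS = 0.99, 0.5, 0.1
ACTIONS = (-1, +1)  # step left / step right

def argmax_rand(vals, rng):
    """Greedy index with random tie-breaking (avoids a fixed-direction bias)."""
    best = max(vals)
    return rng.choice([i for i, v in enumerate(vals) if v == best])

def step(s, a):
    """Deterministic corridor transition; reward 1 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, s + a))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=300, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]  # flat action values Q(s, a)
    S = [[0.0] * MAX_SKIP for _ in range(N_STATES)]      # S[s][j-1]: value of repeating for j steps
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # proactive decision: pick an action, then how long to commit to it
            a = rng.randrange(len(ACTIONS)) if rng.random() < EPS else argmax_rand(Q[s], rng)
            j = 1 + (rng.randrange(MAX_SKIP) if rng.random() < EPS else argmax_rand(S[s], rng))
            s0, ret, disc = s, 0.0, 1.0
            for _ in range(j):
                nxt, r, done = step(s, ACTIONS[a])
                # ordinary one-step Q update at every state visited along the skip
                target = r + (0.0 if done else GAMMA * max(Q[nxt]))
                Q[s][a] += ALPHA * (target - Q[s][a])
                ret += disc * r
                disc *= GAMMA
                s = nxt
                if done:
                    break
            # n-step update of the skip value over the whole j-step "skip connection"
            boot = 0.0 if done else disc * max(Q[s])
            S[s0][j - 1] += ALPHA * (ret + boot - S[s0][j - 1])
    return Q, S

Q, S = train()
```

The key design choice the sketch illustrates is that long skips let a single credit-assignment update span many states at once, while the flat Q-function is still trained at every intermediate state, which is what speeds up learning relative to vanilla one-step Q-learning.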

