Q-Learning in enormous action spaces via amortized approximate maximization

01/22/2020
by Tom Van de Wiele, et al.

Applying Q-learning to high-dimensional or continuous action spaces can be difficult due to the required maximization over the set of possible actions. Motivated by techniques from amortized inference, we replace the expensive maximization over all actions with a maximization over a small subset of possible actions sampled from a learned proposal distribution. The resulting approach, which we dub Amortized Q-learning (AQL), is able to handle discrete, continuous, or hybrid action spaces while maintaining the benefits of Q-learning. Our experiments on continuous control tasks with up to 21-dimensional actions show that AQL outperforms D3PG (Barth-Maron et al., 2018) and QT-Opt (Kalashnikov et al., 2018). Experiments on structured discrete action spaces demonstrate that AQL can efficiently learn good policies in spaces with thousands of discrete actions.
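
The operation AQL amortizes is the argmax over actions used both for acting and for computing Q-learning targets. Below is a minimal NumPy sketch of that approximate maximization under illustrative assumptions: a Gaussian proposal over a bounded continuous action space, a mix of proposal and uniform candidate samples, and hypothetical names (q_fn, amortized_argmax_q). It is not the paper's exact implementation.

import numpy as np

def amortized_argmax_q(q_fn, proposal_mean, proposal_std,
                       action_low, action_high,
                       n_proposal=20, n_uniform=10, rng=None):
    """Approximate argmax_a Q(s, a) by scoring a small candidate set.

    q_fn: maps a batch of actions with shape (n, action_dim) to Q-values (n,),
          with the state fixed (e.g. via a closure over the Q-network).
    proposal_mean, proposal_std: parameters of the learned proposal for this state.
    """
    rng = np.random.default_rng() if rng is None else rng
    action_dim = proposal_mean.shape[0]
    # Candidates drawn from the learned proposal distribution.
    from_proposal = rng.normal(proposal_mean, proposal_std,
                               size=(n_proposal, action_dim))
    # A few uniform candidates keep some coverage of the full action range.
    from_uniform = rng.uniform(action_low, action_high,
                               size=(n_uniform, action_dim))
    candidates = np.clip(np.concatenate([from_proposal, from_uniform]),
                         action_low, action_high)
    q_values = q_fn(candidates)          # one Q evaluation per candidate
    best_idx = int(np.argmax(q_values))  # approximate maximizer over the subset
    return candidates[best_idx], float(q_values[best_idx])

# Toy usage with a synthetic quadratic Q surface (illustrative only):
q_fn = lambda a: -np.sum((a - 0.3) ** 2, axis=-1)
best_action, best_q = amortized_argmax_q(q_fn, np.zeros(3), np.ones(3),
                                         action_low=-1.0, action_high=1.0)

The cost of the maximization is thus fixed by the number of sampled candidates rather than by the size or dimensionality of the action space; the proposal is trained so that high-value actions are likely to appear in the sampled subset.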

Related research

05/14/2017  Discrete Sequential Prediction of Continuous Actions for Deep RL
It has long been assumed that high dimensional continuous control proble...

04/13/2021  Learning and Planning in Complex Action Spaces
Many important real-world problems have action spaces that are high-dime...

06/10/2020  Marginal Utility for Planning in Continuous or Large Discrete Action Spaces
Sample-based planning is a powerful family of algorithms for generating ...

05/31/2023  Handling Large Discrete Action Spaces via Dynamic Neighborhood Construction
Large discrete action spaces remain a central challenge for reinforcemen...

10/07/2021  Design Strategy Network: A deep hierarchical framework to represent generative design strategies in complex action spaces
Generative design problems often encompass complex action spaces that ma...

01/29/2019  Discretizing Continuous Action Space for On-Policy Optimization
In this work, we show that discretizing action space for continuous cont...

10/13/2017  Unsupervised Real-Time Control through Variational Empowerment
We introduce a methodology for efficiently computing a lower bound to em...
