
# Observation Space Matters: Benchmark and Optimization Algorithm

Recent advances in deep reinforcement learning (deep RL) enable researchers to solve challenging control problems, from simulated environments to real-world robotic tasks. However, deep RL algorithms are known to be sensitive to the problem formulation, including the observation space, action space, and reward function. Numerous choices exist for observation spaces, but in the absence of established design principles they are often chosen based solely on prior knowledge. In this work, we conduct benchmark experiments to verify common design choices for observation spaces, such as Cartesian transformation, binary contact flags, a short history, and global positions. We then propose a search algorithm that finds optimal observation spaces by examining various candidate spaces and removing unnecessary observation channels with a Dropout-Permutation test. We demonstrate that our algorithm significantly improves learning speed compared to manually designed observation spaces, and we analyze the proposed algorithm by evaluating different hyperparameters.
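The abstract does not spell out how the Dropout-Permutation test is computed, but the general idea of permutation-based channel screening can be sketched as follows. This is an illustrative sketch under stated assumptions: the scoring function, synthetic data, and threshold are placeholders, not the paper's method. A channel is permuted across the batch; if the score barely changes, the channel is a candidate for removal.

```python
# Illustrative sketch only: the paper's actual Dropout-Permutation test is not
# specified in this abstract, so the scoring function, synthetic observations,
# and keep-threshold below are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def episode_return(obs):
    # Stand-in for "roll out the policy and measure return". The score here
    # depends only on the coupling between channels 0 and 2, so channels
    # 1 and 3 should test as unnecessary.
    return float(np.mean(obs[:, 0] * obs[:, 2]))

def permutation_importance(obs, score_fn, n_shuffles=20):
    # Permute one channel at a time across the batch; a channel whose
    # permutation barely changes the score carries little usable signal.
    base = score_fn(obs)
    importances = []
    for ch in range(obs.shape[1]):
        drops = []
        for _ in range(n_shuffles):
            shuffled = obs.copy()
            shuffled[:, ch] = rng.permutation(shuffled[:, ch])
            drops.append(base - score_fn(shuffled))
        importances.append(float(np.mean(drops)))
    return np.array(importances)

# Synthetic observations: channel 2 is correlated with channel 0,
# channels 1 and 3 are pure noise.
obs = rng.normal(size=(1000, 4))
obs[:, 2] = obs[:, 0] + 0.1 * rng.normal(size=1000)

imp = permutation_importance(obs, episode_return)
keep = np.where(imp > 0.1)[0]  # threshold is an arbitrary demo choice
print("per-channel importance:", np.round(imp, 3))
print("channels to keep:", keep.tolist())
```

In this toy setup the screening keeps channels 0 and 2 and flags the noise channels for removal; a real application would replace `episode_return` with policy rollouts and pick the threshold empirically.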

### Learning to Locomote: Understanding How Environment Design Matters for Deep Reinforcement Learning

10/09/2020. Learning to locomote is one of the most common tasks in physics-based an...

### Transfer RL across Observation Feature Spaces via Model-Based Regularization

01/01/2022. In many reinforcement learning (RL) applications, the observation space ...

### Generalising Discrete Action Spaces with Conditional Action Trees

04/15/2021. There are relatively few conventions followed in reinforcement learning ...