Optimizing the Neural Architecture of Reinforcement Learning Agents

11/30/2020
by N. Mazyavkina et al.

Reinforcement learning (RL) has enjoyed significant progress in recent years. One of the most important steps forward has been the wide adoption of neural networks. However, the architectures of these networks are typically constructed manually. In this work, we study recently proposed neural architecture search (NAS) methods for optimizing the architecture of RL agents. We carry out experiments on the Atari benchmark and conclude that modern NAS methods find architectures of RL agents that outperform a manually selected one.
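To make the setup concrete, the core loop of any NAS method applied to RL can be sketched as: sample candidate agent architectures from a search space, evaluate each candidate (in a real run, by training the agent and measuring its Atari return), and keep the best. The sketch below is a minimal random-search illustration, not the paper's method; the search-space choices and the toy `evaluate` proxy are assumptions for illustration only.

```python
import random

# Hypothetical search space over agent network architectures.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3],
    "hidden_units": [64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for the expensive step: training an RL agent with this
    architecture and measuring its benchmark return. A real NAS run
    would train e.g. a DQN on an Atari game here."""
    # Toy proxy so the sketch is runnable: favor larger networks.
    return arch["num_layers"] * arch["hidden_units"]

def random_search(num_candidates=10, seed=0):
    """One round of random-search NAS: sample, score, keep the best."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(num_candidates)]
    return max(candidates, key=evaluate)

if __name__ == "__main__":
    print(random_search())
```

More sophisticated NAS methods (evolutionary search, differentiable search, RL-based controllers) replace the uniform sampling step with a learned or adapted proposal distribution, but the evaluate-and-select skeleton stays the same.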
