Learning to reinforcement learn for Neural Architecture Search

11/09/2019
by J. Gomez Robles, et al.

Reinforcement learning (RL) is a goal-oriented learning solution that has proven successful for Neural Architecture Search (NAS) on the CIFAR and ImageNet datasets. However, a limitation of this approach is its high computational cost, which makes it infeasible to rerun the search on other datasets. Through meta-learning, this cost can be reduced by adapting previously learned policies instead of learning them from scratch. In this work, we propose a deep meta-RL algorithm that learns an adaptive policy over a set of environments, making it possible to transfer that policy to previously unseen tasks. While this algorithm has so far only been applied to proof-of-concept environments, we adapt it to the NAS problem. We empirically investigate the agent's behavior during training when challenged to design chain-structured neural architectures for three datasets of increasing difficulty, and then fix the policy and evaluate it on two unseen datasets of different difficulty. Our results show that, under resource constraints, the agent effectively adapts its strategy during training to design better architectures than those designed by a standard RL algorithm, and that the fixed policy can design good architectures on previously unseen environments. We also provide guidelines on the applicability of our framework in a more complex NAS setting by studying the agent's progress when challenged to design multi-branch architectures.
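
To make the setup concrete, below is a minimal, hypothetical sketch (in PyTorch, not the authors' implementation) of a deep meta-RL controller for chain-structured NAS: a recurrent policy receives the previous layer choice and the reward of the previously evaluated architecture, samples a sequence of layer operations, and is updated with REINFORCE across several training environments; at evaluation time the weights are frozen and adaptation happens only through the recurrent state. The layer vocabulary, trial lengths, and the proxy_reward function (a synthetic stand-in for the validation accuracy of a trained child network) are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool", "avgpool", "identity"]
NUM_LAYERS = 4       # length of each chain-structured architecture
ARCHS_PER_TRIAL = 8  # architectures sampled per environment before a weight update


class MetaRLController(nn.Module):
    """Recurrent policy: each step receives the previous layer choice and the
    reward of the previously evaluated architecture, so the hidden state can
    adapt to a new environment even when the weights are frozen."""

    def __init__(self, num_choices, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(num_choices + 1, hidden)  # +1 for a <start> token
        self.cell = nn.LSTMCell(hidden + 1, hidden)         # +1 for the reward input
        self.head = nn.Linear(hidden, num_choices)
        self.num_choices = num_choices
        self.hidden = hidden

    def init_state(self):
        return (torch.zeros(1, self.hidden), torch.zeros(1, self.hidden))

    def sample_arch(self, state, prev_reward):
        """Sample one architecture, threading the recurrent state through."""
        h, c = state
        token = torch.tensor([self.num_choices])             # <start> token
        reward_in = torch.tensor([[prev_reward]])
        actions, log_probs = [], []
        for _ in range(NUM_LAYERS):
            inp = torch.cat([self.embed(token), reward_in], dim=1)
            h, c = self.cell(inp, (h, c))
            dist = Categorical(logits=self.head(h))
            a = dist.sample()
            actions.append(a.item())
            log_probs.append(dist.log_prob(a))
            token = a
        return actions, torch.stack(log_probs).sum(), (h, c)


def proxy_reward(arch, env_id):
    """Hypothetical stand-in for child-network validation accuracy; a real run
    would train the sampled architecture on the environment's dataset."""
    preferred = env_id % len(LAYER_CHOICES)
    return sum(a == preferred for a in arch) / len(arch)


controller = MetaRLController(len(LAYER_CHOICES))
optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)

# Meta-training: cycle over a set of training environments (datasets).
for step in range(200):
    env_id = step % 3                     # e.g. three datasets of increasing difficulty
    state, reward, loss, baseline = controller.init_state(), 0.0, 0.0, 0.0
    for t in range(ARCHS_PER_TRIAL):
        arch, log_prob, state = controller.sample_arch(state, reward)
        reward = proxy_reward(arch, env_id)
        baseline = 0.9 * baseline + 0.1 * reward
        loss = loss - (reward - baseline) * log_prob   # REINFORCE with a baseline
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Evaluation on an unseen environment: the weights stay frozen; adaptation
# happens only through the recurrent state as rewards are fed back in.
with torch.no_grad():
    state, reward = controller.init_state(), 0.0
    for t in range(ARCHS_PER_TRIAL):
        arch, _, state = controller.sample_arch(state, reward)
        reward = proxy_reward(arch, env_id=4)
        print([LAYER_CHOICES[i] for i in arch], round(reward, 2))
```

In a real NAS run, proxy_reward would be replaced by training and validating the sampled child network on the current dataset, which is where the high computational cost the abstract refers to arises.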

