Neuroevolution of Recurrent Architectures on Control Tasks

04/03/2023
by Maximilien Le Clei, et al.

Modern artificial intelligence systems typically train the parameters of fixed-size deep neural networks using gradient-based optimization techniques. Simple evolutionary algorithms have recently been shown to also be capable of optimizing deep neural network parameters, at times matching the performance of gradient-based techniques, e.g., in reinforcement learning settings. In addition to optimizing network parameters, many evolutionary computation techniques are also capable of progressively constructing network architectures. However, constructing network architectures from elementary evolution rules has not yet been shown to scale to modern reinforcement learning benchmarks. In this paper we therefore propose a new approach in which the architectures of recurrent neural networks dynamically evolve according to a small set of mutation rules. We implement a massively parallel evolutionary algorithm and run experiments on all 19 OpenAI Gym state-based reinforcement learning control tasks. We find that in most cases, dynamic agents match or exceed the performance of gradient-based agents while utilizing orders of magnitude fewer parameters. We believe our work opens avenues for real-life applications where network compactness and autonomous design are of critical importance. We provide our source code, final model checkpoints, and full results at github.com/MaximilienLC/nra.
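The abstract describes agents whose recurrent architectures evolve through a small set of mutation rules rather than being fixed in advance. The sketch below is a hypothetical, minimal illustration of that general idea (not the paper's actual implementation or mutation set): a genome encodes a tiny recurrent network, mutations can perturb weights or grow/shrink the hidden state, and an elitist evolutionary loop selects genomes on a toy fitness function standing in for an RL return. All names, rates, and size caps here are illustrative assumptions.

```python
import copy
import math
import random

# Hypothetical sketch of architecture-evolving neuroevolution.
# A genome holds a hidden-state size and three weight sets:
# input->hidden, hidden->hidden (recurrent), hidden->output.

def new_genome(hidden=1):
    return {
        "hidden": hidden,
        "w_in": [random.gauss(0, 1) for _ in range(hidden)],
        "w_rec": [[random.gauss(0, 0.5) for _ in range(hidden)]
                  for _ in range(hidden)],
        "w_out": [random.gauss(0, 1) for _ in range(hidden)],
    }

def mutate(g):
    # Three elementary mutation rules (illustrative rates and caps):
    # grow the hidden state, shrink it, or perturb existing weights.
    g = copy.deepcopy(g)
    op = random.random()
    if op < 0.10 and g["hidden"] < 8:        # grow: add one hidden unit
        g["hidden"] += 1
        g["w_in"].append(random.gauss(0, 1))
        for row in g["w_rec"]:
            row.append(0.0)                  # new column, initially silent
        g["w_rec"].append([0.0] * g["hidden"])
        g["w_out"].append(0.0)
    elif op < 0.15 and g["hidden"] > 1:      # shrink: drop last hidden unit
        g["hidden"] -= 1
        g["w_in"].pop()
        g["w_out"].pop()
        g["w_rec"].pop()
        for row in g["w_rec"]:
            row.pop()
    else:                                    # perturb all weights slightly
        g["w_in"] = [w + random.gauss(0, 0.1) for w in g["w_in"]]
        g["w_out"] = [w + random.gauss(0, 0.1) for w in g["w_out"]]
        g["w_rec"] = [[w + random.gauss(0, 0.1) for w in row]
                      for row in g["w_rec"]]
    return g

def forward(g, xs):
    # Run the recurrent net over a scalar input sequence.
    h = [0.0] * g["hidden"]
    out = 0.0
    for x in xs:
        h = [math.tanh(g["w_in"][i] * x +
                       sum(g["w_rec"][i][j] * h[j]
                           for j in range(g["hidden"])))
             for i in range(g["hidden"])]
        out = sum(g["w_out"][i] * h[i] for i in range(g["hidden"]))
    return out

def fitness(g):
    # Toy stand-in for an episode return: regress the sum of the sequence.
    seqs = [[0.1, 0.2, 0.3], [0.5, -0.2, 0.3], [1.0, 0.0, -1.0]]
    return -sum((forward(g, s) - sum(s)) ** 2 for s in seqs)

def evolve(gens=30, pop=16, elite=4, seed=0):
    # Elitist (mu + lambda)-style loop: keep the best, mutate the rest.
    random.seed(seed)
    population = [new_genome() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop - elite)]
    return max(population, key=fitness)
```

In an actual control setting the toy `fitness` would be replaced by episode returns from an environment such as an OpenAI Gym task, and evaluations would be distributed across workers; this sketch only shows how a compact genome and a few mutation rules let the architecture itself change during evolution.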

Related research

Generative Adversarial Neuroevolution for Control Behaviour Imitation (04/03/2023)
There is a recent surge in interest for imitation learning, with large h...

Evolving Artificial Neural Networks To Imitate Human Behaviour In Shinobi III: Return of the Ninja Master (04/03/2023)
Our society is increasingly fond of computational tools. This phenomenon...

Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs (02/07/2023)
Rapidly learning from ongoing experiences and remembering past events wi...

On the stability analysis of optimal state feedbacks as represented by deep neural models (12/06/2018)
Research has shown how the optimal feedback control of several non linea...

On Optimizing Deep Convolutional Neural Networks by Evolutionary Computing (08/06/2018)
Optimization for deep networks is currently a very active area of resear...

Practical recommendations for gradient-based training of deep architectures (06/24/2012)
Learning algorithms related to artificial neural networks and in particu...

Learning Variational Data Assimilation Models and Solvers (07/25/2020)
This paper addresses variational data assimilation from a learning point...
