EvoJAX: Hardware-Accelerated Neuroevolution

02/10/2022
by Yujin Tang et al.

Evolutionary computation has been shown to be a highly effective method for training neural networks, particularly when employed at scale on CPU clusters. Recent work has also showcased its effectiveness on hardware accelerators such as GPUs, but so far such demonstrations have been tailored to very specific tasks, limiting their applicability to other domains. We present EvoJAX, a scalable, general-purpose, hardware-accelerated neuroevolution toolkit. Built on top of the JAX library, our toolkit enables neuroevolution algorithms to work with neural networks running in parallel across multiple TPUs/GPUs. EvoJAX achieves very high performance by implementing the evolution algorithm, the neural network, and the task entirely in NumPy, which is compiled just-in-time to run on accelerators. We provide extensible examples of EvoJAX for a wide range of tasks, including supervised learning, reinforcement learning, and generative art. Because EvoJAX can find solutions to most of these tasks within minutes on a single accelerator, compared to hours or days on CPUs, our toolkit can significantly shorten the iteration cycle of evolutionary computation experiments. EvoJAX is available at https://github.com/google/evojax
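To illustrate the general idea behind the toolkit (this is a minimal sketch, not the actual EvoJAX API), the abstract's recipe of "evolution algorithm, network, and task all in NumPy, compiled just-in-time" can be approximated with plain JAX: a fitness function is vectorized over a whole population with `jax.vmap` and compiled with `jax.jit` so each generation runs as a single accelerator kernel. The toy task, population size, and the simple elite-averaging update below are illustrative choices, not taken from the paper:

```python
# Sketch of JIT-compiled, vectorized evolution in JAX (assumed toy setup,
# not EvoJAX itself): maximize fitness on a simple quadratic task.
import jax
import jax.numpy as jnp

def fitness(params):
    # Toy task: negative squared distance to a fixed target vector.
    target = jnp.array([1.0, -2.0, 0.5])
    return -jnp.sum((params - target) ** 2)

# Evaluate an entire population in parallel on the accelerator.
batched_fitness = jax.jit(jax.vmap(fitness))

@jax.jit
def es_step(key, mean, sigma):
    # Sample a population around the current mean (a simple (mu, lambda)-ES).
    pop_size = 64
    noise = jax.random.normal(key, (pop_size, mean.shape[0]))
    population = mean + sigma * noise
    scores = batched_fitness(population)
    # Move the mean toward the best-scoring half of the population.
    elite = population[jnp.argsort(-scores)[: pop_size // 2]]
    return jnp.mean(elite, axis=0)

key = jax.random.PRNGKey(0)
mean, sigma = jnp.zeros(3), 0.5
for _ in range(200):
    key, subkey = jax.random.split(key)
    mean = es_step(subkey, mean, sigma)
    sigma *= 0.99  # anneal exploration noise
print(mean)  # converges toward the target [1.0, -2.0, 0.5]
```

Because `es_step` is traced once and then replayed as compiled XLA code, the per-generation Python overhead disappears, which is the source of the minutes-versus-days speedups the abstract describes.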


