PathNet: Evolution Channels Gradient Descent in Super Neural Networks

01/30/2017
by Chrisantha Fernando, et al.

For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forward and backward passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning: fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).
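To make the pathway-evolution loop concrete, the following is a minimal sketch, not the authors' implementation: the layer/module counts (L, M, N), the mutation rate, and the evaluate_fitness stub are illustrative assumptions. In PathNet proper, evaluating a pathway means training only that pathway's parameters by backpropagation for a fixed number of steps and measuring task performance, and for transfer the winning task-A pathway's parameters are frozen before a fresh population is evolved on task B.

import random

# Hypothetical dimensions, chosen only for illustration (not the paper's settings):
L = 3   # layers in the super network
M = 10  # modules available per layer
N = 4   # at most N modules active per layer along a pathway

def random_pathway():
    """A genotype: for each layer, the indices of the modules the path routes through."""
    return [random.sample(range(M), N) for _ in range(L)]

def mutate(pathway, rate=1.0 / (L * N)):
    """Independently perturb each module index with small probability,
    shifting it by an integer in [-2, 2] (wrapped to a valid index)."""
    return [
        [(m + random.randint(-2, 2)) % M if random.random() < rate else m
         for m in layer]
        for layer in pathway
    ]

def evaluate_fitness(pathway):
    """Placeholder stub: in PathNet this would train only the parameters on the
    pathway for a fixed number of steps and return the resulting task performance."""
    return random.random()

def evolve(pop_size=64, generations=100):
    population = [random_pathway() for _ in range(pop_size)]
    for _ in range(generations):
        # Binary tournament selection: the loser's genotype is overwritten
        # by a mutated copy of the winner's.
        i, j = random.sample(range(pop_size), 2)
        fi, fj = evaluate_fitness(population[i]), evaluate_fitness(population[j])
        winner, loser = (i, j) if fi >= fj else (j, i)
        population[loser] = mutate(population[winner])
    return population

if __name__ == "__main__":
    print(evolve()[0])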


