One Big Net For Everything

02/24/2018
by Juergen Schmidhuber

I apply recent work on "learning to think" (2015) and on PowerPlay (2011) to the incremental training of an increasingly general problem solver, continually learning to solve new tasks without forgetting previous skills. The problem solver is a single recurrent neural network (or similar general purpose computer) called ONE. ONE is unusual in that it is trained in various ways, e.g., by black box optimization / reinforcement learning / artificial evolution as well as supervised / unsupervised learning. For example, ONE may learn through neuroevolution to control a robot through environment-changing actions, and learn through unsupervised gradient descent to predict future inputs and vector-valued reward signals, as suggested in 1990. User-given tasks can be defined through extra goal-defining input patterns, also proposed in 1990. Suppose ONE has already learned many skills. Now a copy of ONE can be re-trained to learn a new skill, e.g., through neuroevolution without a teacher. Here it may profit from re-using previously learned subroutines, but it may also forget previous skills. Then ONE is retrained in PowerPlay style (2011) on stored input/output traces of (a) ONE's copy executing the new skill and (b) previous instances of ONE whose skills are still considered worth memorizing. Simultaneously, ONE is retrained on old traces (even those of unsuccessful trials) to become a better predictor, without additional expensive interaction with the environment. More and more control and prediction skills are thus collapsed into ONE, as in the chunker-automatizer system of the neural history compressor (1991). This forces ONE to relate partially analogous skills (with shared algorithmic information) to each other, creating common subroutines in the form of shared subnetworks of ONE, to greatly speed up subsequent learning of additional, novel but algorithmically related skills.
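The retraining step described above amounts to behavioral cloning from stored traces: ONE imitates, by supervised gradient descent, the input/output behavior of its retrained copy and of its own earlier instances. Below is a minimal, illustrative Python sketch of that collapse step under my own assumptions. The names (Trace, OneRNN, distill), the tiny vanilla RNN, and the crude one-step gradient are all hypothetical; the paper does not prescribe a particular architecture or optimizer.

import numpy as np

class Trace:
    """A stored input/output sequence. A goal-defining input pattern is
    prepended to every input vector so ONE can tell tasks apart
    (the 1990-style goal inputs mentioned in the abstract)."""
    def __init__(self, goal, inputs, targets):
        self.inputs = [np.concatenate([goal, x]) for x in inputs]
        self.targets = targets

class OneRNN:
    """A tiny vanilla RNN standing in for ONE (hypothetical architecture)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.Wxh = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.Whh = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.Why = rng.normal(0.0, 0.1, (n_out, n_hidden))

    def forward(self, inputs):
        """Run the net over a sequence; return hidden states and outputs."""
        h = np.zeros(self.Whh.shape[0])
        hs, ys = [], []
        for x in inputs:
            h = np.tanh(self.Wxh @ x + self.Whh @ h)
            hs.append(h)
            ys.append(self.Why @ h)
        return hs, ys

def distill(one, traces, lr=0.01, epochs=200):
    """Collapse skills into ONE: supervised gradient descent on the stored
    traces of (a) the retrained copy and (b) earlier instances of ONE.
    Gradients are crude one-step approximations for brevity; a real run
    would use BPTT or any other optimizer."""
    for _ in range(epochs):
        for tr in traces:
            hs, ys = one.forward(tr.inputs)
            for t, (x, y, tgt) in enumerate(zip(tr.inputs, ys, tr.targets)):
                h = hs[t]
                h_prev = hs[t - 1] if t > 0 else np.zeros_like(h)
                err = y - tgt                           # d(0.5*||y-tgt||^2)/dy
                dh = (one.Why.T @ err) * (1.0 - h * h)  # backprop through tanh
                one.Why -= lr * np.outer(err, h)
                one.Wxh -= lr * np.outer(dh, x)
                one.Whh -= lr * np.outer(dh, h_prev)
    return one

# Example: collapse two goal-tagged one-step "skills" into a single net.
g0, g1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # goal-defining patterns
x = [np.array([0.5, -0.3])]                          # shared observation
traces = [Trace(g0, x, [np.array([1.0])]),           # skill A: output +1
          Trace(g1, x, [np.array([-1.0])])]          # skill B: output -1
one = distill(OneRNN(n_in=4, n_hidden=8, n_out=1), traces)

Because every trace is trained into the same weight matrices, partially analogous skills must share parameters, which is the mechanism by which common subroutines (shared subnetworks) emerge in this sketch.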
