A general approach to progressive learning

by Joshua T. Vogelstein et al.

In biological learning, data is used to improve performance on the task at hand, while simultaneously improving performance on both previously encountered tasks and as yet unconsidered future tasks. In contrast, classical machine learning starts from a blank slate, or tabula rasa, using data only for the single task at hand. While typical transfer learning algorithms can improve performance on future tasks, their performance degrades upon learning new tasks. Many recent approaches have attempted to mitigate this issue, called catastrophic forgetting, to maintain performance given new tasks. But striving to avoid forgetting sets the goal unnecessarily low: the goal of progressive learning, whether biological or artificial, is to improve performance on all tasks (including past and future) with any new data. We propose a general approach to progressive learning that ensembles representations, rather than learners. We show that ensembling representations—including representations learned by decision forests or neural networks—enables both forward and backward transfer on a variety of simulated and real data tasks, including vision, language, and adversarial tasks. This work suggests that further improvements in progressive learning may follow from a deeper understanding of how biological learning achieves such high degrees of efficiency.
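The core idea of ensembling representations rather than learners can be illustrated with a toy sketch: each task contributes a representation (here, the leaf partition of a decision tree), and a given task's data then trains a voter under every representation, old and new; predictions average the voters' posteriors. This is a minimal illustration in the spirit of the paper, not the authors' implementation; the function and variable names (make_task, leaf_posteriors, predict_task) are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_task(shift):
    # Two-Gaussian binary task; tasks differ by a mean shift (toy data).
    X0 = rng.normal(-1 + shift, 1, (200, 2))
    X1 = rng.normal(1 + shift, 1, (200, 2))
    X = np.vstack([X0, X1])
    y = np.r_[np.zeros(200, int), np.ones(200, int)]
    return X, y

tasks = [make_task(0.0), make_task(0.5)]

# 1) Each task contributes a *representation*: the leaf partition of a tree.
reps = [DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
        for X, y in tasks]

def leaf_posteriors(rep, X, y, n_classes=2):
    # Voter: class frequencies of task data within each leaf of a
    # (possibly foreign) representation.
    leaves = rep.apply(X)
    post = {}
    for leaf in np.unique(leaves):
        mask = leaves == leaf
        counts = np.bincount(y[mask], minlength=n_classes)
        post[leaf] = counts / counts.sum()
    return post

def predict_task(t, Xq):
    # 2) Ensemble *representations*: average the voters that task t's data
    # induces under every learned representation.
    Xt, yt = tasks[t]
    probs = np.zeros((len(Xq), 2))
    for rep in reps:
        post = leaf_posteriors(rep, Xt, yt)
        leaves = rep.apply(Xq)
        probs += np.array([post.get(l, np.array([0.5, 0.5])) for l in leaves])
    return (probs / len(reps)).argmax(axis=1)

Xq, yq = make_task(0.0)
acc = (predict_task(0, Xq) == yq).mean()
```

Because task 0's voters are also evaluated under task 1's representation (and vice versa), data from a new task can improve predictions on an old one, which is the backward-transfer behavior the abstract describes.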







Related research

- Measuring Catastrophic Forgetting in Neural Networks
- Synaptic Metaplasticity in Binarized Neural Networks
- Learning to Remember from a Multi-Task Teacher
- Generative Memory for Lifelong Reinforcement Learning
- Progressive Continual Learning for Spoken Keyword Spotting
- Sequential mastery of multiple tasks: Networks naturally learn to learn

Code Repositories


NeuroData's package for exploring and using progressive learning algorithms
