Supermasks in Superposition

06/26/2020
by Mitchell Wortsman, et al.

We present the Supermasks in Superposition (SupSup) model, capable of sequentially learning thousands of tasks without catastrophic forgetting. Our approach uses a randomly initialized, fixed base network and, for each task, finds a subnetwork (supermask) that achieves good performance. If task identity is given at test time, the correct subnetwork can be retrieved with minimal memory usage. If it is not provided, SupSup can infer the task using gradient-based optimization to find a linear superposition of learned supermasks that minimizes the output entropy. In practice we find that a single gradient step is often sufficient to identify the correct mask, even among 2500 tasks. We also showcase two promising extensions. First, SupSup models can be trained entirely without task identity information, as they may detect when they are uncertain about new data and allocate an additional supermask for the new training distribution. Second, the entire, growing set of supermasks can be stored in a constant-sized reservoir by implicitly encoding them as attractors in a fixed-sized Hopfield network.
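To make the mechanism concrete, the following is a minimal sketch of the two ideas described above: applying a per-task supermask to a fixed, randomly initialized layer, and inferring the task with a single entropy-gradient step over a superposition of masks. This is not the authors' released code; the layer sizes, the random stand-in masks, and the forward helper are illustrative assumptions.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical sizes for illustration.
num_tasks, in_dim, out_dim = 5, 20, 10

# Fixed, randomly initialized base weights (never trained).
W = torch.randn(out_dim, in_dim)

# One binary supermask per task. In SupSup these are learned per task;
# here they are random stand-ins just to show the shapes involved.
masks = (torch.rand(num_tasks, out_dim, in_dim) > 0.5).float()

def forward(x, mask):
    """Apply the fixed base weights gated by a (super)mask."""
    return x @ (W * mask).t()

# --- Task inference without task identity (one-shot entropy gradient) ---
x = torch.randn(1, in_dim)  # a test example of unknown task

# Coefficients over the learned supermasks, initialized uniformly.
alpha = torch.full((num_tasks,), 1.0 / num_tasks, requires_grad=True)

# Linear superposition of the masks, weighted by alpha.
mixed_mask = torch.einsum('k,kij->ij', alpha, masks)
logits = forward(x, mixed_mask)
probs = F.softmax(logits, dim=-1)
entropy = -(probs * probs.clamp_min(1e-12).log()).sum()

# One gradient step: the coefficient whose increase most reduces the
# output entropy (most negative gradient) indicates the inferred task.
entropy.backward()
inferred_task = torch.argmin(alpha.grad).item()
print("inferred task:", inferred_task)

With supermasks actually trained per task, the abstract reports that this single gradient step is often enough to pick out the correct mask even among thousands of tasks; with the random stand-in masks above, the snippet only demonstrates the mechanics, not that result.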


Related research

- Expert Gate: Lifelong Learning with a Network of Experts (11/18/2016)
  In this paper we introduce a model of lifelong learning, based on a Netw...

- Exclusive Supermask Subnetwork Training for Continual Learning (10/18/2022)
  Continual Learning (CL) methods mainly focus on avoiding catastrophic fo...

- Block Neural Network Avoids Catastrophic Forgetting When Learning Multiple Task (11/28/2017)
  In the present work we propose a Deep Feed Forward network architecture ...

- SketchOGD: Memory-Efficient Continual Learning (05/25/2023)
  When machine learning models are trained continually on a sequence of ta...

- Piggyback: Adding Multiple Tasks to a Single, Fixed Network by Learning to Mask (01/19/2018)
  This work presents a method for adding multiple tasks to a single, fixed...

- RVAE-LAMOL: Residual Variational Autoencoder to Enhance Lifelong Language Learning (05/22/2022)
  Lifelong Language Learning (LLL) aims to train a neural network to learn...

- Continual learning with hypernetworks (06/03/2019)
  Artificial neural networks suffer from catastrophic forgetting when they...
