Piggyback: Adding Multiple Tasks to a Single, Fixed Network by Learning to Mask

01/19/2018
by Arun Mallya, et al.

This work presents a method for adding multiple tasks to a single, fixed deep neural network without affecting performance on already learned tasks. Building upon concepts from network quantization and sparsification, we learn binary masks that "piggyback" on an existing network: applied to its unmodified weights, they provide good performance on a new task. These masks are learned in an end-to-end differentiable fashion, and incur a low overhead of 1 bit per network parameter, per task. Even though the underlying network is fixed, the ability to mask individual weights allows for the learning of a large number of filters. We show improved performance on a variety of classification tasks, including those with large domain shifts from the natural images of ImageNet. Unlike prior work, we can augment the capabilities of a network without suffering from catastrophic forgetting or competition between tasks, while incurring the lowest overhead per added task. We demonstrate the applicability of our method to multiple architectures, and obtain accuracies comparable with individual networks trained per task.
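To make the masking mechanism concrete, the sketch below shows one way it could be implemented in PyTorch: frozen backbone weights are gated elementwise by a binary mask obtained from real-valued scores through a hard threshold, and a straight-through gradient estimator keeps the scores trainable end to end. This is a minimal illustration under stated assumptions, not the authors' released code; the class names, the threshold of 5e-3, and the score initialization of 1e-2 are hypothetical choices.

import torch
import torch.nn as nn


class Binarize(torch.autograd.Function):
    """Hard threshold in the forward pass, straight-through gradient in the backward pass."""

    @staticmethod
    def forward(ctx, scores, threshold):
        return (scores >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient to the real-valued
        # scores unchanged; the non-tensor threshold gets no gradient.
        return grad_output, None


class PiggybackLinear(nn.Module):
    """Linear layer with frozen backbone weights gated by a per-task learnable binary mask."""

    def __init__(self, backbone_linear, threshold=5e-3, init=1e-2):
        super().__init__()
        self.weight = backbone_linear.weight
        self.weight.requires_grad_(False)  # the shared backbone stays fixed
        self.bias = backbone_linear.bias
        if self.bias is not None:
            self.bias.requires_grad_(False)
        # Real-valued mask scores, initialized above the threshold so training
        # starts from the full backbone (all weights enabled).
        self.mask_scores = nn.Parameter(torch.full_like(self.weight, init))
        self.threshold = threshold

    def forward(self, x):
        mask = Binarize.apply(self.mask_scores, self.threshold)
        return nn.functional.linear(x, self.weight * mask, self.bias)

After training only the mask scores on the new task, the real-valued scores can be discarded and just the thresholded binary mask stored, which is where the 1-bit-per-parameter-per-task overhead comes from: a single shared backbone plus one bitmask per task.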


Related research:

11/15/2017  PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning
This paper presents a method for adding multiple tasks to a single deep ...

08/26/2023  Differentiable Weight Masks for Domain Transfer
One of the major drawbacks of deep learning models for computer vision h...

01/23/2020  Ternary Feature Masks: continual learning without any forgetting
In this paper, we propose an approach without any forgetting to continua...

10/05/2022  ImpressLearn: Continual Learning via Combined Task Impressions
This work proposes a new method to sequentially train a deep neural netw...

05/28/2018  Adding New Tasks to a Single Network with Weight Transformations using Binary Masks
Visual recognition algorithms are required today to exhibit adaptive abi...

06/26/2020  Supermasks in Superposition
We present the Supermasks in Superposition (SupSup) model, capable of se...

11/15/2019  Explanatory Masks for Neural Network Interpretability
Neural network interpretability is a vital component for applications ac...
