
Piggyback: Adding Multiple Tasks to a Single, Fixed Network by Learning to Mask

by Arun Mallya, et al.

This work presents a method for adding multiple tasks to a single, fixed deep neural network without affecting performance on already learned tasks. Building upon concepts from network quantization and sparsification, we learn binary masks that "piggyback" on an existing network, i.e., are applied to its unmodified weights, to provide good performance on a new task. These masks are learned in an end-to-end differentiable fashion and incur a low overhead of 1 bit per network parameter, per task. Even though the underlying network is fixed, the ability to mask individual weights allows for the learning of a large number of filters. We show improved performance on a variety of classification tasks, including those with large domain shifts from the natural images of ImageNet. Unlike prior work, we can augment the capabilities of a network without suffering from catastrophic forgetting or competition between tasks, while incurring the least overhead per added task. We demonstrate the applicability of our method to multiple architectures and obtain accuracies comparable with individual networks trained per task.
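The core mechanism can be sketched in a few lines of PyTorch: real-valued mask scores are thresholded to a binary mask in the forward pass, the mask gates the frozen backbone weights, and gradients flow back to the scores via a straight-through estimator. This is a minimal sketch, not the authors' released code; the class names (`Binarizer`, `PiggybackLinear`), the threshold value, and the score initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Binarizer(torch.autograd.Function):
    """Threshold real-valued mask scores to {0, 1} in the forward pass."""
    @staticmethod
    def forward(ctx, scores, threshold):
        return (scores >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass gradients unchanged to the scores.
        return grad_output, None

class PiggybackLinear(nn.Module):
    """A frozen linear layer whose weights are gated by a learned binary mask."""
    def __init__(self, weight, bias=None, threshold=0.005):
        super().__init__()
        # Backbone weights are fixed buffers: only the mask scores are trained.
        self.register_buffer("weight", weight)
        self.register_buffer(
            "bias", bias if bias is not None else torch.zeros(weight.size(0))
        )
        # Real-valued scores, initialized above the threshold so the mask
        # starts as all-ones (the network initially behaves unmodified).
        self.scores = nn.Parameter(torch.full_like(weight, 0.01))
        self.threshold = threshold

    def forward(self, x):
        mask = Binarizer.apply(self.scores, self.threshold)  # 1 bit/weight at test time
        return nn.functional.linear(x, self.weight * mask, self.bias)
```

During training only `scores` receives gradients, so each task adds just its binary mask (1 bit per backbone parameter) on top of the shared, fixed weights.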



PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

This paper presents a method for adding multiple tasks to a single deep ...

Ternary Feature Masks: continual learning without any forgetting

In this paper, we propose an approach without any forgetting to continua...

ImpressLearn: Continual Learning via Combined Task Impressions

This work proposes a new method to sequentially train a deep neural netw...

Adding New Tasks to a Single Network with Weight Transformations using Binary Masks

Visual recognition algorithms are required today to exhibit adaptive abi...

Supermasks in Superposition

We present the Supermasks in Superposition (SupSup) model, capable of se...

Incremental multi-domain learning with network latent tensor factorization

The prominence of deep learning, large amount of annotated data and incr...

Explanatory Masks for Neural Network Interpretability

Neural network interpretability is a vital component for applications ac...