
PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

11/15/2017
by Arun Mallya, et al.
University of Illinois at Urbana-Champaign

This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. By performing iterative pruning and network re-training, we are able to sequentially "pack" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task.
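To make the pruning-and-masking idea concrete, below is a minimal PyTorch sketch of the mechanism the abstract describes, not the authors' released code: binary masks protect the weights claimed by earlier tasks, the remaining free weights are magnitude-pruned to select a subset for the new task, and gradients of protected weights are zeroed during retraining. The helper names (prune_free_parameters, mask_gradients) and the prune_fraction parameter are illustrative assumptions.

    import torch

    def prune_free_parameters(weight, prior_masks, prune_fraction=0.5):
        # Hypothetical helper: union of masks claimed by earlier tasks;
        # those weights stay frozen and untouched.
        claimed = torch.zeros_like(weight, dtype=torch.bool)
        for m in prior_masks:
            claimed |= m
        free = ~claimed
        free_mag = weight[free].abs()
        k = int(prune_fraction * free_mag.numel())
        if k == 0:
            return free  # nothing to prune; keep all free weights
        threshold = free_mag.kthvalue(k).values
        # Keep the largest-magnitude free weights for the current task;
        # the pruned remainder is released for future tasks.
        return free & (weight.abs() > threshold)

    def mask_gradients(weight, current_mask):
        # During retraining of the current task, zero gradients outside
        # its mask so only the surviving free weights are updated.
        if weight.grad is not None:
            weight.grad.mul_(current_mask.to(weight.grad.dtype))

At inference time for task t, the network would be run with the union of the masks for tasks 1 through t applied, so each task sees exactly the parameters it was trained with and older tasks are unaffected by later additions.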

Related Research

01/19/2018 · Piggyback: Adding Multiple Tasks to a Single, Fixed Network by Learning to Mask
This work presents a method for adding multiple tasks to a single, fixed...

06/21/2021 · Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification
Lifelong learning capabilities are crucial for sentiment classifiers to ...

04/12/2019 · Incremental multi-domain learning with network latent tensor factorization
The prominence of deep learning, large amount of annotated data and incr...

07/23/2019 · Adaptive Compression-based Lifelong Learning
The problem of a deep learning model losing performance on a previously ...

02/02/2018 · Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization
Humans and most animals can learn new tasks without forgetting old ones....

12/24/2020 · Mixed-Privacy Forgetting in Deep Networks
We show that the influence of a subset of the training samples can be re...

05/02/2017 · Analyzing Knowledge Transfer in Deep Q-Networks for Autonomously Handling Multiple Intersections
We analyze how the knowledge to autonomously handle one type of intersec...