Incremental multi-domain learning with network latent tensor factorization

04/12/2019
by Adrian Bulat, et al.

The prominence of deep learning, large amounts of annotated data, and increasingly powerful hardware have made it possible to reach remarkable performance on supervised classification tasks, in many cases saturating the training sets. However, adapting a learned classifier to new domains remains a hard problem, for at least three reasons: (1) the domains and the tasks might be drastically different; (2) there might be very limited annotated data in the new domain; and (3) full training of a new model for each new task is prohibitive in terms of memory, due to the sheer number of parameters of deep networks. Instead, new tasks should be learned incrementally, building on prior knowledge from already learned tasks, and without catastrophic forgetting, i.e. without hurting performance on prior tasks. To our knowledge, this paper presents the first method for multi-domain/task learning without catastrophic forgetting that uses a fully tensorized architecture. Our main contribution is a method for multi-domain learning that models groups of identically structured blocks within a CNN as a single high-order tensor. We show that this joint modelling naturally leverages correlations across different layers and yields more compact representations for each new task/domain than previous methods, which adapt each layer separately. We apply the proposed method to the 10 datasets of the Visual Decathlon Challenge and show that it offers, on average, about a 7.5x reduction in the number of parameters, together with superior classification accuracy and Decathlon score. In particular, our method outperforms all prior work on the Visual Decathlon Challenge.
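The core idea, grouping identically structured blocks into one high-order weight tensor and factorizing it jointly, can be illustrated with a short sketch. The snippet below is not the authors' code: the block count, channel sizes, and Tucker ranks are hypothetical, and the per-task adaptation comment describes only one plausible scheme consistent with the abstract (factor matrices shared across tasks plus a small task-specific core).

```python
# Minimal sketch (assumed shapes/ranks, not the authors' implementation):
# stack the 3x3 conv weights of identically structured blocks into a single
# 5th-order tensor and represent it with a Tucker-style factorization.
import numpy as np

# Hypothetical: 8 residual blocks, each with a 3x3 conv mapping 64 -> 64 channels.
n_blocks, c_out, c_in, k = 8, 64, 64, 3
weights = np.random.randn(n_blocks, c_out, c_in, k, k)  # 5th-order weight tensor

# Tucker factors: one factor matrix per mode plus a small core tensor.
ranks = (4, 32, 32, 3, 3)
factors = [np.linalg.qr(np.random.randn(dim, r))[0]  # orthonormal columns
           for dim, r in zip(weights.shape, ranks)]
core = np.random.randn(*ranks)

# Reconstruct the full weight tensor via mode-n products (here as one einsum).
recon = np.einsum('abcde,ia,jb,kc,ld,me->ijklm', core, *factors)

# One plausible incremental-learning scheme (assumption, for illustration only):
# keep the factor matrices shared across tasks and learn only a small
# task-specific core, so old tasks are untouched and new ones stay cheap.
n_full = weights.size
n_factored = core.size + sum(f.size for f in factors)
print(f"full params: {n_full:,}  factored params: {n_factored:,}")
```

Stacking the blocks along an extra mode lets a single set of factor matrices be shared across layers, which is where the claimed compactness over per-layer adaptation comes from.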
