Efficient parametrization of multi-domain deep neural networks

03/27/2018
by Sylvestre-Alvise Rebuffi, et al.

A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain. Recently, inspired by the successes of transfer learning, several authors have proposed instead to learn universal, fixed feature extractors that, used as the first stage of any deep network, work well for several tasks and domains simultaneously. Nevertheless, such universal features are still somewhat inferior to specialized networks. To overcome this limitation, in this paper we propose to consider instead universal parametric families of neural networks, which still contain specialized problem-specific models, but which differ only in a small number of parameters. We study different designs for such parametrizations, including series and parallel residual adapters, joint adapter compression, and parameter allocations, and empirically identify the ones that yield the highest compression. We show that, in order to maximize performance, it is necessary to adapt both shallow and deep layers of a deep network, but the required changes are very small. We also show that these universal parametrizations are very effective for transfer learning, where they outperform traditional fine-tuning techniques.
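
Below is a minimal PyTorch sketch of the parallel residual adapter idea described in the abstract: a shared convolution kept frozen across domains, modulated by a small domain-specific 1x1 convolution whose output is added in parallel. The class name, layer sizes, and freezing scheme are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ParallelResidualAdapter(nn.Module):
    """Illustrative sketch (hypothetical, not the paper's code): a shared 3x3
    convolution frozen across domains, plus a small domain-specific 1x1
    convolution added in parallel."""

    def __init__(self, channels):
        super().__init__()
        # Shared backbone filters: trained once, then frozen for all domains.
        self.shared_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                     padding=1, bias=False)
        for p in self.shared_conv.parameters():
            p.requires_grad = False
        # Domain-specific adapter: 1x1 filters, roughly 1/9 of the parameters
        # of the 3x3 filter bank they modulate.
        self.adapter = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        # Parallel configuration: the adapter response is summed with the
        # frozen shared response before normalization.
        return self.bn(self.shared_conv(x) + self.adapter(x))


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    block = ParallelResidualAdapter(64)
    print(block(x).shape)  # torch.Size([2, 64, 32, 32])
```

In this parallel arrangement each new domain adds only the 1x1 adapter and batch-norm parameters on top of the shared backbone; a series adapter, by contrast, would compose the 1x1 convolution after the shared one rather than alongside it.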

