Boosting Binary Masks for Multi-Domain Learning through Affine Transformations

03/25/2021
by   Massimiliano Mancini, et al.

In this work, we present a new algorithm for multi-domain learning. Given a pretrained architecture and a set of visual domains received sequentially, the goal of multi-domain learning is to produce a single model that performs the task in all domains. Recent works showed that this problem can be addressed by masking the internal weights of a given original conv-net through learned binary variables. We provide a general formulation of binary-mask-based models for multi-domain learning through affine transformations of the original network parameters. Our formulation achieves significantly higher levels of adaptation to new domains, reaching performance comparable to domain-specific models while requiring slightly more than 1 bit per network parameter per additional domain. Experiments on two popular benchmarks showcase the power of our approach, achieving performance close to state-of-the-art methods on the Visual Decathlon Challenge.
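The idea behind the formulation can be sketched in a few lines: the pretrained weights W of each layer stay frozen, and every new domain is encoded by a learned binary mask M (binarized with a straight-through estimator, hence roughly 1 bit per weight) plus a handful of scalars applying an affine transformation to the masked weights. The PyTorch snippet below is a minimal illustrative sketch, not the authors' code: the class and parameter names (AffineMaskedConv2d, k0, k1) and the specific affine form W * (k0 + k1 * M) are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Threshold real-valued logits to {0, 1}; pass gradients straight through."""

    @staticmethod
    def forward(ctx, logits):
        return (logits >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through estimator


class AffineMaskedConv2d(nn.Module):
    """Illustrative sketch: frozen pretrained conv weights adapted per domain
    by an affine transformation of a learned binary mask (bias omitted for brevity)."""

    def __init__(self, pretrained_conv: nn.Conv2d):
        super().__init__()
        # Frozen weights of the original (pretrained) network.
        self.weight = nn.Parameter(pretrained_conv.weight.data.clone(), requires_grad=False)
        self.stride = pretrained_conv.stride
        self.padding = pretrained_conv.padding
        # Domain-specific parameters: real-valued mask logits (binarized at
        # forward time) plus two scalars of the affine transformation.
        self.mask_logits = nn.Parameter(1e-2 * torch.randn_like(self.weight))
        self.k0 = nn.Parameter(torch.zeros(1))
        self.k1 = nn.Parameter(torch.ones(1))

    def forward(self, x):
        m = BinarizeSTE.apply(self.mask_logits)      # binary mask in {0, 1}
        w = self.weight * (self.k0 + self.k1 * m)    # affine transform of the masked weights
        return F.conv2d(x, w, stride=self.stride, padding=self.padding)
```

At test time only the binarized mask and the two scalars need to be stored per domain, which is where the "slightly more than 1 bit per parameter per additional domain" overhead comes from.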


Related research

05/28/2018 · Adding New Tasks to a Single Network with Weight Trasformations using Binary Masks
Visual recognition algorithms are required today to exhibit adaptive abi...

08/28/2020 · Learning to Balance Specificity and Invariance for In and Out of Domain Generalization
We introduce Domain-specific Masks for Generalization, a model for impro...

05/25/2021 · Affine Transport for Sim-to-Real Domain Adaptation
Sample-efficient domain adaptation is an open problem in robotics. In th...

01/14/2022 · Domain-shift adaptation via linear transformations
A predictor, f_A : X → Y, learned with data from a source domain (A) mig...

05/28/2020 · Disentanglement Then Reconstruction: Learning Compact Features for Unsupervised Domain Adaptation
Recent works in domain adaptation always learn domain invariant features...

02/06/2013 · Learning Belief Networks in Domains with Recursively Embedded Pseudo Independent Submodels
A pseudo independent (PI) model is a probabilistic domain model (PDM) wh...

07/20/2022 · Doge Tickets: Uncovering Domain-general Language Models by Playing Lottery Tickets
Over-parameterized models, typically pre-trained language models (LMs), ...
