Synergies Between Disentanglement and Sparsity: a Multi-Task Learning Perspective

11/26/2022
by Sébastien Lachapelle et al.

Although disentangled representations are often said to be beneficial for downstream tasks, current empirical and theoretical understanding is limited. In this work, we provide evidence that disentangled representations coupled with sparse base-predictors improve generalization. In the context of multi-task learning, we prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations. Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem. Finally, we explore a meta-learning version of this algorithm based on group Lasso multiclass SVM base-predictors, for which we derive a tractable dual formulation. It obtains competitive results on standard few-shot classification benchmarks, while each task uses only a fraction of the learned representations.
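To make the bi-level idea concrete, here is a minimal sketch, assuming a PyTorch setup: a shared encoder is updated in the outer loop, while each task fits a sparse linear head via proximal gradient steps on a group-Lasso penalty (one group per latent dimension) in the inner loop, so only a fraction of the latents is used per task. All names and hyperparameters (inner_fit, outer_step, lam, n_steps) are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of a sparsity-promoting bi-level scheme: sparse task-specific
# heads (inner problem) on top of a shared learned representation
# (outer problem). Not the authors' released code.

import torch
import torch.nn.functional as F


def inner_fit(z, y, n_classes, lam=0.1, n_steps=100, lr=0.1):
    """Fit a task-specific linear head on frozen features z with
    proximal gradient steps on a group-Lasso penalty, which zeroes
    out entire latent dimensions of the head."""
    W = torch.zeros(z.shape[1], n_classes, requires_grad=True)
    for _ in range(n_steps):
        loss = F.cross_entropy(z @ W, y)
        grad, = torch.autograd.grad(loss, W)
        with torch.no_grad():
            W -= lr * grad
            # Proximal step: shrink each row (one latent dimension's
            # weights across all classes) toward zero by lr * lam.
            norms = W.norm(dim=1, keepdim=True).clamp_min(1e-12)
            W *= (1 - lr * lam / norms).clamp_min(0.0)
    return W.detach()


def outer_step(encoder, opt_enc, tasks, n_classes, lam=0.1):
    """One outer update of the shared encoder: sparse heads are fit on
    each task's support set, then the encoder is updated so they
    generalize to the query set (first-order: heads are treated as
    constants with respect to the encoder parameters)."""
    opt_enc.zero_grad()
    total = 0.0
    for (x_s, y_s), (x_q, y_q) in tasks:
        W = inner_fit(encoder(x_s).detach(), y_s, n_classes, lam)
        total = total + F.cross_entropy(encoder(x_q) @ W, y_q)
    total.backward()
    opt_enc.step()
```

In the meta-learning variant described in the abstract, the inner problem is instead a group Lasso multiclass SVM, whose dual the authors derive in tractable form; one could then replace the proximal loop above with an exact dual solver rather than iterative gradient steps.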


Related research

10/07/2021
On the relationship between disentanglement and multi-task learning
One of the main arguments behind studying disentangled representations i...

05/18/2020
Efficient Image Gallery Representations at Scale Through Multi-Task Learning
Image galleries provide a rich source of diverse information about a pro...

06/16/2021
Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation
Multi-task learning (MTL) aims to improve the generalization of several ...

04/07/2019
Meta-Learning with Differentiable Convex Optimization
Many meta-learning approaches for few-shot learning rely on simple base ...

09/26/2013
High-dimensional Joint Sparsity Random Effects Model for Multi-task Learning
Joint sparsity regularization in multi-task learning has attracted much ...

09/11/2020
Towards Interpretable Multi-Task Learning Using Bilevel Programming
Interpretable Multi-Task Learning can be expressed as learning a sparse ...

05/20/2020
Reducing Overlearning through Disentangled Representations by Suppressing Unknown Tasks
Existing deep learning approaches for learning visual features tend to o...