Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation

06/16/2021
by   Haoxiang Wang, et al.

Multi-task learning (MTL) aims to improve the generalization of several related tasks by learning them jointly. In contrast, beyond the joint training scheme, modern meta-learning also handles unseen tasks with limited labels at test time, with the goal of fast adaptation to them. Despite the subtle difference between MTL and meta-learning in the problem formulation, both learning paradigms share the same insight: the structure shared among existing training tasks can lead to better generalization and adaptation. In this paper, we take an important step toward understanding the close connection between these two learning paradigms, through both theoretical analysis and empirical investigation. Theoretically, we first demonstrate that MTL shares the same optimization formulation as a class of gradient-based meta-learning (GBML) algorithms. We then prove that for over-parameterized neural networks with sufficient depth, the predictive functions learned by MTL and GBML are close; in particular, the two models make similar predictions on the same unseen task. Empirically, we corroborate these theoretical findings by showing that, with proper implementation, MTL is competitive with state-of-the-art GBML algorithms on a set of few-shot image classification benchmarks. Since existing GBML algorithms often involve costly second-order bi-level optimization, our first-order MTL method is an order of magnitude faster on large-scale datasets such as mini-ImageNet. We believe this work helps bridge the gap between the two learning paradigms and provides a computationally efficient alternative to GBML that still supports fast task adaptation.
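
To make the computational contrast concrete, the following PyTorch-style sketch (a minimal illustration, not the authors' released implementation) compares one multi-task training step, which needs only first-order gradients of the joint loss, with one MAML-style GBML step, whose outer update differentiates through the inner adaptation and therefore requires second-order derivatives. The model, task batches, and loss used here are hypothetical placeholders.

```python
# Hypothetical sketch contrasting a first-order MTL step with a
# second-order MAML-style (GBML) step. `model`, `task_batches`, and the
# cross-entropy loss are illustrative placeholders, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def mtl_step(model: nn.Module, task_batches, optimizer):
    """One multi-task step: sum the per-task losses and take a single
    first-order gradient step on the shared parameters."""
    optimizer.zero_grad()
    loss = sum(F.cross_entropy(model(x), y) for x, y in task_batches)
    loss.backward()                      # first-order gradients only
    optimizer.step()


def maml_step(model: nn.Module, task_batches, optimizer, inner_lr=0.01):
    """One MAML-style step: adapt to each task in an inner loop, then
    backpropagate the query loss through the adaptation (second-order)."""
    optimizer.zero_grad()
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for (x_s, y_s), (x_q, y_q) in task_batches:   # support / query split
        support_loss = F.cross_entropy(
            torch.func.functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(
            support_loss, tuple(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        meta_loss = meta_loss + F.cross_entropy(
            torch.func.functional_call(model, adapted, (x_q,)), y_q)
    meta_loss.backward()                 # differentiates through inner update
    optimizer.step()
```

Because `mtl_step` never retains the inner computation graph, each iteration is considerably cheaper in time and memory, which is the source of the speedup described above; at test time, the MTL-trained model can still adapt to an unseen task by fine-tuning on its few labeled examples.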


