Boosting a Model Zoo for Multi-Task and Continual Learning

06/06/2021
by Rahul Ramesh, et al.

Leveraging data from multiple tasks, either all at once or incrementally, to learn one model is the idea that lies at the heart of multi-task and continual learning methods. Ideally, such a model predicts each task more accurately than a model trained on that task in isolation. Using tools from statistical learning theory, we show (i) that tasks can compete for capacity, i.e., including a particular task can degrade the accuracy on another task, and (ii) that the ideal set of tasks to train together in order to perform well on a given task differs from task to task. We develop methods to discover such competition in typical benchmark datasets, which suggests that the prevalent practice of training on all tasks together leaves performance on the table. This motivates our "Model Zoo", a boosting-based algorithm that builds an ensemble of small models, each trained on only a subset of the tasks. Model Zoo achieves large gains in prediction accuracy over state-of-the-art methods across a variety of existing multi-task and continual learning benchmarks, as well as more challenging ones of our own creation. We also show that even a model trained independently on all tasks outperforms all existing multi-task and continual learning methods.
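The abstract only describes Model Zoo at a high level. Below is a minimal, illustrative sketch of what such a boosting-style loop over task subsets could look like; the helper names (train_small_model, evaluate) and the rule of grouping the currently worst-predicted tasks together are assumptions made for illustration, not the paper's exact procedure.

```python
# Illustrative sketch of a boosting-style "Model Zoo" loop.
# Assumptions not taken from the paper: train_small_model(), evaluate(),
# and the rule of training a new model on the worst-predicted task subset.
import random
from statistics import mean

def train_small_model(task_subset, data):
    """Placeholder: stand-in for training a small network on the pooled data
    of task_subset; here it only records the subset and a random score."""
    return {"tasks": set(task_subset), "score": random.random()}

def evaluate(ensemble, task, data):
    """Placeholder: average 'score' of the zoo members trained on `task`."""
    relevant = [m["score"] for m in ensemble if task in m["tasks"]]
    return mean(relevant) if relevant else 0.0

def model_zoo(tasks, data, rounds=5, subset_size=3):
    ensemble = []
    # Start with one small model per task (an isolated, per-task baseline).
    for t in tasks:
        ensemble.append(train_small_model([t], data))
    for _ in range(rounds):
        # Boosting-like step (illustrative rule): measure how well the current
        # ensemble does on every task, then train a new small model on the
        # subset of tasks it currently predicts worst.
        accuracies = {t: evaluate(ensemble, t, data) for t in tasks}
        worst = sorted(tasks, key=lambda t: accuracies[t])[:subset_size]
        ensemble.append(train_small_model(worst, data))
    return ensemble

if __name__ == "__main__":
    zoo = model_zoo(tasks=list(range(10)), data=None)
    print(len(zoo), "small models in the zoo")
```

In this sketch, predictions for a task would come from averaging the zoo members trained on that task (see evaluate); the actual subset-selection and weighting rules in the paper may differ.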


Related research

10/26/2022
Is Multi-Task Learning an Upper Bound for Continual Learning?
Continual and multi-task learning are common machine learning approaches...

02/25/2019
ORACLE: Order Robust Adaptive Continual LEarning
The order of the tasks a continual learning model encounters may have la...

11/24/2020
Generalized Variational Continual Learning
Continual learning deals with training models on new tasks and datasets ...

07/09/2021
Behavior Self-Organization Supports Task Inference for Continual Robot Learning
Recent advances in robot learning have enabled robots to become increasi...

12/31/2020
Continual Learning in Task-Oriented Dialogue Systems
Continual learning in task-oriented dialogue systems can allow us to add...

05/23/2023
Continual Learning on Dynamic Graphs via Parameter Isolation
Many real-world graph learning tasks require handling dynamic graphs whe...

12/09/2022
Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models
In this paper, we present a simple yet surprisingly effective technique ...
