TAME: Task Agnostic Continual Learning using Multiple Experts

10/08/2022
by Haoran Zhu, et al.

The goal of lifelong learning is to continuously learn from non-stationary distributions, where the non-stationarity is typically imposed by a sequence of distinct tasks. Prior works have mostly considered idealistic settings in which task identities are known, at least at training time. In this paper we focus on a fundamentally harder, so-called task-agnostic setting, where the task identities are not known and the learning machine needs to infer them from the observations. Our algorithm, which we call TAME (Task-Agnostic continual learning using Multiple Experts), automatically detects shifts in the data distribution and switches between task expert networks in an online manner. At training time, the strategy for switching between tasks hinges on an extremely simple observation: the onset of each new task is marked by a statistically significant deviation in the value of the loss function. At inference time, the switching between experts is governed by a selector network that forwards each test sample to its relevant expert network. The selector network is trained on a small subset of data drawn uniformly at random. We control the growth of the task expert networks as well as the selector network by employing online pruning. Our experimental results show the efficacy of our approach on benchmark continual learning datasets, outperforming previous task-agnostic methods and even techniques that admit task identities at both training and testing, while using a comparable model size.
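The boundary-detection idea described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): it flags a new task whenever the current loss deviates from the recent loss history by more than a chosen number of standard deviations. The window size, z-score threshold, and warm-up length are assumptions for the sketch.

```python
from collections import deque
import statistics


class TaskShiftDetector:
    """Hypothetical sketch of loss-based task-boundary detection:
    flag a new task when the current loss lies more than `z_thresh`
    standard deviations above the mean of the recent loss history."""

    def __init__(self, window=50, z_thresh=3.0, min_history=10):
        self.z_thresh = z_thresh
        self.min_history = min_history
        self.history = deque(maxlen=window)  # recent per-batch losses

    def update(self, loss):
        """Record `loss`; return True if it marks a significant shift."""
        shifted = False
        if len(self.history) >= self.min_history:
            mean = statistics.fmean(self.history)
            std = statistics.pstdev(self.history)
            if std > 0 and (loss - mean) / std > self.z_thresh:
                shifted = True
                self.history.clear()  # restart statistics for the new task
        self.history.append(loss)
        return shifted
```

In a training loop, `update` would be called with each batch loss; a `True` return would trigger freezing the current expert and spawning (or switching to) a new one.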

Related research

04/07/2020  Class-Agnostic Continual Learning of Alternating Languages and Domains
Continual Learning has been often framed as the problem of training a mo...

08/05/2022  Task-agnostic Continual Hippocampus Segmentation for Smooth Population Shifts
Most continual learning methods are validated in settings where task bou...

10/01/2020  Task Agnostic Continual Learning Using Online Variational Bayes with Fixed-Point Updates
Background: Catastrophic forgetting is the notorious vulnerability of ne...

05/28/2022  Task-Agnostic Continual Reinforcement Learning: In Praise of a Simple Baseline
We study task-agnostic continual reinforcement learning (TACRL) in which...

11/14/2022  Hierarchically Structured Task-Agnostic Continual Learning
One notable weakness of current machine learning algorithms is the poor ...

06/19/2020  Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes
Continuously learning to solve unseen tasks with limited experience has ...
