
Multiple Modes for Continual Learning

09/29/2022
by Siddhartha Datta, et al.

Adapting model parameters to incoming streams of data is a crucial factor in the scalability of deep learning. Interestingly, prior continual learning strategies in online settings inadvertently anchor their updated parameters to a local parameter subspace in order to remember old tasks, or else drift away from that subspace and forget. From this observation, we formulate a trade-off between constructing multiple parameter modes and allocating tasks per mode. Mode-Optimized Task Allocation (MOTA), our contributed adaptation strategy, trains multiple modes in parallel and then optimizes task allocation per mode. We empirically demonstrate improvements over baseline continual learning strategies and across varying distribution shifts, namely sub-population, domain, and task shift.
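
The abstract only sketches MOTA at a high level, so the snippet below is a toy illustration of the general idea, maintaining several independent parameter modes and assigning each incoming task to one of them before fine-tuning, rather than the paper's actual method. The MultiModeLearner class, the greedy lowest-loss allocation rule, and all hyperparameters here are illustrative assumptions, not details taken from the paper.

# Toy sketch: keep several parameter "modes" (independent model copies)
# and assign each incoming task to one mode before fine-tuning on it.
# The greedy lowest-loss allocation below is a stand-in heuristic, not the
# allocation optimization described in the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_mode() -> nn.Module:
    """One parameter mode: an independent small classifier."""
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))


class MultiModeLearner:
    def __init__(self, num_modes: int = 3):
        base = make_mode()
        # Start all modes from the same initialization, then let them drift apart.
        self.modes = [copy.deepcopy(base) for _ in range(num_modes)]

    @torch.no_grad()
    def allocate(self, x: torch.Tensor, y: torch.Tensor) -> int:
        """Pick the mode whose current parameters fit the new task best."""
        losses = [F.cross_entropy(m(x), y).item() for m in self.modes]
        return min(range(len(self.modes)), key=losses.__getitem__)

    def learn_task(self, x: torch.Tensor, y: torch.Tensor, steps: int = 50) -> int:
        """Allocate the task to a mode, then fine-tune only that mode."""
        k = self.allocate(x, y)
        opt = torch.optim.SGD(self.modes[k].parameters(), lr=0.1)
        for _ in range(steps):
            opt.zero_grad()
            F.cross_entropy(self.modes[k](x), y).backward()
            opt.step()
        return k


if __name__ == "__main__":
    learner = MultiModeLearner(num_modes=3)
    for task_id in range(5):
        x = torch.randn(128, 32)          # synthetic task data
        y = torch.randint(0, 10, (128,))  # synthetic labels
        mode = learner.learn_task(x, y)
        print(f"task {task_id} -> mode {mode}")

In this toy setup the allocation step is a single greedy choice per task; the paper instead treats task allocation per mode as an optimization problem, which this sketch does not reproduce.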
