Mixture-of-Variational-Experts for Continual Learning

10/25/2021
by Heinke Hihn, et al.

One significant shortcoming of machine learning is the poor ability of models to solve new problems quickly and without forgetting previously acquired knowledge. To address this issue, the field of continual learning has emerged, systematically investigating learning protocols in which a model sequentially observes samples generated by a series of tasks. First, we propose an optimality principle that facilitates a trade-off between learning and forgetting. We derive this principle from an information-theoretic formulation of bounded rationality and show its connections to other continual learning methods. Second, based on this principle, we propose a neural network layer for continual learning, called Mixture-of-Variational-Experts (MoVE), that alleviates forgetting while enabling the beneficial transfer of knowledge to new tasks. Our experiments on variants of the MNIST and CIFAR10 datasets demonstrate the competitive performance of MoVE layers compared to state-of-the-art approaches.
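To make the idea concrete, here is a minimal PyTorch sketch of a mixture-of-experts layer in the spirit described by the abstract: each expert keeps a Gaussian posterior over its weights, a gating network mixes expert outputs, and the KL divergence to a standard-normal prior acts as the information cost traded off against the task loss (roughly the standard bounded-rational objective, maximize E[U] - (1/beta)*KL). The class names (VariationalExpert, MoVELayer), the prior choice, and the exact objective are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a mixture-of-variational-experts layer.
# Assumptions: Gaussian weight posteriors, soft gating, standard-normal prior.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalExpert(nn.Module):
    """Linear expert whose weights are sampled from a learned Gaussian."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.w_logvar = nn.Parameter(torch.full((out_dim, in_dim), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):
        # Reparameterization trick: sample weights, then apply them.
        eps = torch.randn_like(self.w_mu)
        w = self.w_mu + eps * torch.exp(0.5 * self.w_logvar)
        return F.linear(x, w, self.bias)

    def kl_to_standard_normal(self):
        # KL(q(w) || N(0, I)): the information cost of this expert.
        return 0.5 * torch.sum(
            torch.exp(self.w_logvar) + self.w_mu ** 2 - 1.0 - self.w_logvar
        )


class MoVELayer(nn.Module):
    """Soft mixture over variational experts with a learned gating network."""

    def __init__(self, in_dim, out_dim, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [VariationalExpert(in_dim, out_dim) for _ in range(n_experts)]
        )
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x):
        probs = F.softmax(self.gate(x), dim=-1)               # (batch, E)
        outs = torch.stack([e(x) for e in self.experts], -1)  # (batch, out, E)
        y = torch.einsum("boe,be->bo", outs, probs)
        kl = sum(e.kl_to_standard_normal() for e in self.experts)
        return y, kl


# Usage: weight the KL term by a trade-off coefficient (beta, assumed here
# to be 1e-3) and add it to the task loss, so deviating far from the prior
# is penalized, which limits forgetting.
layer = MoVELayer(in_dim=784, out_dim=10, n_experts=4)
x = torch.randn(32, 784)
logits, kl = layer(x)
loss = F.cross_entropy(logits, torch.randint(0, 10, (32,))) + 1e-3 * kl
loss.backward()
```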


Related research:

- Hierarchically Structured Task-Agnostic Continual Learning (11/14/2022)
- Gradient Episodic Memory for Continual Learning (06/26/2017)
- Continual Learning from the Perspective of Compression (06/26/2020)
- Center Loss Regularization for Continual Learning (10/21/2021)
- More Is Better: An Analysis of Instance Quantity/Quality Trade-off in Rehearsal-based Continual Learning (05/28/2021)
- Enabling Continual Learning with Differentiable Hebbian Plasticity (06/30/2020)
- Transfer without Forgetting (06/01/2022)
