Reproducibility Report: La-MAML: Look-ahead Meta Learning for Continual Learning

by Joel Joseph, et al.

The Continual Learning (CL) problem involves performing well on a sequence of tasks under limited compute and memory. Existing algorithms in the domain are either slow, restricted to the offline setting, or sensitive to hyper-parameters. La-MAML, an optimization-based meta-learning algorithm, claims to outperform other replay-based, prior-based, and meta-learning-based approaches. Following the MER paper [1], the standard metrics in the continual learning arena are Retained Accuracy (RA), the average accuracy over all tasks after training on the full sequence, and Backward Transfer-Interference (BTI), the average change in a task's accuracy between the moment it was learned and the end of training. La-MAML claims to improve on the state of the art in both metrics; this is the main claim of the paper, which we verify in this report.
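To make the two metrics concrete, the following sketch computes RA and BTI from a task-accuracy matrix. The matrix values are illustrative placeholders, not results from the paper; the definitions follow the common convention from the MER line of work, where `acc[i, j]` is the accuracy on task `j` evaluated right after training on task `i`.

```python
import numpy as np

# Hypothetical accuracy matrix for T = 3 tasks:
# acc[i, j] = accuracy on task j, measured right after
# finishing training on task i. Values are illustrative only.
acc = np.array([
    [0.90, 0.10, 0.10],
    [0.80, 0.85, 0.15],
    [0.75, 0.78, 0.88],
])
T = acc.shape[0]

# Retained Accuracy (RA): mean accuracy over all tasks after
# the final task has been learned (last row of the matrix).
ra = acc[-1].mean()

# Backward Transfer-Interference (BTI): average change in each
# earlier task's accuracy between when it was learned (diagonal)
# and the end of training (last row). Negative values indicate
# forgetting; values near zero or positive indicate retention.
bti = np.mean([acc[-1, j] - acc[j, j] for j in range(T - 1)])

print(f"RA  = {ra:.4f}")
print(f"BTI = {bti:.4f}")
```

On this toy matrix, RA is about 0.8033 and BTI is about -0.11, i.e. roughly 11 points of accuracy were lost on average on earlier tasks.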

