Reproducibility Report: La-MAML: Look-ahead Meta Learning for Continual Learning

02/11/2021
by Joel Joseph, et al.

The Continual Learning (CL) problem involves performing well on a sequence of tasks under limited compute. Current algorithms in the domain are either slow, offline, or sensitive to hyper-parameters. La-MAML, an optimization-based meta-learning algorithm, claims to outperform other replay-based, prior-based, and meta-learning-based approaches. Following the MER paper [1], the metrics used to measure performance in continual learning are Retained Accuracy (RA) and Backward Transfer-Interference (BTI). La-MAML claims to perform better on these metrics than the state of the art in the domain. This is the main claim of the paper, which we verify in this report.
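The report does not spell out how RA and BTI are computed. As a minimal sketch, assuming the standard GEM/MER-style definitions over a task-accuracy matrix (a hypothetical helper, not code from the paper), the two metrics can be derived as follows:

```python
import numpy as np

def ra_and_bti(acc):
    """Compute Retained Accuracy (RA) and Backward Transfer-Interference (BTI)
    from a T x T accuracy matrix, where acc[i, j] is the accuracy on task j
    measured right after training on task i.

    Assumption: the common GEM/MER-style formulation --
      RA  = mean accuracy over all tasks after the final task is learned;
      BTI = mean change on each earlier task between when it was first
            learned (acc[j, j]) and the end of training (acc[-1, j]).
    Negative BTI indicates forgetting.
    """
    acc = np.asarray(acc, dtype=float)
    T = acc.shape[0]
    ra = acc[-1].mean()
    bti = np.mean([acc[-1, j] - acc[j, j] for j in range(T - 1)])
    return ra, bti

# Toy 3-task example: row i holds accuracies after training on task i.
acc = [[0.90, 0.10, 0.05],
       [0.80, 0.85, 0.10],
       [0.75, 0.80, 0.88]]
ra, bti = ra_and_bti(acc)  # RA = 0.81, BTI = -0.10
```

Under these definitions, a method can trade the two off: high RA with strongly negative BTI means good final performance achieved despite forgetting, which is why the paper reports both.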

