Meta Continual Learning via Dynamic Programming

08/05/2020
by R. Krishnan, et al.

Meta-continual learning algorithms seek to rapidly train a model when faced with similar tasks sampled sequentially from a task distribution. Although impressive strides have been made in this area, there is no theoretical framework that enables systematic analysis of key learning challenges such as generalization and catastrophic forgetting. We introduce a new theoretical framework for meta-continual learning based on dynamic programming, use it to analyze generalization and catastrophic forgetting, and establish conditions of optimality. We show that existing meta-continual learning methods can be derived from the proposed dynamic programming framework. Moreover, we develop a new dynamic-programming-based meta-continual learning approach that adopts a stochastic-gradient-driven alternating optimization method. On meta-continual learning benchmark data sets, our theoretically grounded approach performs better than or comparably to the purely empirical strategies adopted by existing state-of-the-art methods.
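
To make the "stochastic-gradient-driven alternating optimization" idea concrete, below is a minimal illustrative sketch. It is not the authors' algorithm: the linear-regression tasks, the squared-error loss, the anchor term lam * (w - phi), the replay memory standing in for the dynamic-programming cost-so-far term, and all names and step sizes (alpha, beta, lam) are assumptions made purely for illustration. The sketch alternates SGD steps between fast task-level parameters and slow meta-level parameters while tracking a cumulative loss over the task sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss(w, X, y):
    """Squared-error loss for one task (illustrative choice, not the paper's)."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def task_grad(w, X, y):
    """Gradient of task_loss with respect to w."""
    r = X @ w - y
    return X.T @ r / len(y)

def sample_task(w_star, n=50, noise=0.1):
    """Hypothetical stream of related linear-regression tasks sharing w_star."""
    X = rng.normal(size=(n, len(w_star)))
    y = X @ w_star + noise * rng.normal(size=n)
    return X, y

d = 5
w_star = rng.normal(size=d)        # shared structure across tasks
w = np.zeros(d)                    # fast (task-level) parameters
phi = np.zeros(d)                  # slow (meta-level) parameters

alpha, beta, lam = 0.1, 0.05, 1.0  # illustrative step sizes / anchor strength
memory = []                        # past-task losses approximate the cumulative
                                   # (DP-style) cost over the task sequence

for k in range(10):                # tasks arrive sequentially
    X, y = sample_task(w_star)
    memory.append((X, y))
    for _ in range(25):
        # Step 1: SGD on the current task plus an anchor pulling w toward the
        # meta-parameters, a crude stand-in for the cost-so-far term.
        g = task_grad(w, X, y) + lam * (w - phi)
        w -= alpha * g
        # Step 2: alternate with a meta-update that keeps the cumulative loss
        # over all tasks seen so far small, mitigating forgetting.
        g_meta = sum(task_grad(phi, Xm, ym) for Xm, ym in memory) / len(memory)
        phi -= beta * g_meta
    cum = np.mean([task_loss(w, Xm, ym) for Xm, ym in memory])
    print(f"task {k}: cumulative loss = {cum:.4f}")
```

In the paper, the cumulative cost is handled through a dynamic-programming recursion rather than the explicit replay buffer used here; the buffer is only a simple way to make the alternating two-timescale structure runnable in a few lines.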
