Fixed Design Analysis of Regularization-Based Continual Learning

03/17/2023
by Haoran Li et al.

We consider a continual learning (CL) problem with two linear regression tasks in the fixed design setting, where the feature vectors are assumed fixed and the labels are assumed to be random variables. We consider an ℓ_2-regularized CL algorithm, which computes an ordinary least squares (OLS) parameter to fit the first dataset, then computes a second parameter that fits the second dataset under an ℓ_2-regularization penalty on its deviation from the first parameter, and outputs the second parameter. For this algorithm, we provide tight bounds on the average risk over the two tasks. Our risk bounds reveal a provable trade-off between the forgetting and the intransigence of the ℓ_2-regularized CL algorithm: with a large regularization parameter, the algorithm's output forgets less information about the first task but is too intransigent to extract new information from the second task, and vice versa. Our results suggest that catastrophic forgetting can occur in CL with dissimilar tasks (under a precise similarity measure) and that a well-tuned ℓ_2-regularization can partially mitigate this issue by introducing intransigence.
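The two-step estimator described in the abstract has a simple closed form. The sketch below is a minimal NumPy illustration, not the paper's code: the synthetic data, the noise level, and the regularization parameter lam are our own assumptions. It computes the OLS fit w1 for the first task and then the ℓ_2-regularized fit w2 for the second task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed design: the feature matrices are treated as deterministic,
# while the labels carry random noise. All data here is synthetic.
d = 5
X1 = rng.standard_normal((20, d))   # task 1 design (assumed data)
X2 = rng.standard_normal((20, d))   # task 2 design (assumed data)
w_star = rng.standard_normal(d)     # ground-truth parameter (assumption)
y1 = X1 @ w_star + 0.1 * rng.standard_normal(20)
y2 = X2 @ w_star + 0.1 * rng.standard_normal(20)

# Step 1: OLS fit to the first dataset,
#   w1 = argmin_w ||X1 w - y1||^2.
w1, *_ = np.linalg.lstsq(X1, y1, rcond=None)

# Step 2: fit the second dataset with an l2 penalty toward w1,
#   w2 = argmin_w ||X2 w - y2||^2 + lam * ||w - w1||^2,
# whose closed form is w2 = (X2^T X2 + lam I)^{-1} (X2^T y2 + lam w1).
lam = 1.0  # hypothetical value; large lam -> less forgetting, more intransigence
w2 = np.linalg.solve(X2.T @ X2 + lam * np.eye(d), X2.T @ y2 + lam * w1)

print("output parameter:", w2)
```

Varying lam traces out the trade-off the paper analyzes: as lam grows, w2 stays close to w1 (little forgetting of task 1, but little adaptation to task 2), while lam near zero recovers the unconstrained fit to the second task.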


Related research

- 03/09/2020 · FoCL: Feature-Oriented Continual Learning for Generative Models: "In this paper, we propose a general framework in continual learning for ..."
- 06/16/2022 · Continual Learning with Guarantees via Weight Interval Constraints: "We introduce a new training paradigm that enforces interval constraints ..."
- 05/28/2019 · Uncertainty-based Continual Learning with Adaptive Regularization: "We introduce a new regularization-based continual learning algorithm, du..."
- 06/06/2023 · Continual Learning in Linear Classification on Separable Data: "We analyze continual learning on a sequence of separable linear classifi..."
- 04/11/2023 · Task Difficulty Aware Parameter Allocation Regularization for Lifelong Learning: "Parameter regularization or allocation methods are effective in overcomi..."
- 11/03/2022 · Continual Learning of Neural Machine Translation within Low Forgetting Risk Regions: "This paper considers continual learning of large-scale pretrained neural..."
- 05/19/2022 · How catastrophic can catastrophic forgetting be in linear regression?: "To better understand catastrophic forgetting, we study fitting an overpa..."
