Overcoming General Knowledge Loss with Selective Parameter Finetuning

08/23/2023
by Wenxuan Zhang, et al.

Foundation models encompass an extensive knowledge base and offer remarkable transferability. However, this knowledge becomes outdated or insufficient over time. The challenge lies in updating foundation models to accommodate novel information while retaining their original capabilities. In this paper, we present a novel approach to continual model updates that makes localized modifications to a small subset of parameters. Guided by insights from prior analyses of foundation models, we first localize a specific layer for model refinement and then introduce an importance scoring mechanism that updates only the most crucial weights. Our method is exhaustively evaluated on foundational vision-language models, measuring its efficacy both in learning new information and in preserving pre-established knowledge across a diverse spectrum of continual learning tasks, including Aircraft, Birdsnap, CIFAR-100, CUB, Cars, and GTSRB. The results show that our method improves on existing continual learning methods by 0.5%-10% on average, and reduces the loss of pre-trained knowledge from around 5% to 0.97%. Comprehensive ablation studies substantiate our design choices, shedding light on how each component contributes to controllably learning new knowledge and mitigating forgetting of pre-trained knowledge.
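The core mechanism described here (pick one layer, score its weights by importance, and update only the top-scoring fraction) can be sketched as follows. This is a minimal PyTorch sketch, assuming a gradient-magnitude importance score |weight * grad| and a hypothetical top_fraction threshold; the paper's exact layer-selection and scoring rules may differ.

    import torch

    def importance_mask(param, grad, top_fraction=0.1):
        # Score each weight by |weight * grad| and keep only the top fraction.
        scores = (param * grad).abs()
        k = max(1, int(top_fraction * scores.numel()))
        threshold = torch.topk(scores.flatten(), k).values.min()
        return (scores >= threshold).float()

    def selective_update(model, loss, target_layer, lr=1e-4, top_fraction=0.1):
        # One finetuning step: only the most important weights of a single,
        # pre-selected layer are updated; every other parameter stays frozen.
        loss.backward()
        with torch.no_grad():
            for name, param in model.named_parameters():
                if param.grad is None:
                    continue
                if name.startswith(target_layer):
                    mask = importance_mask(param, param.grad, top_fraction)
                    param -= lr * mask * param.grad
                param.grad = None  # frozen parameters receive no update

Usage is a single call per batch, e.g. selective_update(model, criterion(model(x), y), target_layer="visual.transformer.resblocks.9") for a CLIP-style vision encoder (layer name hypothetical). Restricting updates to one layer and a small weight subset is what limits interference with the pre-trained knowledge.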


