A study on the plasticity of neural networks

05/31/2021
by Tudor Berariu, et al.

One aim shared by multiple settings, such as continual learning and transfer learning, is to leverage previously acquired knowledge to converge faster on the current task. Usually this is done through fine-tuning, which carries the implicit assumption that the network maintains its plasticity: the performance it can reach on any given task is not negatively affected by previously seen tasks. It has recently been observed that a model pretrained on data from the same distribution as the one it is fine-tuned on might not reach the same generalisation as a freshly initialised one. We build on and extend this observation, providing a hypothesis for the mechanics behind it. We discuss the implications of losing plasticity for continual learning, which heavily relies on optimising pretrained models.
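The comparison at the heart of the abstract can be illustrated with a minimal sketch: pretrain a network on one half of a distribution, fine-tune it on the other half, and compare its test accuracy against a network trained from a fresh initialisation on the fine-tuning data alone. This is a hypothetical illustration, not the authors' experimental protocol; the synthetic data, architecture, and hyperparameters below are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's setup): pretrained-then-finetuned
# vs. freshly initialised, both evaluated on held-out data from the same distribution.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_split(n=2000, d=32, seed=0):
    # Synthetic stand-in for one chunk of a single fixed distribution.
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, d, generator=g)
    # A shared labelling rule so all splits come from the same task.
    w = torch.randn(d, 10, generator=torch.Generator().manual_seed(42))
    y = (x @ w).argmax(dim=1)
    return TensorDataset(x, y)

def train(model, dataset, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def accuracy(model, dataset):
    x, y = dataset.tensors
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def mlp():
    return nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, 10))

pretrain_data = make_split(seed=0)  # first half of the distribution
finetune_data = make_split(seed=1)  # second half, same distribution
test_data     = make_split(seed=2)  # held-out evaluation data

pretrained = train(mlp(), pretrain_data)       # pretraining phase
pretrained = train(pretrained, finetune_data)  # then fine-tune
fresh      = train(mlp(), finetune_data)       # fresh init, same fine-tuning data

print(f"pretrained-then-finetuned acc: {accuracy(pretrained, test_data):.3f}")
print(f"freshly initialised acc:       {accuracy(fresh, test_data):.3f}")
```

The paper's observation is that, despite the pretrained model having seen strictly more data from the same distribution, the fresh initialisation can generalise as well or better; a loss of plasticity during pretraining is the proposed explanation.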


Related research

Understanding Continual Learning Settings with Data Distribution Drift Analysis (04/04/2021)
Classical machine learning algorithms often assume that the data are dra...

Continual Learning with Distributed Optimization: Does CoCoA Forget? (11/30/2022)
We focus on the continual learning problem where the tasks arrive sequen...

Continual Learning for Out-of-Distribution Pedestrian Detection (06/26/2023)
A continual learning solution is proposed to address the out-of-distribu...

Improving Human Motion Prediction Through Continual Learning (07/01/2021)
Human motion prediction is an essential component for enabling closer hu...

Adapter Incremental Continual Learning of Efficient Audio Spectrogram Transformers (02/28/2023)
Continual learning involves training neural networks incrementally for n...

OvA-INN: Continual Learning with Invertible Neural Networks (06/24/2020)
In the field of Continual Learning, the objective is to learn several ta...

Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering (09/30/2022)
Continual learning aims to train a model incrementally on a sequence of ...
