A comprehensive, application-oriented study of catastrophic forgetting in DNNs

05/20/2019
by B. Pfülb, et al.

We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (or incremental) learning. We propose a new experimental protocol that enforces typical constraints encountered in application scenarios. As the investigation is empirical, we evaluate CF behavior on the largest number of visual classification datasets to date, from each of which we construct a representative number of Sequential Learning Tasks (SLTs), in close alignment with previous work on CF. Our results clearly indicate that, under application conditions, no investigated model avoids CF across all datasets and SLTs. We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models.
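Since the abstract singles out EWC (Elastic Weight Consolidation), the following is a minimal sketch of the standard EWC quadratic penalty that such models add to the new-task loss; the function and variable names here are illustrative, not taken from the paper's code:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC-style quadratic penalty (illustrative sketch).

    theta      -- current parameters as a flat array
    theta_star -- parameters learned on the previous task
    fisher     -- diagonal Fisher information estimate for theta_star
    lam        -- regularization strength (lambda)
    """
    # Penalize movement away from the old solution, weighted by how
    # important each parameter was for the previous task.
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# During training on a new task, the total loss would be:
#   total_loss = new_task_loss + ewc_penalty(theta, theta_star, fisher, lam)
```

The intuition is that parameters with high Fisher information mattered for the old task, so moving them is penalized more strongly, trading plasticity for retention.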


