On the Robustness, Generalization, and Forgetting of Shape-Texture Debiased Continual Learning

11/21/2022
by Zenglin Shi, et al.

Tremendous progress has been made in continual learning, which tackles the catastrophic forgetting problem of neural networks so that performance on old tasks is maintained while new tasks are learned. This paper advances continual learning by further considering out-of-distribution robustness, in response to the vulnerability of continually trained models to distribution shifts (e.g., due to data corruptions and domain shifts) at inference time. To this end, we propose shape-texture debiased continual learning. The key idea is to learn generalizable and robust representations for each task with shape-texture debiased training. To transform standard continual learning into shape-texture debiased continual learning, we propose shape-texture debiased data generation and online shape-texture debiased self-distillation. Experiments on six datasets demonstrate the benefits of our approach in improving generalization and robustness, as well as reducing forgetting. Our analysis of the flatness of the loss landscape explains these advantages. Moreover, our approach can easily be combined with advanced architectures such as vision transformers, and applied to more challenging scenarios such as exemplar-free continual learning.
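The abstract names two components, shape-texture debiased data generation and online shape-texture debiased self-distillation, without detailing them. The PyTorch sketch below is a hypothetical illustration of how such a training step could look, not the authors' implementation: it builds a cue-conflict image by transferring per-channel color statistics from a "texture" image onto a "shape" image (a crude stand-in for the style transfer commonly used for shape-texture debiased data generation), supervises it with a soft label that mixes the shape and texture labels, and adds a self-distillation term that keeps the current model's predictions on the debiased image close to those of a frozen copy (e.g., the model from the previous task). The function names, the mixing weight alpha, and the temperature T are illustrative assumptions.

# Minimal, hypothetical sketch of one shape-texture debiased training loss.
# Assumptions (not from the paper): stats_transfer as a stand-in for style
# transfer, alpha=0.5 label mixing, temperature T=2 for self-distillation.
import torch
import torch.nn.functional as F

def stats_transfer(shape_img, texture_img, eps=1e-5):
    """Crude cue-conflict image: keep the shape image's structure but adopt
    the texture image's per-channel mean/std (a stand-in for AdaIN-style transfer)."""
    mu_s = shape_img.mean(dim=(-2, -1), keepdim=True)
    std_s = shape_img.std(dim=(-2, -1), keepdim=True) + eps
    mu_t = texture_img.mean(dim=(-2, -1), keepdim=True)
    std_t = texture_img.std(dim=(-2, -1), keepdim=True) + eps
    return (shape_img - mu_s) / std_s * std_t + mu_t

def debiased_loss(model, teacher, shape_img, shape_lbl, texture_img, texture_lbl,
                  num_classes, alpha=0.5, distill_w=1.0, T=2.0):
    """Classification on the debiased image with a mixed shape/texture soft label,
    plus self-distillation against a frozen teacher (e.g., the previous-task model)."""
    x = stats_transfer(shape_img, texture_img)          # cue-conflict input
    logits = model(x)

    # Soft target: alpha weight on the shape label, (1 - alpha) on the texture label.
    target = alpha * F.one_hot(shape_lbl, num_classes).float() \
           + (1 - alpha) * F.one_hot(texture_lbl, num_classes).float()
    cls_loss = torch.sum(-target * F.log_softmax(logits, dim=1), dim=1).mean()

    # Self-distillation: match the teacher's softened predictions on the same input.
    with torch.no_grad():
        t_logits = teacher(x)
    distill_loss = F.kl_div(F.log_softmax(logits / T, dim=1),
                            F.softmax(t_logits / T, dim=1),
                            reduction="batchmean") * (T * T)
    return cls_loss + distill_w * distill_loss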

Related research:

10/21/2021 - Wide Neural Networks Forget Less Catastrophically
A growing body of research in continual learning is devoted to overcomin...

02/01/2022 - Architecture Matters in Continual Learning
A large body of research in continual learning is devoted to overcoming ...

06/26/2023 - Continual Learning for Out-of-Distribution Pedestrian Detection
A continual learning solution is proposed to address the out-of-distribu...

07/26/2022 - S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning
State-of-the-art deep neural networks are still struggling to address th...

07/15/2023 - Learning Expressive Priors for Generalization and Uncertainty Estimation in Neural Networks
In this work, we propose a novel prior learning method for advancing gen...

06/01/2023 - Out-of-distribution forgetting: vulnerability of continual learning to intra-class distribution shift
Continual learning (CL) is an important technique to allow artificial ne...

09/30/2022 - Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering
Continual learning aims to train a model incrementally on a sequence of ...
