
Online Continual Learning with Contrastive Vision Transformer

by   Zhen Wang, et al.
The University of Sydney

Online continual learning (online CL) studies the problem of learning sequential tasks from an online data stream without task boundaries, aiming to adapt to new data while alleviating catastrophic forgetting of past tasks. This paper proposes Contrastive Vision Transformer (CVT), a framework that combines a transformer architecture with a focal contrastive learning strategy to achieve a better stability-plasticity trade-off for online CL. Specifically, we design a new external attention mechanism for online CL that implicitly captures information from previous tasks. In addition, CVT maintains a learnable focus for each class, which accumulates knowledge of previous classes to alleviate forgetting. Based on these learnable focuses, we design a focal contrastive loss that rebalances contrastive learning between new and past classes and consolidates previously learned representations. Moreover, CVT adopts a dual-classifier structure that decouples learning the current classes from balancing all observed classes. Extensive experimental results show that our approach achieves state-of-the-art performance on online CL benchmarks with even fewer parameters and effectively alleviates catastrophic forgetting.
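The abstract's focal contrastive loss can be illustrated with a minimal sketch: each class keeps a learnable focus (prototype) vector, and each sample is pulled toward its class focus and pushed away from the focuses of other classes. The exact formulation in the paper (including its rebalancing terms between new and past classes) is not given in this excerpt, so the function below is an assumed prototype-contrastive variant, not the authors' implementation; the names `focal_contrastive_loss`, `focuses`, and `tau` are illustrative.

```python
import numpy as np

def focal_contrastive_loss(features, labels, focuses, tau=0.1):
    """Sketch of a prototype-style contrastive loss over class focuses.

    features: (N, D) sample embeddings
    labels:   (N,)   integer class labels
    focuses:  (C, D) learnable per-class focus vectors
    tau:      temperature for the softmax over similarities
    """
    # L2-normalize embeddings and focuses so similarity is cosine similarity.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = focuses / np.linalg.norm(focuses, axis=1, keepdims=True)

    logits = f @ p.T / tau                        # (N, C) scaled similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Cross-entropy toward each sample's own class focus: pulls samples to
    # their focus and pushes them from the other classes' focuses.
    return -log_prob[np.arange(len(labels)), labels].mean()
```

In CVT the focuses would be updated by gradient descent alongside the network, letting past-class focuses act as anchors that consolidate earlier representations; the sketch above only shows the forward loss computation.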


Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations

Online continual learning (OCL) aims to enable model learning from a non...

Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion

Task-incremental continual learning refers to continually training a mod...

Rehearsal-Free Domain Continual Face Anti-Spoofing: Generalize More and Forget Less

Face Anti-Spoofing (FAS) is recently studied under the continual learnin...

New Insights on Reducing Abrupt Representation Change in Online Continual Learning

In the online continual learning paradigm, agents must learn from a chan...

CCL: Continual Contrastive Learning for LiDAR Place Recognition

Place recognition is an essential and challenging task in loop closing a...

Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision

General Continual Learning (GCL) aims at learning from non independent a...