PIVOT: Prompting for Video Continual Learning

12/09/2022
by Andrés Villa, et al.

Modern machine learning pipelines are limited by data availability, storage quotas, privacy regulations, and expensive annotation processes. These constraints make it difficult or impossible to maintain a large-scale model trained on growing annotation sets. Continual learning directly approaches this problem, with the ultimate goal of devising methods where a neural network effectively learns relevant patterns for new (unseen) classes without significantly altering its performance on previously learned ones. In this paper, we address the problem of continual learning for video data. We introduce PIVOT, a novel method that leverages the extensive knowledge in pre-trained models from the image domain, thereby reducing the number of trainable parameters and the associated forgetting. Unlike previous methods, ours is the first approach that effectively uses prompting mechanisms for continual learning without any in-domain pre-training. Our experiments show that PIVOT improves state-of-the-art methods by a significant 27% on the 20-task ActivityNet setup.
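
As a rough illustration of the prompting idea described in the abstract, the sketch below shows how learnable prompt tokens can be prepended to the token sequence of a frozen pre-trained image transformer, so that only the small prompt and classifier parameters are trained for new tasks. This is a generic sketch, not PIVOT's actual architecture: the class name, prompt length, pooling strategy, and toy backbone are all assumptions made for the example.

```python
# Minimal sketch of prompt-based continual learning with a frozen backbone.
# Illustrative only; names, hyperparameters, and the backbone are assumptions,
# not the implementation described in the paper.
import torch
import torch.nn as nn


class PromptedFrozenEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int = 768,
                 num_prompts: int = 8, num_classes: int = 10):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():    # freeze the pre-trained encoder
            p.requires_grad = False
        # Only the prompts and the classifier head are trainable, which keeps
        # the number of task-specific parameters (and forgetting) small.
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, seq_len, embed_dim) patch/frame embeddings
        b = frame_tokens.size(0)
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        tokens = torch.cat([prompts, frame_tokens], dim=1)
        features = self.backbone(tokens)        # frozen forward pass
        pooled = features.mean(dim=1)           # simple average pooling
        return self.head(pooled)


if __name__ == "__main__":
    # Toy frozen "pre-trained" encoder standing in for an image backbone.
    layer = nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True)
    frozen = nn.TransformerEncoder(layer, num_layers=2)
    model = PromptedFrozenEncoder(frozen)
    logits = model(torch.randn(4, 16, 768))     # 4 clips, 16 tokens each
    print(logits.shape)                         # torch.Size([4, 10])
```

In a continual learning setting, one would optimize only `model.prompts` and `model.head` on each incoming task while the backbone stays fixed, which is the general mechanism the abstract refers to when it mentions reducing trainable parameters and forgetting.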


research
01/29/2023

Progressive Prompts: Continual Learning for Language Models

We introduce Progressive Prompts - a simple and efficient approach for c...
research
01/23/2022

vCLIMB: A Novel Video Class Incremental Learning Benchmark

Continual learning (CL) is under-explored in the video domain. The few e...
research
09/03/2022

Continual Learning for Steganalysis

To detect the existing steganographic algorithms, recent steganalysis me...
research
11/03/2021

Continual Learning of Semantic Segmentation using Complementary 2D-3D Data Representations

Semantic segmentation networks are usually pre-trained and not updated d...
research
12/13/2021

Ex-Model: Continual Learning from a Stream of Trained Models

Learning continually from non-stationary data streams is a challenging r...
research
03/09/2022

Memory Efficient Continual Learning for Neural Text Classification

Learning text classifiers based on pre-trained language models has becom...
research
11/09/2022

Continual learning autoencoder training for a particle-in-cell simulation via streaming

The upcoming exascale era will provide a new generation of physics simul...
