CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

06/18/2022 · by Tejas Srinivasan, et al.

Current state-of-the-art vision-and-language models are evaluated on tasks either individually or in a multi-task setting, overlooking the challenges of continual learning (CL), where tasks arrive sequentially. Existing CL benchmarks have facilitated research on task adaptation and on mitigating "catastrophic forgetting", but they are limited to vision-only and language-only tasks. We present CLiMB, a benchmark for studying how multimodal tasks can be learned in a CL setting, and for systematically evaluating how well upstream continual learning generalizes to new multimodal and unimodal tasks. CLiMB includes implementations of several CL algorithms and a modified Vision-Language Transformer (ViLT) model that can be deployed on both multimodal and unimodal tasks. We find that common CL methods can help mitigate forgetting during multimodal task learning, but do not enable cross-task knowledge transfer. We envision that CLiMB will facilitate research on a new class of CL algorithms for this challenging multimodal setting.
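The upstream continual-learning protocol the abstract describes can be sketched in a few lines: a shared encoder is fine-tuned on tasks one at a time, and catastrophic forgetting is measured as the drop in accuracy on earlier tasks after later ones are learned. The sketch below is illustrative only, not CLiMB's actual API; the model, tasks, and helper names (ToyModel, make_task, accuracy) are hypothetical stand-ins for a ViLT-style backbone with task-specific heads.

```python
# Minimal sketch of sequential task learning and a forgetting metric.
# ToyModel, make_task, and accuracy are hypothetical stand-ins, not
# CLiMB's actual code.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(n=256, dim=16, n_classes=4):
    """Hypothetical stand-in for one task: random features labeled by a
    random linear rule."""
    x = torch.randn(n, dim)
    w = torch.randn(dim, n_classes)
    return x, (x @ w).argmax(dim=1)

class ToyModel(nn.Module):
    """Shared encoder plus one classification head per task, loosely
    mirroring a shared ViLT-style backbone with task-specific heads."""
    def __init__(self, dim=16, hidden=32, n_classes=4, n_tasks=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_classes) for _ in range(n_tasks)]
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))

def accuracy(model, task_id, data):
    x, y = data
    with torch.no_grad():
        return (model(x, task_id).argmax(dim=1) == y).float().mean().item()

tasks = [make_task() for _ in range(3)]
model = ToyModel()
loss_fn = nn.CrossEntropyLoss()
acc_after = {}  # acc_after[(trained_up_to, task)] -> accuracy

for t, (x, y) in enumerate(tasks):
    # Naive sequential fine-tuning: no CL method, so the shared encoder
    # is free to drift away from earlier tasks.
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x, t), y).backward()
        opt.step()
    for j in range(t + 1):
        acc_after[(t, j)] = accuracy(model, j, tasks[j])

# Forgetting on task j: best accuracy seen at any earlier checkpoint
# minus the accuracy after the final task is learned.
last = len(tasks) - 1
for j in range(last):
    best = max(acc_after[(t, j)] for t in range(j, last + 1))
    print(f"task {j}: forgetting = {best - acc_after[(last, j)]:.3f}")
```

Running this naive baseline typically shows nonzero forgetting on the earlier tasks; CL algorithms of the kind the benchmark bundles (e.g. replay- or regularization-based methods) aim to shrink that gap while still adapting to each new task.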

Related research

04/04/2023 · I2I: Initializing Adapters with Improvised Knowledge
Adapters present a promising solution to the catastrophic forgetting pro...

05/15/2023 · Continual Multimodal Knowledge Graph Construction
Multimodal Knowledge Graph Construction (MKGC) involves creating structu...

05/30/2023 · Learning without Forgetting for Vision-Language Models
Class-Incremental Learning (CIL) or continual learning is a desired capa...

04/14/2021 · Continual learning in cross-modal retrieval
Multimodal representations and continual learning are two areas closely ...

12/31/2021 · Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments
A key challenge for AI is to build embodied systems that operate in dyna...

03/25/2023 · Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation
The size and the computational load of fine-tuning large-scale pre-train...

09/20/2023 · Create and Find Flatness: Building Flat Training Spaces in Advance for Continual Learning
Catastrophic forgetting remains a critical challenge in the field of con...
