CLIP-guided Prototype Modulating for Few-shot Action Recognition

03/06/2023 · by Xiang Wang, et al.

Learning from large-scale contrastive language-image pre-training such as CLIP has recently shown remarkable success across a wide range of downstream tasks, but it remains under-explored for the challenging few-shot action recognition (FSAR) task. In this work, we aim to transfer the powerful multimodal knowledge of CLIP to alleviate inaccurate prototype estimation caused by data scarcity, a critical problem in low-shot regimes. To this end, we present a CLIP-guided prototype modulating framework called CLIP-FSAR, which consists of two key components: a video-text contrastive objective and a prototype modulation module. Specifically, the former bridges the task discrepancy between CLIP and the few-shot video task by contrasting videos with their corresponding class text descriptions. The latter leverages the transferable textual concepts from CLIP to adaptively refine visual prototypes with a temporal Transformer. In this way, CLIP-FSAR can take full advantage of the rich semantic priors in CLIP to obtain reliable prototypes and achieve accurate few-shot classification. Extensive experiments on five commonly used benchmarks demonstrate the effectiveness of the proposed method, and CLIP-FSAR significantly outperforms existing state-of-the-art methods under various settings. The source code and models will be publicly available at https://github.com/alibaba-mmai-research/CLIP-FSAR.
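The video-text contrastive objective described in the abstract follows the general CLIP recipe: matched video/class-text feature pairs are pulled together and mismatched pairs pushed apart via a symmetric cross-entropy over cosine-similarity logits. Below is a minimal, framework-agnostic sketch of such a loss; the function name, the temperature value, and the use of plain NumPy are illustrative assumptions, not details from the paper.

```python
import numpy as np

def video_text_contrastive_loss(video_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of matched
    (video feature, class-text feature) pairs. Illustrative sketch only;
    shapes: video_feats (B, D), text_feats (B, D), pair i matches pair i."""
    # L2-normalize so the dot product is cosine similarity
    v = video_feats / np.linalg.norm(video_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = v @ t.T / temperature  # (B, B), matched pairs on the diagonal

    labels = np.arange(len(v))

    def cross_entropy(l):
        # numerically stable softmax cross-entropy with diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average the video-to-text and text-to-video directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned, mutually orthogonal pairs the diagonal dominates and the loss stays well below the uniform-prediction baseline of log(B), which is one quick sanity check for an implementation like this.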


Related research

- TNT: Text-Conditioned Network with Transductive Inference for Few-Shot Video Classification (06/21/2021)
- ActionCLIP: A New Paradigm for Video Action Recognition (09/17/2021)
- Seeing in Flowing: Adapting CLIP for Action Recognition with Motion Prompts Learning (08/09/2023)
- MoLo: Motion-augmented Long-short Contrastive Learning for Few-shot Action Recognition (04/03/2023)
- Boosting Few-shot Action Recognition with Graph-guided Hybrid Matching (08/18/2023)
- VLG: General Video Recognition with Web Textual Knowledge (12/03/2022)
- Depth Guided Adaptive Meta-Fusion Network for Few-shot Video Recognition (10/20/2020)
