OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression

06/06/2022
by Wanhua Li, et al.

This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts. These methods are prone to overfitting and usually attain unsatisfactory performance, as the learned concepts are derived mainly from the training set. Recent large pre-trained vision-language models such as CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. Since prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP to ordinal regression. OrdinalCLIP consists of learnable context tokens and learnable rank embeddings; the learnable rank embeddings are constructed by explicitly modeling numerical continuity, resulting in well-ordered, compact language prototypes in the CLIP space. Once learned, we can save only the language prototypes and discard the huge language model, resulting in zero additional computational overhead compared with the linear-head counterpart. Experimental results show that our paradigm achieves competitive performance on general ordinal regression tasks and gains improvements in few-shot and distribution-shift settings for age estimation.
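The abstract describes the core mechanism: rank embeddings are built so that numerical continuity is preserved (e.g., by interpolating a small set of base embeddings), combined with learnable context tokens, passed through CLIP's text encoder to obtain one language prototype per rank, and images are then scored by similarity to those prototypes. The PyTorch sketch below is only an illustration of that idea under stated assumptions: the class name OrdinalPromptSketch, the mean-pool-plus-projection stand-in for CLIP's frozen text encoder, and all hyperparameters are hypothetical and not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the OrdinalCLIP idea:
# learnable context tokens plus rank embeddings obtained by interpolating
# a few learnable "base" rank embeddings, so neighboring ranks get nearby
# prototypes. The frozen CLIP text encoder is stood in for by a simple
# mean-pool + linear projection here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OrdinalPromptSketch(nn.Module):
    def __init__(self, num_ranks=100, num_base=10, num_context=5,
                 token_dim=512, embed_dim=512):
        super().__init__()
        # Learnable context tokens, shared by every rank ("a photo of a ...").
        self.context = nn.Parameter(torch.randn(num_context, token_dim) * 0.02)
        # A small set of learnable base rank embeddings.
        self.base_ranks = nn.Parameter(torch.randn(num_base, token_dim) * 0.02)
        self.num_ranks = num_ranks
        # Placeholder for the frozen CLIP text encoder (assumption).
        self.text_proj = nn.Linear(token_dim, embed_dim)

    def rank_embeddings(self):
        # Linear interpolation of the base embeddings up to num_ranks,
        # which enforces continuity between adjacent ranks.
        base = self.base_ranks.t().unsqueeze(0)               # (1, D, num_base)
        interp = F.interpolate(base, size=self.num_ranks,
                               mode="linear", align_corners=True)
        return interp.squeeze(0).t()                          # (num_ranks, D)

    def prototypes(self):
        ranks = self.rank_embeddings()                        # (R, D)
        ctx = self.context.unsqueeze(0).expand(self.num_ranks, -1, -1)
        tokens = torch.cat([ctx, ranks.unsqueeze(1)], dim=1)  # (R, C+1, D)
        # Stand-in for the CLIP text encoder: mean-pool then project.
        protos = self.text_proj(tokens.mean(dim=1))
        return F.normalize(protos, dim=-1)                    # (R, E)

    def forward(self, image_features, temperature=0.01):
        # image_features: (B, E) CLIP image embeddings, assumed pre-normalized.
        logits = image_features @ self.prototypes().t() / temperature
        probs = logits.softmax(dim=-1)
        # Expected rank as a soft ordinal prediction.
        ranks = torch.arange(self.num_ranks, dtype=probs.dtype)
        return probs @ ranks


model = OrdinalPromptSketch()
fake_image_features = F.normalize(torch.randn(4, 512), dim=-1)
print(model(fake_image_features))  # four predicted ranks in [0, 99]
```

Because only the final language prototypes are needed at inference time, they can be precomputed and stored, which matches the abstract's claim of zero additional overhead relative to a linear classification head.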


