Generating Image-Specific Text Improves Fine-grained Image Classification

07/21/2023
by Emily Mu, et al.

Recent vision-language models outperform vision-only models on many image classification tasks. However, because paired text/image descriptions are absent, it remains difficult to fine-tune these models for fine-grained image classification. In this work, we propose a method, GIST, for generating image-specific fine-grained text descriptions from image-only datasets, and show that these text descriptions can be used to improve classification. Key parts of our method include (1) prompting a pretrained large language model with domain-specific prompts to generate diverse fine-grained text descriptions for each class and (2) using a pretrained vision-language model to match each image to label-preserving text descriptions that capture relevant visual features in the image. We demonstrate the utility of GIST by fine-tuning vision-language models on the image-and-generated-text pairs to learn an aligned vision-language representation space for improved classification. We evaluate our learned representation space in full-shot and few-shot scenarios across four diverse fine-grained classification datasets, each from a different domain. Our method achieves an average improvement of 4.1% in accuracy over CLIP linear probes and an average improvement of 1.1% in accuracy over the previous state-of-the-art image-text classification method on the full-shot datasets. Our method achieves similar improvements across few-shot regimes. Code is available at https://github.com/emu1729/GIST.
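The matching step described in (2) can be sketched as nearest-neighbor retrieval in a shared embedding space: for each image, score the candidate descriptions generated for its ground-truth class and keep the closest ones, so every retained pair is label-preserving. The sketch below is illustrative only, under stated assumptions: the tiny hand-written embeddings and the `embed_text` helper are hypothetical stand-ins for a pretrained vision-language model such as CLIP, not the paper's implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_descriptions(image_emb, class_descriptions, embed_text, k=2):
    """Return the top-k (score, description) pairs for one image.

    class_descriptions: candidate texts generated (step 1) for the image's
    ground-truth class only, so every match is label-preserving.
    embed_text: stand-in for a pretrained VLM text encoder.
    """
    scored = [(cosine(image_emb, embed_text(t)), t) for t in class_descriptions]
    scored.sort(reverse=True)
    return scored[:k]

# Toy 3-d embeddings purely for illustration (a real VLM would produce
# high-dimensional vectors from images and text).
def embed_text(t):
    return {"red wing": [1.0, 0.1, 0.0],
            "long beak": [0.0, 1.0, 0.2],
            "short tail": [0.1, 0.0, 1.0]}[t]

image_emb = [0.9, 0.2, 0.1]  # pretend VLM image embedding
top = match_descriptions(image_emb,
                         ["red wing", "long beak", "short tail"],
                         embed_text, k=1)
print(top[0][1])  # -> red wing
```

The resulting image-and-matched-text pairs are what the paper then uses to fine-tune the vision-language model toward an aligned representation space.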

Related research

- 07/21/2023, Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts. "Contrastive pretrained large Vision-Language Models (VLMs) like CLIP hav..."
- 05/25/2023, Diversify Your Vision Datasets with Automatic Diffusion-Based Augmentation. "Many fine-grained classification tasks, like rare animal identification,..."
- 10/07/2022, SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models. "Vision-language models such as CLIP are pretrained on large volumes of i..."
- 07/10/2023, Leveraging Multiple Descriptive Features for Robust Few-shot Image Learning. "Modern image classification is based upon directly predicting model clas..."
- 09/30/2014, Evaluation of Output Embeddings for Fine-Grained Image Classification. "Image classification has advanced significantly in recent years with the..."
- 07/26/2022, V^2L: Leveraging Vision and Vision-language Models into Large-scale Product Retrieval. "Product retrieval is of great importance in the ecommerce domain. This p..."
- 11/05/2021, The Curious Layperson: Fine-Grained Image Recognition without Expert Labels. "Most of us are not experts in specific fields, such as ornithology. None..."
