Pragmatic Inference with a CLIP Listener for Contrastive Captioning

06/15/2023
by Jiefu Ou, et al.

We propose a simple yet effective and robust method for contrastive captioning: generating discriminative captions that distinguish target images from very similar alternative distractor images. Our approach builds on a pragmatic inference procedure that formulates captioning as a reference game between a speaker, which produces possible captions describing the target, and a listener, which selects the target given the caption. Unlike previous methods that derive both speaker and listener distributions from a single captioning model, we leverage an off-the-shelf CLIP model to parameterize the listener. Compared with captioner-only pragmatic models, our method benefits from CLIP's rich vision-language alignment representations when reasoning over distractors. Like previous methods for discriminative captioning, our method uses a hyperparameter to control the tradeoff between informativity (how likely a caption is to allow a human listener to discriminate the target image) and fluency. However, we find that our method is substantially more robust to the value of this hyperparameter than past methods, which allows us to automatically optimize captions for informativity, outperforming past methods for discriminative captioning by 11 to 15
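To make the reference-game formulation concrete, the sketch below reranks candidate captions by combining a speaker score (caption fluency under the captioner) with a CLIP listener score (probability that CLIP picks the target over the distractors given the caption), weighted by a tradeoff parameter. It is an illustrative sketch only, not the authors' implementation: the candidate captions and their speaker log-probabilities are assumed to come from some off-the-shelf captioner, and the model name, function name, and exact weighting scheme are assumptions.

    # Illustrative sketch of speaker-listener reranking with a CLIP listener.
    # Assumes `captions` and `speaker_logprobs` were produced beforehand by
    # any off-the-shelf captioning model (not shown here).
    import torch
    from transformers import CLIPModel, CLIPProcessor

    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def rerank_captions(captions, speaker_logprobs, target_image, distractor_images, lam=0.5):
        """Return the caption maximizing
        (1 - lam) * speaker log-prob + lam * listener log P(target | caption)."""
        images = [target_image] + list(distractor_images)  # index 0 is the target
        inputs = processor(text=captions, images=images, return_tensors="pt", padding=True)
        with torch.no_grad():
            out = clip(**inputs)
        # logits_per_text: (num_captions, num_images) image-text similarities.
        # Normalize over images to get a listener distribution per caption,
        # then keep the log-probability assigned to the target image.
        listener_logprobs = out.logits_per_text.log_softmax(dim=-1)[:, 0]
        speaker_logprobs = torch.as_tensor(speaker_logprobs, dtype=torch.float)
        scores = (1.0 - lam) * speaker_logprobs + lam * listener_logprobs
        best = int(scores.argmax())
        return captions[best], scores

Setting lam closer to 1 favors captions that the CLIP listener finds discriminative, while values closer to 0 favor captions the speaker considers fluent; the abstract's robustness claim concerns how sensitive caption quality is to this choice.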

Related research

10/06/2017 · Contrastive Learning for Image Captioning
Image captioning, a popular topic in computer vision, has achieved subst...

04/02/2016 · Reasoning About Pragmatics with Neural Listeners and Speakers
We present a model for pragmatically describing scenes, in which contras...

09/30/2020 · Creative Captioning: An AI Grand Challenge Based on the Dixit Board Game
We propose a new class of "grand challenge" AI problems that we call cre...

08/08/2022 · Distinctive Image Captioning via CLIP Guided Group Optimization
Image captioning models are usually trained according to human annotated...

04/07/2020 · Context-Aware Group Captioning via Self-Attention and Contrastive Features
While image captioning has progressed rapidly, existing works focus main...

07/20/2023 · Identifying Interpretable Subspaces in Image Representations
We propose Automatic Feature Explanation using Contrasting Concepts (FAL...

04/15/2018 · Pragmatically Informative Image Captioning with Character-Level Reference
We combine a neural image captioner with a Rational Speech Acts (RSA) mo...