Simple Token-Level Confidence Improves Caption Correctness

05/11/2023
by Suzanne Petryk, et al.

The ability to judge whether a caption correctly describes an image is a critical part of vision-language understanding. However, state-of-the-art models often misinterpret the correctness of fine-grained details, leading to errors in outputs such as hallucinating objects in generated captions or poor compositional reasoning. In this work, we explore Token-Level Confidence, or TLC, as a simple yet surprisingly effective method to assess caption correctness. Specifically, we fine-tune a vision-language model on image captioning, input an image and proposed caption to the model, and aggregate either algebraic or learned token confidences over words or sequences to estimate image-caption consistency. Compared to sequence-level scores from pretrained models, TLC with algebraic confidence measures achieves a relative improvement in accuracy by 10% on verb understanding in SVO-Probes and outperforms prior state-of-the-art in image and group scores for compositional reasoning in Winoground by a relative 37% and 9%, respectively. When training data are available, a learned confidence estimator provides further improved performance, reducing object hallucination rates in MS COCO Captions by a relative 30% over the original model and setting a new state-of-the-art.
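To make the algebraic variant concrete, below is a minimal sketch of scoring a proposed caption by teacher-forcing it through a fine-tuned captioner and aggregating per-token probabilities. This is not the authors' released code: the model interface, function names, and the specific min/mean/product aggregations are assumptions chosen to illustrate the idea of token-level rather than sequence-level confidence.

import torch
import torch.nn.functional as F

def token_level_confidence(model, image_features, caption_ids, agg="min"):
    """Score a caption against an image by aggregating per-token
    confidences from a captioning model (hypothetical interface:
    `model` returns logits of shape (1, seq_len, vocab_size) when
    teacher-forced on `caption_ids` given `image_features`)."""
    with torch.no_grad():
        logits = model(image_features, caption_ids)   # (1, T, V)
    log_probs = F.log_softmax(logits, dim=-1)
    # Confidence of each ground-truth token under teacher forcing,
    # p(w_t | image, w_<t): position t-1 predicts token t.
    token_logp = log_probs[:, :-1, :].gather(
        2, caption_ids[:, 1:].unsqueeze(-1)
    ).squeeze(-1)                                     # (1, T-1)
    token_conf = token_logp.exp()
    if agg == "min":    # one low-confidence token flags a mismatch
        return token_conf.min().item()
    elif agg == "mean":
        return token_conf.mean().item()
    elif agg == "prod": # sequence-level likelihood, for comparison
        return token_conf.prod().item()
    raise ValueError(f"unknown aggregation: {agg}")

In use, the aggregated score would be compared against a threshold tuned on validation data to decide whether the caption is consistent with the image; the intuition is that a sequence-level product can wash out a single hallucinated word, while a token-level minimum surfaces it.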


Related research

08/20/2021 · Group-based Distinctive Image Captioning with Memory Attention
Describing images using natural language is widely known as image captio...

04/07/2022 · Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality
We present a novel task and dataset for evaluating the ability of vision...

08/09/2015 · Image Representations and New Domains in Neural Image Captioning
We examine the possibility that recent promising results in automatic ca...

05/31/2023 · Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models
Vision and Language (VL) models offer an effective method for aligning r...

10/23/2018 · A Neural Compositional Paradigm for Image Captioning
Mainstream captioning models often follow a sequential structure to gene...

12/07/2020 · Confidence-aware Non-repetitive Multimodal Transformers for TextCaps
When describing an image, reading text in the visual scene is crucial to...
