The World of an Octopus: How Reporting Bias Influences a Language Model's Perception of Color

10/15/2021
by Cory Paik et al.

Recent work has raised concerns about the inherent limitations of text-only pretraining. In this paper, we first demonstrate that reporting bias, the tendency of people to not state the obvious, is one of the causes of this limitation, and then investigate to what extent multimodal training can mitigate this issue. To accomplish this, we 1) generate the Color Dataset (CoDa), a dataset of human-perceived color distributions for 521 common objects; 2) use CoDa to analyze and compare the color distribution found in text, the distribution captured by language models, and a human's perception of color; and 3) investigate the performance differences between text-only and multimodal models on CoDa. Our results show that the distribution of colors that a language model recovers correlates more strongly with the inaccurate distribution found in text than with the ground truth, supporting the claim that reporting bias negatively impacts and inherently limits text-only training. We then demonstrate that multimodal models can leverage their visual training to mitigate these effects, providing a promising avenue for future research.
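Below is a minimal sketch, not the paper's released code, of the kind of probe the abstract describes: query a masked language model for a distribution over common color words for a given object and correlate it with a human-judged color distribution. The model name, color vocabulary, prompt template, and toy "human" numbers are illustrative assumptions; CoDa itself supplies the actual human-perceived distributions.

```python
# Sketch of probing a masked LM for an object's color distribution and
# comparing it to human judgments. All specifics (model, color list, prompt,
# the "human" numbers) are illustrative assumptions, not values from the paper.
from transformers import pipeline
from scipy.stats import spearmanr

COLORS = ["red", "orange", "yellow", "green", "blue", "purple",
          "pink", "brown", "black", "white", "gray"]

fill = pipeline("fill-mask", model="bert-base-uncased")

def lm_color_distribution(obj: str) -> list[float]:
    """Probability the LM assigns to each color word for `obj`, renormalized."""
    results = fill(f"Most {obj}s are [MASK].", targets=COLORS)
    scores = {r["token_str"]: r["score"] for r in results}
    total = sum(scores.values()) or 1.0
    return [scores.get(c, 0.0) / total for c in COLORS]

# Hypothetical human-perceived distribution for "banana" over COLORS.
human = [0.0, 0.02, 0.80, 0.10, 0.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.03]

model_dist = lm_color_distribution("banana")
rho, _ = spearmanr(model_dist, human)
print(f"Spearman correlation with the human distribution: {rho:.2f}")
```

A correlation computed this way across many objects is one way to quantify how closely a model's recovered color distributions track human perception as opposed to the co-occurrence statistics of text.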

research · 05/04/2022
Visual Commonsense in Pretrained Unimodal and Multimodal Models
Our commonsense knowledge about objects includes their typical visual at...

research · 08/08/2023
Multimodal Color Recommendation in Vector Graphic Documents
Color selection plays a critical role in graphic document design and req...

research · 09/13/2021
Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color
Pretrained language models have been shown to encode relational informat...

research · 02/03/2023
Controlling for Stereotypes in Multimodal Language Model Evaluation
We propose a methodology and design two benchmark sets for measuring to ...

research · 09/23/2021
Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?
Large language models are known to suffer from the hallucination problem...

research · 11/10/2022
Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control
Pretrained language models have demonstrated extraordinary capabilities ...

research · 05/09/2023
ChatGPT as a Text Simplification Tool to Remove Bias
The presence of specific linguistic signals particular to a certain sub-...
