Spoken ObjectNet: A Bias-Controlled Spoken Caption Dataset

10/14/2021
by Ian Palmer, et al.

Visually grounded spoken language datasets can enable models to learn cross-modal correspondences with very weak supervision. However, modern audio-visual datasets contain biases that undermine the real-world performance of models trained on them. We introduce Spoken ObjectNet, a dataset designed to remove some of these biases and to better evaluate how effectively models will perform in real-world scenarios. It expands upon ObjectNet, a bias-controlled image dataset with image classes similar to those in ImageNet. We detail our data collection pipeline, which includes several methods for improving caption quality, such as automated language model checks. Lastly, we show baseline results on image retrieval and audio retrieval tasks. These results show that models trained on other datasets and then evaluated on Spoken ObjectNet tend to perform poorly because they have learned biases present in those datasets. We also show evidence that the performance decrease is due to the dataset controls rather than the transfer setting.
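
The baselines mentioned above are image retrieval and audio retrieval tasks, which are commonly scored with recall@K over paired embeddings. The sketch below is a minimal illustration of that kind of evaluation, assuming cosine similarity between audio and image embeddings; the function and variable names are illustrative assumptions and do not come from the paper's released code.

```python
import numpy as np

def recall_at_k(audio_emb: np.ndarray, image_emb: np.ndarray, ks=(1, 5, 10)):
    """Cross-modal recall@K for paired embeddings.

    audio_emb, image_emb: (N, D) arrays where row i of each is a matched pair.
    Returns recall@K for audio->image and image->audio retrieval.
    """
    # Cosine similarity: L2-normalize rows, then take the dot-product matrix.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sim = a @ v.T                      # sim[i, j] = similarity(audio_i, image_j)
    targets = np.arange(sim.shape[0])

    def recalls(score_matrix):
        # Rank all candidates for each query, then find the rank of the true match.
        order = np.argsort(-score_matrix, axis=1)
        ranks = np.argmax(order == targets[:, None], axis=1)
        return {k: float(np.mean(ranks < k)) for k in ks}

    return {
        "audio_to_image": recalls(sim),    # query with speech, retrieve images
        "image_to_audio": recalls(sim.T),  # query with an image, retrieve captions
    }

# Example with random embeddings standing in for model outputs.
rng = np.random.default_rng(0)
print(recall_at_k(rng.normal(size=(100, 512)), rng.normal(size=(100, 512))))
```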

Related research

03/30/2023 · Hindi as a Second Language: Improving Visually Grounded Speech with Semantically Similar Samples
The objective of this work is to explore the learning of visually ground...

12/15/2022 · You were saying? – Spoken Language in the V3C Dataset
This paper presents an analysis of the distribution of spoken language i...

11/22/2020 · QuerYD: A video dataset with high-quality textual and audio narrations
We introduce QuerYD, a new large-scale dataset for retrieval and event l...

04/04/2018 · Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input
In this paper, we explore neural network models that learn to associate ...

04/03/2022 · Adjusting for Bias with Procedural Data
3D softwares are now capable of producing highly realistic images that l...

04/03/2019 · Revisiting Visual Grounding
We revisit a particular visual grounding method: the "Image Retrieval Us...

09/19/2019 · Large-scale representation learning from visually grounded untranscribed speech
Systems that can associate images with their spoken audio captions are a...
