QuerYD: A video dataset with high-quality textual and audio narrations

11/22/2020
by   Andreea-Maria Oncescu, et al.

We introduce QuerYD, a new large-scale dataset for retrieval and event localisation in video. A unique feature of our dataset is the availability of two audio tracks for each video: the original audio, and a high-quality spoken description of the visual content. The dataset is based on YouDescribe, a volunteer project that assists visually-impaired people by attaching voiced narrations to existing YouTube videos. This ever-growing collection of videos contains highly detailed, temporally aligned audio and text annotations. The content descriptions are more relevant than dialogue, and more detailed than those of previous description efforts, which often contain superficial or uninformative descriptions. To demonstrate the utility of the QuerYD dataset, we show that it can be used to train and benchmark strong models for retrieval and event localisation. All data, code and models will be made available, and we hope that QuerYD inspires further research on video understanding with written and spoken natural language.

Related research

12/17/2021 | Audio Retrieval with Natural Language Queries: A Benchmark Study
The objectives of this work are cross-modal text-audio and audio-text re...

05/10/2021 | Spoken Moments: Learning Joint Audio-Visual Representations from Video Descriptions
When people observe events, they are able to abstract key information an...

08/17/2023 | Bridging High-Quality Audio and Video via Language for Sound Effects Retrieval from Visual Queries
Finding the right sound effects (SFX) to match moments in a video is a d...

01/23/2018 | What did you Mention? A Large Scale Mention Detection Benchmark for Spoken and Written Text
We describe a large, high-quality benchmark for the evaluation of Mentio...

10/01/2020 | FSD50K: an Open Dataset of Human-Labeled Sound Events
Most existing datasets for sound event recognition (SER) are relatively ...

04/06/2016 | Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text
This paper investigates how linguistic knowledge mined from large text c...

10/14/2021 | Spoken ObjectNet: A Bias-Controlled Spoken Caption Dataset
Visually-grounded spoken language datasets can enable models to learn cr...
