Bridging High-Quality Audio and Video via Language for Sound Effects Retrieval from Visual Queries

08/17/2023
by Julia Wilkins, et al.

Finding the right sound effects (SFX) to match moments in a video is a difficult and time-consuming task, and relies heavily on the quality and completeness of text metadata. Retrieving high-quality (HQ) SFX using a video frame directly as the query is an attractive alternative, removing the reliance on text metadata and providing a low barrier to entry for non-experts. Due to the lack of HQ audio-visual training data, previous work on audio-visual retrieval relies on YouTube (in-the-wild) videos of varied quality for training, where the audio is often noisy and the video of amateur quality. As such, it is unclear whether these systems would generalize to the task of matching HQ audio to production-quality video. To address this, we propose a multimodal framework for recommending HQ SFX given a video frame by (1) leveraging large language models and foundational vision-language models to bridge HQ audio and video to create audio-visual pairs, resulting in a highly scalable automatic audio-visual data curation pipeline; and (2) using pre-trained audio and visual encoders to train a contrastive learning-based retrieval system. We show that our system, trained using our automatic data curation pipeline, significantly outperforms baselines trained on in-the-wild data on the task of HQ SFX retrieval for video. Furthermore, while the baselines fail to generalize to this task, our system generalizes well from clean to in-the-wild data, outperforming the baselines on a dataset of YouTube videos despite only being trained on the HQ audio-visual pairs. A user study confirms that people prefer SFX retrieved by our system over the baseline 67% of the time for both HQ and in-the-wild data. Finally, we present ablations to determine the impact of model and data pipeline design choices on downstream retrieval performance. Please visit our project website to listen to and view our SFX retrieval results.
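The abstract does not give implementation details for the language-bridged pairing step, so the following is only a minimal sketch: it assumes frame captions have already been produced by a vision-language model and that each SFX clip carries text metadata, then matches the two through a shared text embedding space. The `all-MiniLM-L6-v2` encoder and the 0.5 similarity threshold are illustrative placeholders, not choices taken from the paper.

```python
import torch
from sentence_transformers import SentenceTransformer, util

def pair_sfx_with_frames(sfx_descriptions, frame_captions, threshold=0.5):
    """Bridge HQ SFX and video frames via language: embed SFX text
    metadata and automatically generated frame captions with the same
    text encoder, and keep pairs whose descriptions agree strongly.
    (Illustrative sketch; the paper's pipeline details are not given
    in the abstract.)"""
    text_model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder
    sfx_emb = text_model.encode(sfx_descriptions, convert_to_tensor=True)
    cap_emb = text_model.encode(frame_captions, convert_to_tensor=True)

    # Cosine similarity between every SFX description and every caption,
    # shape: (num_sfx, num_frames).
    sim = util.cos_sim(sfx_emb, cap_emb)

    # Keep (sfx index, frame index) pairs above the threshold as
    # candidate audio-visual training pairs.
    idx = (sim > threshold).nonzero(as_tuple=False)
    return [(int(i), int(j)) for i, j in idx]
```

For the retrieval model itself, the abstract only states that it is contrastive and built on pre-trained audio and visual encoders. A common objective for such cross-modal retrieval systems is the CLIP-style symmetric InfoNCE loss sketched below; the temperature value is an assumption, and the embeddings are taken to come from the upstream audio and visual encoders.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, visual_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings; row i of
    each tensor corresponds to the same curated audio-visual pair.
    (A standard choice, not necessarily the paper's exact objective.)"""
    # L2-normalize so dot products are cosine similarities.
    audio_emb = F.normalize(audio_emb, dim=-1)
    visual_emb = F.normalize(visual_emb, dim=-1)

    # logits[i, j] compares audio clip i against video frame j.
    logits = audio_emb @ visual_emb.t() / temperature

    # Matching pairs sit on the diagonal; train in both directions.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_a2v = F.cross_entropy(logits, targets)
    loss_v2a = F.cross_entropy(logits.t(), targets)
    return (loss_a2v + loss_v2a) / 2
```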
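At retrieval time, the trained visual encoder embeds the query frame and the nearest SFX clips by cosine similarity are returned, e.g.:

```python
# scores[j]: similarity of query frame embedding q to SFX embedding j.
# q: (dim,), sfx_bank: (num_sfx, dim), both L2-normalized as above.
scores = sfx_bank @ q
top_sfx = scores.topk(k=5).indices  # indices of the 5 best-matching clips
```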

Related research

11/22/2020
QuerYD: A video dataset with high-quality textual and audio narrations
We introduce QuerYD, a new large-scale dataset for retrieval and event l...

06/05/2023
Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
We present Video-LLaMA, a multi-modal framework that empowers Large Lang...

07/27/2023
PEANUT: A Human-AI Collaborative Tool for Annotating Audio-Visual Data
Audio-visual learning seeks to enhance the computer's multi-modal percep...

08/20/2019
From Text to Sound: A Preliminary Study on Retrieving Sound Effects to Radio Stories
Sound effects play an essential role in producing high-quality radio sto...

10/11/2022
Match Cutting: Finding Cuts with Smooth Visual Transitions
A match cut is a transition between a pair of shots that uses similar fr...

08/09/2023
Data Player: Automatic Generation of Data Videos with Narration-Animation Interplay
Data visualizations and narratives are often integrated to convey data s...

09/05/2023
Generating Realistic Images from In-the-wild Sounds
Representing wild sounds as images is an important but challenging task ...
