Audio-Visual Embedding for Cross-Modal Music-Video Retrieval through Supervised Deep CCA

08/10/2019
by Donghuo Zeng, et al.

Deep learning has shown excellent performance in learning joint representations between different data modalities. Unfortunately, little research focuses on cross-modal correlation learning where the temporal structures of different data modalities, such as audio and video, should be taken into account. Music video retrieval from a given musical audio query is a natural way to search for and interact with music content. In this work, we study cross-modal music video retrieval in terms of emotion similarity. In particular, audio of arbitrary length is used to retrieve a longer or full-length music video. To this end, we propose a novel audio-visual embedding algorithm based on Supervised Deep Canonical Correlation Analysis (S-DCCA), which projects audio and video into a shared space to bridge the semantic gap between the two modalities. The embedding also preserves the similarity between the audio and visual content of different videos with the same class label, as well as the temporal structure. The contribution of our approach is twofold: i) we select the top-k audio chunks with an attention-based Long Short-Term Memory (LSTM) model, which yields a good audio summarization with local properties; ii) we propose an end-to-end deep model for cross-modal audio-visual learning in which S-DCCA is trained to learn the semantic correlation between the audio and visual modalities. Due to the lack of a suitable music video dataset, we construct a 10K music video dataset from the YouTube-8M dataset. Promising results in terms of MAP and precision-recall show that our proposed model can be applied to music video retrieval.
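To make the correlation objective concrete, the following is a minimal sketch of the *linear* CCA computation that underlies DCCA: the canonical correlations between two views are the singular values of the whitened cross-covariance matrix, and DCCA maximizes their sum over the outputs of two deep networks. This is an illustrative stand-in, not the authors' implementation; the function name, regularization constant, and toy data are assumptions.

```python
import numpy as np

def cca_correlation(X, Y, eps=1e-8):
    """Sum of canonical correlations between two views (rows = samples).

    A linear stand-in for the (S-)DCCA objective, which applies the same
    computation to the outputs of two deep networks (one per modality).
    """
    # Center each view.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]

    # (Regularized) covariance and cross-covariance matrices.
    Sxx = X.T @ X / (n - 1) + eps * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + eps * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # Inverse matrix square root via eigendecomposition.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Singular values of the whitened cross-covariance are the
    # canonical correlations; their sum is the (negated) DCCA loss.
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(T, compute_uv=False).sum()
```

For example, when one view is an invertible linear transform of the other, every canonical correlation is 1, so `cca_correlation` returns (approximately) the dimensionality of the smaller view. In the retrieval setting, the learned projections would map audio and video features into the shared space where nearest-neighbor search is performed.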

