Unsupervised Cross-Modal Audio Representation Learning from Unstructured Multilingual Text

03/27/2020
by Alexander Schindler, et al.

We present an approach to unsupervised audio representation learning. Based on a triplet neural network architecture, we harness semantically related cross-modal information to estimate audio track relatedness. By applying Latent Semantic Indexing (LSI), we embed the corresponding textual information into a latent vector space from which we derive track relatedness for online triplet selection. This LSI topic modelling facilitates fine-grained selection of similar and dissimilar audio-track pairs for learning the audio representation with a Convolutional Recurrent Neural Network (CRNN). In this way, we project the semantic context of the unstructured text modality directly onto the learned representation space of the audio modality without deriving structured ground-truth annotations from it. We evaluate our approach on the Europeana Sounds collection and show how search in digital audio libraries can be improved by harnessing the multilingual metadata provided by numerous European digital libraries. We show that our approach is invariant to the variety of annotation styles as well as to the different languages of this collection. The learned representations perform comparably to the baseline of handcrafted features and exceed this baseline in similarity retrieval precision at higher cut-offs, with only 15% of the baseline's feature vector length.
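
As a rough illustration of the pipeline described in the abstract, the sketch below embeds unstructured, multilingual metadata into a latent topic space and mines audio-track triplets from the resulting cosine relatedness. It uses scikit-learn's TfidfVectorizer and TruncatedSVD as a stand-in for LSI; the example texts, topic dimensionality, similarity thresholds, and the mine_triplets helper are illustrative assumptions rather than the authors' implementation, and the CRNN audio encoder trained on the selected triplets is not shown.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Unstructured, possibly multilingual metadata: one text record per audio track.
# (Toy examples; the paper works on the Europeana Sounds metadata.)
metadata = [
    "Field recording of traditional folk songs, northern region, 1962",
    "Veldopname van traditionele volksliederen, folk songs, 1962",
    "Orchesterprobe, Sinfonie Nr. 5, Rundfunkarchiv Wien",
    "Orchestra rehearsal, Symphony No. 5, radio archive",
]

# 1) Embed the metadata into a latent LSI topic space: TF-IDF followed by truncated SVD.
tfidf = TfidfVectorizer(sublinear_tf=True)
lsi = TruncatedSVD(n_components=2, random_state=0)  # a few hundred topics in practice
topic_vectors = lsi.fit_transform(tfidf.fit_transform(metadata))

# 2) Track relatedness: cosine similarity between LSI topic vectors.
relatedness = cosine_similarity(topic_vectors)

# 3) Mine (anchor, positive, negative) index triplets from the relatedness matrix.
def mine_triplets(sim, pos_thresh=0.5, neg_thresh=0.1):
    """Pair each anchor with tracks above/below assumed similarity thresholds."""
    n = sim.shape[0]
    triplets = []
    for a in range(n):
        positives = [i for i in range(n) if i != a and sim[a, i] >= pos_thresh]
        negatives = [i for i in range(n) if i != a and sim[a, i] <= neg_thresh]
        triplets.extend((a, p, ng) for p in positives for ng in negatives)
    return triplets

# These index triplets would then drive online triplet selection for training the
# CRNN audio encoder on the corresponding audio spectrograms (not shown here).
print(mine_triplets(relatedness))
```

The point of the sketch is the supervision signal: similarity is computed purely in the text modality and only its ordering is transferred to the audio modality, so no structured labels are ever derived from the metadata.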


Related research

08/10/2019
Deep Triplet Neural Networks with Cluster-CCA for Audio-Visual Cross-modal Retrieval
Cross-modal retrieval aims to retrieve data in one modality by a query i...

03/29/2022
On Metric Learning for Audio-Text Cross-Modal Retrieval
Audio-text retrieval aims at retrieving a target audio clip or caption f...

11/07/2022
Complete Cross-triplet Loss in Label Space for Audio-visual Cross-modal Retrieval
The heterogeneity gap problem is the main challenge in cross-modal retri...

03/27/2019
Image search using multilingual texts: a cross-modal learning approach between image and text
Multilingual (or cross-lingual) embeddings represent several languages i...

12/30/2021
Audio-to-symbolic Arrangement via Cross-modal Music Representation Learning
Could we automatically derive the score of a piano accompaniment based o...

09/17/2019
Multi-Task Music Representation Learning from Multi-Label Embeddings
This paper presents a novel approach to music representation learning. T...
