Learning Explicit and Implicit Latent Common Spaces for Audio-Visual Cross-Modal Retrieval

10/26/2021
by Donghuo Zeng, et al.

Learning a common subspace is a prevalent approach in cross-modal retrieval, because data from different modalities have inconsistent distributions and representations and thus cannot be compared directly. Previous cross-modal retrieval methods project the cross-modal data into a single common space by learning the correlation between modalities to bridge the modality gap. However, the rich semantic information in video and the heterogeneous nature of audio-visual data widen the heterogeneity gap: with only a single cue, previous methods may lose key semantic content of the video while eliminating the modality gap, and the category semantics may undermine the properties of the original features. In this work, we aim to learn effective audio-visual representations to support audio-visual cross-modal retrieval (AVCMR). We propose a novel model that maps the audio and visual modalities into two distinct shared latent subspaces: an explicit and an implicit shared space. The explicit shared space is used to optimize pairwise correlations, where the learned cross-modal representations capture the commonalities of audio-visual pairs and reduce the modality gap. The implicit shared space is used to preserve the distinctive features of each modality by maintaining the discrimination of audio/video patterns from different semantic categories. Finally, the fusion of the features learned in the two latent subspaces is used to compute similarities for the AVCMR task. Comprehensive experimental results on two audio-visual datasets demonstrate that our model, which learns from two different latent subspaces for audio-visual cross-modal learning, is effective and significantly outperforms state-of-the-art cross-modal models that learn features from a single subspace.
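
The abstract describes the model only at the architecture level. Below is a minimal sketch of the two-subspace idea, assuming PyTorch; the class name, feature dimensions, loss surrogates, and weights (alpha, beta) are hypothetical, and the paper's actual pairwise-correlation and discrimination objectives, as well as its fusion scheme, may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoSubspaceModel(nn.Module):
    """Hypothetical sketch: explicit + implicit shared subspaces for audio-visual retrieval."""

    def __init__(self, audio_dim=128, visual_dim=1024, shared_dim=64, num_classes=10):
        super().__init__()
        # Explicit branch: projections trained to pull paired audio/visual features together.
        self.audio_explicit = nn.Sequential(
            nn.Linear(audio_dim, shared_dim), nn.ReLU(), nn.Linear(shared_dim, shared_dim))
        self.visual_explicit = nn.Sequential(
            nn.Linear(visual_dim, shared_dim), nn.ReLU(), nn.Linear(shared_dim, shared_dim))
        # Implicit branch: projections trained to keep semantic categories discriminable.
        self.audio_implicit = nn.Sequential(
            nn.Linear(audio_dim, shared_dim), nn.ReLU(), nn.Linear(shared_dim, shared_dim))
        self.visual_implicit = nn.Sequential(
            nn.Linear(visual_dim, shared_dim), nn.ReLU(), nn.Linear(shared_dim, shared_dim))
        # Shared classifier over the implicit space (one plausible way to enforce discrimination).
        self.classifier = nn.Linear(shared_dim, num_classes)

    def forward(self, audio, visual):
        a_exp, v_exp = self.audio_explicit(audio), self.visual_explicit(visual)
        a_imp, v_imp = self.audio_implicit(audio), self.visual_implicit(visual)
        return a_exp, v_exp, a_imp, v_imp

    def loss(self, audio, visual, labels, alpha=1.0, beta=1.0):
        a_exp, v_exp, a_imp, v_imp = self.forward(audio, visual)
        # Explicit space: a simple pairwise surrogate that pulls matched audio/visual pairs together.
        pair_loss = F.mse_loss(F.normalize(a_exp, dim=-1), F.normalize(v_exp, dim=-1))
        # Implicit space: keep audio/video patterns separable by semantic category.
        cls_loss = (F.cross_entropy(self.classifier(a_imp), labels)
                    + F.cross_entropy(self.classifier(v_imp), labels))
        return alpha * pair_loss + beta * cls_loss

    def fused_embeddings(self, audio, visual):
        a_exp, v_exp, a_imp, v_imp = self.forward(audio, visual)
        # Fuse the two subspaces; cosine similarity over the fused features drives retrieval.
        a = F.normalize(torch.cat([a_exp, a_imp], dim=-1), dim=-1)
        v = F.normalize(torch.cat([v_exp, v_imp], dim=-1), dim=-1)
        return a, v
```

At retrieval time, an audio-to-visual similarity matrix can be obtained as `a @ v.t()` over the fused, normalized embeddings and used to rank candidates, mirroring the fusion-then-similarity step described in the abstract.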

research · 08/10/2019
Deep Triplet Neural Networks with Cluster-CCA for Audio-Visual Cross-modal Retrieval
Cross-modal retrieval aims to retrieve data in one modality by a query i...

research · 11/07/2022
Complete Cross-triplet Loss in Label Space for Audio-visual Cross-modal Retrieval
The heterogeneity gap problem is the main challenge in cross-modal retri...

research · 06/25/2021
Graph Pattern Loss based Diversified Attention Network for Cross-Modal Retrieval
Cross-modal retrieval aims to enable flexible retrieval experience by co...

research · 07/03/2019
Cascade Attention Guided Residue Learning GAN for Cross-Modal Translation
Since we were babies, we intuitively develop the ability to correlate th...

research · 11/11/2021
Learning Signal-Agnostic Manifolds of Neural Fields
Deep neural networks have been used widely to learn the latent structure...

research · 04/12/2018
Cross-Modal Retrieval with Implicit Concept Association
Traditional cross-modal retrieval assumes explicit association of concep...

research · 03/25/2021
Discriminative Semantic Transitive Consistency for Cross-Modal Learning
Cross-modal retrieval is generally performed by projecting and aligning ...
