Integrating Information Theory and Adversarial Learning for Cross-modal Retrieval

04/11/2021
by   Wei Chen, et al.

Accurately matching visual and textual data in cross-modal retrieval has been widely studied in the multimedia community. To address the challenges posed by the heterogeneity gap and the semantic gap, we propose integrating Shannon information theory and adversarial learning. To bridge the heterogeneity gap, we combine modality classification and information-entropy maximization adversarially: a modality classifier (acting as a discriminator) is built to distinguish the text and image modalities by their different statistical properties, and its output probabilities are used to compute the Shannon information entropy, which measures the uncertainty of the modality classification it performs. Meanwhile, the feature encoders (acting as a generator) project uni-modal features into a commonly shared space and attempt to fool the discriminator by maximizing its output entropy. Maximizing this entropy gradually reduces the distribution discrepancy between cross-modal features, driving the model toward a domain-confusion state in which the discriminator can no longer classify the two modalities confidently. To reduce the semantic gap, Kullback-Leibler (KL) divergence and a bi-directional triplet loss are used to model the intra- and inter-modality similarities between features in the shared space. Furthermore, a regularization term based on KL divergence with temperature scaling calibrates the label classifier, which would otherwise be biased by the data-imbalance issue. Extensive experiments with four deep models on four benchmarks demonstrate the effectiveness of the proposed approach.
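The entropy-maximization idea in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the function names, the two-class (image/text) setup, and the temperature parameter are illustrative assumptions. It shows why maximizing the discriminator's output entropy pushes it toward domain confusion: entropy is near zero when the classifier is confident about the modality and peaks at log(2) when it cannot tell the two modalities apart.

```python
import numpy as np

def softmax(logits, t=1.0):
    """Softmax with optional temperature scaling (t > 1 softens the
    distribution, as used for the KL-based calibration term)."""
    z = logits / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def modality_entropy(logits):
    """Shannon entropy of the modality classifier's output distribution
    over the two modalities (image, text), per sample."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def generator_adversarial_loss(disc_logits):
    """The feature encoders (generator) minimize negative entropy, i.e.
    they maximize the discriminator's uncertainty about the modality."""
    return -modality_entropy(disc_logits).mean()

# A confident discriminator (clearly sees an image) -> entropy near 0.
confident = np.array([[4.0, -4.0]])
# A fooled discriminator (domain confusion) -> entropy near log(2).
confused = np.array([[0.0, 0.0]])

print(modality_entropy(confident))  # small, close to 0
print(modality_entropy(confused))   # close to log(2) ≈ 0.693
```

Under this sketch, training the encoders to minimize `generator_adversarial_loss` drives the discriminator's outputs toward the uniform distribution, which is the domain-confusion state the abstract describes.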


