Unsupervised Generative Adversarial Cross-modal Hashing

12/01/2017
by Jian Zhang, et al.

Cross-modal hashing maps heterogeneous multimedia data into a common Hamming space, enabling fast and flexible retrieval across different modalities. Unsupervised cross-modal hashing is more flexible and widely applicable than supervised methods, since no intensive labeling work is involved. However, existing unsupervised methods learn hashing functions by preserving inter- and intra-modal correlations while ignoring the underlying manifold structure across modalities, which is extremely helpful for capturing meaningful nearest neighbors of different modalities in cross-modal retrieval. To address this problem, in this paper we propose an Unsupervised Generative Adversarial Cross-modal Hashing approach (UGACH), which makes full use of the GAN's ability for unsupervised representation learning to exploit the underlying manifold structure of cross-modal data. The main contributions can be summarized as follows: (1) We propose a generative adversarial network to model cross-modal hashing in an unsupervised fashion. In the proposed UGACH, given data of one modality, the generative model tries to fit the distribution over the manifold structure and selects informative data of another modality to challenge the discriminative model. The discriminative model learns to distinguish the generated data from true positive data sampled from the correlation graph, in order to achieve better retrieval accuracy. These two models are trained adversarially to improve each other and promote hashing function learning. (2) We propose a correlation graph based approach to capture the underlying manifold structure across different modalities, so that data of different modalities but within the same manifold have smaller Hamming distances, which promotes retrieval accuracy. Extensive experiments against 6 state-of-the-art methods verify the effectiveness of the proposed approach.
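The correlation graph in contribution (2) is typically built from nearest neighbors in the original feature space, so that "true positive" pairs for the discriminator come from the manifold structure rather than from labels. The sketch below is an illustrative k-NN graph construction under that assumption, not the authors' exact procedure; the function name `correlation_graph` and the Euclidean metric are choices made here for illustration.

```python
import numpy as np

def correlation_graph(features, k=3):
    """Build a k-NN correlation graph over unlabeled data.

    Entry (i, j) is True when item j is among the k nearest neighbors of
    item i under Euclidean distance. In a UGACH-style setup, an edge marks
    a candidate "true positive" pair lying on the same manifold, which the
    discriminator contrasts against generator-selected data.
    Illustrative sketch only; the paper's construction may differ.
    """
    n = features.shape[0]
    # Pairwise squared Euclidean distances via broadcasting.
    diff = features[:, None, :] - features[None, :, :]
    dist = (diff ** 2).sum(axis=-1)
    np.fill_diagonal(dist, np.inf)  # an item is never its own neighbor
    graph = np.zeros((n, n), dtype=bool)
    for i in range(n):
        graph[i, np.argsort(dist[i])[:k]] = True  # k closest items
    return graph

# Toy usage: two well-separated clusters of two points each.
feats = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
g = correlation_graph(feats, k=1)
```

With `k=1` each point links to its cluster-mate, so `g[0, 1]` and `g[2, 3]` are True while cross-cluster entries are False; in the full method such within-manifold pairs would be pushed toward small Hamming distances.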

Related research
02/07/2018

SCH-GAN: Semi-supervised Cross-modal Hashing by Generative Adversarial Network

Cross-modal hashing aims to map heterogeneous multimedia data into a com...
10/14/2017

CM-GANs: Cross-modal Generative Adversarial Networks for Common Representation Learning

It is known that the inconsistent distribution and representation of dif...
03/26/2019

Unsupervised Concatenation Hashing with Sparse Constraint for Cross-Modal Retrieval

With the advantage of low storage cost and high efficiency, hashing lear...
04/04/2018

Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval

Thanks to the success of deep learning, cross-modal retrieval has made s...
11/11/2021

Learning Signal-Agnostic Manifolds of Neural Fields

Deep neural networks have been used widely to learn the latent structure...
04/01/2020

Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing

In recent years, cross-modal hashing (CMH) has attracted increasing atte...
12/25/2020

Comprehensive Graph-conditional Similarity Preserving Network for Unsupervised Cross-modal Hashing

Unsupervised cross-modal hashing (UCMH) has become a hot topic recently....