Multimodal diff-hash

11/07/2011
by Michael M. Bronstein, et al.

Many applications require comparing multimodal data whose structure and dimensionality differ, so the data cannot be compared directly. Recently, there has been increasing interest in methods for learning and efficiently representing such multimodal similarity. In this paper, we present a simple algorithm for multimodal similarity-preserving hashing, which maps multimodal data into the Hamming space while preserving both intra- and inter-modal similarities. We show that our method significantly outperforms the state-of-the-art method in the field.
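The core idea, mapping each modality into a shared Hamming space so that binary codes can be compared across modalities, can be illustrated with a minimal sketch. The projections below are random placeholders (the paper learns them so that Hamming distances reflect the intra- and inter-modal similarities); all dimensions and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions for two modalities (e.g. image vs. text)
# and the length of the shared binary code.
DIM_X, DIM_Y, BITS = 8, 5, 16

# Random linear projections stand in for the learned hash maps.
P = rng.standard_normal((BITS, DIM_X))
Q = rng.standard_normal((BITS, DIM_Y))

def hash_x(x):
    """Map a modality-X vector to a BITS-long binary code."""
    return (P @ x > 0).astype(np.uint8)

def hash_y(y):
    """Map a modality-Y vector into the same Hamming space."""
    return (Q @ y > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

# Once both modalities live in the same code space, cross-modal
# comparison reduces to counting differing bits.
x = rng.standard_normal(DIM_X)
y = rng.standard_normal(DIM_Y)
d = hamming(hash_x(x), hash_y(y))  # an integer in [0, BITS]
```

The payoff of this representation is retrieval speed: Hamming distances between short binary codes are cheap bit operations, so nearest-neighbor search over millions of items becomes practical.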


Related research

01/09/2019 · Deep Semantic Multimodal Hashing Network for Scalable Multimedia Retrieval
Hashing has been widely applied to multimodal retrieval on large-scale m...

07/06/2012 · Multimodal similarity-preserving hashing
We introduce an efficient computational framework for hashing data belon...

08/17/2017 · Deep Binary Reconstruction for Cross-modal Hashing
With the increasing demand of massive multimodal data storage and organi...

12/25/2020 · Comprehensive Graph-conditional Similarity Preserving Network for Unsupervised Cross-modal Hashing
Unsupervised cross-modal hashing (UCMH) has become a hot topic recently....

06/01/2016 · A Survey on Learning to Hash
Nearest neighbor search is a problem of finding the data points from the...

03/25/2017 · Learning to Predict: A Fast Re-constructive Method to Generate Multimodal Embeddings
Integrating visual and linguistic information into a single multimodal r...

01/18/2021 · MONAH: Multi-Modal Narratives for Humans to analyze conversations
In conversational analyses, humans manually weave multimodal information...
