Multimodal similarity-preserving hashing

07/06/2012
by Jonathan Masci, et al.

We introduce an efficient computational framework for hashing data belonging to multiple modalities into a single representation space where they become mutually comparable. The proposed approach is based on a novel coupled siamese neural network architecture and allows unified treatment of intra- and inter-modality similarity learning. Unlike existing cross-modality similarity learning approaches, our hashing functions are not limited to binarized linear projections and can assume arbitrarily complex forms. We show experimentally that our method significantly outperforms state-of-the-art hashing approaches on multimedia retrieval tasks.
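The core idea, two modality-specific hashing functions mapping into a single shared Hamming space where codes become mutually comparable, can be illustrated with a heavily simplified sketch. The snippet below uses fixed random linear projections with sign binarization purely for illustration (the paper's hashing functions are learned and can be arbitrarily complex, not just linear); all names, dimensions, and the bit count `K` are hypothetical choices, not the authors' implementation.

```python
import random

random.seed(0)

K = 16  # number of hash bits (hypothetical choice)

def make_projection(in_dim, bits):
    """Random linear projection; stands in for a learned per-modality encoder."""
    return [[random.gauss(0, 1) for _ in range(in_dim)] for _ in range(bits)]

def hash_code(x, proj):
    """Map a feature vector to a K-bit binary code via the sign of each projection."""
    return [1 if sum(w * xi for w, xi in zip(row, x)) > 0 else 0 for row in proj]

def hamming(a, b):
    """Distance in the shared code space; valid across modalities."""
    return sum(ai != bi for ai, bi in zip(a, b))

# Two modalities with different native feature dimensions (e.g. image vs. text),
# each with its own hashing function but a common K-bit output space.
proj_img = make_projection(64, K)
proj_txt = make_projection(32, K)

img_feat = [random.gauss(0, 1) for _ in range(64)]
txt_feat = [random.gauss(0, 1) for _ in range(32)]

code_img = hash_code(img_feat, proj_img)
code_txt = hash_code(txt_feat, proj_txt)

# Cross-modal retrieval reduces to Hamming distance between codes; the coupled
# siamese training that aligns similar pairs across modalities is not shown here.
d = hamming(code_img, code_txt)
print(len(code_img), len(code_txt), 0 <= d <= K)  # prints "16 16 True"
```

In the actual method, the two encoders are trained jointly in a coupled siamese configuration so that intra- and inter-modality similar pairs receive nearby codes; the random projections above only demonstrate why a shared binary code space makes the modalities directly comparable.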


Related research

01/09/2019  Deep Semantic Multimodal Hashing Network for Scalable Multimedia Retrieval
Hashing has been widely applied to multimodal retrieval on large-scale m...

11/07/2011  Multimodal diff-hash
Many applications require comparing multimodal data with different struc...

02/18/2015  Cross-Modality Hashing with Partial Correspondence
Learning a hashing function for cross-media search is very desirable due...

08/17/2017  Deep Binary Reconstruction for Cross-modal Hashing
With the increasing demand of massive multimodal data storage and organi...

08/15/2016  Transitive Hashing Network for Heterogeneous Multimedia Retrieval
Hashing has been widely applied to large-scale multimedia retrieval due ...

05/29/2018  Hierarchical One Permutation Hashing: Efficient Multimedia Near Duplicate Detection
With advances in multimedia technologies and the proliferation of smart ...

09/19/2019  HyperLearn: A Distributed Approach for Representation Learning in Datasets With Many Modalities
Multimodal datasets contain an enormous amount of relational information...
