Learning Text-Image Joint Embedding for Efficient Cross-Modal Retrieval with Deep Feature Engineering

10/22/2021
by   Zhongwei Xie, et al.

This paper introduces a two-phase deep feature engineering framework for efficient learning of a semantics-enhanced joint embedding (SEJE), which cleanly separates the deep feature engineering performed in data preprocessing from the training of the text-image joint embedding model. We use the Recipe1M dataset for the technical description and empirical validation. In the preprocessing phase, we perform deep feature engineering by enriching the raw text-image input data with semantic context features. We leverage an LSTM to identify key terms; deep NLP models from the BERT family, TextRank, or TF-IDF to produce ranking scores for those key terms; and word2vec to generate the vector representation of each key term. We leverage wideResNet50 and word2vec to extract and encode the image category semantics of food images, which helps align the learned recipe and image embeddings semantically in the joint latent space. In the joint embedding learning phase, we perform deep feature engineering by optimizing the batch-hard triplet loss function with a soft margin and double negative sampling, while also taking into account the category-based alignment loss and the discriminator-based alignment loss. Extensive experiments demonstrate that our SEJE approach with deep feature engineering significantly outperforms state-of-the-art approaches.
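As a concrete illustration of one of the key-term scoring options named above, the sketch below ranks each recipe's terms by a plain TF-IDF score computed from scratch. The toy corpus, whitespace tokenization, and smoothed IDF are illustrative assumptions, not the authors' exact pipeline, which may instead score terms with BERT-family models or TextRank and then encodes the ranked terms with word2vec.

```python
# Minimal TF-IDF key-term ranking sketch (illustrative, not the paper's code).
import math
from collections import Counter

def tfidf_rank(docs):
    """Rank each document's terms by TF-IDF, highest score first."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    rankings = []
    for doc in docs:
        tf = Counter(doc)
        scores = {t: (tf[t] / len(doc)) * math.log((1 + n) / (1 + df[t]))
                  for t in tf}
        rankings.append(sorted(scores, key=scores.get, reverse=True))
    return rankings

# Toy "recipe" corpus: terms unique to one recipe rank first, terms shared
# by every recipe fall to the bottom.
recipes = [
    "saute onion garlic add tomato simmer".split(),
    "whisk egg sugar fold flour bake add".split(),
    "boil pasta drain add tomato garlic".split(),
]
rankings = tfidf_rank(recipes)
```

In the paper's pipeline, the top-ranked terms would then each be mapped to a word2vec vector to form the semantic context features for the recipe branch.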
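The batch-hard triplet objective with soft margin can be sketched as follows. Here the soft margin is the standard softplus relaxation log(1 + exp(d_pos - d_neg)), and double negative sampling is approximated by mining the hardest negative in both retrieval directions (text-to-image and image-to-text); both are assumptions about the exact formulation, and the category-based and discriminator-based alignment losses are omitted.

```python
# Hedged sketch of a batch-hard triplet loss with soft margin; the function
# name and the two-direction negative mining are illustrative assumptions.
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def softplus(x):
    # soft-margin relaxation of the hinge: log(1 + exp(x))
    return math.log1p(math.exp(x))

def batch_hard_soft_margin_loss(text_emb, img_emb):
    """text_emb[i] and img_emb[i] form the i-th matching (recipe, image) pair."""
    n = len(text_emb)
    loss = 0.0
    for i in range(n):
        d_pos = euclidean(text_emb[i], img_emb[i])
        # hardest (closest) non-matching image for this text anchor
        d_neg_img = min(euclidean(text_emb[i], img_emb[j])
                        for j in range(n) if j != i)
        # hardest non-matching text for this image anchor (second direction)
        d_neg_txt = min(euclidean(img_emb[i], text_emb[j])
                        for j in range(n) if j != i)
        loss += softplus(d_pos - d_neg_img) + softplus(d_pos - d_neg_txt)
    return loss / n

# Toy batch: two matching pairs in a 2-D latent space; matched pairs are
# close, so the loss is near zero.
loss = batch_hard_soft_margin_loss([[0.0, 0.0], [10.0, 0.0]],
                                   [[0.0, 1.0], [10.0, 1.0]])
```

Unlike a fixed-margin hinge, the softplus term never saturates to exactly zero, so every triplet keeps contributing a (vanishing) gradient as the embedding improves.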


