Multi-Modal Cross-Domain Alignment Network for Video Moment Retrieval

09/23/2022
by Xiang Fang, et al.

As an increasingly popular task in multimedia information retrieval, video moment retrieval (VMR) aims to localize the target moment in an untrimmed video according to a given language query. Most previous methods depend heavily on manual annotations (i.e., moment boundaries), which are extremely expensive to acquire in practice. Moreover, because of the domain gap between datasets, directly applying these pre-trained models to an unseen domain leads to a significant performance drop. In this paper, we focus on a novel task: cross-domain VMR, where a fully-annotated dataset is available in one domain (the “source domain”) while the domain of interest (the “target domain”) contains only unannotated data. To the best of our knowledge, this is the first study of cross-domain VMR. To address this new task, we propose a novel Multi-Modal Cross-Domain Alignment (MMCDA) network that transfers annotation knowledge from the source domain to the target domain. Such transfer is hindered by two obstacles: the domain discrepancy between the source and target domains, and the semantic gap between videos and queries. To overcome both, we develop three novel modules: (i) a domain alignment module aligns the feature distributions of each modality across domains; (ii) a cross-modal alignment module maps video and query features into a joint embedding space and aligns the feature distributions of the two modalities in the target domain; (iii) a specific alignment module captures the fine-grained similarity between a specific frame and the given query for precise localization. By jointly training these three modules, MMCDA learns domain-invariant and semantically aligned cross-modal representations.
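The abstract names the three modules but not their concrete losses. The sketch below is one plausible PyTorch instantiation, not the authors' implementation: maximum mean discrepancy (MMD), a standard distribution-matching loss, stands in for the domain alignment module; a symmetric InfoNCE contrastive loss stands in for the cross-modal alignment module; and frame-level cosine similarity stands in for the specific alignment module. All function names and hyperparameters (sigma, temperature, loss weights) are illustrative assumptions.

```python
# Hypothetical sketch of the three MMCDA alignment objectives.
# The losses below are common stand-ins; the paper's actual choices may differ.
import torch
import torch.nn.functional as F

def mmd_loss(source_feats, target_feats, sigma=1.0):
    """Domain alignment stand-in: biased MMD estimate with an RBF kernel,
    matching the feature distributions of one modality across domains."""
    def rbf(a, b):
        dist = torch.cdist(a, b) ** 2
        return torch.exp(-dist / (2 * sigma ** 2))
    return (rbf(source_feats, source_feats).mean()
            + rbf(target_feats, target_feats).mean()
            - 2 * rbf(source_feats, target_feats).mean())

def cross_modal_loss(video_emb, query_emb, temperature=0.07):
    """Cross-modal alignment stand-in: symmetric InfoNCE loss pulling
    matched video/query pairs together in the joint embedding space."""
    v = F.normalize(video_emb, dim=-1)
    q = F.normalize(query_emb, dim=-1)
    logits = v @ q.t() / temperature                  # (B, B) similarities
    labels = torch.arange(v.size(0), device=v.device)
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

def specific_alignment_scores(frame_feats, query_emb):
    """Specific alignment stand-in: cosine similarity between each frame
    and the query, yielding per-frame scores for moment localization."""
    f = F.normalize(frame_feats, dim=-1)              # (B, T, D)
    q = F.normalize(query_emb, dim=-1)                # (B, D)
    return torch.einsum("btd,bd->bt", f, q)           # (B, T)

# Toy usage with random features (batch of 4 clips, 16 frames, 256-dim):
B, T, D = 4, 16, 256
src_v, tgt_v = torch.randn(B, D), torch.randn(B, D)   # video features
src_q, tgt_q = torch.randn(B, D), torch.randn(B, D)   # query features
joint_loss = (mmd_loss(src_v, tgt_v) + mmd_loss(src_q, tgt_q)  # per-modality domain alignment
              + cross_modal_loss(tgt_v, tgt_q))                # target-domain cross-modal alignment
frame_scores = specific_alignment_scores(torch.randn(B, T, D), tgt_q)  # (B, T)
```

In this reading, the per-frame scores would feed a localization loss on the source domain (where moment boundaries are annotated), while the two distribution-alignment losses make the learned representations transferable to the unannotated target domain.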


Related research

- Domain Adaptation in Multi-View Embedding for Cross-Modal Video Retrieval (10/25/2021): Given a gallery of uncaptioned video sequences, this paper considers the...
- Cross-modal Common Representation Learning by Hybrid Transfer Network (06/01/2017): DNN-based cross-modal retrieval is a research hotspot to retrieve across...
- MHTN: Modal-adversarial Hybrid Transfer Network for Cross-modal Retrieval (08/08/2017): Cross-modal retrieval has drawn wide interest for retrieval across diffe...
- Faster Video Moment Retrieval with Point-Level Supervision (05/23/2023): Video Moment Retrieval (VMR) aims at retrieving the most relevant events...
- Cross-Domain Labeled LDA for Cross-Domain Text Classification (09/16/2018): Cross-domain text classification aims at building a classifier for a tar...
- Cross-domain Food Image-to-Recipe Retrieval by Weighted Adversarial Learning (04/14/2023): Food image-to-recipe aims to learn an embedded space linking the rich se...
- Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning (02/22/2022): In data-rich domains such as vision, language, and speech, deep learning...
