Cross-modal Search Method of Technology Video based on Adversarial Learning and Feature Fusion

10/11/2022
by Xiangbin Liu, et al.

Technology videos contain rich multi-modal information. In cross-modal retrieval, the features of different modalities cannot be compared directly, so bridging the semantic gap between modalities is a key problem. To address this problem, this paper proposes a novel Feature Fusion based Adversarial Cross-modal Retrieval method (FFACR) for text-to-video matching, ranking, and search. The method adopts an adversarial learning framework in which a video multi-modal feature fusion network and a feature mapping network serve as the generator, and a modality discrimination network serves as the discriminator. The feature fusion network produces multi-modal video features. The feature mapping network projects these features into a common semantic space, guided by semantic labels and similarity. The modality discrimination network attempts to identify the original modality of each projected feature. The generator and discriminator are trained alternately, so that the features produced by the mapping network remain semantically consistent with the original data while modality-specific characteristics are eliminated; retrieval results are then obtained by similarity ranking in the common semantic space. Experimental results on a self-built dataset of technology videos demonstrate that the proposed method outperforms existing methods in text-to-video search.
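To make the generator/discriminator interplay concrete, below is a minimal sketch of this kind of adversarial cross-modal retrieval setup in PyTorch. It is not the paper's implementation: the network sizes (2048-d visual, 128-d audio, 768-d text, 256-d common space), the temperature 0.07, and the 0.1 adversarial weight are all assumed for illustration.

```python
# Sketch of adversarial cross-modal retrieval with feature fusion.
# All dimensions, losses, and weights are illustrative assumptions,
# not the FFACR paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionMapper(nn.Module):
    """Generator: fuses multi-modal video features, then maps video and
    text features into a shared semantic space."""
    def __init__(self, video_dims=(2048, 128), text_dim=768, common_dim=256):
        super().__init__()
        # Feature fusion network: concatenate per-modality video features
        # (e.g. visual + audio) and project them to a single vector.
        self.fuse = nn.Linear(sum(video_dims), common_dim)
        # Feature mapping networks into the common semantic space.
        self.map_video = nn.Sequential(nn.ReLU(), nn.Linear(common_dim, common_dim))
        self.map_text = nn.Sequential(nn.Linear(text_dim, common_dim), nn.ReLU(),
                                      nn.Linear(common_dim, common_dim))

    def forward(self, visual, audio, text):
        v = self.map_video(self.fuse(torch.cat([visual, audio], dim=-1)))
        t = self.map_text(text)
        return F.normalize(v, dim=-1), F.normalize(t, dim=-1)

class ModalityDiscriminator(nn.Module):
    """Discriminator: predicts whether a common-space feature came from
    the video branch (label 0) or the text branch (label 1)."""
    def __init__(self, common_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(common_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_step(gen, disc, opt_g, opt_d, visual, audio, text):
    # --- Discriminator step: learn to tell the two modalities apart. ---
    with torch.no_grad():
        v, t = gen(visual, audio, text)
    logits = disc(torch.cat([v, t], dim=0))
    labels = torch.cat([torch.zeros(len(v)), torch.ones(len(t))])
    d_loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Generator step: preserve semantics, confuse the discriminator. ---
    v, t = gen(visual, audio, text)
    sim = v @ t.T                        # pairwise cosine similarities
    target = torch.arange(len(v))        # matched pairs lie on the diagonal
    sem_loss = F.cross_entropy(sim / 0.07, target)  # similarity/ranking term
    logits = disc(torch.cat([v, t], dim=0))
    adv_loss = -F.binary_cross_entropy_with_logits(logits, labels)  # fool disc
    g_loss = sem_loss + 0.1 * adv_loss   # 0.1 is an assumed trade-off weight
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

At search time, a text query is projected into the common space by the mapping network and candidate videos are ranked by cosine similarity to it; the adversarial term is only used during training to remove modality-specific structure from the shared space.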


research · 04/11/2021
Integrating Information Theory and Adversarial Learning for Cross-modal Retrieval
Accurately matching visual and textual data in cross-modal retrieval has...

research · 09/13/2022
Look Before You Leap: Improving Text-based Person Retrieval by Learning A Consistent Cross-modal Common Manifold
The core problem of text-based person retrieval is how to bridge the het...

research · 08/16/2017
Modality-specific Cross-modal Similarity Measurement with Recurrent Attention Network
Nowadays, cross-modal retrieval plays an indispensable role to flexibly ...

research · 08/19/2023
Interpretation on Multi-modal Visual Fusion
In this paper, we present an analytical framework and a novel metric to ...

research · 12/18/2019
Learning Shared Cross-modality Representation Using Multispectral-LiDAR and Hyperspectral Data
Due to the ever-growing diversity of the data source, multi-modality fea...

research · 11/19/2020
Watch and Learn: Mapping Language and Noisy Real-world Videos with Self-supervision
In this paper, we teach machines to understand visuals and natural langu...

research · 07/04/2018
Deep Cross-modality Adaptation via Semantics Preserving Adversarial Learning for Sketch-based 3D Shape Retrieval
Due to the large cross-modality discrepancy between 2D sketches and 3D s...
