(Un)likelihood Training for Interpretable Embedding

07/01/2022
by   Jiaxin Wu, et al.

Cross-modal representation learning has become the new normal for bridging the semantic gap between text and visual data. Learning modality-agnostic representations in a continuous latent space, however, is often treated as a black-box, data-driven training process. It is well known that the effectiveness of representation learning depends heavily on the quality and scale of the training data. For video representation learning, obtaining a complete set of labels that annotates the full spectrum of video content is highly difficult, if not impossible. These two issues, black-box training and dataset bias, make learned representations difficult to deploy for video understanding due to unexplainable and unpredictable results. In this paper, we propose two novel training objectives, likelihood and unlikelihood functions, to unroll the semantics behind embeddings while addressing the label sparsity problem in training. The likelihood training aims to interpret the semantics of embeddings beyond the training labels, while the unlikelihood training leverages prior knowledge as regularization to ensure a semantically coherent interpretation. With both training objectives, a new encoder-decoder network that learns interpretable cross-modal representations is proposed for ad-hoc video search. Extensive experiments on the TRECVid and MSR-VTT datasets show that the proposed network outperforms several state-of-the-art retrieval models by a statistically significant margin.
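The paper defines the two objectives precisely; as a rough, hypothetical sketch of the general idea only (the function names, concept vocabulary, and numbers below are invented for illustration and are not taken from the paper), a likelihood term rewards concepts that should describe a video, while an unlikelihood term pushes down concepts that prior knowledge says cannot co-occur with it:

```python
import numpy as np

def likelihood_loss(probs, positive_idx):
    # Encourage decoded concepts that should describe the video:
    # negative log-likelihood over the labeled (positive) concepts.
    return -np.mean(np.log(probs[positive_idx]))

def unlikelihood_loss(probs, negative_idx):
    # Penalize concepts ruled out by prior knowledge:
    # push their predicted probabilities toward zero.
    return -np.mean(np.log(1.0 - probs[negative_idx]))

# Toy decoder output over a 5-concept vocabulary (illustrative values).
probs = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
pos = [0]        # concept labeled for this video
neg = [3, 4]     # concepts contradicted by prior knowledge

total = likelihood_loss(probs, pos) + unlikelihood_loss(probs, neg)
print(round(total, 4))  # -> 0.408
```

In this sketch, minimizing the combined loss simultaneously raises the probability of labeled concepts and suppresses semantically incompatible ones, which is the regularizing role the abstract attributes to unlikelihood training.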


