Learning 2D Temporal Adjacent Networks for Moment Localization with Natural Language

12/08/2019
by   Houwen Peng, et al.

We address the problem of retrieving a specific moment from an untrimmed video with a query sentence. This is a challenging problem because a target moment may take place in relation to other temporal moments in the untrimmed video. Existing methods cannot tackle this challenge well, since they consider temporal moments individually and neglect their temporal dependencies. In this paper, we model the temporal relations between video moments with a two-dimensional map, where one dimension indicates the starting time of a moment and the other indicates its ending time. This 2D temporal map can cover diverse video moments of different lengths while representing their adjacent relations. Based on the 2D map, we propose a Temporal Adjacent Network (2D-TAN), a single-shot framework for moment localization. It is capable of encoding adjacent temporal relations while learning discriminative features for matching video moments with referring expressions. We evaluate the proposed 2D-TAN on three challenging benchmarks, i.e., Charades-STA, ActivityNet Captions, and TACoS, where our 2D-TAN outperforms the state of the art.
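
To make the 2D temporal map idea concrete, below is a minimal NumPy sketch (not the authors' implementation) of how candidate moments can be laid out on a start-index by end-index grid, with entries where the end precedes the start masked out. The function name build_2d_temporal_map, the max/mean pooling choice, and the clip count are illustrative assumptions.

```python
import numpy as np

def build_2d_temporal_map(clip_features, pooling="max"):
    """Arrange candidate moments on a 2D grid.

    Entry (i, j) represents the moment spanning clips i..j (row = start
    index, column = end index); entries with j < i are invalid and masked.
    Illustrative sketch only, not the official 2D-TAN code.
    """
    n, d = clip_features.shape
    feature_map = np.zeros((n, n, d), dtype=clip_features.dtype)
    valid_mask = np.zeros((n, n), dtype=bool)
    for i in range(n):             # start clip index
        for j in range(i, n):      # end clip index (j >= i)
            span = clip_features[i:j + 1]
            feature_map[i, j] = span.max(axis=0) if pooling == "max" else span.mean(axis=0)
            valid_mask[i, j] = True
    return feature_map, valid_mask

# Example: 16 clips with 512-dim features -> a 16x16 map of candidate moments
clips = np.random.randn(16, 512).astype(np.float32)
fmap, mask = build_2d_temporal_map(clips)
print(fmap.shape, mask.sum())  # (16, 16, 512) 136 valid moments
```

Because neighboring cells of this grid correspond to moments with nearly the same start and end times, convolutions over the map can exploit adjacent temporal relations, which is the intuition behind the single-shot design described in the abstract.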

Related research:

12/04/2020
Multi-Scale 2D Temporal Adjacent Networks for Moment Localization with Natural Language
We address the problem of retrieving a specific moment from an untrimmed...

11/30/2018
MAN: Moment Alignment Network for Natural Language Moment Retrieval via Iterative Graph Adjustment
This research strives for natural language moment retrieval in long, unt...

02/02/2021
Progressive Localization Networks for Language-based Moment Localization
This paper targets the task of language-based moment localization. The l...

10/07/2021
Sonorant spectra and coarticulation distinguish speakers with different dialects
The aim of this study is to determine the effect of language varieties o...

12/08/2019
Learning Sparse 2D Temporal Adjacent Networks for Temporal Action Localization
In this report, we introduce the Winner method for HACS Temporal Action ...

03/12/2023
Towards Diverse Temporal Grounding under Single Positive Labels
Temporal grounding aims to retrieve moments of the described event withi...

06/18/2020
Video Moment Localization using Object Evidence and Reverse Captioning
We address the problem of language-based temporal localization of moment...
