
WSLLN: Weakly Supervised Natural Language Localization Networks

by   Mingfei Gao, et al.
University of Maryland

We propose weakly supervised language localization networks (WSLLN) to detect events in long, untrimmed videos given language queries. To learn the correspondence between visual segments and text, most previous methods require the temporal coordinates (start and end times) of events for training, which incurs high annotation costs. WSLLN relieves this annotation burden by training with only video-sentence pairs, without access to the temporal locations of events. With a simple end-to-end structure, WSLLN measures segment-text consistency and conducts segment selection (conditioned on the text) simultaneously. The results from both are merged and optimized as a video-sentence matching problem. Experiments on ActivityNet Captions and DiDeMo demonstrate that WSLLN achieves state-of-the-art performance.
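The two-branch idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the elementwise feature fusion, the linear scoring heads (`w_align`, `w_select`), and the sum aggregation into a video-level score are all assumptions made for clarity. The alignment branch scores each segment's consistency with the sentence independently, while the selection branch makes segments compete via a softmax; their product is aggregated into a single video-sentence matching score that weak (video-level) supervision can optimize.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def score_segments(seg_feats, sent_feat, w_align, w_select):
    """Two-branch weakly supervised scoring sketch.

    seg_feats: (n_seg, d) visual features, one row per video segment
    sent_feat: (d,) sentence (query) feature
    w_align, w_select: (d,) linear scoring heads (hypothetical)
    """
    # Elementwise fusion of each segment with the sentence (assumed fusion op).
    fused = seg_feats * sent_feat                      # (n_seg, d)
    # Alignment branch: per-segment consistency score in (0, 1).
    align = 1.0 / (1.0 + np.exp(-(fused @ w_align)))   # (n_seg,)
    # Selection branch: segments compete for the query via softmax.
    select = softmax(fused @ w_select)                 # (n_seg,), sums to 1
    # Merge the two branches; aggregate into a video-sentence matching score,
    # which is what video-level (weak) labels supervise during training.
    merged = align * select                            # (n_seg,)
    video_score = merged.sum()                         # scalar in (0, 1)
    return merged, video_score

# Usage with random features standing in for real video/text encodings.
n_seg, d = 6, 16
segments = rng.standard_normal((n_seg, d))
sentence = rng.standard_normal(d)
merged, video_score = score_segments(
    segments, sentence, rng.standard_normal(d), rng.standard_normal(d)
)
```

At inference, the highest-scoring entry of `merged` would indicate the segment best matching the query; at training time, only `video_score` needs supervision, which is why temporal annotations can be dropped.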

