Online Multi-modal Person Search in Videos

by Jiangyue Xia et al.

The task of searching for specific people in videos has growing potential in real-world applications, such as video organization and editing. Most existing approaches are devised to work offline, where identities can be inferred only after an entire video has been examined. This precludes such methods from being applied to online services or applications that require real-time responses. In this paper, we propose an online person search framework that can recognize people in a video on the fly. At its heart, the framework maintains a multi-modal memory bank as the basis for person recognition and updates it dynamically with a policy obtained by reinforcement learning. Our experiments on a large movie dataset show that the proposed method is effective, not only achieving remarkable improvements over other online schemes but also outperforming offline methods.
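To make the idea of an online, memory-bank-based recognizer concrete, here is a minimal sketch of such a loop. All names, thresholds, and the simple matching rule are illustrative assumptions, not the paper's actual architecture; in particular, the `update_policy` callback stands in for the learned reinforcement-learning policy that decides whether to absorb a new observation into the bank.

```python
import math

def _cosine(a, b):
    # Cosine similarity between two feature vectors (plain lists).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-8)

class MemoryBankSearcher:
    """Hypothetical online person-search loop: match each incoming
    detection against a memory bank of per-identity features, and let
    a policy decide whether to update the bank."""

    def __init__(self, match_threshold=0.8):
        self.bank = {}  # identity id -> list of stored feature vectors
        self.match_threshold = match_threshold

    def query(self, feature):
        # Return (best_identity, best_score) over all stored features.
        best_id, best_score = None, -1.0
        for pid, feats in self.bank.items():
            score = max(_cosine(feature, f) for f in feats)
            if score > best_score:
                best_id, best_score = pid, score
        return best_id, best_score

    def recognize(self, feature, update_policy):
        # Recognize one detection on the fly. `update_policy` is a
        # stand-in for the learned RL policy governing bank updates.
        pid, score = self.query(feature)
        if pid is None or score < self.match_threshold:
            pid = f"person_{len(self.bank)}"  # open a new identity
            self.bank[pid] = [feature]
        elif update_policy(score):
            self.bank[pid].append(feature)    # absorb the new feature
        return pid
```

A trivial usage example: feeding two near-identical feature vectors yields the same identity, while a dissimilar one opens a new entry. In the paper's setting the features would be multi-modal (e.g. face and body cues) and the update decision would come from the trained policy rather than a fixed rule.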




APES: Audiovisual Person Search in Untrimmed Video

Humans are arguably one of the most important subjects in video streams,...

Person Search in Videos with One Portrait Through Visual and Temporal Links

In real-world applications, e.g. law enforcement and video retrieval, on...

Multi-modal Summarization for Video-containing Documents

Summarization of multimedia data becomes increasingly significant as it ...

Dynamic Face Video Segmentation via Reinforcement Learning

For real-time semantic video segmentation, most recent works utilise a d...

A Multi-task Joint Framework for Real-time Person Search

Person search generally involves three important parts: person detection...

FFNet: Video Fast-Forwarding via Reinforcement Learning

For many applications with limited computation, communication, storage a...

A Unified Framework for Shot Type Classification Based on Subject Centric Lens

Shots are key narrative elements of various videos, e.g. movies, TV seri...