Condensed Movies: Story Based Retrieval with Contextual Embeddings

05/08/2020
by Max Bain, et al.

Our objective in this work is the long-range understanding of the narrative structure of movies. Instead of considering the entire movie, we propose to learn from the key scenes of the movie, providing a condensed look at the full storyline. To this end, we make the following four contributions: (i) We create the Condensed Movie Dataset (CMD), consisting of the key scenes from over 3K movies: each key scene is accompanied by a high-level semantic description of the scene, character face tracks, and metadata about the movie. Our dataset is scalable, obtained automatically from YouTube, and is freely available for anybody to download and use. It is also an order of magnitude larger than existing movie datasets in the number of movies; (ii) We introduce a new story-based text-to-video retrieval task on this dataset that requires a high-level understanding of the plotline; (iii) We provide a deep network baseline for this task on our dataset, combining character, speech and visual cues into a single video embedding; and finally (iv) We demonstrate how the addition of context (both past and future) improves retrieval performance.
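The baseline described in (iii) fuses per-modality cues into one video embedding and ranks videos against a text query. The abstract does not give the architecture, so the sketch below is only an illustration of that idea: it assumes each cue (character/face, speech, visual) has already been projected to a shared dimension, fuses them by simple averaging (a trained model would instead learn the projections and fusion weights), and retrieves by cosine similarity. All function names and dimensions here are hypothetical.

```python
import numpy as np

def l2_normalize(x):
    """Scale a vector to unit length so dot products become cosine similarity."""
    return x / np.linalg.norm(x)

def video_embedding(modalities):
    """Fuse per-modality cue vectors into a single video embedding.

    `modalities` maps cue name (e.g. "face", "speech", "visual") to a 1-D
    feature vector; all vectors are assumed pre-projected to one shared
    dimension. Averaging is a stand-in for a learned fusion module.
    """
    fused = np.mean([l2_normalize(v) for v in modalities.values()], axis=0)
    return l2_normalize(fused)

def retrieve(text_emb, video_embs):
    """Rank candidate video embeddings by cosine similarity to the query."""
    sims = np.array([float(np.dot(text_emb, v)) for v in video_embs])
    return np.argsort(-sims)  # indices of videos, best match first
```

In a real system the text embedding would come from a sentence encoder over the plot description, and the fusion would be trained with a ranking loss so that each description lands near its own scene's embedding.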
