Text Synopsis Generation for Egocentric Videos

05/08/2020
by Aidean Sharghi, et al.

Mass adoption of body-worn cameras has produced a huge corpus of egocentric video. Existing video summarization algorithms can accelerate browsing such videos by selecting (visually) interesting shots from them. Nonetheless, since the user still has to watch the summary videos, browsing large video databases remains a challenge. Hence, in this work, we propose to generate a textual synopsis, consisting of a few sentences describing the most important events in a long egocentric video. Users can read the short text to gain insight into the video and, more importantly, efficiently search the content of a large video database using text queries. Since egocentric videos are long and contain many activities and events, applying video-to-text algorithms yields thousands of descriptions, many of which are incorrect. Therefore, we propose a multi-task learning scheme that simultaneously generates descriptions for video segments and summarizes the resulting descriptions in an end-to-end fashion. Given a set of video shots as input, the network generates a text description for each shot. Next, a visual-language content matching unit, trained with a weakly supervised objective, identifies the correct descriptions. Finally, the last component of our network, called the purport network, evaluates the descriptions jointly to select the ones containing crucial information. Out of the thousands of descriptions generated for a video, a few informative sentences are returned to the user. We validate our framework on the challenging UT Egocentric video dataset, where each video is 3 to 5 hours long and is associated with over 3,000 textual descriptions on average. The generated textual summaries, comprising only 5 percent (or less) of the generated descriptions, are compared to ground-truth summaries in the text domain using well-established metrics from natural language processing.
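The three-stage pipeline the abstract outlines (per-shot captioning, weakly supervised visual-language matching, and a purport network that scores descriptions jointly) can be pictured with a minimal sketch. The code below is not the authors' implementation; the module names (ShotCaptioner, ContentMatcher, PurportNetwork), architectures, and dimensions are illustrative assumptions, written in PyTorch.

# Illustrative sketch of the three-stage pipeline described above (not the
# authors' code). All module names, architectures, and dimensions are assumed.
import torch
import torch.nn as nn

class ShotCaptioner(nn.Module):
    """Stage 1: generate a description (token logits) for every video shot."""
    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.vocab_proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, shot_feats, max_len=20):
        # shot_feats: (num_shots, frames_per_shot, feat_dim)
        _, h = self.encoder(shot_feats)                   # (1, num_shots, hidden)
        dec_in = h.transpose(0, 1).repeat(1, max_len, 1)  # crude unrolled decoder
        dec_out, _ = self.decoder(dec_in, h)
        return self.vocab_proj(dec_out)                   # (num_shots, max_len, vocab)

class ContentMatcher(nn.Module):
    """Stage 2: score how well each generated description matches its shot
    (the paper trains this unit with a weakly supervised objective)."""
    def __init__(self, hidden_dim=512):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))

    def forward(self, shot_emb, text_emb):
        # shot_emb, text_emb: (num_shots, hidden_dim) each
        return torch.sigmoid(self.score(torch.cat([shot_emb, text_emb], -1)))

class PurportNetwork(nn.Module):
    """Stage 3: read the surviving descriptions jointly and score how much
    crucial information each one carries."""
    def __init__(self, hidden_dim=512):
        super().__init__()
        self.context = nn.GRU(hidden_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.importance = nn.Linear(2 * hidden_dim, 1)

    def forward(self, text_embs):
        # text_embs: (1, num_descriptions, hidden_dim)
        ctx, _ = self.context(text_embs)
        return torch.sigmoid(self.importance(ctx)).squeeze(-1)

# Keeping roughly the top 5 percent of descriptions, as in the reported summaries:
purport = PurportNetwork()
text_embs = torch.randn(1, 3000, 512)          # stand-in sentence embeddings
scores = purport(text_embs)                    # (1, 3000) importance scores
k = max(1, int(0.05 * scores.shape[1]))
synopsis_idx = scores.topk(k, dim=1).indices   # indices of sentences to return

For the text-domain comparison against ground-truth summaries, the abstract only says "well-established metrics"; ROUGE is one standard choice (assumed here), available via the rouge-score package:

from rouge_score import rouge_scorer  # pip install rouge-score
scorer = rouge_scorer.RougeScorer(['rouge1', 'rougeL'], use_stemmer=True)
print(scorer.score('the camera wearer buys groceries at the store',
                   'the wearer shops for groceries'))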


Related research

03/21/2023
VideoXum: Cross-modal Visual and Textual Summarization of Videos
Video summarization aims to distill the most important information from ...

08/01/2017
Video as a By-Product of Digital Prototyping: Capturing the Dynamic Aspect of Interaction
Requirements engineering provides several practices to analyze how a use...

09/18/2023
Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization
Video summarization remains a huge challenge in computer vision due to t...

11/26/2015
TennisVid2Text: Fine-grained Descriptions for Domain Specific Videos
Automatically describing videos has ever been fascinating. In this work,...

11/07/2021
NarrationBot and InfoBot: A Hybrid System for Automated Video Description
Video accessibility is crucial for blind and low vision users for equita...

05/10/2021
Spoken Moments: Learning Joint Audio-Visual Representations from Video Descriptions
When people observe events, they are able to abstract key information an...

01/10/2023
AI based approach to Trailer Generation for Online Educational Courses
In this paper, we propose an AI based approach to Trailer Generation in ...
